Microsoft beats Intel, AMD to market with CPU/GPU combo chip (arstechnica.com)
22 points by Halienja on 2010-08-24 | 17 comments




Fun excerpt - "It would have been easier and more natural to just connect the CPU and GPU with a high-bandwidth, low-latency internal connection, but that would have made the new SoC faster in some respects than the older systems, and that's not allowed."

"Given the unique requirement of consoles -- the system must perform exactly like the original Xbox 360"

I just found it amusing as a design spec requirement. Usually newer systems have to go faster and do more things. It looks like they went out of their way to develop an on-chip, FSB-like interface to slow things down.

Yeah, it's an interesting design challenge. It seems like "design a SoC that performs exactly like discrete chips connected by a bus" is a task that'd have a lot of sneaky pitfalls. Partly depends on how close "exactly" has to be.

Though I'm far from an expert, I would think they could even out performance in software to some extent.

Well, figure that game publishers will have some of their animation/game timing routines keyed to the specifications of the original model. If the new one has more juice and programmers start taking advantage of it, Microsoft would probably get the blame for those titles' failure to run well on the older models.

I was actually surprised at how smoothly Sony managed to phase out PS2 backward compatibility (to lower costs) from the PS3 after the first year or two. I guess it helps that you can pick up the older console for something ridiculous like $75 if there's some classic old title you can't give up. Incidentally, word is that development has already begun on PS4 games, so you know what to ask Santa for in 2012.


> figure that game publishers will have some of their animation/game timing routines keyed to the specifications of the original model.

Any game programmer worth his salt will stay away from that, though. Platforms change over time, and the raw speed of a platform should not be something you tie your animations to. That will backfire as soon as there are more or fewer active objects in the game.

A more common approach is to tie the animation to the display frame rate by locking to the vertical refresh.
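Roughly, the safe variant looks like this - scale motion by measured elapsed time so it behaves the same at 30 or 60 Hz. (A made-up sketch: Entity, render(), and present() are invented stand-ins, not any real engine API.)

    // Minimal sketch: advance animation by measured elapsed time instead of
    // a fixed per-frame step, so behavior doesn't depend on platform speed.
    #include <chrono>

    struct Entity { float position = 0.0f; float velocity = 100.0f; }; // units/sec

    void animationLoop(Entity& e) {
        using clock = std::chrono::steady_clock;
        auto last = clock::now();
        for (;;) {
            auto now = clock::now();
            float dt = std::chrono::duration<float>(now - last).count();
            last = now;
            e.position += e.velocity * dt; // scales with real time, not frame count
            // render(e); present();      // present() would block on the vertical refresh
        }
    }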


Even if people think they've done everything right in theory, the code will never have run (let alone gone through QA) on models with different timing, and publishers wouldn't be keen on having to go back and fix flaws.

Going back some years, AFAIK, you didn't necessarily still have the developers or build infrastructure around for a title that shipped a few months ago. I don't know whether online gaming and the need to patch bugs/vulnerabilities has changed this.


I buy my consoles when they're being phased out. The hardware is cheap, and more importantly the catalog for the console actually exists and is usually significantly discounted too. There's no point buying a console early on when there are barely any games out for it. So in 2012 I will probably pick up a PS3 for ~$129.

No they don't.

First of all, we all run different compilation modes - debug, release, ship, etc. A lot of us even use LTCG (Link-Time Code Generation), i.e. full global optimization.

So there is no fixed timing you could rely on.

If templates/inline members are heavily used, the difference between debug and optimized code can be somewhere between 5x and 10x.

Normally, straight C code is only 2x-3x slower, unless AltiVec is used (and that's only in a few tight places - skinning, DSP processing, etc.).
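To illustrate where that gap comes from: in an optimized build the small inline operators below collapse into a few instructions, while a debug build turns each one into a real function call with stack traffic. (Vec3 here is just a made-up example.)

    // Illustrative only: inline-heavy math like this is nearly free once the
    // optimizer inlines it, but every operator is a genuine call in a debug
    // build - hence the large debug/release gap for template/inline-heavy code.
    struct Vec3 {
        float x, y, z;
        Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
        Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    };

    Vec3 integrate(Vec3 pos, Vec3 vel, float dt) {
        return pos + vel * dt; // nested calls in debug, straight-line math in release
    }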

So no one ties anything to CPU speed. It's unreliable. Don't forget that it's not only your stuff that's running.

If you are outputting Dolby Digital, there will be some kind of encoder running alongside you.

If you are doing a multiplayer game, it could be that you are the one running the server (obviously implemented so that no other player knows).

If you are running at a different resolution - same applies, different timings.

If you are doing splitscreen - yup, another change.

And if you want to do real 3D - one more.

And many more (HDD present or not - e.g. whether you cache on the HDD or not).

At the end of the day, you try to hit 30fps or 60fps, plus some other internal tick rate for the server (if it's a multiplayer game). If you get that, you are just fine.
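The usual shape of that is a fixed-timestep loop: the simulation (and server tick) steps at a fixed rate while rendering runs at whatever the display sustains. A generic sketch, not from any console SDK - update() and render() are stubs:

    // Generic fixed-timestep pattern: gameplay/server logic steps at a fixed
    // 60 Hz tick regardless of how long rendering takes, so game timing never
    // depends on raw CPU speed. update()/render() are stubs for illustration.
    #include <chrono>

    void update(float) {} // gameplay + server step (stub)
    void render() {}      // draw a frame (stub); vsync would pace this

    void gameLoop() {
        using clock = std::chrono::steady_clock;
        const float tick = 1.0f / 60.0f;
        float accumulator = 0.0f;
        auto last = clock::now();
        for (;;) {
            auto now = clock::now();
            accumulator += std::chrono::duration<float>(now - last).count();
            last = now;
            while (accumulator >= tick) { // catch up if a frame ran long
                update(tick);
                accumulator -= tick;
            }
            render();                     // 30 or 60 fps, whatever you hit
        }
    }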

No one relies on what you've just said.

That was done only on the really old machines, where you had to synchronize to each horizontal scan line to get certain special effects (more colors, or more stuff being drawn).

Almost nothing made since 1996 works like that.

Even the PS1 had different speeds, and it was a TRC violation to rely on that, because Sony were smart and knew they would have to emulate it in the future, and exact, proper emulation is not always easy.


You clearly deal with this regularly whereas I'm just observing from the sidelines, so I'm happy to be corrected. I extrapolated from the single-task game/home computers of the 80s, where a lot of those other considerations simply didn't exist and hooking into hardware clocks etc. was considered acceptable if it delivered something to the user. You can see the downsides of this with emulators, where some games run unplayably fast unless you use a command-line switch to throttle the emulator speed to the original ~30fps or similar. Sinclair Spectrum games are particularly prone to this. Now I feel old :-)

I didn't realize today's console manufacturers enforced such strict discipline on programmers/publishers; I guess breaking the terms means losing access to the development platform completely?

On a side note, since you're in the profession: why is it that consoles have such small amounts of RAM? IIRC the PS3 has 256MB each of main and GPU RAM, which is less than the (cheap) graphics card in my PC. I'm perplexed as to why one would go with less than a gigabyte, given the low cost of memory.


I don't think breaking the TRC would mean taking development rights away from you. All you had to do was fix the violations and resubmit (each submission to the manufacturer costs some money).

Violations are usually user-interface problems - not handling it correctly when a game controller is pulled, a disc I/O error happens, or the network is disconnected. Some are strict (for example, you must have something on the screen within N seconds of the disc being put in), some are suggestions for better integration with the system.
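For illustration, roughly the kind of per-frame check such a requirement implies - both helper functions here are made-up stubs, not real console SDK calls:

    // Hypothetical TRC-style handling: pause and prompt when the pad on
    // port 0 is pulled. Both helpers are invented stubs for illustration;
    // a real SDK would supply equivalents.
    #include <cstdio>

    static bool isControllerConnected(int /*port*/) { return false; } // stub
    static void showReconnectPrompt() { std::puts("Please reconnect the controller"); }

    void perFrameChecks(bool& paused) {
        if (!isControllerConnected(0)) {
            paused = true;           // freeze gameplay while disconnected
            showReconnectPrompt();   // required on-screen feedback
        }
    }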

You should also display how big your save games will be :) (not sure if this is still a requirement everywhere, but it used to be for PS1/PS2).

As for RAM - well, I guess when the console was first produced, RAM costs were really high. I'm not a hardware guy, but eventually the manufacturers squeeze that expensive-to-produce console down to something very cheap, and having less memory makes that easier. Just a guess - not much knowledge in this area.

But having only 256MB of RAM is not that bad - we often don't use malloc/free or new/delete, or we have them working on specific memory blocks (for example, to keep a cache of recent collisions), usually allocated in fixed sizes.
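Roughly the kind of fixed-size pool I mean (an illustration of the idea, not code from any shipped title): carve a static buffer into equal blocks and keep a free list, so allocation is O(1) and there's no fragmentation.

    // Fixed-size block pool: no general-purpose heap, no fragmentation.
    #include <cstddef>

    template <std::size_t BlockSize, std::size_t BlockCount>
    class Pool {
        union Block { Block* next; unsigned char data[BlockSize]; };
        Block storage[BlockCount];
        Block* freeList = nullptr;
    public:
        Pool() {
            for (std::size_t i = 0; i < BlockCount; ++i) { // thread the free list
                storage[i].next = freeList;
                freeList = &storage[i];
            }
        }
        void* alloc() {              // O(1), never touches the OS heap
            if (!freeList) return nullptr;
            Block* b = freeList;
            freeList = b->next;
            return b->data;
        }
        void release(void* p) {      // O(1) return to the free list
            Block* b = static_cast<Block*>(p);
            b->next = freeList;
            freeList = b;
        }
    };

    Pool<64, 1024> collisionCache;   // e.g. a cache of 64-byte collision records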

Not having much RAM means you have to be more inventive - you can find savings in different ways: streaming, layering, or other techniques. id's recent tech is just awesome in that respect.

Although, as I said, we have different compile options, the HW stays the same :) This is both good and bad - good because you can become a real expert in it, bad because by the time you are, the next console arrives.

That's usually why the first games suck, and games released in the 2nd or 3rd year are just awesome, and the last ones released might be better than the first ones released for the next generation - God of War for the PS2 was one such example. There was nothing that good on the PS3 at that time in the same genre.

Metal Gear Solid for the PS1 also shook up the graphics quality. It's still an awesome-looking game, even for the PS1. And this was on 2MB of memory, 512KB of video memory (or was it a little less or more), and, I can't remember now, 256KB or 512KB of audio memory.

I worked on the port of MGS to PC (mgspc.com), and was amazed at how good the Japanese coders were, and at the way the music was composed - it was MIDI/MOD-like, but really, really well done (we cheated on the PC and released it as a .wav file - no mixing).

What I'm saying here is that with a few small digital samples and some note/track data, you can compose music pieces that would otherwise take megabytes of memory (ADPCM compression was just 1:4, with low quality, unlike 1:10 or more for MP3 and the others).
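For a rough sense of scale - the 44.1 kHz/16-bit mono format and the 16-sample bank below are my own assumptions for illustration, not figures from the actual port:

    // Back-of-envelope: raw PCM vs 1:4 ADPCM vs a sequenced sample bank.
    // All figures are illustrative assumptions, not data from the MGS port.
    #include <cstdio>

    int main() {
        const double bytesPerSec = 44100.0 * 2;    // 44.1 kHz, 16-bit mono PCM
        const double raw   = bytesPerSec * 180;    // 3-minute track, raw
        const double adpcm = raw / 4;              // 1:4 ADPCM
        const double bank  = 16 * bytesPerSec / 4; // 16 one-second samples, ADPCM
        std::printf("raw: %.1f MB, ADPCM: %.1f MB, sample bank: %.0f KB + note data\n",
                    raw / 1048576, adpcm / 1048576, bank / 1024);
        return 0;
    }

That puts the sequenced version around an order of magnitude below even ADPCM.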


Thanks a lot - I learned more from that than from 6 months of reading console blogs!

So, how and when did they phase out PS2 backward compatibility?

I'm asking out of interest as a programmer, electrical engineer and wannabe game designer. I never owned a game console.


The first models had some PS2 chips in them (as opposed to software emulation; the CPU etc. are totally different). In later revisions they just left those chips out and stopped advertising the functionality. So if you put a PS2 disc in a later-model PS3, it will just spit it back out with an error message.

I think 1st-gen PS3s are still selling at a premium on eBay for this reason.


Thanks for the explanation.

It's a pity that they let games become obsolete that quickly and get away with it.

