shady, you just seem to be contradicting what every site out there says. It is actually a fact that AMD has come out and said that their CPUs were built from the ground up with multi-cores (which is multiple cores on a single package, just like it sounds) in mind. It was part of the reason for the development of HyperTransport.
I'm actually not contradicting it. Multiple cores were not the reason this was done. Scalability was. Scalability includes multiple cores by design, but they weren't the consideration at the time.
I also have manufacturer spec sheets from before the CPUs were actually released. It wasn't limited to just multiple cores; there was another architecture being researched at the time that would have benefited as well.
Marketing can claim all they want now. That's how the business works.
Of course, they were also considering multi-cpu systems, and the fact that HyperTransport allows for much lower latencies between RAM, the north and south bridges, and other peripheral devices.
There is no separate/true northbridge here - that's the whole point of the design.
As for advertising, I have seen AMD commercials that have promoted the "64-bit" processor. I mean, it is called the Athlon 64. Shurely that doeshn't rufer to juh 64-bitts, doesh it?
I indicated
I hadn't seen any. I didn't say there wasn't any. But if you choose to lose civility I won't bother continuing this one. There are desktop benefits currently, but they are limited for now.
You are aware that there are consumer-level 64-bit processors from AMD and Intel, right? For example, I'm running a Venice-core 1.8 GHz Athlon 64 right now.
I am absolutely aware. I am aware of far more AMD and Intel products than you likely are, by virtue of NDAs. That's my field (I'm an engineer) and I know of products being designed for release in 2009.
And pipelining does have a lot to do with the GHz myth. It's because too many pipelines (i.e. the 30 or so the Prescott core has)
The Prescott core doesn't have 30+ pipelines. It has a depth of 30+ stages.
add extra latency in the processor. A pipeline is exactly as it sounds; it is a queue for instructions for the cpu.
Pipelines actually decrease latency - that is their entire purpose. Instruction prefetching is done by a completely separate unit to perform memory->cpu loads prior to the needs of the math, logic and execution units.
Every cpu has a pipeline and most are rather deep in current-day processors.
The more stages a pipeline has, the longer it takes.
Huh? The whole point of the pipeline is to load instructions and data early. If the pipeline did not exist then the logic and execution units would stall and have to wait for instruction and data loads. Increasing the depth of the pipeline also enables independent units (such as the FPU and IPU in a typical x86 design) to operate in parallel. This reduces latency rather than increasing it. The only real cost is the initial load to fill the pipeline when the cpu first starts - do you really think you'll notice the extra microsecond delay when you first turn on your PC?
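To put a number on that "microsecond" jab, here's a back-of-the-envelope sketch. The 31-stage depth and 3 GHz clock are values I picked purely for illustration, not anything from a spec sheet:

[code]
/* Rough fill-time arithmetic for the "initial load" point above.
 * Assumed numbers: a 31-stage pipeline on a 3 GHz part, one stage
 * advanced per clock. Only the very first instruction pays the full
 * fill; after that, results stream out every cycle.
 */
#include <stdio.h>

int main(void)
{
    double clock_hz = 3.0e9;   /* 3 GHz core clock (illustrative)   */
    double stages   = 31.0;    /* Prescott-class pipeline depth     */

    double fill_seconds = stages / clock_hz;
    printf("pipeline fill time: %.2f ns\n", fill_seconds * 1e9);
    return 0;
}
[/code]

That works out to roughly 10 nanoseconds - a one-time cost at startup (and after a flush), not something you pay on every instruction.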
We all know higher latencies allow for higher clock speeds. In addition, if the branch predictor (a unit that predicts which data the processor will need first) screws up, it has to reset the pipe, additionally slowing it down.
And if there was no pipeline then you'd be in the same place. The whole point is that the prefetching is done while the rest of the CPU is busy so that the other units don't need to wait. If there were no pipeline these units would be waiting already. The additional overhead of refilling the pipeline is actually "low" since it happens "rarely".
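To make the "low because it's rare" point concrete, here's a toy cost model. The depth, branch frequency and predictor accuracy below are my own illustrative guesses, not Intel figures:

[code]
/* Back-of-the-envelope model of branch-misprediction cost.
 * Assumptions (all illustrative): a flush costs roughly the pipeline
 * depth in cycles, ~20% of instructions are branches, and the
 * predictor is right ~95% of the time.
 */
#include <stdio.h>

int main(void)
{
    double depth         = 31.0;   /* stages flushed on a mispredict       */
    double branch_rate   = 0.20;   /* fraction of instructions that branch */
    double predictor_hit = 0.95;   /* fraction of branches predicted right */
    double base_cpi      = 1.0;    /* ideal cycles per instruction         */

    double mispredicts_per_insn = branch_rate * (1.0 - predictor_hit);
    double effective_cpi = base_cpi + mispredicts_per_insn * depth;

    printf("mispredicts per instruction: %.4f\n", mispredicts_per_insn);
    printf("effective CPI:               %.2f\n", effective_cpi);
    return 0;
}
[/code]

With those numbers the flushes add about a third of a cycle per instruction on average; rerun it with an 80% predictor and you can see why a deep pipeline lives or dies by prediction accuracy.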
Luckily the P4 has great branch prediction, so it was able to cover for that for a while (until recently, with the Prescott).
Thank you for making my point.
Hyper-Threading (not AMD's HyperTransport, a completely different technology) has been shown to make apps faster, but only when you're running them concurrently. So, say for example that I'm running an MP3 encoder and playing Doom 3 at the same time; that would be more doable than with a processor without HT.
That's actually a terrible example. Both the MP3 encoder and the game require extensive use of the FPU, of which Intel cpus have only one. HT is only an advantage when the instruction mix allows either parallel IPU/IPU or parallel IPU/FPU execution by independent threads. HT has actually been demonstrated and proven by Intel to slow down applications that were not tuned for HT. The documentation is pretty long, but in short, HT results in missed scheduling of threads that were waiting for the FPU while it was busy handling an integer operation that could have waited for the IPU. The FPU ends up artificially more heavily utilized, which ultimately slows down applications. (It isn't nearly that simple, but that's a managerial summary, if you will.)
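For what a "good" versus "bad" mix looks like in code, here's a sketch. The workloads and the thread pairing are mine, purely for illustration, not anything out of Intel's documentation:

[code]
/* Two threads with deliberately different instruction mixes: one leans
 * on the integer unit, the other on the FPU. On an SMT (Hyper-Threading)
 * core this pair can overlap reasonably well; pair two FPU-heavy threads
 * instead and they end up fighting over the single FPU.
 * Build with: gcc -O2 ht_mix.c -lpthread (file name is just an example).
 */
#include <pthread.h>
#include <stdio.h>

#define ITERS 100000000UL

static void *integer_work(void *arg)      /* hammers the integer unit */
{
    unsigned long x = 1, i;
    for (i = 0; i < ITERS; i++)
        x = x * 2654435761UL + 12345UL;
    *(unsigned long *)arg = x;
    return NULL;
}

static void *float_work(void *arg)        /* hammers the FPU */
{
    double x = 1.0;
    unsigned long i;
    for (i = 0; i < ITERS; i++)
        x = x * 1.0000001 + 0.0000001;
    *(double *)arg = x;
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    unsigned long ires;
    double fres;

    /* "Good" HT mix: integer thread plus FP thread sharing one core. */
    pthread_create(&t1, NULL, integer_work, &ires);
    pthread_create(&t2, NULL, float_work, &fres);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("integer result %lu, float result %f\n", ires, fres);

    /* Swap integer_work for a second float_work and time both runs:
     * two FP-heavy threads on one HT core gain little, because there
     * is only one FPU to share between them. */
    return 0;
}
[/code]

Timing the integer+FP pairing against two FP threads (pinned to the same physical cpu) is the quickest way to watch the shared FPU become the bottleneck.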
Everyone keeps missing a key point. Pipelines, and deep pipelines, are not unique to Intel. Competitors like to attack Intel on that when in fact they were designing similarly (just a little behind Intel). Intel, unfortunately, thought it would be enough, but AMD thought outside the box to make the leap ahead of Intel. The PowerPC is perhaps one example of where deep pipelines are not used, but in moving from a four- to a seven- to a twelve-stage design it has made leaps in both clock speed and performance.
That is how to properly use the technology!
[sarcasm maximum]
The G5's longest pipeline is 25 stages. I guess IBM/Motorola/Apple have it wrong too?[/sarcasm]
Keep in mind that pipelines are not strictly serial through all the stages. Just because there's a 30-stage pipeline doesn't mean every instruction needs to traverse the entire prefetch queue; an instruction may in fact be dispatched very early on. This helps make the most efficient use of the many independent units within a cpu - you want to keep as many of them busy as possible. Without a pipeline you could only keep one unit busy while the others sat idle.
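A small illustration of the "keep independent units busy" idea, using my own toy loops (nothing vendor-specific): the first loop is one long dependency chain, while the second carries four independent chains that the hardware can keep in flight at once:

[code]
/* Dependency chains versus independent work.
 * The first loop forms a single chain, so every step must wait for the
 * previous result. The second does the same total work split across
 * four independent accumulators, which pipelined/superscalar hardware
 * can overlap. Constants are arbitrary; they just keep the values finite.
 */
#include <stdio.h>

#define N 100000000

int main(void)
{
    double a = 1.0;
    double b0 = 1.0, b1 = 1.0, b2 = 1.0, b3 = 1.0;
    int i;

    /* One dependency chain: step i needs the result of step i-1. */
    for (i = 0; i < N; i++)
        a = a * 1.0000001 + 1.0;

    /* Four independent chains: several operations can be in flight at once. */
    for (i = 0; i < N; i += 4) {
        b0 = b0 * 1.0000001 + 1.0;
        b1 = b1 * 1.0000001 + 1.0;
        b2 = b2 * 1.0000001 + 1.0;
        b3 = b3 * 1.0000001 + 1.0;
    }

    printf("%f %f\n", a, b0 + b1 + b2 + b3);
    return 0;
}
[/code]

Time the two loops separately and the second typically finishes well ahead of the first, even though both execute the same number of multiplies and adds.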
AMD started by licensing Intel designs. The tables have now turned and Intel licenses from AMD...
I'm an engineer. I ignore marketing. I only believe half the manufacturer propaganda. I work based on engineering documents. What gets published in the media and on web sites means nothing to me since it is pure marketing, and the public will believe anything they're told even when faced with contradictory real-world data.