DDR vs DDR2 Latency, How Cycles Work, and Dual Channel Marketing
I’ve noticed one thing on the Internet that stands out above almost all others: most people on the Internet have no clue what they are talking about. Case in point: a lot of ricers and gamerz like to say that DDR is lower latency than DDR2 because DDR2 takes more cycles to do things. They forget one important thing: cycles are not a measurement of time, they are a count of iterations.
That said, there is only one case where DDR actually manages to be lower latency than DDR2 (and even there it doesn’t deliver higher performance or affect benchmarks in any measurable way in DDR’s favor), and that is DDR400 versus DDR2-400: latency is theoretically lower, but you pay for it by giving up DDR2’s larger prefetch buffer and better power efficiency. Besides, nobody actually uses DDR2-400; the parts people actually buy are DDR2-667 and DDR2-800. Compare DDR2-800 against DDR400 and the latency ends up similar in impact, while the peak bandwidth is twice that of DDR400.
Another thing people say is that DDR2 is slower because it takes more cycles to do things. That’s another thought that hasn’t been fully thought out, and it’s directly related to the latency confusion above (tighter timings usually mean lower latency within the same memory architecture, but that tells you nothing across architectures). As I said earlier, cycles do not measure time; cycles divided by the clock rate (cycles per unit of time) give you time. In most, if not all, situations DDR2 simply performs better.
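To make that concrete, here is a minimal sketch of the arithmetic. The base clocks are real (DDR400 and DDR2-400 run their I/O off a 200MHz clock, DDR2-800 off 400MHz), but the CAS latencies are representative retail timings I’m assuming for illustration, not a claim about any particular module.

```python
# Absolute CAS latency in nanoseconds: a cycle count only becomes a time
# once you divide it by the clock rate it was counted against.
def cas_latency_ns(cas_cycles, clock_mhz):
    return cas_cycles / clock_mhz * 1000  # one cycle at X MHz lasts 1000/X ns

# Representative (assumed) timings, purely for illustration:
modules = {
    "DDR400   CL3": (3, 200),   # 200MHz clock
    "DDR2-400 CL4": (4, 200),   # same clock, more cycles -> higher latency
    "DDR2-800 CL5": (5, 400),   # more cycles, but a much faster clock
}

for name, (cl, clock) in modules.items():
    print(f"{name}: {cas_latency_ns(cl, clock):.1f} ns")

# DDR400   CL3: 15.0 ns
# DDR2-400 CL4: 20.0 ns
# DDR2-800 CL5: 12.5 ns  <- more cycles, yet lower absolute latency
```

The DDR2-800 row is the whole point: counting cycles alone makes it look slower, but dividing by the clock shows it isn’t.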
So, to anyone out there that says that DDR2 is a step backwards: You’re an idiot.
As a budding reviewer (and I say budding because after four years I’m still learning), I find there is so much marketing fodder in the tech industry that it’s amazing anyone can decipher the prima facie specifications from the “institutes” who establish them. While JEDEC seems to be on the up and up, we know all too well the sad story of Intel’s FormFactor and the demon spawn known as multi-rail power supplies. History tells us Intel processors were so power hungry that an additional 4-pin 12V baseboard connector was needed to feed them. The multi-rail debacle, which did nothing for performance except produce many thousands of cheaply built PSUs, pales next to a single-rail PSU that is simply built correctly. But I digress.
I’ve come to the conclusion that DDR2, while it does have its benefits, seems to be more of a logical progression for Intel NetBurst and now C2D with its 333MHz FSB (quad-pumped to an effective 1333MHz) than for AMD processors. If anything, AMD should have skipped the AM2 socket, whose on-die memory controller remains tied to a 200MHz reference clock, and simply waited for DDR3, or pushed it to market as Intel has done.
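For the numbers in that parenthetical, here is a rough back-of-the-envelope sketch (my own arithmetic, assuming 64-bit data paths on both sides and peak theoretical rates only): a quad-pumped 333MHz FSB is roughly matched by two channels of DDR2-667.

```python
# Rough FSB vs. dual-channel DDR2 bandwidth arithmetic.
# Assumptions: 64-bit (8-byte) buses, peak theoretical transfer rates.
BUS_WIDTH_BYTES = 8

fsb_mt_s = 333 * 4                      # 333MHz FSB, quad-pumped -> ~1333 MT/s
fsb_gb_s = fsb_mt_s * BUS_WIDTH_BYTES / 1000

ddr2_667_mt_s = 667                     # per channel
dual_channel_gb_s = 2 * ddr2_667_mt_s * BUS_WIDTH_BYTES / 1000

print(f"FSB peak:              {fsb_gb_s:.1f} GB/s")            # ~10.7 GB/s
print(f"Dual-channel DDR2-667: {dual_channel_gb_s:.1f} GB/s")   # ~10.7 GB/s
```

That near-match is presumably why dual channel gets marketed so heavily alongside those FSB numbers.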
I imagine auto-programming will be both a blessing and a curse for some; of course, that will depend on the motherboard makers who choose to program more aggressive timings into their memory subsystems. All that stands between an “overclocking” motherboard and JEDEC-approved stability are increased voltages and speeds.
The day I saw a heat-pipe on a DDR2 module was the day I saw the proverbial Third Teat.
Any comments on what Van Smith of VHJ would term Marchitecture, and how it drives technologies that have less to do with performance and everything to do with more money?