DDR vs DDR2 Latency, How Cycles Work, and Dual Channel Marketing
I’ve noticed one thing on the Internet that stands out above almost all others: most people on the Internet have no clue what they are talking about. Case in point: a lot of ricers and gamerz like to say that DDR is lower latency than DDR2 because DDR2 takes more cycles to do things. They forget one important thing: cycles are not a measurement of time; they are a measurement of iterations.
That said, there is only one case where DDR actually manages to have lower latency than DDR2 (and this doesn’t mean it has higher performance, or affects benchmarks in any measurable way in favor of DDR), and that is DDR400 memory vs. DDR2-400 memory: latency is theoretically lower, but you pay for it by giving up DDR2’s larger prefetch buffer and better power efficiency. Besides, no one actually uses DDR2-400 memory, only 667 and 800. Compare DDR2-800 to DDR400 and latency ends up being similar in impact, while the actual bandwidth is at least twice that of DDR400, probably even more in practice.
Another thing people say is that DDR2 is slower because it takes more cycles to do things. Yet another thought that hasn’t been fully thought through, and it’s directly related to the latency problem above (tighter timings usually decrease latency within the same memory architecture). As I said earlier, cycles do not measure time; however, cycles combined with cycles per unit of time (i.e., clock frequency) do measure time: divide the cycle count by the clock frequency and you get actual wall-clock latency. DDR2 in most, if not all, situations simply performs better.
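To make the cycles-versus-time point concrete, here is a quick back-of-the-envelope sketch. The CL values below are typical bins I’m assuming for illustration (DDR400 CL3, DDR2-400 CL4, DDR2-800 CL5); real modules vary:

```python
def cas_latency_ns(cl_cycles, bus_clock_mhz):
    """Convert CAS latency in clock cycles to wall-clock nanoseconds."""
    # cycles / MHz gives microseconds; multiply by 1000 for nanoseconds
    return cl_cycles / bus_clock_mhz * 1000

# (CAS latency in cycles, bus clock in MHz) -- assumed typical values
modules = {
    "DDR400 CL3":   (3, 200),  # 400 MT/s, 200 MHz bus clock
    "DDR2-400 CL4": (4, 200),  # same bus clock, more cycles -> higher latency
    "DDR2-800 CL5": (5, 400),  # more cycles, but twice the bus clock
}

for name, (cl, mhz) in modules.items():
    print(f"{name}: {cas_latency_ns(cl, mhz):.1f} ns")

# DDR400 CL3:   15.0 ns
# DDR2-400 CL4: 20.0 ns  <- the one case where DDR wins on latency
# DDR2-800 CL5: 12.5 ns  <- more cycles, yet LOWER latency than DDR400
```

Five cycles is "more" than three, but at double the clock each cycle is half as long, so DDR2-800 comes out ahead in actual time.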
So, to anyone out there that says that DDR2 is a step backwards: You’re an idiot.