Data on performance:
Now that we know this particular timing influences the stability of the memory overclock, we focus on its effect on performance. As you probably know, the higher a memory timing is set, the lower the performance. Why? Put simply: the higher the value, the longer the memory controller has to wait before the command governed by that timing can be issued; the lower the value, the shorter the waiting period, and thus the sooner the command goes out.
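To make the cycles-to-wait-time relation concrete, here is a minimal sketch (the helper name and the example values are mine, not from the test setup) that converts a timing value in memory-clock cycles into nanoseconds. It relies only on the fact that DDR memory transfers data twice per clock, so the real clock is half the quoted data rate:

```python
def cycles_to_ns(cycles, data_rate_mt_s):
    """Convert a timing value in memory-clock cycles to nanoseconds.

    DDR transfers data twice per clock, so the actual memory clock in MHz
    is half the quoted data rate (e.g. DDR3-1600 runs at 800 MHz).
    """
    clock_mhz = data_rate_mt_s / 2
    return cycles * 1000 / clock_mhz

# Hypothetical illustration: B2B CAS delay of 6 vs 12 cycles on DDR3-1600
print(cycles_to_ns(6, 1600))   # 7.5  -> 7.5 ns between back-to-back CAS commands
print(cycles_to_ns(12, 1600))  # 15.0 -> doubling the setting doubles the wait
```

Doubling the timing value literally doubles the dead time between consecutive commands, which is why the effect shows up so clearly in the bandwidth graphs below.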
We used Superpi 32M and Lavalys Everest to show the performance differences.
As you can see, a good B2B-setting can mean the difference between a good and a very bad 32M result.
Quite a spectacular decrease in the memory read bandwidth going from 6 to 12 or even 10 to 12.
The decrease in memory write bandwidth seems more subtle than what we saw in the graph above, but going from 10 to 12 still has quite a big effect on performance.
Very big decrease in performance, once again!
As you can see, this timing has nothing to do with the latency of the memory, but only with the bandwidth throughput.

Findings:
When going over the different graphs, it's more than clear that this timing has a dramatic effect on performance, more specifically on the memory bandwidth. Below you will find an overview of the effect of the different aspects of tuning the memory subsystem on the different benchmarks.

Memory frequency: 1600CL7 versus 2000CL7
Cas Latency: 2000CL9 versus 2000CL7
Back-to-Back Cas Delay: 1600CL7_12 versus 1600CL7_6
Basically, I calculated the effect of changing one of the three variables of the tests in the section above. Each set of bars represents the effect of the three variables in a certain benchmark: the longer the bar, the bigger the effect that variable has in that specific test.
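The per-variable effect can be expressed as a simple relative delta. A minimal sketch of that calculation, with a hypothetical helper name and made-up sample values (not the actual benchmark results):

```python
def effect_pct(fast, slow, lower_is_better=True):
    """Relative performance effect (%) of changing one variable.

    For time-based benchmarks (SuperPi 32M) a lower score is better;
    for bandwidth benchmarks (Everest read/write/copy) higher is better.
    """
    if lower_is_better:
        return (slow - fast) / fast * 100
    return (fast - slow) / slow * 100

# Hypothetical example: a run finishing in 100 s vs 110 s
print(effect_pct(100, 110))  # 10.0 -> the slower setting costs 10%
```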
Another way of looking at this would be to find a match of low-clock and high-clock settings in terms of performance:
The most interesting result is of course the 'LE - copy' one, as 1600CL8 can outperform 2000CL7. Who needs high-frequency memory anyway?
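That result is all the more striking when you look at absolute CAS latency, which actually favors the higher-clocked setting. A quick sketch (function name mine) using the standard conversion, CL cycles at half the data rate:

```python
def cas_latency_ns(cl, data_rate_mt_s):
    """True CAS access latency in nanoseconds: CL cycles at half the data rate."""
    return cl * 2000 / data_rate_mt_s

print(cas_latency_ns(7, 2000))  # 7.0  ns for 2000CL7
print(cas_latency_ns(8, 1600))  # 10.0 ns for 1600CL8
print(cas_latency_ns(7, 1600))  # 8.75 ns for 1600CL7
```

So 1600CL8 is clearly slower in pure access latency than 2000CL7, yet it can still win the copy test. That fits the earlier observation: the B2B timing governs bandwidth throughput, not latency, so a tight B2B setting at a lower clock can beat a looser one at a higher clock.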