Galaxy 8600GE Overclocking Experience

Overclocking Tests by massman @ 2007-11-12

A while ago yours truly had the opportunity to review Galaxy's improved 8600 GT video card. It featured a new design and an extra power connector for higher overclocking. Today we push this card to its limits at one of Belgium's first overclocking LANs.


Into detail

Let's go a bit more into detail:

We all know that the higher the video card's clock frequencies, the better your score will be. But sometimes the gain from faster memory is held back by the GPU clock. Should the shader clock be linked to the GPU clock? What overclocks can you expect with which cooling?

Answering these questions is important if you want to get the most out of your card. The answers differ from card to card, as no two cards overclock the same. If you want to be thorough, the following figures are a must.

Overclock versus Voltage

During the weeks of preparation for the final bench session, I had the chance to test the clock frequencies with stock air-cooling. I have graphs ready to show, but they are not very useful as a lot depends on the temperature of the environment and of the card itself. Still, I will give you a few pointers.

- The memory is very sensitive to higher voltages. Be careful when overclocking; with this card you're already ahead of the competition.
- The GPU can clock very high with relatively low voltages IF the temperature stays low. When testing in the attic (room temperature around 15°C) I reached a stable 880MHz on the GPU with 1.61V.
- Check where the limit of the shader clock lies. With my card this is 2052MHz; more Vgpu doesn't help.

GPU, shader and memory clocks

The frequency you set is not always the actual frequency! Most cards work with discrete frequency levels. For instance, if you set 712MHz as the GPU frequency, the actual frequency might be 718MHz. The same goes for the shader and memory frequencies.
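The idea behind these frequency levels can be pictured with a small sketch: the driver snaps whatever you request to the nearest achievable step. The 13.5MHz step size below is purely hypothetical and only illustrates the mechanism; the real step grid depends on the card's clock generator and is best read out with a monitoring tool such as RivaTuner.

```python
# Toy illustration of clock "levels": a requested frequency gets
# snapped to the nearest achievable step. The 13.5MHz step size is
# hypothetical, not the 8600GE's real clock granularity.
def actual_clock(requested_mhz, step_mhz=13.5):
    return round(requested_mhz / step_mhz) * step_mhz

print(actual_clock(712))  # -> 715.5, not the 712 you asked for
print(actual_clock(700))  # -> 702.0
```

This is why two seemingly different settings can produce the exact same benchmark score: both land on the same level.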


Note that since the release of the 16x.xx drivers and RivaTuner 2.05, the GPU clock and the shader clock can be set independently (unlinked). Therefore we don't chart the relation between GPU and shader clock.


GPU, shader and memory performance scaling

Next up: how does increasing the frequencies translate into a performance increase? We used 3DMark01's Nature test to measure performance and RivaTuner to overclock the card.


For this test the GPU and shader clocks were linked, as the GPU hit a ceiling at 700MHz when the shader clock was kept at 1400MHz. There are two sets of data: one with the memory clocked at 1000MHz and one with the memory clocked at 1080MHz.

The GPU/shader overclock is pretty important, as the performance keeps increasing along with it. With higher-clocked memory, the GPU/shader performance scaling is steeper still.
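One way to read a scaling graph like this is to compare the relative clock increase with the relative score increase. A quick sketch of that calculation; the clock and fps numbers in the example are invented purely for illustration, not measured 8600GE results:

```python
# Scaling efficiency: what fraction of a relative clock increase
# survives as a relative score increase. A value near 1.0 means the
# card is still scaling well; near 0.0 means another component is
# the bottleneck. Example numbers are invented, not measured.
def scaling_efficiency(clock_lo, clock_hi, score_lo, score_hi):
    clock_gain = (clock_hi - clock_lo) / clock_lo
    score_gain = (score_hi - score_lo) / score_lo
    return score_gain / clock_gain

# e.g. a hypothetical 630 -> 700MHz core overclock giving 200 -> 215fps:
print(f"{scaling_efficiency(630, 700, 200, 215):.3f}")
```

If the efficiency drops sharply between two points on the graph, that is usually the sign that the memory (or another part of the pipeline) has become the limiting factor.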


Many people tend to hype the shader clock as the most important of the three. In fact, this isn't the whole truth. The shader clock is indeed important up to a certain level; past that level, you won't notice a big increase. In other words: at 630MHz core, a 1.7GHz shader clock is almost as fast as a 2GHz clock.
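This diminishing return can be pictured as a bottleneck effect: once the shaders outrun the rest of the pipeline, extra shader MHz simply goes to waste. A toy model of that idea; the per-MHz coefficients below are entirely hypothetical and only serve to show the shape of the curve, they are not measured 8600GE characteristics:

```python
# Toy bottleneck model: throughput is capped by the slowest stage.
# The per-MHz coefficients are invented for illustration only.
def toy_fps(core_mhz, shader_mhz):
    core_limit = core_mhz * 0.30      # fps if only the core mattered
    shader_limit = shader_mhz * 0.12  # fps if only the shaders mattered
    return min(core_limit, shader_limit)

# At 630MHz core, a 1.7GHz and a 2GHz shader clock hit the same cap:
print(toy_fps(630, 1700))
print(toy_fps(630, 2000))  # identical: the core is the bottleneck
```

In this picture, raising the core clock lifts the cap and makes the extra shader MHz useful again, which matches what the graphs show.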

When overclocking the card on air we noticed that the shader clock maxed out at 2052MHz. Even higher voltages didn't help us go further.


This is quite interesting, as it shows exactly why the Galaxy 8600GE is so attractive for WR attempts. At 630MHz core frequency, the extra memory overclock doesn't matter: not even 10fps more. But at 810MHz core, you'll notice that the extra memory speed really boosts the card. Imagine 1GHz core: the extra memory speed might gain up to an extra 40-50fps in the 3DMark01 Nature test!