OCZ returns to Graphics Card Business with 8800 GTX

Videocards/VGA Reviews by geoffrey @ 2007-03-14

OCZ has been in the review spotlight for their excellent memory modules and high-end power supply units. Today we get a chance to test their latest product, which reintroduces them into the graphics card arena with a card based on the popular NVIDIA G80 core. Our expectations are high; let's find out if it can deliver.

Test setup, Benchmark methodology & Overclocking

Test setup

Geoffrey's Intel Test Setup
Madshrimps (c)
CPU: Intel E6600 @ 3.6GHz
Cooling: Zalman 9700 LED
Mainboard: Asus P5B Deluxe Wifi
Memory: 2x1GB TEAMGROUP Xtreem 800MHz 4-4-4-10
Other:
  • OCZ Powerstream 600W PSU
  • Maxtor 80GB PATA HDD
  • Seagate 200GB SATA HDD
  • 20" Dell UltraSharp 2007FP TFT monitor

  • The CPU was running at 3.6GHz by setting the front side bus to 400MHz and keeping the multiplier at its default of 9. The memory was running at 400MHz (800MHz DDR) with 4-4-4-10 timings, 1:1 with the FSB; a quick check of those numbers follows this list.
  • ForceWare 97.92 drivers
  • While Windows Vista has now officially launched and the G80 cards are the only DX10-capable cards for the moment, we decided to test with a mature Windows OS (XP SP2); even if we wanted to test on the new platform, the lack of non-beta Vista drivers keeps us from doing so.
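
A quick sanity check of the CPU and memory clocks listed above; the few lines of Python below are purely illustrative arithmetic, not a tool used during the review.

    # CPU and memory clocks of the test setup, derived from the BIOS settings.
    fsb_mhz = 400        # front side bus
    multiplier = 9       # E6600 default multiplier
    mem_ratio = 1.0      # memory running 1:1 with the FSB

    cpu_mhz = fsb_mhz * multiplier      # 400 * 9 = 3600 MHz = 3.6 GHz
    mem_mhz = fsb_mhz * mem_ratio       # 400 MHz actual memory clock
    ddr_mhz = mem_mhz * 2               # DDR transfers twice per clock -> 800 MHz effective

    print(f"CPU {cpu_mhz / 1000:.1f} GHz, memory {mem_mhz:.0f} MHz ({ddr_mhz:.0f} MHz DDR)")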

Benchmark methodology

The game benchmarks were completed with the OCZ card at stock speeds as well as at its maximum stable overclock. We added an overclocked GeForce 7950 GT to the mix (GPU/memory clocks of that card increased to 7900 GTX specifications) to see how the high-end GTX of the previous GeForce generation compares.

All tests were done on a 20" LCD monitor with a maximum resolution of 1600x1200. For high-end VGA cards this resolution can be quite stressful in newer games, especially when higher quality settings such as anti-aliasing and anisotropic filtering are enabled.

The G80 core is powerful: at 1280x1024 and lower resolutions we couldn't properly stress it unless we cranked the AA levels beyond reason. We stuck with our monitor's maximum native resolution of 1600x1200 and found a good balance between image quality and playability.

Depending on the game, the image quality settings differ slightly; you will be told for each test exactly how they are set. Whenever possible the in-game quality options were set to their highest. FRAPS was used to measure the FPS during repeated manual run-throughs of a fixed part of each game tested, and the minimum, maximum and average values were recorded (a small sketch of that reduction follows the game list below). The following games were benchmarked:

  • Battlefield 2
  • Call of Duty 2
  • FEAR
  • Tomb Raider: Legend
  • TES: Oblivion
  • Trackmania Nations
  • Need For Speed Carbon
  • Quake 4
  • Futuremark 3DMark Series
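
Below is a minimal sketch of how each run-through's recorded samples are reduced to the three figures we report. The file name and the one-sample-per-line layout are assumptions for illustration only, not the exact format FRAPS writes.

    # Reduce one run-through's per-second FPS samples to min/max/average.
    # "fps_samples.txt" and its one-number-per-line layout are assumed here
    # purely for illustration; the real FRAPS log format may differ.
    with open("fps_samples.txt") as f:
        samples = [float(line) for line in f if line.strip()]

    average = sum(samples) / len(samples)
    print(f"min {min(samples):.0f} / avg {average:.0f} / max {max(samples):.0f} FPS")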

Overclocking

We used RivaTuner to raise the clock speeds and stressed the card with 3DMark06 to check stability. Here are our results:

                    Core (MHz)   Shaders (MHz)   Memory (MHz)
OCZ 8800GTX stock   576          1350            900
OCZ 8800GTX oc      621          1512            1053


The reason why we didn't pick a nice round core clock instead of 621MHz has to do with how these cards are built. G80 boards are based on a 27MHz crystal, and with the help of multipliers and dividers the reference 576MHz core clock is derived from it. During our overclocking tests we saw that the BIOS doesn't allow overly complex clock calculations, meaning that raising the clock one MHz at a time is not an option: the core and memory clocks can only sit at certain discrete points, and if you move the core clock slider in RivaTuner between two of those points, the card sets itself to one of them. The bad thing is that you cannot see in advance at which point the core clock will jump up to the next step, though there is an easy way to solve this. RivaTuner's built-in logging made it possible to read the real clocks at any given time, so when you slide the core clock higher you actually notice it on the chart: the core clock might suddenly jump from 621MHz to 648MHz, for example. A small sketch of where those steps come from follows below.
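
The math behind those steps is simple enough to sketch. Below is a minimal example, assuming plausible multiplier/divider ranges for the clock generator; the real BIOS limits, and therefore the exact step positions, may differ from what the card actually does. Note that 576MHz works out to 27 x 64/3, 621MHz to 27 x 23 and 648MHz to 27 x 24.

    # Why the core clock snaps to discrete steps: every clock is derived from
    # the 27MHz reference crystal through an integer multiplier and divider.
    # The ranges below are assumptions chosen to illustrate the idea, not
    # NVIDIA's real PLL limits.
    CRYSTAL_MHZ = 27

    def achievable_clocks(max_mult=100, dividers=(1, 2, 3)):
        """Every clock (in MHz) reachable as crystal * multiplier / divider."""
        return sorted({CRYSTAL_MHZ * m / d
                       for m in range(1, max_mult + 1)
                       for d in dividers})

    def snap(requested_mhz):
        """The achievable clock closest to the requested slider value."""
        return min(achievable_clocks(), key=lambda c: abs(c - requested_mhz))

    # 576MHz stock is 27 * 64 / 3, our 621MHz overclock is 27 * 23, and the
    # next step we saw in the log, 648MHz, is 27 * 24.
    for slider in (576, 621, 625, 645):
        print(f"slider at {slider}MHz -> core runs at {snap(slider):.0f}MHz")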
