Unreal Tournament 3
Epic Games has had quite a bit of success with their Unreal Engine; after initially competing with the id Tech engine for licensing, they now seem to have pulled ahead with the latest incarnation: Unreal Engine 3. The list of games using this engine is huge, with blockbuster titles like Bioshock, Mass Effect, Gears of War, Rainbow Six Las Vegas (1 & 2) and of course the next iteration of the UT series, UT3.
While many games share the same Unreal Engine 3, the developers can decide how high the system requirements will be by increasing the level of detail. Rainbow Six Las Vegas, for example, is known for being more demanding than Bioshock on the same hardware. Unreal Tournament 3 by Epic Games strikes an excellent balance between image quality and performance, rendering beautiful scenes even on lower-end hardware; on high-end graphics cards you can really turn up the detail, which makes it picture perfect.
We used HardwareOC's benchmark tool, which does a fly-by of the chosen level; do note that the performance numbers it reports are higher than what you will see in-game. The map used was "Corruption".
We added the QAA settings for the GeForce GTX 280 as these should offer higher image quality, but to be honest, we had to look extremely hard to notice a difference between 8xAA and 16xQAA. So while the QAA numbers are a lot lower, in a fast-paced game like UT3 you might as well stick with 4x or 8xAA.
Comparing performance between XP and Vista, we see that both cards are slower in Vista when no AA is used: the X2 drops by ~25%, the GTX only by ~9%. When AA is enabled, however, the situation is different: the GTX 280 now performs pretty much the same under XP as under Vista, except for the 16xAA setting, which is noticeably slower under Vista. The HD 4870 X2 fares better from the OS move; under Vista its performance is consistently higher when AA is enabled.
For a detailed view of the results, with AA scaling and XP -> Vista scaling, see this table
...8xAA on ATI should be compared to 8xQAA on nV, not to 8xAA, which is a 4xMSAA-based CSAA mode
...16xAA on ATI effectively turns the card into a single-chip card that can do 16xMSAA, since both chips render the same frame with different AA patterns
...16xAA on nV is a 4xMSAA-based CSAA mode and 16xQAA on nV is an 8xMSAA-based CSAA mode
So the 16x and 8x comparisons in your graphs are far from being 'fair' or 'apples-to-apples': the 8xAA comparison should pit ATI 8xAA against nV 8xQAA (8xQAA = 8xMSAA), and the 16xAA comparison shouldn't even exist, since the GTX 280 can't do 16xMSAA, which is (practically) what the HD 4870 X2 is doing by blending the same frame rendered twice with different AA patterns.
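To illustrate that last point, here is a minimal conceptual sketch of the blending idea, not how the driver actually implements it; the function name, frame arrays and resolutions below are made up for illustration. Each chip renders and resolves the same frame with its own (offset) sample pattern, and simply averaging the two resolved images gives the effect of twice as many samples per pixel.

```python
import numpy as np

def blend_aa_frames(frame_gpu0: np.ndarray, frame_gpu1: np.ndarray) -> np.ndarray:
    """Average two resolved renders of the identical frame.

    Each input is an HxWx3 float array, already resolved from 8xMSAA but
    rendered with a different sample pattern on its own chip. Averaging the
    two is, in effect, a 16-sample resolve of that frame.
    """
    if frame_gpu0.shape != frame_gpu1.shape:
        raise ValueError("both chips must render at the same resolution")
    return (frame_gpu0 + frame_gpu1) * 0.5

# Hypothetical usage: stand-ins for the two chips' resolved frames.
frame_a = np.random.rand(1080, 1920, 3)  # chip 0, 8xMSAA, pattern A
frame_b = np.random.rand(1080, 1920, 3)  # chip 1, 8xMSAA, pattern B (offset)
final_frame = blend_aa_frames(frame_a, frame_b)
```

The cost of this approach is obvious from the sketch: both chips spend their time on the same frame, so you get single-chip frame rates with (roughly) doubled AA quality rather than the usual alternate-frame-rendering speedup.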