When the HD7770 launched, many tech sites reported a huge performance gap between the HD6850 and the HD7770. Since then AMD has released many driver updates improving various things, and one of the most important has been better performance from the GCN architecture. In theory the two GPUs should now be at the same performance level, but is that really the case? Let's take a look…
My testing platform may not be impressive for now, but it is enough to show the difference between these two GPUs. CPU: AMD Phenom II X4 B35 @ 2.9 GHz, RAM: 4 GB, PSU: OCZ ZS 550 80+ Bronze, OS: Windows 7 x64 Enterprise SP1, driver: AMD Catalyst 13.2 Beta 3. The GPUs are an MSI Radeon HD6850 1GB Cyclone Power Edition and a Gigabyte HD7770 OC rev 2.0.
I started the testing procedure with synthetic benchmarks: 3DMark06 and Catzilla. At the time of testing, 3DMark (2013) was not out yet, but after analyzing its scores I doubt it will ever make my list of benchmarking software; it is really difficult to compare hardware with it. Perhaps Futuremark will fix their new product with some updates.
Radeon HD6850: 15365 3DMarks
Radeon HD7770: 15572 3DMarks
This benchmark shows that the two cards are very close in performance.
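To quantify "very close": a quick calculation of the relative gap between the two 3DMark06 scores above (plain arithmetic on the numbers from this test):

```python
# Relative difference between the two 3DMark06 scores measured above.
hd6850 = 15365  # 3DMarks
hd7770 = 15572  # 3DMarks

gap_pct = (hd7770 - hd6850) / hd6850 * 100
print(f"HD7770 leads by {gap_pct:.2f}%")  # about 1.35%
```

A gap of roughly 1.35% is well within run-to-run noise for this kind of benchmark.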
Catzilla was updated on the day of testing, so I benchmarked the cards with version 1.0 beta 20. This test shows a pretty big difference between them.
Hard Reset: again I benchmarked in a few modes with various AA sample counts. Resolution: 1920×1080, ultra details.
The next game in this test is the first Crysis. Full HD resolution (1920×1080), very high details, DX10 mode, x64 executable.
Crysis is a surprise. Without AA the cards perform equally well, but when we turn on the maximum AA available in the game, the difference becomes really big. On the HD6850 the game remains somewhat playable, but on the HD7770 it is totally unplayable.
Trine 2 is a masterpiece of the gaming industry. The game may not seem demanding to many people, but it takes more than a low-end configuration to run it at full detail. Tests were done at 1920×1080, very high detail, with v-sync enabled.
Extreme AA (FXAA+4x SSAA):
With Trine 2 the situation is a bit twisted. Without AA both cards delivered a fluid, high frame rate. For the HD6850 the game is no big deal and it holds 60 fps, but the HD7770 showed drops below that rate and even below the 50 fps mark, which really surprised me. With the maximum AA preset, a really demanding combination of FXAA and 4x SSAA, the frame rate drops horribly on both cards. The performance difference with AA between these two GPUs is really small, but the HD7770 delivers more stable fps and keeps the game at an almost steady average of 24 fps, while on the HD6850 the fps jumps across the whole range from minimum to maximum.
I measured power consumption for the whole computer. At idle, the system drew 140 W with either card. Under full load it rose to 200 W with the HD7770 and 250 W with the HD6850.
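To put that 50 W load difference into perspective, here is a rough sketch of the annual running-cost gap. The 200 W and 250 W figures are the whole-system measurements from this test; the daily gaming hours and electricity price are my own assumptions, so plug in your own values:

```python
# Rough annual running-cost difference between the two systems under load.
# 200 W / 250 W are whole-system readings from this test; the hours of
# gaming per day and the electricity price are assumed example values.
hd7770_load_w = 200
hd6850_load_w = 250
hours_per_day = 3      # assumption: average gaming time per day
price_per_kwh = 0.15   # assumption: electricity price in EUR per kWh

delta_kwh_year = (hd6850_load_w - hd7770_load_w) / 1000 * hours_per_day * 365
print(f"Extra energy per year: {delta_kwh_year:.2f} kWh")
print(f"Extra cost per year: {delta_kwh_year * price_per_kwh:.2f} EUR")
```

Under these assumptions the HD6850 costs only a few euros more per year to run, so power consumption matters more for PSU headroom and heat than for the electricity bill.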
It is really difficult to judge these GPUs fairly. Without AA they are really similar, sometimes with the HD7770 performing a bit better and sometimes the HD6850, but when we want to enable AA, the HD6850 is the better choice. If the two cards perform almost the same without AA, why is there such a huge difference once AA is turned on? Because most AA methods lean heavily on the ROPs, and the HD7770 has only 16 ROPs versus the 32 inside the HD6850. AMD has done a lot of good work on their drivers, and the HD7770 can now compete with the powerful HD6850. The biggest differences for you may be price and power consumption.
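The ROP argument can be illustrated with theoretical pixel fill rates (ROP count × core clock). The ROP counts come from the article; the clocks below are the reference values I believe these cards use (775 MHz for the HD6850, 1000 MHz for the HD7770), and should be treated as assumptions since both cards in this test are factory-overclocked models:

```python
# Theoretical pixel fill rate = ROP count * core clock.
# ROP counts are from the article; reference clocks are assumed values,
# not the actual clocks of the overclocked cards tested here.
cards = {
    "HD6850": {"rops": 32, "clock_mhz": 775},
    "HD7770": {"rops": 16, "clock_mhz": 1000},
}

for name, c in cards.items():
    fill_gpix = c["rops"] * c["clock_mhz"] / 1000  # Gpixels/s
    print(f"{name}: {fill_gpix:.1f} Gpixel/s")

# 4x SSAA shades and blends the scene at 4x the pixel count, so a
# 1920x1080 frame behaves roughly like a 3840x2160 one:
base_pixels = 1920 * 1080
print(f"Pixels per frame with 4x SSAA: {base_pixels * 4:,}")
```

Under these assumptions the HD6850 has roughly 24.8 Gpixel/s of fill rate against the HD7770's 16, which lines up with the HD6850 holding up better once heavy AA multiplies the per-frame pixel work.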