The Problem with Real World Benchmarking

Written by Ben Hardwidge

July 5, 2011 | 07:53

Tags: #2012 #benchmark #benchmarks #leave #real-world #resign #synthetic

Companies: #amd #bapco #nvidia #via

The Alternatives

We've now gone long past the point where Intel, AMD and VIA (or Cyrix) all manufactured x86 processors for performing similar tasks. Intel is sticking to refining its Nehalem architecture, AMD is integrating GPUs to accelerate highly parallel tasks and VIA is now concentrating on the low-power end of the market. A one-size-fits-all approach to benchmarking is clearly no longer appropriate. It might tell you that an Intel chip is faster than an AMD or VIA chip in serial compute-intensive software, which might be useful information for some people, but it could also be misleading to others.

As VIA's Richard Brown says: 'We would like to see a simplification of the benchmarking process, and a focus on realistic task-centric applications that are meaningful for users. If I look at my own video usage, for example, all I really want to know is how long it would take for me to render a short two-minute video clip so that I can get it onto YouTube as quickly as possible. If I wanted to get into the movie business then of course my requirements would be different – but then I would be in a small minority. The same applies to spreadsheets: how many people run massive calculations on a daily basis? Let’s get back to basics.'

No one will buy a VIA Eden X2 system based on its performance in massive spreadsheets

What is clear is that there's no longer room for a consortium made up of all the major players, all working towards a single, representative real world benchmark. In fact, AMD's spokesperson told us that the company is currently 'exploring the options to encourage an alternative consortium; one that will deliver unbiased, representative benchmarks and promote more transparency for our industry. We are committed to working with likeminded companies that want to give consumers and business users an accurate, honest measure of what they can expect from their PCs and mobile devices.'

With BAPCo seemingly unwilling to compromise on GPGPU computing or lower-power workloads, the only realistic choice for the industry is to run a series of different benchmarks on each processor, to show which ones are better in which areas. If GPGPU computing really does take off in the next couple of years, then SysMark 2012 is going to look outdated very quickly; if it doesn't, then it will (probably rightly) show that Intel's CPUs are faster in the tasks SysMark tests. This is a shame, because BAPCo could have used a segmented scoring system that grades systems' performance in different tasks, rather than largely ignoring GPGPU computing and low-power compute scenarios. Of course, you can run individual portions of SysMark 2012 to show the level of performance in other areas, but the overall score is the final figure that everyone uses, and this is heavily weighted in favour of serial high-compute workloads.
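To see why the weighting matters, here's a minimal sketch of how a composite benchmark score could be built from per-scenario results. The scenario names, scores and weights below are entirely invented for illustration; they are not BAPCo's actual methodology or figures. The point is simply that the same two systems can swap places in the overall ranking depending on how the scenarios are weighted.

```python
# Hypothetical illustration of segmented benchmark scoring.
# All scenario names, scores and weights are invented, NOT BAPCo's real data.
from math import prod

def overall_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted geometric mean of per-scenario scores."""
    total = sum(weights.values())
    return prod(scores[s] ** (weights[s] / total) for s in scores)

# Two imaginary systems: one strong at serial CPU work, one at GPGPU tasks.
cpu_heavy = {"office": 160, "media": 150, "gpgpu": 90}
gpu_heavy = {"office": 100, "media": 110, "gpgpu": 220}

# Two weighting schemes: one biased towards serial workloads, one balanced.
serial_biased = {"office": 45, "media": 45, "gpgpu": 10}
balanced = {"office": 33, "media": 33, "gpgpu": 34}

for name, w in (("serial-biased", serial_biased), ("balanced", balanced)):
    print(name,
          round(overall_score(cpu_heavy, w)),
          round(overall_score(gpu_heavy, w)))
```

With the serial-biased weights the CPU-strong system comes out on top overall; with balanced weights the GPGPU-strong system edges ahead, even though neither system's raw scenario scores changed. A segmented report that publishes each scenario score separately avoids burying that distinction in a single headline number.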

What this does show, however, is that 'real world benchmarking' is a misleading term in itself. Using real software rather than synthetic software still has its issues, which is why it's important to remember that benchmarks aren't the be-all and end-all, and that includes ours. Benchmarks give you a useful gauge of the performance differences between kit running certain tasks in specific software apps, but they don't tell you everything, and they never will. It all depends on how you use your computer, and you'll always need to account for this when viewing benchmark results.