
Charles Arthur On Technology

'Intel is abandoning clock speed as the sole measure of a PC's performance. It's long overdue'

Tuesday 06 April 2004 19:00 EDT

Which is the faster chip: a Pentium 4 or a Pentium 3? You'd be right if you thought that the former should win, because it's newer and runs at a higher "clock speed" - the number of times in a second that it can carry out a calculation (say, adding or subtracting two numbers). The Pentium 4 hit 2GHz (2 thousand million clock "ticks" per second) in 2001; the Pentium 3, or "III", went up to about 1.13GHz. So on the face of it, the Pentium 4 is the runaway winner.

OK, let's try another one. Which is faster, a Pentium 4 attached to a data bus running at 100MHz, or a Pentium 3 with a data bus running at 200MHz? (If that leaves you floundering, the "data bus" is the part of the computer that shuttles data - the inputs to and results of calculations - in and out of the microprocessor.)

The answer's not so simple now: the Pentium 4 will be able to chug through the numbers faster, but the bottleneck in communication is now outside the processor, so quite possibly the Pentium 3, attached to the faster bus, will seem quicker.
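If you like, you can put that intuition into a few lines of code. Here's a back-of-the-envelope sketch in Python; the one-bus-tick-per-job assumption and the figures are mine, purely for illustration:

    # A toy model of the bottleneck effect: the system can only get work
    # done as fast as its slowest link lets data through.
    def effective_rate(cpu_mhz, bus_mhz):
        # Crude assumption: each unit of work needs one CPU tick and
        # one bus tick, so the slower clock sets the overall pace.
        return min(cpu_mhz, bus_mhz)

    p4 = effective_rate(cpu_mhz=2000, bus_mhz=100)   # Pentium 4 on the slow bus
    p3 = effective_rate(cpu_mhz=1130, bus_mhz=200)   # Pentium 3 on the fast bus
    print(p4, p3)   # 100 200 - the "slower" chip gets more through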

Now let's complicate matters further. Try a Pentium 4 with a 100MHz bus attached to 640MB of RAM lined up against a Pentium 3 with a 200MHz bus and 128MB of RAM. Which is going to give you the more pleasant processing experience?

Answer: it depends, but for everyday use - loading up web pages, documents, images, spreadsheets and email - the Pentium 4 will seem like it's working better because it can call in more data, more quickly, from RAM. The Pentium 3, despite having a faster data bus, will be memory-bound; if you have lots of windows open, the processor will have to swap data in and out of memory, probably to the hard disk, to make those windows visible. This assumes that you are running Windows rather than a command-line interface like DOS, but these days that's a safe assumption.
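Again, a toy model makes the point. The latencies below are rough orders of magnitude for 2004-era RAM and hard disks, not measurements:

    # Average time to fetch a page of data when the working set
    # may overflow RAM. Figures are illustrative only.
    RAM_ACCESS_MS = 0.0001    # roughly 100 nanoseconds
    DISK_ACCESS_MS = 10.0     # roughly 10 milliseconds to swap from disk

    def avg_fetch_ms(working_set_mb, ram_mb):
        # If everything fits in RAM, every access runs at RAM speed.
        if working_set_mb <= ram_mb:
            return RAM_ACCESS_MS
        # Otherwise some fraction of accesses spill to the hard disk.
        miss = 1 - ram_mb / working_set_mb
        return (1 - miss) * RAM_ACCESS_MS + miss * DISK_ACCESS_MS

    print(avg_fetch_ms(300, 640))   # fits in 640MB: RAM speed throughout
    print(avg_fetch_ms(300, 128))   # overflows 128MB: dominated by disk swaps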

All this does not take into account the graphics card (which can carry out many of the calculations that determine how a window will look) and the hard disk's rotation speed (which affects the speed at which data can be transferred into and out of RAM).

Yet for years the only thing that most people have worried about when buying a computer is the speed of the chip. Intel, which dominates CPU supply, was happy to promote speed as an indicator of how "good" a chip was. But recently it announced that its products will now be divided into "series" - the 300, 500 and 700. In which bigger will mean better, right? Well, not necessarily. "Better" will apply only within each series, so that a Pentium 4 350 ought to be quicker, on the same setup, than a P4 320.

The abandonment of clock speed as the sole measure of performance is long overdue. When PCs were young it was a reasonable metric, since one could expect dramatic improvements simply by getting a quicker CPU. The first IBM PC in 1981 used the Intel 8088, which had a 16-bit processor but an 8-bit data bus (so it was hobbled from the start). It ran at a mighty 4.77MHz. The 80286, which followed in February 1982, had a 16-bit bus and ran at 6MHz to begin with, later reaching 12.5MHz and beyond.

And so it went on, with chips getting faster as a corollary of Moore's Law (which states that "the number of transistors that can be fitted into the same area doubles every 18 months"). As transistors get smaller they can be switched on and off more quickly, which is why clock speeds creep up. The demise of Moore's Law has been widely predicted, but so far it looks safe until at least 2010.
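The 18-month figure implies startling growth. A quick back-of-the-envelope calculation (my arithmetic, not Intel's):

    # Moore's Law arithmetic, using the 18-month doubling figure
    # quoted above: how much does the transistor budget grow in a decade?
    doublings_per_decade = 10 / 1.5      # about 6.7 doublings in ten years
    growth = 2 ** doublings_per_decade
    print(round(growth))                 # roughly a hundredfold per decade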

But as chip speed rises beyond the 2GHz mark, the simple ability to chomp numbers becomes less important when compared to bus speed, RAM and even disk speed. In reality there are more contributory factors, such as the way in which the chip carries out its calculations (using "complex" or "reduced" instructions), the "width" of the bus (32- or 64-bit) and how well instructions are lined up ready to be fed into the processor (otherwise known as "pipelining"). And none of these can tell you how long your laptop's battery will last, for example, because that is influenced by other factors including CPU power consumption and the size and brightness of your display.

The company that has been most annoyed by clock speed comparisons is Apple Computer - in the main, it should be said, because its former principal chip supplier, Motorola, failed to keep up with Moore's Law in the late 1990s. It is also because, since the mid-1990s, it has used "reduced instruction set" (RISC) chips that can do more calculations than Intel's in the same number of clock ticks. These calculations may be simpler, but the overall effect is that Apple's chips get the same work done in the same time as an Intel chip with an ostensibly faster clock speed. This goes to show that the speed figure attached to a chip does not tell you how quickly it works.
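The arithmetic behind that claim is straightforward: useful throughput is roughly clock speed multiplied by the work completed per tick (what engineers call instructions per clock, or IPC). The IPC figures in this sketch are invented for illustration, not measured:

    # Throughput is roughly clock speed x work done per clock tick (IPC).
    def throughput(clock_ghz, ipc):
        # Billions of useful instructions completed per second.
        return clock_ghz * ipc

    cisc_style = throughput(clock_ghz=3.0, ipc=1.0)   # high clock, less work per tick
    risc_style = throughput(clock_ghz=2.0, ipc=1.5)   # lower clock, more work per tick
    print(cisc_style, risc_style)   # 3.0 3.0 - a dead heat despite the GHz gap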

Apple's new G5 systems, unveiled last summer, offer a dramatic demonstration of how performance can be influenced by more than simply clock speed. Despite running at "only" 2GHz, the 64-bit chips in the G5 are also attached to two 64-bit data buses running at 1GHz, and to RAM that runs at 400MHz. There's no barrier between the chip and the RAM (so data can tear in and out) and while Apple's claim that it is the "world's fastest desktop computer" has been rejected by advertising standards authorities, it has clearly got the right stuff to be very fast: quick input, quick output, and a fast chip in between.
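Taking those figures at face value, the raw numbers work out as follows - a simple peak-rate calculation, ignoring all real-world overheads:

    # Peak bandwidth of a data bus: width in bytes x ticks per second.
    def peak_bandwidth_gb_s(width_bits, clock_ghz):
        return (width_bits / 8) * clock_ghz   # gigabytes per second

    one_bus = peak_bandwidth_gb_s(width_bits=64, clock_ghz=1.0)   # 8 GB/s
    print(one_bus * 2)   # two such buses: 16 GB/s of raw headroom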

However, I doubt that Intel's move away from the world of megahertz and gigahertz is really going to carry over to the people who write ads for PC superstores, who love a nice big number. And I don't envy the sales staff there who will have to explain what the differences are. But perhaps they will start asking the most important question of buyers, which is not "How fast do you want it?" but "What do you want to do with it?".

network@independent.co.uk
