CODING ★ BENCHMARKS (COMPUTING TODAY) ★


Speed is one feature that is often used to describe the performance of a particular microcomputer. But how reliable are the tests that we use to measure this quantity, and how can we guarantee the consistency of such tests?

In 1977, Kilobaud magazine introduced a set of benchmark tests intended to suit microcomputers. Of late, the results given by different machines in execution of these tests have sometimes been a little confusing, and there have been suggestions that a revised set of tests might be considered. It would be interesting to learn what our readers think about this, but perhaps it would do no harm to take a closer look at the existing benchmark tests to see whether we understand what they tell us.

BENCHMARK 1

The first test is essentially a FOR loop of 1000 iterations with no internal content:

100 PRINT "S"
200 FOR K = 1 TO 1000
300 NEXT K
400 PRINT "E"
500 END

Execution times for the interval between the display of 'S' and the display of 'E' range from less than a second to five seconds. The execution times for a single iteration of the loop thus amount to similar numbers of milliseconds, it being assumed that the surrounding actions contribute a negligible proportion of the total time.

Looked at dispassionately, these figures are a trifle surprising. What is involved in executing the loop? Line 200 establishes the initial, current and final values of the loop index, and stores them away, with the address of the end of the line (or the beginning of the next line). Line 300 adds the step value, in this case a default of unity, to the index variable, compares the result with the end value, and if that end value has not been reached action returns to the point defined by the stored address, which here is the start of line 300.

We are thus concerned solely with the actions of line 300, which entail an addition, a comparison, and a jump. And that can take five milliseconds, perhaps five thousand machine cycles? Evidently it does. After all, even simple operations on floating point numbers are tedious and complex. But who said anything about floating point numbers? Couldn't integers be used? And some systems run faster if the variable name is not appended to NEXT.

It begins to look as if there are a number of factors which the simple definition of the test fails to take into account. In 1977, it was reasonable to assume that floating point numbers would be involved, and that the full form of the NEXT statement would be used. That is no longer valid.
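As a sketch of how those extra factors might be isolated (assuming a dialect, such as BBC BASIC or Locomotive BASIC, that supports the % integer suffix and allows NEXT without a variable name), the first benchmark could be rerun in a modified form:

100 PRINT "S"
200 FOR K% = 1 TO 1000
300 NEXT
400 PRINT "E"
500 END

Comparing this against the original timing would separate the cost of floating point index arithmetic, and restoring the full NEXT K% would show what the name check after NEXT costs on a given machine.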

However, we have noted that the test establishes execution time for an addition, a comparison and a jump, plus perhaps some time used in scanning back through the stored loop data.

BENCHMARK 2

The second test uses a different form of loop:

100 PRINT "S"
200 K = 0
300 K = K + 1
400 IF K < 1000 THEN 300
500 PRINT "E"
600 END

This time, the loop involves two lines. Line 300 performs an increment, and line 400 checks the result against a constant value, jumping back if that value has not yet been reached. There is, as before, an addition, a comparison and a jump, but the execution times are much longer, between 3 and 9 milliseconds per iteration. Now why should that be? The difference is that the location of variable K has to be determined three times, whereas with the FOR loop it can be found more directly by reference to stacked values. And in some systems the numeric value 1000 has to be converted to binary, whereas in other systems the conversion is performed when the program line is entered. There is not enough evidence to show how much these factors contribute individually to the increase in execution time, but it appears that identifying and fetching a variable may take around a millisecond, certainly no more.
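One hedged way to put a figure on the lookup cost, assuming the interpreter searches its variable table linearly (a hashed table would show no change): declare a few dummy variables before K, so that every search for K must scan past them. A sketch, not part of the published test set:

100 PRINT "S"
110 A = 0: B = 0: C = 0: D = 0: E = 0
200 K = 0
300 K = K + 1
400 IF K < 1000 THEN 300
500 PRINT "E"
600 END

Any increase over benchmark 2 reflects the five extra table entries scanned on each of the three lookups per iteration, giving a rough figure for the cost of scanning one entry.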

BENCHMARK 3

The third test inserts a new line into the previous listing:

310 A=K/K*K+K-K

This adds between 5 and 13 milliseconds per iteration, the time taken to perform an addition, a subtraction, a multiplication and a division. Six references to variables are needed, which accounts for a good deal of that time, and we can dimly begin to see how the extra time is split up. Remembering that multiplication and division in floating point are inherently faster than addition and subtraction, we can start a tentative allocation of individual times. By leaving out one term or another in line 310, we could obtain some fairly precise figures.
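For instance (a variation of our own, not part of the original test set), line 310 could be replaced in successive runs by:

310 A=K
310 A=K+K
310 A=K-K
310 A=K*K
310 A=K/K

The first form times the assignment and two variable references; subtracting its result from each of the others leaves a figure for one operator plus one extra variable reference. The comparison is not perfectly clean, but it narrows the individual times down considerably.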

BENCHMARK 4

Next, a different line 310 is used:

310 A=K/2*3+4-5

Two references to variables are needed; the other operands come as constants from the line itself. The execution times do not differ greatly from those of the third test, but the difference is not consistent. Some machines are slower, others faster. Those which are faster usually convert numerics to floating point at the time of entry, and therefore have no need to convert from decimal at run time.

BENCHMARK 5

The fifth test uses a similar structure, but introduces a dummy subroutine call, the added lines being:

320 GOSUB 700
700 RETURN

The previous line 310 is retained.

This is where the differences really begin to show. The added time for each iteration ranges from 400 microseconds to 4 milliseconds. Yet the actions needed are superficially the same; the end of the GOSUB statement must be marked and stored, the address of the target line must be found, and the scanning pointer must be set to point to that line. Then the RETURN command involves resetting the scanning pointer to the end of the GOSUB statement. So wide a range of times suggests that the detail of these actions is implemented in very different ways in different machines.

BENCHMARK 6

So far, the tests have proceeded rationally, provided it is clear that the significant figures are the differences between one test and the next, but we now stray into confusion. Test six adds the following lines:

250 DIM M(5)
330 FOR L= 1 TO 5
340 NEXT L

The dimensioning statement is irrelevant, since it lies outside the main loop, and is not used until the next test. The added FOR loop increases the time by between 4.6 and 43 seconds. This is more than might be expected from benchmark 1, but it must be remembered that in this case the loop is being set up a thousand times. If we care to work it out, we can separate the times for setting up and executing the loop. For a Spectrum, the iteration time is about 5 ms, so executing the loop five times in this benchmark should take 25 ms. The increase in time is about 43 ms per pass of the main loop, so setting up the loop takes about 18 ms. The Amstrad CPC464 takes about 1.14 ms to execute an iteration of the loop, or 5.7 ms for five iterations. The added time is 8.9 ms, so setting up takes 3.2 ms. The information is there if you look for it.
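The same separation can be made directly, rather than by borrowing a figure from benchmark 1. A sketch of the idea (a variation on the published test, not part of it): shrink the inner loop to a single iteration and rerun,

330 FOR L = 1 TO 1
340 NEXT L

If the time this version adds per pass of the main loop is S + I (one setting up plus one iteration), the time added by benchmark 6 is S + 5I, so the difference between the two runs is 4I, from which both S and I follow.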

BENCHMARKS 7 & 8

Test 7 inserts one further line:

335 M(L) = A

This adds between 7.6 and 18.5 seconds overall. Since line 335 is executed five times in each of the 1000 passes of the main loop, that amounts to 1.5 to 3.7 milliseconds for each execution of the line.
Benchmark 8 goes off on a different track:

100 PRINT "S”
200 K=0 300 K = K+1
330 A = K 2
340 B = LOG(K)
350 C = SIN(K)
400 IF K 1000 THEN 300
500 PRINT "E"
600 END

This clearly needs to be compared with benchmark 2 to make sense, since lines 330 to 350 are added to the earlier benchmark. The added times range from about 1.6 seconds to 30.9 seconds, which could be interpreted in two different ways; either the slower machines are making a meal of the job, or the faster machines are skimping it. The three added commands all involve the use of a mathematical series. The more terms there are in the series, the more accurate will be the result, assuming that the terms are correctly proportioned. A very short series may be accurate over a given range, but will deviate in extreme cases. That may satisfy those who only care about speed, but not those who want accurate results. To be meaningful, this test should provide an indication of accuracy as well as speed.
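A rough accuracy companion to benchmark 8 might exercise the same functions against known identities. A sketch only; the test points are an arbitrary choice of ours:

100 FOR K = 1 TO 10
110 PRINT SIN(K)*SIN(K) + COS(K)*COS(K) - 1
120 PRINT LOG(EXP(K)) - K
130 NEXT K

Both expressions should print zero on a perfect machine; the size of the residues gives a crude measure of how severely the series have been truncated.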

It is not unusual to complete a set of bench test results by giving an overall average figure. I've done it myself, but I now realise that it makes a nonsense of the results. The absolute test figures are not what is important; it is necessary to look at the differences between one test and another.

All this could be taken to mean that the value of the existing tests is at least dubious. That is not so, providing they are properly understood and interpreted. However, it does seem that improvement should be possible, if only to separate out the different individual functions more clearly.

There are those who have expressed the view that such tests are all nonsense, anyway. If you are concerned about the speed of BASIC, you shouldn't be using it. There are a number of other factors which are more significant in assessing the value of a machine, such as available store space and screen control flexibility. These would be difficult to codify into anything approaching a figure of merit, and there would probably be endless arguments about the relevance of each and every factor.

What do you think? How do you judge one machine against another? Let us have your views, and I will try to analyse them and combine the ideas which seem most productive. If you think bench tests are a load of nonsense, say so. That's the kind of thing we need to know.

BENCHMARKS ANALYSED
All times in seconds.

Machine           BM1   BM2   BM3  BM3-BM2   BM4  BM4-BM2   BM5  BM5-BM4   BM6  BM6-BM5   BM7  BM7-BM6   BM8  BM8-BM2
ZX Spectrum       4.9   9.0  21.9     12.9  20.7     11.7  25.2      4.5  68.2     43.0  86.7     18.5  25.1     16.1
Dragon 32         1.2   9.1  17.7      8.6  19.2     10.1  22.2      3.0  31.1      8.9  44.7     13.6  10.8      1.7
Commodore 64      1.2   9.3  17.6      8.3  19.5     10.2  21.0      1.5  29.5      8.5  47.5     18.0  11.3      2.0
Osborne I         1.5   4.6  12.1      7.5  11.9      7.3  12.9      1.0  36.1     12.2  49.6     13.5   6.2      1.6
Sirius I          1.8   5.3  10.6      5.3  10.9      5.6  12.6      1.7  23.4     10.8  35.9     12.5  11.3      6.0
Amstrad CPC464    1.1   3.3   9.2      5.9   9.6      6.3  10.2      0.6  19.2      9.0  30.3     11.1  34.2     30.9
BBC Micro         0.8   3.1   8.3      5.2   8.7      5.6   9.1      0.4  13.7      4.6  21.3      7.6   5.3      2.2
The benchmark times quoted were taken from the table published in Microchoice, less the Newbrain and with the addition of the Amstrad CPC464. Note that BM8 for the CPC464 does not agree with the figure published in a recent CT: it has been rechecked, as the published figure was clearly too small to be true, being no larger than the time for BM2. This alone shows the value of taking differences between benchmarks. All Computing Today benchmarks are performed using 1000 iterations.

Computing Today (1985)

★ YEAR: 1985
★ AUTHOR: Don Thomasson
 

