
## Reliable PC benchmark

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

21 replies to this topic

### #1 fir (Members)

Posted 08 March 2014 - 02:47 PM

(sorry if I'm writing unclearly, my head aches a bit today)

I would like to find a reliable benchmark for PC machines across their historical course (386, 486, ... Pentium 1, 2, 3, 4 ... Sandy Bridge, Ivy Bridge, etc.).

Is there something like that? I always need it but cannot find it.

Something like the time taken to compress a set of files, as WinRAR does (vaguely remembered), could be OK, or something similar. The test should be fair, though. I mean the same binary run on every machine, or the same source compiled with the best available compiler in each case, with no 'tendentious' tweaks - something fair. Is there something like that?

### #2 ApochPiQ (Moderators)

Posted 08 March 2014 - 03:07 PM

There can't be a benchmark that fairly compares the features of a modern CPU to a 386-era CPU.

SIMD instruction sets, changes in superscalar pipeline architecture, cache sizes, and even die processing resolution can all have dramatic effects on CPU performance. You also need to consider the fact that much work in CPU design in the past decade has gone into power consumption reduction rather than outright speed of computation.

Benchmarks have changed almost as much as CPU hardware has changed in the past 20 years.
Wielder of the Sacred Wands

### #3 fir (Members)

Posted 08 March 2014 - 03:24 PM

> There can't be a benchmark that fairly compares the features of a modern CPU to a 386-era CPU.
>
> SIMD instruction sets, changes in superscalar pipeline architecture, cache sizes, and even die processing resolution can all have dramatic effects on CPU performance. You also need to consider the fact that much work in CPU design in the past decade has gone into power consumption reduction rather than outright speed of computation.
>
> Benchmarks have changed almost as much as CPU hardware has changed in the past 20 years.

If you have some binary for Win32 that calculates digits of pi or something like that, and you run it on the whole set of PC machines I mentioned, I would call and understand that as a fair test (and that is just what I'm searching for).

It is fragmentary and not complete (because newer machines offer new instruction sets and other possible optimizations), but I'm just searching for this weaker kind of 'fair'.

(The other thing you describe I would call 'super-fair', and that may well be impossible, but I need just the first kind - I mean tests whose workloads are unchanged or clearly stated, and ideally described well enough to see what fragment of the software they compare.)

### #4 Bacterius (Members)

Posted 08 March 2014 - 04:26 PM

> It is fragmentary and not complete (because newer machines offer new instruction sets and other possible optimizations), but I'm just searching for this weaker kind of 'fair'.

So what's the point of the benchmark then?

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

### #5 fir (Members)

Posted 08 March 2014 - 04:56 PM

> It is fragmentary and not complete (because newer machines offer new instruction sets and other possible optimizations), but I'm just searching for this weaker kind of 'fair'.
>
> So what's the point of the benchmark then?

To build a more complete view out of a whole set of fragmentary benchmarks, or, failing that, just a fragmentary view.

### #6 Nypyren (Members)

Posted 08 March 2014 - 05:00 PM

Testing only a single thing such as computing digits of pi won't satisfy many people's needs. I could have a superfast CPU, and a really crappy bus, and it would compute pi fine but suck at everything else.

Composite benchmarks have the opposite problem - your score can be nerfed by a single slow result.

I'm interested in benchmarks which perform all combinations - individual and composite tests. This gives you much more insight into how the computer behaves in all usage patterns.

Edited by Nypyren, 08 March 2014 - 05:03 PM.

### #7 fir (Members)

Posted 08 March 2014 - 05:48 PM

> Testing only a single thing such as computing digits of pi won't satisfy many people's needs. I could have a superfast CPU, and a really crappy bus, and it would compute pi fine but suck at everything else.
>
> Composite benchmarks have the opposite problem - your score can be nerfed by a single slow result.
>
> I'm interested in benchmarks which perform all combinations - individual and composite tests. This gives you much more insight into how the computer behaves in all usage patterns.

I'm interested too, so if someone knows some links I would like to see them (I haven't done an extensive Google search recently, but maybe someone was interested in this before and knows something).

There are benchmarks of a kind in old system-info software and in magazines, but their point scores were hard to interpret. I would just like some C or asm code benchmarks, maybe.

### #8 Norman Barrows (Members)

Posted 08 March 2014 - 05:52 PM

benchmarks should test the type of performance you're interested in (processor, I/O, graphics, etc).

going back to machines as old as the 386, about the only benchmarks you'll find data for are MIPS and FLOPS, bus speeds and size, and hard drive seek times - not much to go on. Moving to newer, more parallel architectures, these benchmarks are not as solid as they once were. Once you get into 486's you might find old graphics benchmark software data, but it's unlikely you'll find the same benchmark software data for newer PCs. And then there's stuff like WinMark for later PCs, which may or may not test what you're interested in. PCs have sort of evolved from apples into oranges, making comparisons of older and newer architectures difficult.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

### #9 Hodgman (Moderators)

Posted 08 March 2014 - 09:04 PM


Some links if you want to research this:

http://www.cpubenchmark.net/common_cpus.html

http://queue.acm.org/detail.cfm?id=2181798

http://3dfmaps.com/CPU/cpu.htm

http://www.tomshardware.com/reviews/mother-cpu-charts-2005,1175.html

### #10 HScottH (Members)

Posted 09 March 2014 - 12:13 AM

ApochPiQ is correct: no meaningful programmatic comparison exists. You could, for example, write code and then make several builds, such as:

* Unoptimized

* Generally optimized (such as by the compiler)

* Highly optimized (by a human skilled in the CPU's technology)

But, several questions would frustrate you:

1) How can you be sure about the skill of the optimizer?

2) What compilers should you use (they have changed too)?

In reality, for what you want, a theoretical comparison will be more accurate than anything practical; you can do a better job with spec sheets, a bit of rudimentary knowledge and a pencil and paper than you can with anything in code.

### #11 fir (Members)

Posted 09 March 2014 - 04:12 AM

Alright, thanks. This Fortran thing (the 3rd link) is close to what I'm searching for (almost exactly, but I would like some more tests like it to build a more solid view on this).

But is it reliable?

It contains a Fortran binary and the source. I ran the binary on my old Core 2 Duo and got 0.3 s, which is not much better than the P4 results listed (I could compile the source with GCC if someone hints me how to quickly compile this Fortran stuff with GCC).

As to the result list: it looks reliable, but I am not sure how far I can believe it. It shows, for example, that where a P4 runs it in 0.3 seconds, a Pentium 100 runs it in about 13 seconds, which is about 40 times longer. That is more than I thought. Incidentally, I had a Pentium 100 MHz years ago as a home computer, and then a P4 as a home computer too; the P100 was poor and the P4 was quite a pleasurable machine, but I'm not sure it was a whole 40x faster. Hard to say.

The difference is even bigger when comparing the 386/25, which takes about 300 seconds. That would make it about 23x slower than the Pentium 100 (and about 1000x slower than the P4), which is also a very big difference. I never had a 386, but it seems like more of a gap than I thought. Maybe this is because those models had no FPU (?), and this test is FPU-heavy.

The 486/33 looks about 3 times faster than the 386/25 and about 7-8 times slower than the P100. The Pentium 100 was a weak machine, the worst I ever had, though I remember I could run Quake 1 on it and it ran fluidly, while a school colleague's 486 got about 1/3 of the framerate in Quake, so maybe this 7-8x is plausible.

Does anyone maybe have some more results like this, so I could verify these estimates? (I know this is partial, but as I said, I want to build an estimated view.)

### #12 Norman Barrows (Members)

Posted 09 March 2014 - 12:08 PM

just curious, what are you trying to figure out with all this benchmarking? and why?

there may be other ways to determine the desired information.  perhaps a theoretical calculation, as mentioned above.

as seen in the posts above, benchmarks are not always numbers it's safe to bet on.

in the end, writing your own benchmarks is about the only sure way to test exactly what you're interested in - unless someone else has already written one, and even then you need access to the hardware.


### #13 fir (Members)

Posted 09 March 2014 - 12:37 PM

I made an answer but an error deleted it. In short: I like the estimates (I wanted to know how many times weaker a 386 is than a present PC).

Edited by fir, 09 March 2014 - 12:49 PM.

### #14 King Mir (Members)

Posted 09 March 2014 - 04:07 PM

> ApochPiQ is correct: no meaningful programmatic comparison exists. You could, for example, write code and then make several builds, such as:
> * Unoptimized
> * Generally optimized (such as by the compiler)
> * Highly optimized (by a human skilled in the CPU's technology)
>
> But, several questions would frustrate you:
> 1) How can you be sure about the skill of the optimizer?
> 2) What compilers should you use (they have changed too)?
>
> In reality, for what you want, a theoretical comparison will be more accurate than anything practical; you can do a better job with spec sheets, a bit of rudimentary knowledge and a pencil and paper than you can with anything in code.

I agree that a theoretical comparison is an easier approach than designing a benchmark. In principle you can have a great benchmark if you design it right, but the error in how representative your benchmark is of real code is likely to be much greater than the benefit of real benchmarking. In the worst case, your benchmark could bottleneck somewhere you didn't think of, somewhere real programs don't bottleneck.

I did want to point out that if you did benchmark, it wouldn't hurt to take improvements in compiler technology as part of the comparison; you could use the versions of gcc that were around a certain time after each benchmarked CPU's release.

### #15 Hodgman (Moderators)

Posted 09 March 2014 - 04:52 PM

> I made an answer but an error deleted it. In short: I like the estimates (I wanted to know how many times weaker a 386 is than a present PC).

As has already been explained, that question doesn't make sense.
In order to answer it, you need to specify a specific task/program.
It might be 100x slower at program A, 10x slower at program B and 10000x slower at program C.

### #16 fir (Members)

Posted 09 March 2014 - 05:57 PM

> I made an answer but an error deleted it. In short: I like the estimates (I wanted to know how many times weaker a 386 is than a present PC).
>
> As has already been explained, that question doesn't make sense. In order to answer it, you need to specify a specific task/program. It might be 100x slower at program A, 10x slower at program B and 10000x slower at program C.

I doubt that "it might be 100x slower at program A, 10x slower at program B and 10000x slower at program C." Besides, that does not make the question 'nonsense': I do not want to use the benchmark to strictly reason about my own programs' timings on those machines; a benchmark is used to test the benchmark's own code, and that makes 100% sense.

### #17 Hodgman (Moderators)

Posted 09 March 2014 - 06:41 PM

What is faster, a boat or a car? This question is unanswerable - nonsense, even - because you need more information. Is the race taking place in an ocean or a city?

If you specify a particular situation - e.g. 3km down along a flat road - then it's answerable (the car will win).

Likewise, if you're benchmarking a program based on floating-point numbers, then you'll see a huge leap in performance at the point in time where hardware FPUs became popular. Comparing a CPU without an FPU and a modern CPU is just like comparing a boat and a car.

There are many, many more hardware components that have been added over time to address particular problems, just like the FPU.

### #18 frob (Moderators)

Posted 09 March 2014 - 09:45 PM

So, why?

The key detail I note is:

> the same binary run, or the same source run with the best available compiler in each case, no 'tendentious' tweaks - something fair - is there something like that?

So comparing an identical executable...

On the one hand you have a 25MHz 386. Released in 1985, common in 1986. No floating point. No sound card. No concurrency. No SIMD. 16kB cache, 2MB total memory was normal.

On the other hand you have a 2.5GHz CPU with 8 cores. New releases every month. Automatic concurrency through an OOO core, automatically vectorizing and parallelizing compilers. 8MB on-die cache.

Any kind of trivial benchmark is going to see a 1000 times improvement just because of the nature of the chips, probably 10,000x because the internal design of the x86 processor has been completely replaced.

But then again, the work we do today is also thousands of times more complex.

The "thumbnail" quality images on this site, including the little 'partners' logo at the bottom of the page, are larger and more detailed than you would find almost anywhere in 1986; perhaps there were corporate logos that companies would embed in their apps, but most customers would have complained because of the size. Today we think nothing of a 1MB advertisement, but back then such an image would require a "large" floppy disk (people often bought the cheaper 720KB disks). We routinely handle files that would not physically fit on a computer from that era. The maximum size of a HDD partition back in 1986 was 32 megabytes; you could send an entire HDD from that era as an email attachment, and perhaps as an afterthought wonder if the other person's ISP might reject it due to size constraints. A single 5-minute high quality audio file that you use for background music today quite likely you could not physically fit on a single PC from 1986.

If you are going to make era-to-era comparisons, do it completely. A high-end expensive monitor had 640x350 resolution with an amazing 16-color ADJUSTABLE palette. You could pick any of 64 colors to fill the 16 color slots. That was a splurge, most people of the era had text mode of 80x25 with 16 color text, or graphics mode of 320x200, four color palette. (To be fair you could choose your palette. Magenta+Cyan+White+Custom, or Green+Red+Yellow/Brown+Custom. LOTS of choices) Disk drives were small, and most install disks were shipped on 320kB floppies or 360kB floppies, depending on which kind of floppies were used. If you had a modem you could get 1200 bps usually, 2400 on nice equipment. Yes, that is about 100 characters per second. That kind of computer cost around $3,000 in 1986; adjusted for inflation, Google says that is about $6,000 in today's money. Not cheap.

So considering that personal computing has fundamentally changed many times over the years, what possible use would such a benchmark give? I've read old reports where people make broad claims about what we 'should' be seeing because they compare how programs like Windows 386 perform on more current hardware, only to watch them explode in a fireball when people point out the comparison is useless. Would you expose that computers are 10,000x faster today and also that they do 10,000x more work?

Check out my book, Game Development with Unity, aimed at beginners who want to build fun games fast.

Also check out my personal website at bryanwagstaff.com, where I occasionally write about assorted stuff.

### #19 fir (Members)

Posted 10 March 2014 - 02:13 AM

> What is faster, a boat or a car? This question is unanswerable, is nonsense, because you need more information. Is the race taking place in an ocean or a city?
>
> If you specify a particular situation - e.g. 3km down along a flat road - then it's answerable (the car will win).

It is not nonsense. I asked about the whole set of specific situations (and that's OK).

### #20 fir (Members)

Posted 10 March 2014 - 02:28 AM

> So, why? [...] So considering that personal computing has fundamentally changed many times over the years, what possible use would such a benchmark give? [...] Would you expose that computers are 10,000x faster today and also that they do 10,000x more work?

I had no such intention (it did not even come to my mind).

I just need a set of specific benchmarks. This Fortran stuff was interesting; I would also like to find something like that, but for a more RAM-intensive benchmark and for an integer-arithmetic-intensive benchmark.

PS. I like to do my own benchmarking, but I have no machines available to test on (I have some P100 machine in the trash, but it has no USB port and has trouble running). For example, here is some benchmark I would like to run on a variety of machines (this is a Win32 program, so I'm not sure it would be possible to run it on a 386).

Could someone here maybe run it on some old or new machine and give me the results? (It's not malware, just a memset test using WinAPI.)

https://www.dropbox.com/s/d0epr8d1drsa4bs/ramset.zip

If so, thanks; it would be informative for me...

Edited by fir, 10 March 2014 - 02:30 AM.
