reliable pc benchmark

20 comments, last by Norman Barrows 10 years, 1 month ago

Alright, thanks. This Fortran thing (the third link) is close* to what I'm searching for.

(* almost exactly, but I would like some more such tests to build a more solid view)

But is it reliable?

It contains a Fortran binary and source. I ran the binary on my old Core 2 Duo and got 0.3 s, which is not much better than the P4 results listed. (I could compile the source with GCC if someone hints me how to quickly compile this Fortran stuff.)
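(For reference, assuming the source is ordinary Fortran, GCC's Fortran front end is the separate gfortran driver, so a basic optimized build is just the following; the filename is a placeholder for whatever is actually in the archive:

    gfortran -O2 bench.f -o bench
    ./bench
)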

As to this result list, the question is whether it can be trusted. It looks reliable, but I am not sure how far I can believe it.

It shows, for example, that where a P4 runs it in 0.3 seconds, a Pentium 100 takes about 13 seconds, roughly 40 times longer. That is more than I thought. Incidentally, I had a Pentium 100 as a home computer years ago and later a P4; the P100 was lousy and the P4 was quite a pleasant machine, but I am not sure it felt like a whole 40x faster. Hard to say.

The difference is even bigger with a 386/25, which takes about 300 seconds. That is quite slow, about 26x slower than the Pentium 100*, also a very big gap. I never owned a 386, but it is more of a difference than I expected. Maybe it is because those models had no FPU and this test is FPU-heavy? (If not, I don't know.)

(* and about 900x between the 386 and the P4)

A 486/33 looks about 3 times faster than the 386/25 and about 7-8 times slower than the P100. (The Pentium 100 was weak, the worst machine I ever owned, though I remember it ran Quake 1 fluidly while a school colleague's 486 got about a third of that frame rate in Quake, so this 7-8x may be reliable.)

Does anyone maybe have some more results like this, so I could verify these estimates? (I know this is partial, but as I said, I want to build an estimated view.)


just curious, what are you trying to figure out with all this benchmarking? and why?

there may be other ways to determine the desired information. perhaps a theoretical calculation, as mentioned above.

as seen in the posts above, benchmarks are not always numbers it's safe to bet on.

in the end, writing your own benchmark is about the only sure way to test exactly what you're interested in - unless someone else has already written one, and even then you need access to the hardware.
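for example, a bare-bones roll-your-own benchmark in the spirit of that fortran FPU test might look like this (just a sketch - the loop count is a placeholder, scale it so the slowest machine you care about needs at least a few seconds):

#include <stdio.h>
#include <time.h>

/* minimal FPU micro-benchmark sketch - not a standard test, just an
   illustration. ITERS is a placeholder; tune it to your machines. */
int main(void)
{
    const long ITERS = 10000000L;
    double x = 0.0;
    long i;
    clock_t start = clock();
    for (i = 1; i <= ITERS; ++i)
        x += 1.0 / (double)i;   /* one divide and one add per pass */
    /* print x so the compiler can't optimize the loop away */
    printf("sum=%f time=%.3f s\n", x,
           (double)(clock() - start) / CLOCKS_PER_SEC);
    return 0;
}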

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

I made an answer but an error deleted it. In short: I like the estimates (I wanted to know how many times weaker a 386 is than a present PC).

ApochPiQ is correct: no meaningful programmatic comparison exists. You could, for example, write code and then make several builds, such as:
* Unoptimized
* Generally optimized (such as by the compiler)
* Highly optimized (by a human skilled in the CPU's technology)
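Concretely, the three builds might be produced along these lines (the compiler name and flags here are placeholders for whatever toolchains fit the eras being compared):

    cc -O0 bench.c -o bench_unopt    (unoptimized)
    cc -O2 bench.c -o bench_opt      (generally optimized by the compiler)
    (hand-written/hand-scheduled assembly for the specific CPU)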

But several questions would frustrate you:
1) How can you be sure about the skill of the optimizer?
2) What compilers should you use? (They have changed too.)

In reality, for what you want, a theoretical comparison will be more accurate than anything practical; you can do a better job with spec sheets, a bit of rudimentary knowledge and a pencil and paper than you can with anything in code.
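For instance, a rough pencil-and-paper pass (every number here is a ballpark assumption, not a measurement): a 386 averages roughly 4-5 cycles per instruction, so a 386/25 sustains on the order of 25 / 4.5 ≈ 5-6 MIPS. A modern 2.5 GHz core retiring 2-4 instructions per cycle sustains several thousand MIPS, a factor of about 1,000x per core, and with 8 cores you are near 10,000x for integer work before SIMD, caches, or the FPU even enter the picture.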

I agree that a theoretical comparison is an easier approach than designing a benchmark. In principle you can have a great benchmark if you design it right, but the error in how representative your benchmark is of real code is likely to be much greater than the benefit of real measurement. In the worst case, your benchmark could bottleneck somewhere you didn't think of, somewhere real programs don't bottleneck.

I did want to point out that if you do benchmark, it wouldn't hurt to take improvements in compiler technology into the comparison; you could use versions of GCC from a certain time after each benchmarked CPU's release.

I made an answer but an error deleted it. In short: I like the estimates (I wanted to know how many times weaker a 386 is than a present PC).

As has already been explained, that question doesn't make sense.
In order to answer it, you need to specify a specific task/program.
It might be 100x slower at program A, 10x slower at program B, and 10,000x slower at program C.


As has already been explained, that question doesn't make sense.
In order to answer it, you need to specify a specific task/program.
It might be 100x slower at program A, 10x slower at program B, and 10,000x slower at program C.

I doubt that "it might be 100x slower at program A, 10x slower at program B, and 10,000x slower at program C." Besides, that does not make the question 'nonsense'. I do not want to use the benchmark to strictly reason about my own program's timings on those machines; a benchmark is used to test the benchmark code, and that makes 100% sense.

What is faster, a boat or a car? This question is unanswerable, is nonsense, because you need more information. Is the race taking place in an ocean or a city?

If you specify a particular situation - e.g. 3km down along a flat road - then it's answerable (the car will win).

Likewise, if you're benchmarking a program based on floating-point numbers, then you'll see a huge leap in performance at the point in time where hardware FPUs became common. Comparing a CPU without an FPU to a modern CPU is just like comparing a boat and a car.

There are many, many more hardware components that have been added over time to address particular problems, just like the FPU.
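A tiny sketch of why the ratio swings so much with workload: two loops, one integer-bound and one floating-point-bound. On a 386 without an FPU the second loop falls back to software emulation and slows down by orders of magnitude, while the first does not (the loop count here is an arbitrary placeholder):

#include <stdio.h>
#include <time.h>

/* illustrative only: the same pair of machines can give very
   different speedup ratios depending on which loop dominates */
static double time_loop(int use_float)
{
    const long N = 5000000L;
    long i;
    clock_t t0 = clock();
    if (use_float) {
        volatile double acc = 1.0;               /* FPU-bound work */
        for (i = 0; i < N; ++i) acc *= 1.0000001;
    } else {
        volatile long acc = 1;                   /* integer ALU work */
        for (i = 0; i < N; ++i) acc += i ^ (acc >> 3);
    }
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    printf("int loop:   %.3f s\n", time_loop(0));
    printf("float loop: %.3f s\n", time_loop(1));
    return 0;
}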

So, why?

The key detail I note is:

the same binary run, or the same source built with the best available compiler in each case, no 'tendentious' tweaks - something fair - is there something like that?

So comparing an identical executable...

On the one hand you have a 25MHz 386. Released in 1985, common in 1986. No floating point. No sound card. No concurrency. No SIMD. 16kB cache, 2MB total memory was normal.

On the other hand you have a 2.5GHz, 8-core CPU. New releases every month. Automatic concurrency through an OOO core, automatically vectorizing and parallelizing compilers. 8MB on-die cache.

Any kind of trivial benchmark is going to see a 1000 times improvement just because of the nature of the chips, probably 10,000x because the internal design of the x86 processor has been completely replaced.

But then again, the work we do today is also thousands of times more complex.

The "thumbnail" quality images on this site, including the little 'partners' logo at the bottom of the page, are larger and more detailed than you would find almost anywhere in 1986; perhaps there were corporate logos that companies would embed in their apps, but most customers would have complained because of the size. Today we think nothing of a 1MB advertisement, but back then such an image would require a "large" floppy disk (people often bought the cheaper 720KB disks). We routinely handle files that would not physically fit on a computer from that era. The maximum size of a HDD partition back in 1986 was 32 megabytes; you could send an entire HDD from that era as an email attachment, and perhaps as an afterthought wonder if the other person's ISP might reject it due to size constraints. A single 5-minute high quality audio file that you use for background music today quite likely you could not physically fit on a single PC from 1986.

If you are going to make era-to-era comparisons, do it completely. A high-end, expensive monitor had 640x350 resolution with an amazing 16-color ADJUSTABLE palette: you could pick any of 64 colors to fill the 16 color slots. That was a splurge; most people of the era had text mode of 80x25 with 16-color text, or a graphics mode of 320x200 with a four-color palette. (To be fair, you could choose your palette: Magenta+Cyan+White+Custom, or Green+Red+Yellow/Brown+Custom. LOTS of choices.) Disk drives were small, and most install disks were shipped on 320kB or 360kB floppies, depending on which kind was used. If you had a modem you could usually get 1200 bps, 2400 on nice equipment. Yes, that is about 100 characters per second. That kind of computer cost around $3,000 in 1986; adjusted for inflation, Google says that is about $6,000 in today's money. Not cheap.

So, considering that personal computing has fundamentally changed many times over the years, what possible use would such a benchmark give? I've read old reports where people make broad claims about what we 'should' be seeing because they compare how programs like Windows/386 perform on more current hardware, only to watch them explode in a fireball when people point out the comparison is useless. Would you point out that computers are 10,000x faster today, and also that they do 10,000x more work?

What is faster, a boat or a car? This question is unanswerable, is nonsense, because you need more information. Is the race taking place in an ocean or a city?

If you specify a particular situation - e.g. 3km down along a flat road - then it's answerable (the car will win).

It is not nonsense. I asked about the whole set of specific situations (and that's OK).

So, considering that personal computing has fundamentally changed many times over the years, what possible use would such a benchmark give? [...] Would you point out that computers are 10,000x faster today, and also that they do 10,000x more work?

I had no such intention (it did not even come to my mind).

I just need a set of specific benchmarks. This Fortran stuff was interesting; I would like to find something like that but for a more RAM-intensive benchmark, and for an integer-intensive arithmetic benchmark.

PS: I like to do my own benchmarking, but I have no machines available to test on (I got some P100 machine from the trash, but it has no USB port and has trouble running). For example, here is a benchmark I would like to run on a variety of machines (it is a Win32 program, so I am not sure it could be run on a 386). Could someone here maybe run it on some old or new machine and give me the results? (It is not malware, just a memset test in WinAPI.)

https://www.dropbox.com/s/d0epr8d1drsa4bs/ramset.zip

If so, thanks - it would be informative for me...
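For anyone wary of running an unknown binary, here is a minimal sketch of what a memset test along these lines could look like - I have not opened the archive, so this is a guess at the idea, and the buffer size and repetition count are placeholders:

#include <windows.h>
#include <stdio.h>
#include <string.h>

/* guess at a "memset test in winapi" - not the actual ramset.zip code.
   64 MB and 32 passes are arbitrary placeholders. */
int main(void)
{
    const SIZE_T SIZE = 64u * 1024 * 1024;
    const int REPS = 32;
    int r;
    LARGE_INTEGER freq, t0, t1;
    unsigned char *buf = (unsigned char *)VirtualAlloc(
        NULL, SIZE, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!buf) return 1;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    for (r = 0; r < REPS; ++r)
        memset(buf, r, SIZE);               /* the work being timed */
    QueryPerformanceCounter(&t1);

    printf("%.0f MB/s\n",
           (double)SIZE * REPS / (1024.0 * 1024.0) /
           ((double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart));
    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}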

This topic is closed to new replies.
