fir

reliable pc benchmark


(sorry if I'm writing unclearly, my head aches a bit today)

 

I would like to find a reliable benchmark for PC machines across the historic range (386, 486, ... Pentium 1, 2, 3, 4 ... Sandy Bridge, Ivy Bridge, etc.).

Is there something like that? I have always needed it but could never find it.

Something like the time WinRAR takes to compress a set of files (as I vaguely remember seeing somewhere) could be OK, or something similar. The test should be fair, though: the same binary run on each machine, or the same source built with the best available compiler in each case, with no 'tendentious' tweaks. Is there something fair like that?

 


There can't be a benchmark that fairly compares the features of a modern CPU to a 386-era CPU.

SIMD instruction sets, changes in superscalar pipeline architecture, cache sizes, and even the fabrication process node can all have dramatic effects on CPU performance. You also need to consider that much of the work in CPU design over the past decade has gone into reducing power consumption rather than raw speed of computation.

Benchmarks have changed almost as much as CPU hardware has changed in the past 20 years.

 

If you have some Win32 binary that calculates digits of pi or something like that, and you run it on the whole set of PC machines I mentioned, I would call that a fair test and understand it as such (and that is exactly what I'm searching for).
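For illustration, here is a minimal sketch of the kind of fixed-workload test meant here, assuming plain C and the standard clock() timer; it is a hypothetical example, not an existing benchmark:

/* Hypothetical fixed-workload test: the same binary runs the same pi
   approximation on every machine and reports the elapsed time. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    const long terms = 100000000L;            /* fixed amount of work */
    double sum = 0.0, sign = 1.0;
    clock_t t0 = clock();
    for (long k = 0; k < terms; k++) {        /* Leibniz series for pi/4 */
        sum += sign / (2.0 * k + 1.0);
        sign = -sign;
    }
    clock_t t1 = clock();
    printf("pi ~= %.10f, %.2f s\n", 4.0 * sum,
           (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}

Note that on a 386 without an FPU the double arithmetic here would run through software emulation, so even this tiny test already mixes "CPU speed" with "has an FPU at all", which is part of the fairness problem discussed later in the thread.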

 

Such a test is fragmentary and not complete (because newer machines offer new instruction sets and other possible optimizations), but I'm only asking for that weaker kind of 'fair'.

(The other thing you describe I would call 'super-fair', and that may well be impossible, but I need only the first kind: tests whose workload is unchanged or clearly stated, and ideally described well enough to see which aspect of the software they compare.)



Such a test is fragmentary and not complete (because newer machines offer new instruction sets and other possible optimizations), but I'm only asking for that weaker kind of 'fair'.

 

So what's the point of the benchmark then?


 


So what's the point of the benchmark then?

 

 

The point is to build a more complete view from a whole set of fragmentary benchmarks, or failing that, at least a fragmentary view.

Testing only a single thing such as computing digits of pi won't satisfy many people's needs. I could have a super-fast CPU and a really crappy bus, and it would compute pi fine but suck at everything else.

Composite benchmarks have the opposite problem: your score can be nerfed by a single slow result.

I'm interested in benchmarks which perform all combinations, individual and composite tests. This gives you much more insight into how the computer behaves in all usage patterns.

Edited by Nypyren
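For a sense of why a single slow result drags a composite score down, here is a tiny sketch of one common way composite scores are formed, a geometric mean of per-test speedups against a reference machine; the scoring formula and the numbers are illustrative assumptions, not any specific benchmark suite:

/* Illustrative composite score: geometric mean of per-test speedups
   relative to a reference machine. One slow sub-test pulls the whole
   composite down even if every other test is fast.
   Build with: gcc compscore.c -lm */
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* made-up speedups vs. the reference: pi, memory, compression, disk */
    double speedup[] = { 40.0, 35.0, 30.0, 2.0 };   /* disk barely improved */
    int n = sizeof(speedup) / sizeof(speedup[0]);

    double log_sum = 0.0;
    for (int i = 0; i < n; i++)
        log_sum += log(speedup[i]);
    double composite = exp(log_sum / n);            /* geometric mean */

    printf("composite speedup: %.1fx\n", composite); /* ~17x, not ~35x */
    return 0;
}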

I'm interested in benchmarks which perform all combinations, individual and composite tests.

 

I'm interested too, so if someone knows some links I would like to see them. (I have not done an extensive Google search recently, but maybe someone here looked into this before and knows something. There were benchmarks of a sort in old system-info software and in magazines, but their point scores were hard to interpret. I would prefer plain C or asm code benchmarks, maybe.)


ApochPiQ is correct: no meaningful programmatic comparison exists. You could, for example, write code and then make several builds, such as:

* Unoptimized

* Generally optimized (such as by the compiler)

* Highly optimized (by a human skilled in that CPU's technology)

But several questions would frustrate you:

1) How can you be sure about the skill of the optimizer?

2) Which compilers should you use (they have changed too)?

In reality, for what you want, a theoretical comparison will be more accurate than anything practical; you can do a better job with spec sheets, a bit of rudimentary knowledge, and a pencil and paper than you can with anything in code.
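As an illustration of that pencil-and-paper approach, here is a back-of-envelope calculation in the same spirit; every figure in it is an illustrative guess for the sake of the arithmetic, not a measured spec:

/* Back-of-envelope "spec sheet" comparison. All numbers below are rough
   illustrative guesses, not measured data. */
#include <stdio.h>

int main(void)
{
    /* old machine: 386/25, roughly one instruction every other cycle, 1 core */
    double old_hz = 25e6,  old_ipc = 0.5, old_cores = 1.0;
    /* new machine: 2.5 GHz, ~2 sustained instructions per cycle, 4 cores */
    double new_hz = 2.5e9, new_ipc = 2.0, new_cores = 4.0;

    double ratio = (new_hz * new_ipc * new_cores) /
                   (old_hz * old_ipc * old_cores);
    printf("rough peak throughput ratio: %.0fx\n", ratio);  /* 1600x here */
    return 0;
}

Caches, memory bandwidth, and SIMD or FPU availability can easily move such an estimate by another order of magnitude in either direction, which is exactly why a single number is of limited use.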


 

Alright, thanks. This Fortran thing (the 3rd link) is close to what I'm searching for (almost exactly, but I would like a few more tests like it to build a more solid view).

But is it reliable?

It contains a Fortran binary and the source. I ran the binary on my old Core 2 Duo and got 0.3 s, which is not much better than the P4 results listed. (I could compile the source myself if someone hints me how to quickly compile this Fortran stuff with gcc.)

As to the result list, the question is simply: is it reliable? It looks reliable, but I am not sure how far I can believe it.

It shows, for example, that where a P4 runs it in 0.3 seconds, a Pentium 100 takes about 13 seconds, roughly 40 times longer. That is more than I thought. Incidentally, I had a Pentium 100 MHz as a home computer years ago and later a P4 as a home computer too; the P100 was rubbish and the P4 was quite a pleasant machine, but I'm not sure it was a whole 40x faster, hard to say.

The difference is even bigger when comparing a 386/25, which takes about 300 seconds. That is quite slow, about 26x slower than the Pentium 100, which is also a very big gap (and about 900x between the P4 and the 386). I never had a 386, but it seems like more of a difference than I expected; maybe that is because those models had no FPU (I don't know for sure, and this test is FPU-heavy).

A 486/33 looks about 3 times faster than a 386/25 and about 7-8 times slower than a P100. (The Pentium 100 was a weak machine, the worst I ever owned, though I remember I could run Quake 1 on it and it ran fluidly, while a school colleague with a 486 got about a third of the framerate in Quake, so this 7-8x may be plausible.)

Does someone maybe have some more results like this so I could verify these estimates? (I know it is partial, but as I said, I want to build an estimated view.)


just curious, what are you trying to figure out with all this benchmarking? and why?

there may be other ways to determine the desired information - perhaps a theoretical calculation, as mentioned above.

as seen in the posts above, benchmarks are not always numbers it's safe to bet on.

in the end, writing your own benchmarks is about the only sure way to test exactly what you're interested in - unless someone else has already written one, and then you still need access to the hardware.


I made a reply but an error deleted it. In short: I like estimates (I wanted to know how many times weaker a 386 is than a present PC).

Edited by fir

In reality, for what you want, a theoretical comparison will be more accurate than anything practical; you can do a better job with spec sheets, a bit of rudimentary knowledge, and a pencil and paper than you can with anything in code.

I agree that a theoretical comparison is an easier approach than designing a benchmark. In principle you can have a great benchmark if you design it right, but the error in how representative your benchmark is of real code is likely to be much greater than the benefit of actually running it. In the worst case, your benchmark could bottleneck somewhere you didn't think of, somewhere real programs don't bottleneck.

I did want to point out that if you do benchmark, it wouldn't hurt to include improvements in compiler technology as part of the comparison; you could use the versions of gcc that were current some fixed time after each benchmarked CPU's release.

 

I made a reply but an error deleted it. In short: I like estimates (I wanted to know how many times weaker a 386 is than a present PC).

As has already been explained, that question doesn't make sense as asked.
In order to answer it, you need to specify a specific task or program.
A 386 might be 100x slower at program A, 10x slower at program B, and 10000x slower at program C.

 

 

I doubt that "it might be 100x slower at program A, 10x slower at program B and 10000x slower at program C". Besides, that does not make the question nonsense. I do not want to use a benchmark to reason strictly about my own program's timings on those machines; a benchmark measures the benchmark code, and that makes 100% sense.


What is faster, a boat or a car? That question is unanswerable, is nonsense, because you need more information. Is the race taking place in an ocean or a city?

If you specify a particular situation - e.g. 3 km along a flat road - then it's answerable (the car will win).

Likewise, if you're benchmarking a program based on floating-point numbers, then you'll see a huge leap in performance at the point in time where hardware FPUs became popular. Comparing a CPU without an FPU to a modern CPU is just like comparing a boat and a car.

There are many, many more hardware components that have been added over time to address particular problems, just like the FPU.

So, why?

The key detail I note is:

"the same binary run on each machine, or the same source built with the best available compiler in each case, with no 'tendentious' tweaks ... Is there something fair like that?"

So, comparing an identical executable...

On the one hand you have a 25 MHz 386. Released in 1985, common in 1986. No floating point. No sound card. No concurrency. No SIMD. 16 kB of cache and 2 MB of total memory were normal.

On the other hand you have 2.5 GHz and 8 CPU cores, with new releases every month. Automatic concurrency through an out-of-order core, automatically vectorizing and parallelizing compilers, 8 MB of on-die cache.

Any kind of trivial benchmark is going to see a 1000x improvement just because of the nature of the chips, probably 10,000x because the internal design of the x86 processor has been completely replaced.

But then again, the work we do today is also thousands of times more complex.

The "thumbnail" quality images on this site, including the little 'partners' logo at the bottom of the page, are larger and more detailed than you would find almost anywhere in 1986; perhaps there were corporate logos that companies would embed in their apps, but most customers would have complained about the size. Today we think nothing of a 1 MB advertisement, but back then such an image would require a "large" floppy disk (people often bought the cheaper 720 KB disks). We routinely handle files that would not physically fit on a computer from that era. The maximum size of an HDD partition back in 1986 was 32 megabytes; you could send an entire HDD from that era as an email attachment, and perhaps as an afterthought wonder if the other person's ISP might reject it due to size constraints. A single 5-minute high-quality audio file that you use for background music today quite likely could not physically fit on a single PC from 1986.

If you are going to make era-to-era comparisons, do it completely. A high-end, expensive monitor had 640x350 resolution with an amazing 16-color ADJUSTABLE palette: you could pick any of 64 colors to fill the 16 color slots. That was a splurge; most people of the era had text mode at 80x25 with 16-color text, or graphics mode at 320x200 with a four-color palette. (To be fair, you could choose your palette: Magenta+Cyan+White+Custom, or Green+Red+Yellow/Brown+Custom. LOTS of choices.) Disk drives were small, and most install disks shipped on 320 kB or 360 kB floppies, depending on which kind of floppies were used. If you had a modem you could usually get 1200 bps, 2400 on nice equipment. Yes, that is about 100 characters per second. That kind of computer cost around $3,000 in 1986; adjusted for inflation, Google says that is about $6,000 in today's money. Not cheap.

So, considering that personal computing has fundamentally changed many times over the years, what possible use would such a benchmark be? I've read old reports where people make broad claims about what we 'should' be seeing because they compare how programs like Windows 386 perform on more current hardware, only to watch them explode in a fireball when people point out the comparison is useless. Would you merely show that computers are 10,000x faster today, and also that they do 10,000x more work?

What is faster, a boat or a car? That question is unanswerable, is nonsense, because you need more information. Is the race taking place in an ocean or a city?

If you specify a particular situation - e.g. 3 km along a flat road - then it's answerable (the car will win).

It is not nonsense. I asked about the whole set of specific situations (and that's OK).


So, considering that personal computing has fundamentally changed many times over the years, what possible use would such a benchmark be? ... Would you merely show that computers are 10,000x faster today, and also that they do 10,000x more work?

 

 

I had no such intention (it did not even come to mind). I just need a set of specific benchmarks. This Fortran stuff was interesting; I would also like to find something like it, but for a more RAM-intensive benchmark and for an integer-arithmetic-intensive benchmark.

PS. I like to do my own benchmarking, but I have no machines available to test on (I have some P100 machine from the trash, but it has no USB port and has trouble running). For example, here is a benchmark I would like to run on a variety of machines (it is a Win32 program, so I'm not sure it could run on a 386).

Could someone here maybe run it on some old or new machine and give me some results? (It's not malware, just a memset test using WinAPI.)

https://www.dropbox.com/s/d0epr8d1drsa4bs/ramset.zip

If so, thanks; it would be informative for me...
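For context, here is a minimal sketch of what a memset bandwidth test of this kind could look like, assuming Win32 and QueryPerformanceCounter for timing; this is a hypothetical reconstruction, not the actual contents of ramset.zip:

/* Hypothetical Win32 memset bandwidth test, similar in spirit to the
   ramset test described above (not the actual ramset.zip code). */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const size_t size = 64 * 1024 * 1024;    /* 64 MB buffer */
    const int    reps = 16;
    char *buf = malloc(size);
    if (!buf) { printf("out of memory\n"); return 1; }

    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    for (int i = 0; i < reps; i++)
        memset(buf, i & 0xFF, size);         /* touch every byte each pass */
    QueryPerformanceCounter(&t1);

    double secs = (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
    double mb   = (double)size * reps / (1024.0 * 1024.0);
    printf("memset: %.0f MB in %.3f s = %.1f MB/s\n", mb, secs, mb / secs);

    free(buf);
    return 0;
}

A 64 MB buffer obviously would not even allocate on a 386-class machine with a few MB of RAM, which again shows how hard it is to keep one workload meaningful across eras.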

Edited by fir


I wanted to know how many times weaker a 386 is than a present PC

start with clock speed, bus speed, memory speed, and something like the number of clock cycles for an add, mul, or floating-point div. fetch times from memory (or indirect addressing clock times) might be useful too.

then you have to figure in multi-core and caching on the new pc. good luck with that one. if you know how the chip and cache work, you could probably figure out some test scenarios with pencil and paper and get some numbers that way (i.e. by theoretical derivation).

and then there's the whole issue of expanded instruction sets on the newer processor. it may be able to do things more efficiently with newer instructions such as mmx, simd, etc. this is where exactly what you want to measure comes into play. so you'll need to measure a number of things to get a general idea of how much faster the new pc is at different things. there may even be a difference in the number of instructions (as well as their clock times) required for something as simple as a fetch, add, and store.

unless it's mission critical, you might just want to say the answer is "it's boatloads faster!" and get back to building games. <g>

if you have access to both pc's, simply decide exactly what you want to measure, exactly how to measure it on both pc's (this will probably be a compromise at best), and write a little benchmark. that will take into account all the new stuff on the newer pc, except perhaps for special instruction sets. and if what you want to measure can use new instruction sets, then you can simply add a test using the new instruction set to your benchmark, which will give you an idea of the performance of the new vs old instruction set.
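As one example of "a little benchmark" aimed at a single thing, here is a sketch that times a dependent chain of floating-point divides; the loop body and iteration count are illustrative choices, and the dependent chain is only there so an optimizing compiler cannot remove or overlap the work:

/* Illustrative micro-benchmark: each divide depends on the previous result,
   so the work cannot be skipped, vectorized, or overlapped. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    volatile double seed = 1.000001;         /* keep the start value out of
                                                compile-time constant folding */
    double x = seed;
    const long n = 50000000L;

    clock_t t0 = clock();
    for (long i = 0; i < n; i++)
        x = 1.0 / (x + 1e-9);                /* dependent divide chain */
    clock_t t1 = clock();

    double secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
    printf("x = %g, about %.1f ns per iteration\n", x, secs * 1e9 / n);
    return 0;
}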


unfortunately, without hardware access to test for yourself, using benchmarks you design to measure exactly what you want, odds are you'll never get the info you're after.

"boatloads faster!" is sounding better and better, eh? <g>

