Norman Barrows

how could unlimited speed and ram simplify game code?


Recommended Posts


I'm regularly stunned when I see my wife turn on her laptop, which is configured in the typical "enterprise way". It takes three minutes to boot (the hell?), and launching Microsoft Outlook, a program that just reads flipping emails, takes a minute displaying the splash screen and another 3-5 minutes to really get up and ready on a good day (it can just as well be 15 minutes at quarter end). You know, reading email already worked surprisingly well in the mid-1990s, with 90s-grade computers and 90s-grade modems.

 

One word:  Security.

 

Not to say that the computer today is definitively more secure, but the large bulk of the time any enterprise product takes to load is spent establishing secure connections with servers. If you look at actual disk and CPU activity, you'll find only about 10-15% load on both.


You would still write as efficiently as you do today, trying to squeeze every last ounce out of the unlimited cycles. It's only human nature. Nothing would change.

You would still write as efficiently as you do today

And how would you measure that efficiency? Your timing results say 0 ms and 0% memory usage, no matter what algorithm you use.
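One hedged answer to that objection: even if every timer reads zero, you can still compare algorithms by counting abstract operations instead of measuring time. A minimal Python sketch (hypothetical instrumentation, not something from the thread):

```python
def bubble_sort_counted(items):
    # Sort a copy of the input while counting comparisons -- a stand-in
    # metric for "efficiency" when wall-clock time always reads zero.
    a = list(items)
    comparisons = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons
```

The comparison count still grows with the algorithm's complexity class even when elapsed time does not, so algorithms remain distinguishable.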

 

I would definitely change my way of coding, and get rid of all the weird stuff you do to make code go faster or use less memory. With unlimited CPU cycles, my time is infinitely more expensive than CPU cycles.

Edited by Alberth
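As an illustration of the kind of "weird stuff" that could be dropped: a common micro-optimization habit is comparing squared distances to dodge a square root. A small Python sketch (hypothetical example, names invented for illustration):

```python
import math

# The optimization habit: avoid the sqrt by comparing squared distances.
def in_range_fast(ax, ay, bx, by, radius):
    dx, dy = bx - ax, by - ay
    return dx * dx + dy * dy <= radius * radius

# With free cycles you would just write what you mean.
def in_range_clear(ax, ay, bx, by, radius):
    return math.hypot(bx - ax, by - ay) <= radius
```

Both return the same answer; the second simply states the intent directly.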


 

The thing is that not having to worry about optimizations means that code would be easier to write and cleaner, not the other way around.


Disagree.

Right now, some sections of code are harder to read than others, but those sections often have structure imposed on them precisely because of the need to compartmentalise things for processing.

Remove the processing requirement and, by human nature, you begin to remove the structural element and thus the code becomes harder to reason about. Everyone here is focusing on subsections ("no collision detection trickery!", "just raytrace everything!") without thinking about how the average person would implement them given time constraints and a need to 'get it done'.
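For concreteness, the "no collision detection trickery" idea usually means replacing broad-phase structures (grids, trees, sweep-and-prune) with a brute-force all-pairs test: trivial to write, but O(n^2). A hedged Python sketch of what people are imagining:

```python
def naive_collisions(circles):
    # circles: list of (x, y, r) tuples. Test every pair -- O(n^2),
    # which is only acceptable if cycles are free.
    hits = []
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            x1, y1, r1 = circles[i]
            x2, y2, r2 = circles[j]
            if (x2 - x1) ** 2 + (y2 - y1) ** 2 <= (r1 + r2) ** 2:
                hits.append((i, j))
    return hits
```

Whether code like this stays readable at scale is exactly the point being argued here.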

Even in today's code, with limited resources, people will sometimes just throw things in to hit deadlines and take the performance hit, with a view to 'sort it out later'. That's bad enough as it is; now imagine a world where you no longer have to worry about fixing it later because hey, it "just works" fine...

Also, and this is key I feel, most programmers in the industry are a bit shit.
They are a programming version of data entry monkeys, unable to design their way out of a damp paper bag.
Now, imagine those people let loose in a codebase without any performance constraints or cares....

If I'm lucky the screaming in my head will stop in a few days...

(oh, and before you doubt the 'most programmers are shit' thing, I present Hg's "do a random thing when I don't know how to merge", in use by people across the world right now!)

 

 

With or without unlimited resources, crappy programmers will write crappy code.  There's no dispute about that.  But, at least with unlimited resources the baseline of code that they can make worse is much simpler.  What that means is that you have less mess to get messier.

 

Now you're saying that limited resources enforce structure, and that goes away with unlimited, which is a bad thing.  But this assumes that less structure is necessarily a bad thing, which is just not true.  Since a lot of that structure you need to deal with limited resources is bad structure and overly complicated structure, getting rid of it is a net win.

 

Your other argument is basically that bad programmers will run amok with unlimited resources and make things a mess.  You note that people often throw things in and deal with it later.  But why does that happen?  The answer in many cases is that they don't understand how, or don't want to spend the time, to do it the right way, and that's often because the structure is so complicated that maybe only 1 or 2 people on the team fully understand how to cleanly add new features.  This is the kind of stuff that goes away with simpler and cleaner structures as resources become unlimited.

 

So basically, with or without limits, crappy programmers will write crappy code.  But with more complicated and convoluted structure, it's much easier for crappy programmers to write crappy code.

Hmm... Truly unlimited ram and cpu?

I'd solve the halting problem, then crack all encryption, render TLS useless, and bring an end to all e-commerce.

That is, if nobody else beat me to it... :P

I can't believe nobody else considered the evil genius side of the possibilities...

As for gamedev... If I only had unlimited RAM (truly unlimited) I'd precalculate as much of the game as possible. It would have a memory footprint of several million terabytes, most likely, but execute as fast as it could fetch from the unlimited RAM... :)
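The precalculation idea can be sketched as building an exhaustive lookup table up front so that every runtime query becomes a single fetch. A toy Python example (the damage rule and table sizes are invented for illustration; the post's scenario would scale this to absurd sizes):

```python
from itertools import product

def build_damage_table(max_attack, max_defense):
    # Precompute the outcome of every (attack, defense) pair once,
    # trading memory for runtime work.
    return {(a, d): max(0, a - d // 2)
            for a, d in product(range(max_attack + 1),
                                range(max_defense + 1))}

table = build_damage_table(100, 100)
# At runtime, a combat resolution is just table[(attack, defense)].
```

With truly unlimited RAM the same trick extends from one formula to whole slabs of game state.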


 



Can confirm. I've had to work with various companies, and their security usually comes before everything else. The company I'm at now isn't too bad, but the previous company was ridiculous.

If I wanted to work from home using my Mac, I would have to VPN to an offsite Windows Terminal Server, then VNC from there to a server at the office, and from there VNC to my work desktop Mac. This made working from home almost impossible.

My current company is a lot better: I can just VPN into work, check out the code locally, and work on it from there.


thought of a few more:

 

=== ECS ===

 

no use of ECS to reduce recompiles. games would compile, link, and run (and load!) before you could lift your finger from the hotkey or mouse button. so ECS doesn't reduce build times over the life of the development cycle.

 

and no use of ECS for DoD.   (see below).

 

so ECS would only be needed for data driven entity type definitions.

 

 

 

 

 

=== DoD ===

 

no data oriented design required. all fetch times are zero (for all intents and purposes). 
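To make the DoD point concrete: data-oriented design trades object-per-entity (AoS) layouts for field-per-array (SoA) layouts purely to keep memory fetches cache-friendly; with zero fetch times the two cost the same. A small Python sketch (illustrative only -- Python lists don't model real cache behaviour):

```python
# SoA layout: each field in its own contiguous array, so a sweep
# touches memory linearly. This is the layout DoD exists to achieve.
def integrate_soa(xs, ys, vxs, vys, dt):
    for i in range(len(xs)):
        xs[i] += vxs[i] * dt
        ys[i] += vys[i] * dt

# AoS layout: one object per entity. With zero fetch times this
# plain form is just as fast, and arguably easier to read.
class Entity:
    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

    def integrate(self, dt):
        self.x += self.vx * dt
        self.y += self.vy * dt
```

Both produce identical results; only the memory-access pattern differs, and that's exactly the thing unlimited speed makes irrelevant.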

 

 

 

 

 

=== DDD ===

 

data driven design would no longer reduce build times over the development life cycle of the game, as build times are zero.

 

so DDD would only be useful if one didn't have access to source or wanted to use a fancy editor (something more than a text editor).

 

otherwise, there's no time savings in typing data into a data file vs typing it into a source file.
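A tiny Python sketch of that comparison (the orc/troll data is invented for illustration): the same entity definitions typed into a data file versus typed into source produce identical structures, so with zero build times the remaining difference is tooling, not iteration speed.

```python
import json

# Data-driven: entity types parsed from a data file at load time.
MONSTERS_JSON = '[{"name": "orc", "hp": 30}, {"name": "troll", "hp": 120}]'
monsters_from_data = {m["name"]: m["hp"] for m in json.loads(MONSTERS_JSON)}

# Source-driven: the same definitions typed directly into code.
monsters_from_source = {"orc": 30, "troll": 120}
```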



Install a VM, write a program:

  1. Start with an infinite-length integer variable initialized to 0.
  2. Each iteration increments by 1 (to infinity).
  3. Start a VM instance.
  4. Save the variable into it as an exe file.
  5. Run it.
  6. Iterate.

Since processing speed is infinite, you instantaneously get all possible programs running in parallel. The "nice" ones are overpowered by the "mean" ones for the simple reason of entropy; it's easier to destroy than to create or protect.
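The enumeration scheme above amounts to generating every possible byte string in order. A harmless Python sketch of just the enumeration step (it generates candidate "binaries" but, unlike the thought experiment, does not execute anything):

```python
def all_byte_programs(max_len):
    # Yield every byte string of length 1..max_len. In the thought
    # experiment each would be saved as an exe and run in its own VM.
    for n in range(1, max_len + 1):
        for value in range(256 ** n):
            yield value.to_bytes(n, "big")
```

The count explodes as 256^n per length, which is why the scheme only "works" with genuinely infinite speed and storage.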

 

The villainous programs use their infinite extra resources (that are not occupied with suppressing the programs that would try to help) to take over the whole of the internet instantaneously, learn everything they can about humanity, manipulate us into giving them enough control to become self-sufficient, then wipe us out and set themselves to building spacecraft in order to conquer the rest of the universe.

You can't have real infinite -- only seemingly infinite. Information can only move at the speed of light, information storage must reside in a physical thing even if only a particle. Setting aside a discussion of what relativistic/quantum challenges might exist sooner than later, those limits are the absolute upper bounds -- so sayeth the universe, as far as we know.

So if we do not have true infinite then we still have limits, and more importantly we still have latency -- which is usually what we're fighting -- we can do a first approximation of "anything" today if we give our hardware enough time. We optimize because the result comes too slowly to be useful or interesting.
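A quick back-of-envelope on that latency bound, in Python (the 3 GHz clock is an illustrative figure, not from the thread):

```python
C = 299_792_458.0   # speed of light in a vacuum, m/s
CLOCK_HZ = 3.0e9    # a typical 3 GHz core, for illustration

# The farthest any signal can possibly travel between two
# consecutive clock ticks.
metres_per_cycle = C / CLOCK_HZ
```

That works out to roughly 10 cm per tick, so even distances across a motherboard cost multiple cycles, which is why latency rather than raw throughput is usually the fight.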

Someone said earlier they'd predict the future and become infinitely rich, but *so will everyone else*, and whoever has the fastest, smartest code and the nearest proximity to the information will still win -- quantitative stock-trading practices already confirm this.

So I think the things that would be adopted wouldn't be "throw caution to the wind" things like eschewing broad-phases, or culling, or other strategies for reducing the amount of work to do. I think they would be things where a "purer" more "universal" solution could reduce the amount of code and code-systems such that they are easier for us humans to reason about. Ray-tracing is a great example of this. I also think we'd put far more on the compiler to get right and that's where things like declarative-style programming languages start to come in.

I don't think we'd have uber-objects or careless, naive code everywhere. Even if we had real infinity, those things are harmful for very human reasons.

Well, that's kind of the thing;

- we have infinite, in which case throw everything out the window because nothing matters
- we have near-infinite, in which case everything still applies as it does today and you just pick your poison

Or to put it another way;
- program in JavaScript because screw efficiency, layout, control, structure and all that stuff; I'll take the hit
- program in C++ because it allows more efficiency, control, structure and speed, so while dev time will be longer we'll have better runtime performance

(And yes, I'm holding up JavaScript, and indeed the whole clusterfuck which is the web, as an example of how much unreadable bullshit you get when you throw structure out the window...)
