Can game development excel with abstract concepts?

Started by
22 comments, last by TheComet 7 years, 11 months ago

Even if I caught the bug a tenth of the way through the cycles, that's still 160,000 steps/clicks. Just thinking of stepping that many times exhausted my brain immediately. So it meant I would have had to solve it without data or visual feedback, which is close to (though not quite) solving an abstract problem. Well, I never did find the bug.

Well, if your assumptions are correct, you have technically already found the bug, but haven't found the cause or the fix. I do not understand why debugging through this problem would require so many steps. You should be able to attach breakpoints at certain locations so you don't have to walk through so much boilerplate code. You may want to consider your tools as part of the issue as well.


The question I was asking myself afterwards (and would like to ask others) was: would game developers be able to develop successful algorithms if the concepts are abstract? This should be valid for all ends of the development spectrum, in fact more so for experienced developers.

I don't think video games are the place for theories and abstract algorithms unless it is your hobby. On a professional level, I believe such an idea would be scrapped for something more sound and realistic that brings in revenue.


For instance, recently I was iterating through the pixels of some bitmaps (400x800) using nested loops and doing some calculations during the cycles. During one of the cycles something broke. To find the root cause I would have had to put in at least 5 breakpoints and step through with the debugger. That meant I would have to click 400x800x5 (1,600,000) times or less!

I believe you would have to click "400x800x5 (1,600,000) times or more". You would most likely miss the issue the first time you debug this. Debugging isn't about locating the issue on the first pass but about reproducing it and then fixing it, which usually takes more than one try.
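To avoid the million clicks, one trick is to put the test for the bad condition into the code itself, so the debugger only stops on the one iteration that matters. A minimal sketch of that idea (mine, not from the thread; the 400x800 size comes from the post above, but processBitmap and the "broke" condition are placeholders):

```cpp
#include <cassert>
#include <cstdio>

const int WIDTH  = 400;
const int HEIGHT = 800;

// Placeholder for the real bitmap routine described above.
void processBitmap(const unsigned char* pixels)
{
    for (int y = 0; y < HEIGHT; ++y)
    {
        for (int x = 0; x < WIDTH; ++x)
        {
            int value  = pixels[y * WIDTH + x];
            int result = value * 2;  // stand-in for the real per-pixel calculation

            // Hypothetical "something broke" test: if the result leaves its
            // expected 0..255 range, report the exact pixel and stop here.
            if (result < 0 || result > 255)
            {
                std::printf("suspect pixel at x=%d, y=%d, result=%d\n", x, y, result);
                assert(false);  // or set a single ordinary breakpoint on this line
            }
        }
    }
}
```

Most debuggers also offer conditional breakpoints directly (break only when, say, x == 123 && y == 456), which gives the same effect without recompiling.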


And do real abstract problems exist in game development?

I imagine they do, but why would you want to do that? Programming isn't about being abstract... except in the OOP sense of abstract. Software, whether it be video games, web servers, or mobile apps, should be easy to read, perform well, and be scalable. Coming up with an abstract concept and breaking it into simple development steps is probably as close as you're going to get.


I can see these debugging methodologies clearly now. The disadvantage of a lone wolf like me is that I am developing software that, in a company, a team of probably 5-10 programmers would develop (where ideas are shared and the burden is distributed, but I don't have that luxury). With one person doing the job of ten developers, the consequence is that a lot of the time, under intense pressure, my brain is exhausted, my eyes are clouded, and I see less.

Well, you learn every day.

I wouldn't compare your skills to those of 5-10 programmers, for many reasons. I don't think you appreciate how big a project a team of ten developers can create.


My conclusion: the difference is having feedback data (a lot of it visual) to analyse. When I have feedback data, particularly visual, I do very well; otherwise I mostly fail.

Yep, humans are doing fine with visual information. :)

For instance, recently I was iterating through the pixels of some bitmaps (400x800) using nested loops and doing some calculations during the cycles. During one of the cycles something broke. To find the root cause I would have had to put in at least 5 breakpoints and step through with the debugger. That meant I would have to click 400x800x5 (1,600,000) times or less!

I fully agree that a debugger here is not the answer. When I hit a problem like this, and I don't know the next step, I stop trying (and do other things).

I ponder it in the shower, while doing the dishes, at 6:40 in the morning when I wake up but the alarm clock is still silent, while walking the dog. Usually within a few days I get a new idea for how to catch the little bugger, and I try that. Repeat until the problem is solved :)

This may look counter-productive, but if you leave it for a few days and then look again, you have a fresher view, you see new and different things, and you get new ideas.

The question I was asking myself afterwards (and would like to ask others) was: would game developers be able to develop successful algorithms if the concepts are abstract?

I think the concepts are abstract already. An AI has to work for every situation it may encounter; all puzzles that the game generates must be solvable. Did you try to solve every possible puzzle of your program in every possible way that a user may use to solve it? My guess is that you didn't. So how do you know it will work?
The only way I know is by reasoning about it in terms of abstract concepts.

Heck, game software in the end only produces abstract interactive animated pictures, nothing more. There are never tanks, airplanes, and fellow soldiers in my apartment when I play an FPS. It's all imagination going wild :p

And do real abstract problems exist in game development?

Sure, they do. Something like path-finding is not solved for each case separately; the A* algorithm was no doubt invented based on a class of route problems.
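For what it's worth, here is a compact grid-based A* sketch (my own illustration, not something from this thread; the 4-connected grid of unit-cost tiles and the Manhattan heuristic are assumptions) just to show how that abstract "class of route problems" turns into concrete code:

```cpp
#include <climits>
#include <cstdlib>
#include <queue>
#include <utility>
#include <vector>

struct Cell { int x, y; };

// Manhattan distance: an admissible heuristic on a 4-connected grid.
static int heuristic(Cell a, Cell b) { return std::abs(a.x - b.x) + std::abs(a.y - b.y); }

// Returns the cost of the shortest path from 'start' to 'goal' on a grid of
// walkable (true) / blocked (false) tiles, or -1 if no path exists.
int aStar(const std::vector<std::vector<bool>>& walkable, Cell start, Cell goal)
{
    const int h = walkable.size();
    const int w = walkable[0].size();
    std::vector<std::vector<int>> best(h, std::vector<int>(w, INT_MAX));

    // Priority queue ordered by f = g + h (lowest estimated total cost first).
    using Entry = std::pair<int, Cell>;  // (f, cell)
    auto cmp = [](const Entry& a, const Entry& b) { return a.first > b.first; };
    std::priority_queue<Entry, std::vector<Entry>, decltype(cmp)> open(cmp);

    best[start.y][start.x] = 0;
    open.push({heuristic(start, goal), start});

    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};

    while (!open.empty())
    {
        Cell c = open.top().second;
        open.pop();
        if (c.x == goal.x && c.y == goal.y)
            return best[c.y][c.x];

        for (int i = 0; i < 4; ++i)
        {
            int nx = c.x + dx[i], ny = c.y + dy[i];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h || !walkable[ny][nx])
                continue;
            int g = best[c.y][c.x] + 1;  // uniform step cost of 1
            if (g < best[ny][nx])
            {
                best[ny][nx] = g;
                open.push({g + heuristic({nx, ny}, goal), {nx, ny}});
            }
        }
    }
    return -1;  // goal unreachable
}
```

Calling aStar(walkable, {0, 0}, {10, 7}) on a 2D vector of walkable flags returns the length of the shortest route, or -1 if the goal can't be reached.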


How did Leonhard Euler visualise and formulate e^(iπ) + 1 = 0 (i, e and π being weird numbers impossible to visualise)?

Can you visualize 1? Can you visualize 1.5? Can you visualize 3.14? If you can, you have visualized a pretty close approximation of π. The only reason you consider it a 'weird number' is that our decimal number system cannot cope with it, but you can just as well blame the number system for being weird :)


However, this is not how Euler worked. He read and studied math. He read and studied geometry. π is not a weird number there, it's the key value in the entire theory. It pops up everywhere in relations between different concepts in geometry. It's like the power of 2 in computers. You find it everywhere, since it's fundamental in how bits are organized.

How did physicists formulate the mathematics of 10-11 extra dimensions when it's practically impossible to imagine more than 4 dimensions?

How do you handle
- Scalability
- Speed
- Maintainability
- Extensibility
- Modularity
- Code size
- Memory usage
- Persistence
- Upgrading/updating
- Version migration
of your code? It's practically impossible to think in more than 4 dimensions!

The answer is, of course, you don't. You don't do all the above at the same time, and neither do physicists. You use the fact that the different dimensions are loosely coupled, and add one dimension at a time, and verify it against all already existing dimensions. Repeat until all dimensions done.

How did Nikola Tesla visualise that AC was the solution to cheap, efficient electricity?

He probably made a few calculations of what would happen if electricity went nation-wide. How much power would I need, how much would I lose in the cables, that sort of calculation.


I believe there is not one 'concrete' and 'abstract'. They come into existence as soon as you draw a line, call one side 'concrete' and the other side 'abstract'. You can draw infinitely many different lines, and thus there are infinitely many forms of 'concrete' and 'abstract'.

You think images displayed on the screen are concrete? Sorry, but they don't exist, they are just a bunch of lights that are turned on and off by a program. A sort function is concrete? Sorry, but "statement" and "variable" don't exist, both are just values in memory space. Hmm, memory space doesn't exist either; wires split the space into various blobs of silicon chips. Values don't exist either, they are just a bunch of bits that we interpret as "belonging together". Hmm, bits are just a bunch of wires and an electrical charge that we interpret as 1 or 0. I don't know more levels, but I am sure a physicist could give you a few more. Depending on how you look at things, you create a concrete reality for yourself to work in.

What you do with source code and data structures is the same as what the smart people above do. They draw lines. One side of the line they understand and assume to work, and then they think about the part above the line, using ideas from below the line. Everybody has several lines. When you write a new function, the new function first lives above the line. After you finish, you draw a new line, where the just-created function is assumed to be available and working below the new line, and you can make a new function on top of it.
If you find a bug in the function, you step back to the previous line, and find out how the function fails.

The other way around works too. I can speed up a routine, but what happens in the surrounding code then? You look at the local function, creating a new line for the faster version, then go up a line, to check how this works in the bigger picture.


Most people make and switch lines without realizing it. Scientists however are very good at this game, and consciously switch between lines, draw new lines, and consider consequences. Each line gives a new way to look at a problem.

A second thing that everybody does, is to throw away clutter. All programming languages basically do the same thing, they create CPU instructions. I just think in CPU instructions instead, and drop all languages. I'll figure out how to express my solution in language X later.
Hmm, I am not quite interested in the precise order of CPU instructions, let's just say it can move data and compute data. I don't need to consider the CPU at all, I can just think in data values and structures and how they relate and change. I'll figure out how to tell the CPU what to do later.

You stop throwing away clutter when you arrive at a level where you can comfortably think about what you want to achieve. We typically stop at the level of C++ or data structures, as that's where we normally operate.
If you invent an algorithm, you typically stop when you have the smallest number of relevant concepts left (which is also "the point where you can comfortably think"). E.g. path finding is something like "I have a landscape (of tiles) and a bunch of things that need a route from some point to another point in the landscape."

Scientists typically drop everything but the core elements that they need. Guys like Euler think in e and π, as it's the core of their theory.

Quote
How did Leonhard Euler visualise and formulate e^(iπ) + 1 = 0 (i, e and π being weird numbers impossible to visualise)?
Can you visualize 1? Can you visualize 1.5? Can you visualize 3.14? If you can, you have visualized a pretty close approximation of π. The only reason you consider it a 'weird number' is that our decimal number system cannot cope with it, but you can just as well blame the number system for being weird

You're missing the point here. It's not so much about the individual numbers... it's the whole formula e^(iπ) + 1 = 0.

e^(iπ) = -1 is a very unique state. That e, i and π are also special numbers just adds to the mystery.
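As a side note, the formula itself is easy to check numerically. A tiny sketch (my own, using std::complex, nothing from the thread):

```cpp
#include <complex>
#include <cstdio>

int main()
{
    const double pi = 3.14159265358979323846;

    // e^(i*pi): exponentiate the purely imaginary number i*pi.
    std::complex<double> result = std::exp(std::complex<double>(0.0, pi));

    // Prints something like (-1, 1.22465e-16): -1 plus a tiny rounding error in
    // the imaginary part, i.e. e^(i*pi) + 1 is 0 up to floating point noise.
    std::printf("e^(i*pi) = (%g, %g)\n", result.real(), result.imag());
    return 0;
}
```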

of your code? It's practically impossible to think in more than 4 dimensions!

The answer is, of course, you don't. You don't do all the above at the same time, and neither do physicists. You use the fact that the different dimensions are loosely coupled, and add one dimension at a time, and verify it against all already existing dimensions. Repeat until all dimensions done.

You're missing the point again. It's not about modularity; it's genuinely impossible to visualise 11 dimensions.

I can represent 1D by drawing a line on a paper

I can represent 2D by drawing a rectangle on a paper

I can represent 3D by making a cuboid box out of cardboard

I can represent 4D by showing how the cuboid changes with time

So, Alberth, now continue with the 5th, 6th... 11th representations

Quote
How did Nikola Tesla visualise that AC was the solution to cheap, efficient electricity?
He probably made a few calculations of what would happen if electricity went nation-wide. How much power would I need, how much would I lose in the cables, that sort of calculation.

Once again you are missing the point.

It's not about some calculations, anyone can do that. It's about AC vs DC.

Particularly in those days, it was more intuitive to see DC as the way forward and very counter-intuitive to visualise AC (a current that flip-flops, changing direction all the time) as being able to provide a stable supply of power.

Nikola Tesla's brilliance was to see through the folly of direct current (remember, simplicity is usually more efficient) and prove that the more complicated system could be more efficient.

can't help being grumpy...

Just need to let some steam out, so my head doesn't explode...

 

You're missing the point again. It's not about modularity; it's genuinely impossible to visualise 11 dimensions.

I can represent 1D by drawing a line on a paper
I can represent 2D by drawing a rectangle on a paper
I can represent 3D by making a cuboid box out of cardboard
I can represent 4D by showing how the cuboid changes with time

So, Alberth, now continue with the 5th, 6th... 11th representations


I fully agree it's impossible, so what is your point in asking me to finish a list that we both know to be infeasible?


What I argued is that they don't handle all 11 dimensions at the same time, they work in aspects, just like you don't think of all aspects of your code at the same time. You also take a few (one or two) dimensions of your code, and check whether any problem occurs.
You test every pair, and if everything is ok, you're done.

If your problem is deterministic (it happens repeatedly with the same input data), then just do some basic logging of the loop numbers (log every 100, 1,000, or 10,000 iterations, etc.; it can even just be a print statement to the screen, or, if there is no visual output, to a file that you flush on every print) to get the rough number where it happens. Then refine it within the range after the last number printed, with a smaller logging increment, until you are close enough to add a breakpoint (insert a value test statement and break on it 'hitting') just before the failure event, where you can then use the debugger with minimal stepping.
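A minimal sketch of that approach (my own; it reuses the 400x800 nested loop from earlier in the thread, and both the logging interval and the value test are placeholders):

```cpp
#include <cstdio>

const int WIDTH  = 400;
const int HEIGHT = 800;

void processBitmap(const unsigned char* pixels)
{
    long iteration = 0;
    for (int y = 0; y < HEIGHT; ++y)
    {
        for (int x = 0; x < WIDTH; ++x, ++iteration)
        {
            // Coarse logging first: one line every 10,000 iterations narrows the
            // failure down to a range of 10,000 without drowning you in output.
            if (iteration % 10000 == 0)
            {
                std::printf("iteration %ld (x=%d, y=%d)\n", iteration, x, y);
                std::fflush(stdout);  // flush on every print, as suggested above
            }

            int value  = pixels[y * WIDTH + x];
            int result = value * 2;  // stand-in for the real calculation

            // Once the rough range is known, add a value test and break on it
            // 'hitting' (placeholder condition: result left its expected range).
            if (result < 0 || result > 255)
            {
                std::printf("BROKE at iteration %ld (x=%d, y=%d)\n", iteration, x, y);
                std::fflush(stdout);
            }
        }
    }
}
```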

You are lucky if it's deterministic and not something that varies or is inconsistent (due to random data in memory, where even changing the program to include logging instructions suddenly nullifies the particular memory pattern case).

I've actually, for certain problems, gone as far as having a SAVE of the entire program state triggered by a particular manual input key (and a corresponding LOAD to get back to that state immediately). A game I had made would run for many minutes before barfing, and with similar manual inputs it would recreate the same logic path traversal issue and die. It was generally recreatable, so I would frequently SAVE manually (like once a second) while it still ran, and eventually hit the bug. THEN I could start from that last SAVE image and try to spot the game activity leading to the problem. I repeated this a few times, getting the last SAVE closer and closer to the failure, and then started the debugging/tracing: the old 'print statement' method of finding approximately where the problem was in the code, and THEN, being close enough (knowing where in the code it was happening), using the interactive debugger to find the failure, and then the cause. (Usually it was an uninitialized pointer or an array limit issue.)
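A bare-bones version of that SAVE/LOAD trick might look like this (my own sketch; it assumes the whole relevant game state fits in one plain-old-data struct, which real engines with pointers and heap allocations would not get away with):

```cpp
#include <cstdio>

// Assumed plain-old-data snapshot of everything the game logic needs.
// Pointers and heap-allocated objects would need real serialization instead.
struct GameState
{
    int   frame;
    float playerX, playerY;
    int   enemyHealth[32];
    // ... everything else that drives the simulation ...
};

bool saveState(const GameState& s, const char* path)
{
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    bool ok = std::fwrite(&s, sizeof(s), 1, f) == 1;
    std::fclose(f);
    return ok;
}

bool loadState(GameState& s, const char* path)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    bool ok = std::fread(&s, sizeof(s), 1, f) == 1;
    std::fclose(f);
    return ok;
}

// In the main loop: call saveState() roughly once a second (or on a debug key).
// After a crash, restart the program, call loadState(), and you are seconds
// away from the failure instead of minutes.
```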

---

As far as 'abstract' goes: it may not be visible in the game (whatever this processing is), but you would often use test visualization to KNOW that your algorithm is doing what you designed it to do (visualization of the working data and its flow/process against specific test cases...).

It IS eventually going to do SOMETHING in the game that is displayed/output, and at that point it isn't really 'abstract', and the logic and processing to do it aren't really abstract either -- or why would you be using it? (Unless you use someone else's library and its interior workings are hidden from you. But even then you can at least see and know what to expect from it: you test it to make sure it works in all the ways you want to use it, check that it gives you the results you want for the cases you have, and do 'visualization' of that at least during your 'proofing' phase -- and you should design it so that visualization can be turned back on if needed.)

Ratings are Opinion, not Fact
In my experience, bugs such as what you've described can be avoided if you write some solid unit tests. If such a bug appears, then I will write more unit tests to try to pinpoint the cause of the bug. I've found it's the most efficient way of finding the cause quickly, as unit tests allow you to prototype changes to small parts of your program without the overhead of having to relaunch your entire application every time.
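As an illustration (a sketch of my own using plain assert, since no test framework is named in the thread), a unit test can exercise the per-pixel calculation in isolation on a handful of hand-picked inputs, without launching the game at all:

```cpp
#include <cassert>

// Hypothetical function under test: the per-pixel calculation that 'broke' in
// the 400x800 loop, pulled out so it can be called on its own.
int processPixel(int value)
{
    // Stand-in for the real calculation; clamped to 0..255.
    int result = value * 2;
    return result > 255 ? 255 : result;
}

// A tiny test: edge cases first, then a few ordinary values. If the bug is in
// processPixel, this fails in a fraction of a second, with no game running.
int main()
{
    assert(processPixel(0)   == 0);
    assert(processPixel(127) == 254);
    assert(processPixel(128) == 255);  // clamping kicks in here
    assert(processPixel(255) == 255);
    return 0;
}
```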
"I would try to find halo source code by bungie best fps engine ever created, u see why call of duty loses speed due to its detail." -- GettingNifty

Well, now, the three main modes of problem solving are empirical, analogical, and analytical.

By trying to single-step through an iterative problem, you're engaging empirical problem solving. You observe the problem until the solution is found (sometimes called 'brute force').

If you were to break the problem down into smaller sub-problems until you have a known solution to each one, you're being analytical. You reduce the problem until the solution is found (analysis).

If you were to build a (mental) model in which you understood each part, you're being analogical. You understand the problem in terms of known solutions until an appropriate one (an analog) is found.

All three modes have been used very successfully in the past, and they are often combined at different points. For example, empirical analysis was used to create a mathematics of rotational symmetry, which was analysed to develop algebraic field theory, which was later treated analogically to develop the generalized linear algebra of arbitrary dimensions. Technically, your favourite 3D shooter got started as a game of spin-the-bottle.

Most people favour one mode over the others.

Stephen M. Webb
Professional Free Software Developer

>>> How did Nikola Tesla visualise that AC was the solution to cheap, efficient electricity?
>> He probably made a few calculations of what would happen if electricity went nation-wide. How much power would I need, how much would I lose in the cables, that sort of calculation.

> Once again you are missing the point.

> It's not about some calculations, anyone can do that. It's about AC vs DC.

> Particularly in those days, it was more intuitive to see DC as the way forward and very counter-intuitive to visualise AC (a current that flip-flops, changing direction all the time) as being able to provide a stable supply of power.

> Nikola Tesla's brilliance was to see through the folly of direct current (remember, simplicity is usually more efficient) and prove that the more complicated system could be more efficient.



This answer popped up yesterday evening. My earlier answer is in essence correct: if you want money to build a power grid, you have to do the math, and yes, the math is simple.

However, it needs a little electrical engineering context to understand how the math changed between DC and AC.

Electrical devices use power, and the relation is P = U * I, where P is the power delivered, U is the voltage, and I is the current. Cables have resistance (not much, but it's non-zero at room temperature), so U = I * R, where U is the voltage that you lose, I is again the current, and R is the resistance of the cable. R is constant, and decreases with the diameter of the cable.

At the power plant, you generate U_power and I, and send it down the cable. I stays the same everywhere. From the power plant to the house (and back, but let's keep things simple) there is cable, and you lose U_loss = I * R. At the house, you get P = (U_power - U_loss) * I of power.
If you add more houses, you must push more power into the cable. Increasing the voltage at the house is not really an option, as high voltages are dangerous. The only option left is to increase I. However, doubling the current means U_loss doubles too. You have to increase U_power at the power plant to compensate, or send an even bigger current (cable losses are smaller than the power consumption of the devices).

Another option is to add more cable. Current in each cable gets lower, and you reduce losses.


Now scale this up to a city like London, or nation-wide, and with a simple calculation you can show that you need loads and loads of copper cable, or that you need to build a power plant for each group of houses.


Now AC has an extra feature: you can change the voltage with a transformer. These things are very efficient and have very little power loss. So Tesla could raise the voltage between the power plant and the houses, and at the houses' side lower the voltage back to a safe level. Since P = U * I holds, higher voltage means less current. Less current means lower cable losses.

Tesla could just show that he needed only a few cables between the powerplant and the houses, and that he could serve a lot more houses than you could with DC for the same price.


As for how he saw it, just look at U = I * R. U (the loss) must decrease. Therefore R must decrease, but you need more cable to do that, or I must decrease. How do you move power with a decreased I? P = U * I: if I decreases and P remains the same, U must go up. Can you do that with AC? Yep, you can.
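To make that concrete, here is a back-of-the-envelope sketch (my own numbers, purely illustrative): deliver 100 kW through a cable with 1 ohm of resistance, once at 230 V and once stepped up 100x to 23 kV.

```cpp
#include <cstdio>

int main()
{
    const double power  = 100000.0;  // 100 kW demanded by the houses
    const double cableR = 1.0;       // 1 ohm of cable resistance (illustrative)

    const double voltages[] = {230.0, 23000.0};  // household level vs. stepped up 100x
    for (double u : voltages)
    {
        double i    = power / u;       // P = U * I  ->  I = P / U
        double loss = i * i * cableR;  // cable loss = I^2 * R
        std::printf("U = %8.0f V  ->  I = %7.1f A, cable loss = %12.1f W\n", u, i, loss);
    }
    // At 230 V the current is ~435 A and the cable would dissipate ~189 kW
    // (more than the delivered power!); at 23 kV the current is ~4.3 A and the
    // loss drops to ~19 W. Raising the voltage 100x cuts the loss 10000x,
    // which is exactly the leverage a transformer (and hence AC) gives you.
    return 0;
}
```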


Electrical devices use power, and the relation is P = U * I, where P is the power delivered, U is the voltage, and I is the current. Cables have resistance (not much, but it's non-zero at room temperature), so U = I * R, where U is the voltage that you lose, I is again the current, and R is the resistance of the cable. It's constant, and decreases with the diameter of the cable.

Power loss is proportional to the square of the current (P = I^2 * R, which follows from Ohm's law, itself arrived at empirically). That's the key to AC's superiority for a commercial electrical power distribution grid. The result was arrived at through analysis.

Stephen M. Webb
Professional Free Software Developer

What started as a question about debugging and philosophy ended with a lot of people speaking about very precise matters. What you could learn from this is that game development is hard, and that there exist real abstract problems there waiting to be solved. You became fixated on a very precise way of working and failed to see obvious alternatives that the other developers soon pointed out. If the "usual way" doesn't work the second time around, try to get completely out of the box and look at it differently; if you focus only on the visual input from the debugger, you definitely won't see the dump-file-and-scan approach. Asking is fine, just try to be a little nicer. As for solo teams versus 100-person teams, the scales are different, so the problems are different as well; when working in big teams, very big problems arise, problems that neither you nor I can even fathom. Try to sharpen up your communication skills, as they are just as important as being a badass programmer.

