
Member Since 11 Nov 2006
Offline Last Active May 23 2016 11:47 PM

#5291483 Operating System Questions in Assembly

Posted by on 13 May 2016 - 10:03 PM

Yeah, if you're trying to run .exe files you're just going to make a Windows/DOS clone. What's the point in that?

The only Windows clone being actively developed is ReactOS, and it's unstable as hell.
Windows is still the de facto standard OS for PC.

This is like telling hardware guys back in the '80s that developing an IBM clone was a waste of time.

As braindigitalis said, you need an awful lot of work to get to the point of displaying graphics. Even writing your own 2D framebuffer drawing code is a considerable challenge, but you also need to write your own graphics driver.

#5290440 Is it real?

Posted by on 06 May 2016 - 10:53 AM



I made all the environment models and character models, alongside sound. Does it still take years to code everything?

First, you have to learn programming.
That first step takes about 4 years.

And how do you learn? Through practicing? But to practice you first need to know the syntax. And to learn the syntax you have to read a book, usually 1000 pages long, of which I'll just assume you need somewhere around 300 pages for gamedev, scattered throughout the book.


Tutorials on the web are a good place to start if you need to self-teach.

#5290346 Time - the most important factor.

Posted by on 05 May 2016 - 06:45 PM

I primarily work on 2D and intentionally retro-styled 3D games.

I develop on a PC I bought for $150 in 2013. I test for low-end performance on a mid-range PC from 2005. In a few years I'll probably retire the older PC and use my current machine for low-end performance testing. I've never paid for developer tools, either.
Spending a small fortune on development PCs and software is a waste of funds better spent on paying someone for work (once you get to that point).

Game development takes an incredibly long time to learn to do well. Individual games take a long time even for the experienced.

#5289835 Procedural Universe: The illusion of infinity

Posted by on 03 May 2016 - 12:01 AM

Most celestial bodies orbit their star(s)/galaxy/etc. in a disk. This means that from a distance, a galaxy or star system would appear to be a two-dimensional object. Even inside a galaxy or star system, things are so nearly two-dimensional that a quadtree probably makes more sense than an octree. Since objects with fixed orbits on the disk should rarely collide, you only need to worry about objects that have been shifted from their orbit (e.g. an asteroid the player pulled with their ship) and objects with orbits that cross other orbits (e.g. comets); maybe that's a better case to optimize for as far as collision goes.

Also, you'll need to adaptively load/unload nodes as the player approaches/leaves them. Given the number of stars in a galaxy, and the number of objects in a star system, it would be impossible to keep them all in memory; you need to keep as few as possible loaded. Ideally you'd use a PRNG to generate them, so each one can be reliably stored as just its PRNG seed. Then to show a node at a great distance (size <= 1px), you just need a cached hue and absolute magnitude.

About Oort clouds, Kuiper Belts and Asteroid Belts - I'd generate them with a PRNG each time the player approaches, separate from the rest of the system. Too big and too much stuff to keep loaded when the player's nowhere near them.

I know that in order to overcome the floating-point precision issues I would have to somehow scale the surrounding cuboids (spaces) in order to keep the depth buffer happy; this should be pretty easy, at least in theory.

You actually have several "space systems". You don't need to account for all of them for every object. Ignoring the player for a second:
1) Satellites (moons, orbiting ships, etc.) only need to worry about their location relative to their planet. This can probably be on a 100km scale. Justification: the accuracy of IEEE single precision ("float") is about 7 decimal digits. Mean distance from the ISS to Earth is 400km = 4.0; mean distance from the Moon to Earth is 370,300km = 3703.0; mean distance from Callisto to Jupiter is 1,833,000km = 18330.0 [Jupiter has satellites 10x more distant than Callisto, but they're all less than 200km in diameter and boring]. This leaves a couple of accurate digits behind the decimal point too.
2) Planets and inner solar system interplanetary objects only need to worry about their location relative to their star(s). This can probably be on a .1AU scale. Justification: same as above. Mean distance from Mercury to the Sun is .39AU = 3.9. Mean distance from Neptune to the Sun is 30.1AU = 301.0. Fits very well in single precision.
3) Oort cloud constrained objects only need to worry about their location relative to local objects. This is probably best accomplished using a bounding volume hierarchy, where the bottom-level bounding volume contains some maximum of N objects, and you only compare each of those N objects against objects in the same bounds.
4) Kuiper belt constrained objects only need to worry about local objects. They also need to keep track of their orbit, but this can be a single value (e.g. in radians).
5) Oort cloud and Kuiper belt objects that enter the inner solar system during their orbits need to worry about inner solar system objects *ONLY* when they're in the inner solar system. There are numerous ways to implement this behavior.
6) If you want high-definition views of a planet's surface from low orbit/high altitude, objects on the planet might also need to keep track of their latitude and longitude, OR you'll need to rig up some clever way to project surface geometry onto a sphere.
7) On the galactic scale, you can probably get away with a 10 parsec scale (the Milky Way is 30,000 parsecs across = 3000.0; the nearest star to the Sun is Proxima Centauri, at 1.3 parsecs = 0.13).

Adding in the player, you ALSO need to translate every object visible to the player into "player space" for rendering, culling, etc. Player space while in a ship will probably have a scale of ~10m (the USS Enterprise NCC-1701 from the original Star Trek was supposedly 289 meters long = 28.9; the T-65B X-wing from the original Star Wars trilogy was 12.5 meters long = 1.25).

Of course, I'm assuming here you're going for a hard sci-fi game. If you're looking for mass appeal, using realistic coordinates is counterproductive: nobody wants light-years of nothingness before they hit their next star. So instead, you can probably use double precision and a 10km scale for a solar system which only has a diameter slightly larger than Jupiter's (140,000km = 14000.0). For interstellar coordinates, you could just use the same scale and add an int64_t in front of it (maximum size of galaxy = 9.2*10^18 jupiters), and for intergalactic coordinates just use a single int32_t representing n*maximum_galaxy_size = n*9.2*10^18 jupiters = already crazy huge.

#5289662 Should i learn Win32 or UWP?(C++, Directx)

Posted by on 01 May 2016 - 08:46 PM


It depends - are you willing to lose a significant portion of your potential customer base by using a newer technology (UWP requires Windows 10 - less than 15% of the internet marketshare for PCs)?

Microsoft also tends to create, then abandon, rarely used technologies after a few years. If UWP doesn't catch on, in 5 years it might be worthless knowledge. But they can't really abandon Win32 - it's core to their platform.


15% of share (a lot more if we're speaking about gamers) is not bad at all considering that Windows 10 was released less than a year ago.


When Microsoft decides to abandon a technology, it usually continues to work without any issue on future versions of Windows. Keep in mind that UWP, C++/CX, and WinRT are actually all based on COM and Win32. This means you can target UWP with "pure" C++ if you want; however, this needs a lot of work, and you can use the WRL template library to mitigate that.

On the Windows platform, Win32 is like the CRT: both are fundamental to the platform itself; you cannot kill Win32 or the CRT without killing the entire Windows platform.


A guy called Kenny Kerr started writing a pure C++ library called "Modern" to target WinRT without C++/CX and to replace WRL. He later joined Microsoft, so hopefully Microsoft will in the future release a complete version of "Modern" for targeting WinRT and creating UWP applications using pure C++, without the proprietary component extensions (C++/CX) or the use of WRL. A library like that would solve all kinds of issues around targeting both UWP and Win32 desktop applications.


15% - you're right, it's not bad; I was just making a point. And it's likely to go up quite quickly. I hear it's a pretty solid OS, but I'm sticking with Win7 for now. You're also right about marketshare among gamers probably being higher - but that's only because it's the version of Windows that was already installed on their gaming rig. (I imagine DirectX 12 might be another reason. Was there some technical reason for Microsoft not backporting DX12 to Windows 8 and 7, or was it just an "upgrade already" thing?)

As for Microsoft's abandoned technology continuing to function properly on new versions of the OS - try playing an old game that uses DirectDraw on Windows 8 or 10. Heck, they messed with DWM in Vista and 7.

#5289548 Is making game with c possible?

Posted by on 01 May 2016 - 03:38 AM

I use C quite a bit, personally. It doesn't really provide much benefit over C++; I like it because not having classes handy can push me to think about different ways to design code. I don't think I've figured out anything novel yet, though. There are only two reasons why I would recommend anyone else use C over C++:

1) You don't want to, or can't, link to the normal libraries.
2) You're considering working with a C purist.

The second reason might come up in game development. There are some C purists in the hobbyist crowd. Their reasons for being C purists are usually pretty pointless, but in my experience they're often crazy good and worth working with just to learn a thing or two.

The first reason is not likely to come up in game development *anymore*. Even cheap low-end computers have enough hard drive space and memory to make saving 1-2MB fairly pointless (this was not the case 15 years ago).

The first reason *might* come up if you're doing other sorts of programming. For example, the C runtime isn't available in an OS kernel, or on a custom OS. The C runtime might not be available in certain embedded environments. If you're writing web services using CGI (common gateway interface), cutting out some of the pre-main work might be worth it.

Like Norman, a lot of my code winds up looking like C code even when I'm programming in C++. The "C way" of doing things is usually (but not always) more efficient, but you can always do the "C way" in C++, and still get all the advantages of C++.

#5289503 Should i learn Win32 or UWP?(C++, Directx)

Posted by on 30 April 2016 - 07:57 PM

It depends - are you willing to lose a significant portion of your potential customer base by using a newer technology (UWP requires Windows 10 - less than 15% of the internet marketshare for PCs)?

Microsoft also tends to create, then abandon, rarely used technologies after a few years. If UWP doesn't catch on, in 5 years it might be worthless knowledge. But they can't really abandon Win32 - it's core to their platform.

#5289381 Question about Open World Survival Game Engines

Posted by on 29 April 2016 - 11:53 PM

1) If you're not a developer, and you don't have startup capital to pay any livable wage, what do you bring to the table? And not the BS about industry experience and customer base knowledge - I mean actual work that you can do.

2) If you're not paying, and you're not developing, and you're not a lawyer/accountant/etc who can manage some major aspect of the potential business by yourself, you're going to have to accept lower equity than any developer -- you're essentially getting equity for nothing, so it's still a good deal for you. Capitalists get away with making money from others' labor because they have the capital to pay wages while waiting on the returns from that labor. It's a matter of material convenience for the laborers - steady, consistent pay. That's the only way that arrangement works in practice.

3) You're asking people to spend considerable time (probably thousands of hours) on this idea of yours, for no pay, but you want 100% control? You're never going to get anyone talented to stick around with that policy.

4) About the 50hr guy and the 5000hr guy - there's no accurate way to quantify time spent. If you ask the developers to log it, some will lie, and others will neglect to log time they spent just thinking about something. If you try to count lines, you lose time spent testing, chasing bugs, profiling for optimization, etc. - all necessary, often difficult, sometimes requiring advanced knowledge - and all impossible to estimate by anyone but the developer themselves. If you try to factor in communication through formal channels (IRC channel, meetings, etc.) you might miss a quiet, diligent worker and reward a gabber.

5) If I take a day off from work to implement some feature in one sitting, I may only be investing 8 hours of work more than usual, but I'm also losing 8 hours of immediate pay to do this. How do you propose to handle this type of investment in the form of lost wages?

#5289356 Lol, IDirectDrawSurface7::Blt changes Aero scheme to Windows 7 Basic

Posted by on 29 April 2016 - 07:08 PM

Apparently this is a common problem with old games using DirectDraw, including Age of Empires and Age of Empires 2, which have sentimental value to me.

Perhaps use GDI instead? It should be fast enough.

#5287479 HINSTANCE & HWND Corrupted when passed around functions

Posted by on 18 April 2016 - 10:32 AM

I agree with fastcall22: there's no point in having the members of WinEventArgs be references; they are POD value types and you gain no benefit from passing by reference. I would keep them const, as these values should not be changed during processing.

Yes: because your EDelegate takes the IEventArgs by value, it copy-constructs an IEventArgs from your WinEventArgs - the WinEventArgs data is lost here (this is object slicing). This IEventArgs copy is then passed by reference to the function wrapped by EDelegate.

Making IEventArgs non-copyable, as fastcall demonstrated, is the simplest way to prevent this mistake from recurring.

#5287286 Thinking of switching from a monolithic structure to a Client-Server structure

Posted by on 17 April 2016 - 03:40 AM

The downside is that you have to program for packets

 Not necessarily. Yes, you'll have to design around the idea that the game manager is a service, that the user's view of the game cannot simply reach inside the game manager to get information or make changes, and that the two sets of logic will have to communicate in a highly decoupled manner; but that's actually preferable anyway. :)

#5287285 HINSTANCE & HWND Corrupted when passed around functions

Posted by on 17 April 2016 - 03:30 AM

How is hinstance declared? Since wParam is not used by the function where the error is occurring, and hwnd appears to be valid, I think hinstance is the issue here.

Also, while WPARAM is defined as UINT_PTR, wParam should actually be treated as an integer for many messages.

#5287283 c++ should nested switch statements be avoided?

Posted by on 17 April 2016 - 03:12 AM

The only time I recall having good reason to use a nested switch statement is in handling user input, where it's common practice to make pretty liberal use of them. This usually looks similar to the example presented by frob, and is done for the same reason he described. Of course there are ways to avoid them even in that case, but it usually serves no purpose to do so.

#5287279 Multiple bullets?

Posted by on 17 April 2016 - 02:43 AM

As previously stated, the simplest way to handle multiple bullets is to store them all in an ArrayList, and remove them when they collide with something or leave the screen.

Although Gian-Retro's advice is the optimal practice, it requires a bit of experience to implement properly, and might not apply as well to your case as it did to his. I would just use ArrayList unless you notice a performance issue. :)

#5286973 Multi thread deadlock issue with a recursive mutex. Need ideas.

Posted by on 14 April 2016 - 11:26 PM




An update on this topic: our bugs were found and fixed. However, this thread was me looking in the wrong direction. The deadlock was a side-effect of the real bug, not the source of my issues. The bug came from a system that was acting erratically and sending way too many messages over the network; our code could not keep up and they would stack up. Over time, having a ton of very small memory blocks for each message would fragment the memory, and then various systems would fail. The deadlock was the most common effect, but we also had thread initialization failing and sometimes a straight-up memory allocation failure (malloc of a big chunk returning NULL, with that pointer then used).

So, you have memory allocation failures, and most of them do not crash your program immediately? And then you're stuck dealing with other bugs that look impossible? Let me guess: you have catch (exception) or, even worse, catch (...) everywhere, with maybe a log saying "unknown exception" if your programmers are slightly less lazy than the people who just leave the catch empty?


malloc does not throw exceptions. It simply returns NULL. On some systems, there is no built-in catch for NULL dereferences, and NULL+offsetof(SomeStruct, someMember) might reasonably point to memory used by the main thread's stack, some global variable, allocation records, or even the OS itself; any of which could have very unpredictable consequences. It's entirely possible no C++ exception was ever thrown and no OS exception/signal/etc was ever triggered, and memory was silently corrupted.

Sure, that's possible, but I still think memory corruption from dereferencing null pointers returned by failed malloc calls is relatively unlikely compared to segmentation faults, or to the chance that this code is throwing and catching bad_alloc exceptions. The code above is clearly C++; I would guess new is being used, even with malloc mentioned earlier.

I've never worked on a program that corrupted memory through an offset null pointer. I have worked on code where memory usage would spike up and cause allocation failures, because it was filled with try/catch statements.


It's pretty common for C++ projects to interface with C libraries, a great many of which perform internal allocation and deallocation, and some of which might not check for failed allocation before access. It's not unheard of for C++ programmers to use malloc for buffers; this is actually my own practice. The new operator can be overloaded, and an overloaded new might not throw std::bad_alloc. It is possible to disable exceptions in C++. It is also possible that this is a non-conforming C++ implementation with no exceptions - some C++ implementations intended for embedded applications lack RTTI and exceptions, along with a large number of other C++ features; possibly some lack the new operator altogether, and malloc is the only way to allocate. The implementation of malloc might be non-conforming, or a custom allocator might be used. There are any number of perfectly reasonable explanations for why an allocation might not throw.

I understand that sloppy exception handling is all too common, but not handling exceptions at all is even more common (I'm guilty of this), so it just seems odd to me to assume that's why the source of this bug was never caught. Also, I need to point out that on systems with NULL pointer protection, dereferencing memory near 0 does not generate a C++ exception. It generates a fatal signal - which few people know how to recover from - or an SEH exception on Windows, the typical handling of which is to quietly close the program. That is the first thing that indicated to me that something else was the problem.