It's just a table of jumps to code which actually does the work. The point being that code outside the module can just call to well-known entry points. Those entry points are jumps to the real routines. The real routines can move around in different versions of the library and the user code doesn't need to be relinked.
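The idea can be sketched in Python (hypothetical names; a real export table is a block of machine-code jump instructions, not function objects):

```python
# Sketch of the jump-table idea. The slot numbers are the public contract;
# the routines behind them can move or change between library versions.

def _draw_v1():           # "real routine" -- free to be replaced in v2
    return "draw v1"

def _print_v1():
    return "print v1"

# The well-known entry points: callers only ever know a slot index,
# never the routine's actual address.
JUMP_TABLE = [_draw_v1, _print_v1]

SLOT_DRAW, SLOT_PRINT = 0, 1

def call_library(slot, *args):
    # One level of indirection, like jumping via the table.
    return JUMP_TABLE[slot](*args)
```

As long as the slot assignments stay stable, user code compiled against v1 keeps working when the library ships v2 with completely rearranged internals.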
Aren't you supposed to do this by calling EnumDisplayDevices to get the list of video cards and then using CreateDC to create HDCs on those devices? Then you can use wglChoosePixelFormat (which takes the HDC) to get a render context of the right type on the right device?
If you're working in 3D and using 4x4 matrices as transforms, the determinant has an immediately useful value.
If you put a unit cube through the transform, then the determinant will tell you what the volume of the output cube will be; in other words it tells you the volume scaling that the matrix will do.
Hence, well-behaved 3D transforms will all have positive non-zero dets (a negative det means the transform mirrors, flipping your winding order); and all rotations/translations, which you expect to do no scaling, will have a det of 1.
This also helps explain why zero-det matrices can't be inverted. For the det to be zero, the transform must squash one of your unit cube's dimensions to zero. Hence there can't be a matrix which goes the other way, because there's no way to reinflate that deleted dimension back to one. Hence there's no inverse for the matrix.
Obviously, since you HAVE to squash one of the dimensions away to go from the 3D world to the 2D render plane, this means all projection matrices must have det = 0. And hence this is why there is no general-case system for taking a pixel in the window and finding its location in 3D space, because that would amount to being able to invert your projection matrix.
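The volume-scale, rotation and projection cases above can be checked in a few lines (plain Python, 3x3 for brevity; the matrices are made-up examples):

```python
import math

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix.
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

scale = [[2, 0, 0], [0, 3, 0], [0, 0, 4]]        # scales volume by 2*3*4 = 24
t = math.pi / 4
rot_z = [[math.cos(t), -math.sin(t), 0],
         [math.sin(t),  math.cos(t), 0],
         [0, 0, 1]]                               # pure rotation: no volume change
project = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]       # squashes z flat: det 0, no inverse
```

`det3(scale)` gives 24, `det3(rot_z)` gives 1 (up to floating point), and `det3(project)` gives 0 — exactly the volume-scaling story above.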
" I remember reading an article in Dr Dobbs years ago about how REXX was going to make all programmers obsolete"
I remember reading adverts in the 1990s for the languages which would replace C and C++ with languages so easy that managers would be able to write the code themselves... I'll bet no-one's ever heard of any of them these days.
Easily 20 years this conversation's been going.
It's actually getting very very hard to hire good C/C++ people. And that's a problem because Google runs on hardcore C++ stuff; KVM and Xen need good C hackers, performance libraries for Android need C and C++ coders... Just this week Google is talking about tech to run C++ code natively inside sandboxes because they know the world isn't replacing that stuff any time soon. Good. Means my pay will go up for my rarity value...
We've all been waiting 50 years for Cobol to die and it still won't.
No, they're not. They're bad if you're an academic computer scientist. Academic computer scientists never actually have to ship code. Globals work, use what works, ship code.
"really messes up my goal of keeping things modular."
Don't have a goal of being modular. Have a goal of "shipping code". What value to you is "being modular"? None. It's an **ideal**. It's not a goal. Your goal is to ship code. That means you get to sacrifice ideals sometimes. This could be one of those times.
You're over-thinking this problem. Everyone always does. Everyone thinks there's a problem with "logging" because it's always either too verbose or not verbose enough and it's never quite right, and everyone thinks the way to fix it is to somehow make the log production more complicated.
The problem isn't with the logging. It's with the tool on the other end. Make your program be either "verbose" or "succinct" based on a flag. That's all you need on that front.
Then just output strings but have smarter tools looking at the output.
The UNIX world manages to handle tons and tons of logging using "syslog" which is hardly sophisticated. "syslogd", the program on the other end, that has versions which'll do everything up to handling global networks with millions of servers...
Don't try and solve your logging analysis problems during your log generation.
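The "one flag, dumb output" side of this is tiny — a minimal sketch (names and the `APP_VERBOSE` switch are hypothetical; the point is that everything cleverer lives in the tool reading the stream):

```python
import os

# Verbose or succinct, driven by one switch. That's the whole interface.
VERBOSE = os.environ.get("APP_VERBOSE") == "1"

def log(msg, detail=False):
    # Succinct mode drops detail lines. No filtering DSL, no log levels
    # beyond this: grep/awk/syslogd on the other end do the analysis.
    if detail and not VERBOSE:
        return None
    print(msg)
    return msg

log("loading level 1")                        # always emitted
log("  parsed 1204 entities", detail=True)    # only when verbose
```

Everything past this — rotation, aggregation, searching — belongs to the consumer, just as syslogd handles everything from one box up to global networks without the producers getting any smarter.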
Instead of using an actual image format (such as TGA) which supports BGRA, why not ship your images as plain raw pixel data already in that format? You don't need all the headers; you just need four bytes per pixel.

Lay them all out in a big block of RAM, save that to disk, gzip it. When it comes time to load, you're only loading one file -- you hoof it into one giant block of RAM (only 1 allocation, only 1 directory walk). Ungzipping the file on load is pretty easy. In addition, your compression should be even better; larger files generally compress better than a set of smaller files because the compressor can find more similarities to exploit.
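A minimal sketch of that pack/load round trip (the tiny index header is my own invention; the real layout is whatever suits your loader):

```python
import gzip
import struct

def pack(images):
    # images: list of (width, height, bgra_bytes). A small index up front
    # (count, then w/h pairs) lets the loader slice the blob back apart.
    blob = bytearray(struct.pack("<I", len(images)))
    for w, h, pixels in images:
        assert len(pixels) == w * h * 4           # four bytes per pixel, BGRA
        blob += struct.pack("<II", w, h)
    for _, _, pixels in images:
        blob += pixels                            # all pixel data back-to-back
    return gzip.compress(bytes(blob))

def unpack(data):
    blob = gzip.decompress(data)                  # one file, one big block
    n = struct.unpack_from("<I", blob)[0]
    dims = [struct.unpack_from("<II", blob, 4 + i * 8) for i in range(n)]
    off = 4 + n * 8
    out = []
    for w, h in dims:
        size = w * h * 4
        out.append((w, h, blob[off:off + size]))  # slices, no per-image reads
        off += size
    return out
```

One directory walk, one read, one decompress — and because everything is in one stream, the compressor gets to see all the images at once.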
Instead of generating the mipmaps on load every time, generate them on first run and save them out to disk. Again, do this in one big block if possible. On load, pull it in and rattle down it, transmitting the mipmap levels individually.
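A sketch of that first-run cache, with a 2x2 box filter standing in for real mipmap generation (greyscale values for brevity; path and format are hypothetical):

```python
import os
import pickle

def halve(pixels, w, h):
    # pixels: row-major list of grey values; average each 2x2 block.
    out = []
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            s = (pixels[y * w + x] + pixels[y * w + x + 1] +
                 pixels[(y + 1) * w + x] + pixels[(y + 1) * w + x + 1])
            out.append(s // 4)
    return out

def mip_chain(pixels, w, h):
    # Full chain from base level down: [(w, h, pixels), (w/2, h/2, ...), ...]
    chain = [(w, h, pixels)]
    while w > 1 and h > 1:
        pixels = halve(pixels, w, h)
        w, h = w // 2, h // 2
        chain.append((w, h, pixels))
    return chain

def load_mips(path, pixels, w, h):
    if os.path.exists(path):             # later runs: just pull the block in
        with open(path, "rb") as f:
            return pickle.load(f)
    chain = mip_chain(pixels, w, h)      # first run: generate and cache
    with open(path, "wb") as f:
        pickle.dump(chain, f)
    return chain
```

On load you walk the returned chain, uploading each level in turn — no filtering work after the first run.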
If you're on an OS which supports mapping the files, you don't need to load them or do an actual memory allocation. Map the file into your address space, provide suitable "linear access" hints to your buffer caching system so it will do read-aheads, start accessing the memory and away you go. The OS will try and load pages ahead of your accesses in the background; you don't have to wait for the "read()" call to complete. The effect of this is that you can already be generating and uploading mipmaps for texture 1 while the disk IO channels are still loading texture 20.
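On a POSIX system this is only a few calls — a sketch using Python's `mmap` (the `madvise`/`MADV_SEQUENTIAL` hint is real but Linux/Python 3.8+; the function name is made up):

```python
import mmap
import os

def map_texture_pack(path):
    # Map the whole pack file into our address space: no read(), no big
    # allocation; pages fault in as we touch them.
    fd = os.open(path, os.O_RDONLY)
    size = os.fstat(fd).st_size
    mem = mmap.mmap(fd, size, prot=mmap.PROT_READ)
    os.close(fd)                                  # the mapping keeps the file open
    if hasattr(mmap, "MADV_SEQUENTIAL"):
        # "Linear access" hint: ask the OS to read ahead of our accesses.
        mem.madvise(mmap.MADV_SEQUENTIAL)
    return mem                                    # index it like bytes
```

Start walking the mapping front-to-back and the kernel's read-ahead keeps loading texture 20 while you're still uploading texture 1.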
Instead of asking the GLU library to generate your mipmaps, you could build them yourself by using rendering of the fullsize image to pixel buffer objects and copies internal to the card to make them. That way the mipmap data doesn't have to transit the host-GPU bus.
Pango/Cairo. Drop it on a texture. Draw it to your screen. Cross-platform. Renders everything from plain text up to simple HTML markup. Handles i18n. Everyone always ignores this recommendation without trying it; I don't know why.
"why do tilesets have standards in the first place, like 16x16, 32x32, 64x64? Or is 50x50 just as fine?"
Well, one reason is historic; power-of-two numbers are easy to manipulate quickly. Games on 8bit computers and consoles often used 16x16 tiles because it was a compromise between detail, storage space for the tiles, and being able to access the pixel data quickly.
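"Easy to manipulate quickly" is concrete: with a power-of-two tile size, pixel-to-tile addressing collapses to shifts and masks instead of multiply and divide — cheap even on an 8-bit CPU. A quick illustration (numbers hypothetical):

```python
TILE = 16             # 16 = 2**4
SHIFT, MASK = 4, 15   # x >> 4 == x // 16, and x & 15 == x % 16

def tile_coords(px, py):
    # Which tile a screen pixel falls in, and the offset inside that tile --
    # two shifts and two masks, no division anywhere.
    return (px >> SHIFT, py >> SHIFT), (px & MASK, py & MASK)

# e.g. pixel (37, 18) is in tile (2, 1) at offset (5, 2) within the tile
```

With a 50x50 tile you'd need real divides and modulos for the same lookup, which is exactly what the old hardware couldn't afford per pixel.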
In addition, screen sizes were often neat multiples of these tile sizes. For example on the CPC, 16x16 tiles means you can have a 20x10 tile playfield with 40 pixel lines left over for a status display at the bottom.
When hardware assist started to appear for the graphics, it was often power-of-two oriented, again for hardware simplicity reasons. Early OpenGL and DirectX (and other APIs such as Glide) similarly limited textures to 256x256 for hardware reasons, meaning that 16x16 or 32x32 tiles would fit better and not waste precious texture memory.
"It's like being in a meeting.. giving your opinion which is technically correct, and having others in the room shoot your idea or explanation down. That can happen.."
Except in anything but a very badly run meeting, ideas aren't just "shot down" without justification -- particularly when the idea/explanation comes from (say) the highly experienced developer who has historically shipped good product. It's not that the intern in the room doesn't get to voice an opinion, it's just that they don't get to say "no" without a reason. They're quite allowed to say "Ah, could I just point out this reason why this may not work...", but they don't get to just say "nope". Well, not and keep their job for very long.
Popularity is, historically in a number of fields, a poor way of choosing correctness. No-one likes an answer which is difficult and would prefer an easy one, and popularity contests will select the latter; which is fine as long as you can guarantee that there will only ever be easy answers to things.
I can't see where the magic technology is. I can see that you've implemented a "hard and soft layers" architecture involving scripting to connect component parts together. Could you be more specific about what it is I'm missing?
You turned down €54k a year? €54k a year is a pretty good salary. And for games development??? Blimey. Again, most people here would kill for that sort of offer -- it's the aspirational goal of their life.
"working on technology that is more inferior than my home projects"
There's a lot more to games than the graphics engine. Look at this year's surprise hit. His characters have corners...