For debugging, a much smaller test case would be preferable. Can you export a single 4-sided poly, and does your code correctly read it in and turn it into triangles? If not, start there. With a case that small it's easy to work out by hand what the results should be, so stepping through your algorithm in a debugger becomes much easier.
Hi, I know that when a variable comes into existence its constructor is called, and its destructor is called when it goes out of scope. I just wonder what those processes look like in assembly. I've been thinking about it like crazy!!
It looks like a 'call' or 'branch' instruction to call the constructor at the beginning of scope, and another call to the destructor at the end of the scope. They're just functions.
Of course, there may be optimizations at play that inline the constructor or destructor, so you might not see an actual call/branch/whatever instruction at all.
I think you can make the same argument that the diffuse lighting calculation should be the same whether it's done per pixel or per vertex. In the per-vertex case you're computing the N·L dot product at each vertex and interpolating that to each pixel. In the per-pixel case you're interpolating the normal and computing the dot product per pixel, but the dot product is a linear operation so it interpolates the same.
With specular lighting you have a non-linear term (a value raised to the specular exponent) which does not interpolate the same, which is why vertex-lit meshes usually have weird-looking specular highlights.
Definitely not. You're talking about the difference between vertex lighting and per-pixel lighting. Calculating NdotL at each vertex and interpolating the result is not the same as interpolating the normal and calculating NdotL at each pixel.
Think of a quad with vertex normals pointing away from the center, and a point light source directly above the center of the quad.
Generally speaking, I'm a proponent of spending the majority of your time on documentation. I've done my fair share of development, both for companies, on my own, and for my own company, games and non-games alike. If you're doing a game, I'd suggest things such as use case diagrams, flowcharts, and UML. If the project is smaller, this is an ideal time to do detailed storyboards and wireframes of user interfaces. Whiteboards are your friend. Frankly, a lot of the common software development methodologies apply easily to games.
I recommend the opposite. Especially in the early stages, spend as little time on documentation as possible. Instead, get your game design ideas prototyped and into a working version as quickly as possible so that you can start iterating on them. This goes for one-man projects through "AAA" games. You're not going to get breakthroughs writing words about your game - you need to iterate hundreds (thousands?) of times on something you can actually play to discover something great. The really good stuff is unintuitive at first, so you need to be a little reckless sometimes and just try things. You will throw away most of it, and that's OK.
One of the central design principles of C++ is "don't pay for what you don't use", so I don't know what this talk of "C++ overhead" is about. C++ programmers have been very adamant with the language committee that features shouldn't impact performance when they aren't used.
C++ *is* commonly used in "low level" code, and you can write operating systems with C++ just as you can with C.
Comparing a popular version control system with an unpopular one and claiming the performance difference is due to using a C++ compiler vs a C compiler is just silly.
You have to loop through and delete each object the pointers point to. Vectors simply destroy the objects they contain; in this case the vector contains pointers, which have no destructor, so destroying a pointer does nothing to the object it points to.
Or use a type that holds your pointer and deletes the object in its destructor (a "smart pointer").
Also, negative sizes or indices don't make sense, so use unsigned types for sizes and indices, preferably std::size_t, so things like porting from 32-bit to 64-bit are far less painful.
Most shader debugging I've done is "printf debugging": exit early from the shader and return some debug value that you can visualize to figure out what's going on. Then work your way through the shader until you figure out which calculation isn't returning what you expect, and go from there.
This works better if you can hot-reload the shader while the game is running.
He's entitled to those beliefs, but game development is "extreme performance computing", and it's not acceptable to sit back and "throw more hardware at it".
For consoles, there is no "more hardware" - you have what you're given, forever. Also, most consoles don't support JIT compilation.
For PCs, it means your competitors will automatically have all of the compute power you gave up by using a managed language, and will be able to run on lower-spec machines, at higher frame rates, or both. With garbage-collected languages, it takes a significant amount of effort to make sure the garbage collector doesn't kick in at "the wrong time" and freeze your game for unacceptable periods.
I don't agree with the argument that managed languages solve many problems in game engine runtime development.