A bit of clarification on the terms used: 'checksum' is not a term that is ever used to refer to a security check. When an author uses the word 'checksum', he or she always refers to protection against accidental data corruption, never to protection against malicious or deliberate tampering.
When talking about protection against deliberately forged or tampered data, one uses the terms cryptographic hash function, message authentication code (MAC) and/or digital signature. If you are looking to build a system that is cryptographically strong, it will involve private-public key pairs and/or server-client communication.
Protections that don't use any of the above become reverse-engineering challenges for the attacker, instead of cryptographic challenges (not to say that reverse-engineering challenges would be any easier in practice, if done well).
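To illustrate the distinction, here is a toy sketch (my own example, not from any particular library) of an additive checksum: it catches many accidental corruptions, but offers zero protection against tampering, since anyone can recompute it for a modified message, and distinct messages trivially collide.

```cpp
#include <cstdint>
#include <string>

// Toy additive checksum: sums all bytes. Detects many accidental
// single-byte corruptions, but an attacker can trivially forge it,
// e.g. by reordering bytes (the sum stays the same) or by simply
// recomputing the checksum for the tampered message.
uint32_t additiveChecksum(const std::string &data)
{
    uint32_t sum = 0;
    for (unsigned char c : data)
        sum += c;
    return sum;
}
```

For example, "ab" and "ba" produce the same checksum, so the checksum cannot tell a deliberate reordering apart from the original message.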
Dropping your current code and trading it for something you copy-pasted off someone else might work, but what did you learn in the end?
I recommend you go back to your OBJ loader and take it as an exercise in how to use a debugger to hunt down the problem in your loader. What kind of attitude is it to first spend several days trying to write your own OBJ loader, then hit a problem, decide "oh, I guess this isn't gonna work", and drop it altogether to evade the real issue?
In any serious piece of code you write, especially in C++, you will have crash bugs again and again. Sounds like what you need is someone to tell you to get a hold of yourself, pick up that goddamn debugger, and start stepping through the code. Figure out where it crashes, what causes the crash, and why. Then fix it up. Because that is what you will always be doing as a programmer, and you have to learn to do it without dying inside.
If you are not willing to do that, and do not want to learn C++, or file I/O, or pointers, but still want to do games, then I recommend stepping back and trying a higher-level engine. For example, Unity3D is very popular nowadays.
This problem can be solved in time linear in the number of input points.
As iMalc pointed out, the problem is separable into two 1D scans through the input points: first solve for X, then for Y. This works because you don't allow diagonal movement. Taxicab distance makes things simpler!
To solve the problem in 1D:
- You are given an array of integers N_1, N_2, ..., N_k, and you need to find an integer K such that
a) max |N_i - K| is minimized, or
b) sum |N_i - K| is minimized.
I am unsure which one you were after, so here are both:
In case A, take the average of the minimum and maximum values and round to an integer.
In case B, first sort the points into ascending order (doable in linear time with a radix or bucket sort), then perform a linear pass through the array. At each step, you can compute the "penalty" caused by leaving the past points behind and the gain received by moving the center point forward. At some point these two balance, and you know you have the optimal center point. (In fact, I'm thinking it could perhaps be converted into a log-time binary search, but I'd have to implement it to see if that's the case.)
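As a minimal sketch of both cases (function names are mine): case A is the integer midpoint of the extremes, and the balancing pass of case B settles on the median of the points, which the standard library can pick out directly.

```cpp
#include <algorithm>
#include <vector>

// Case A: minimize max |N_i - K|. The optimum is the midpoint of the
// minimum and maximum values, rounded to an integer.
int minimizeMaxDistance(std::vector<int> pts)
{
    auto [mn, mx] = std::minmax_element(pts.begin(), pts.end());
    return (*mn + *mx) / 2;
}

// Case B: minimize sum |N_i - K|. The balancing pass described above
// converges on the median; nth_element finds it without a full sort.
int minimizeSumDistance(std::vector<int> pts)
{
    std::nth_element(pts.begin(), pts.begin() + pts.size() / 2, pts.end());
    return pts[pts.size() / 2];
}
```

For example, for the points {1, 2, 100}, case A gives (1+100)/2 = 50, while case B gives the median 2.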
In our system, the entities do own the lifetime of the components they aggregate. A factory creates different types of components based on a type id, and the ownership of the components is given to the entities. When the entity dies, it cleans up after its components.
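A minimal sketch of that ownership scheme (all names here are hypothetical, not our actual codebase): the factory hands out components by type id, and the entity holds them in unique_ptrs, so they die together with the entity.

```cpp
#include <memory>
#include <vector>

// Hypothetical component hierarchy, for illustration only.
struct Component
{
    virtual ~Component() {}
};

struct Transform : Component { float x = 0, y = 0; };
struct Sprite : Component { int textureId = -1; };

// Factory creates components based on a type id; ownership is
// transferred to the caller via unique_ptr.
std::unique_ptr<Component> createComponent(int typeId)
{
    switch (typeId)
    {
    case 0: return std::make_unique<Transform>();
    case 1: return std::make_unique<Sprite>();
    default: return nullptr;
    }
}

struct Entity
{
    // The entity owns its components: when the entity dies, the
    // unique_ptrs automatically clean up the components.
    std::vector<std::unique_ptr<Component>> components;

    void addComponent(int typeId)
    {
        components.push_back(createComponent(typeId));
    }
};
```

Using unique_ptr makes the "entity owns the lifetime" rule explicit in the type system, instead of relying on manual deletes in the entity destructor.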
In Real-Time Rendering, page 307, there is a sentence: "The primary limitation of environment maps is that they only represent distant objects."
I think what this means is that even if you dynamically re-rendered the environment map each frame (ignoring any performance hit), you are still left with an approximation, since the reflection color returned by the cube map lookup does not take into account the position P on the surface of the model where the reflection of the light ray occurs. Instead, it is computed as if the reflection occurred at the pivot/origin O of the model (the center of the frustum that was used to generate the environment cube map).
If we estimate that the distance D from the reflected object to the point P is much larger than |P - O| ("tending to infinity"), then the error caused by not taking |P - O| into account tends to zero.
A similar idea comes up when explaining why it's OK to treat sunlight as a positionless directional light instead of a point light source: since the sun is so far away, the difference in light direction across the scene is minimal, so a directional light is a good approximation.
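You can put a number on that sunlight analogy (a back-of-the-envelope sketch, using a rough Earth-to-Sun distance of 1.5e11 m): the angular difference between the directions to the sun from two points a kilometer apart is on the order of nanoradians.

```cpp
#include <cmath>

// Angle (in radians) between the directions to a light source as seen
// from two points 'separation' meters apart, with the source at
// 'sourceDistance' meters. For separation << sourceDistance this is
// approximately separation / sourceDistance.
double directionError(double separation, double sourceDistance)
{
    return std::atan2(separation, sourceDistance);
}
```

For two scene points 1000 m apart and the sun at roughly 1.5e11 m, the error is about 6.7e-9 radians, which is why the positionless directional-light approximation holds.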
The simplest solution is really to utilize all that existing math and those constructs, meaning that you should go either with a standard perspective projection frustum or an orthographic projection frustum, depending on which projection you prefer for your simulation. Trying to come up with your own "non-math" way just to avoid getting your head around the standard formulas will most likely end up with you wasting more time looking for a shortcut, only to come back and do the real thing eventually.
Your math is wrong in so many ways. To rotate a point around an axis, you cannot just componentwise multiply it by another vector; that does not produce a point rotated by anything.
There does not exist such a mathematical term as "a rotation vector from a set of rotation around 3 axes (x,y,z) with z".
To represent a rotation operation in 3D space, you want to be looking at:
- The axis-angle representation of rotation: a normalized vector (x,y,z) representing the axis of rotation, and a scalar angle r that represents the amount to rotate around that axis.
- The Euler angle representation of rotation: triplets of the form (a,b,c).
- 3x3 rotation matrices (for rotating around an axis passing through the origin).
- 3x4 rotation matrices with a translation (for rotating around an axis passing through an arbitrary point).
- Unit quaternions: 4D extensions of the complex numbers, of the form r + xi + yj + zk. These rotate around an axis through the origin.
The first two representations are only representations: there are no simple formulas to rotate points directly with those data structures. The latter three are effective computational representations: you can use 3x3 matrices, 3x4 matrices and unit quaternions to rotate points in 3D space. The first two are convenient for humans; the latter three are efficient for arithmetic.
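That said, you can rotate a point directly from the axis-angle representation via Rodrigues' rotation formula. A minimal sketch (my own, much simpler than a full math library; the axis must be normalized):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 cross(const Vec3 &a, const Vec3 &b)
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

double dot(const Vec3 &a, const Vec3 &b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Rotate point p around the (normalized!) axis by 'angle' radians, using
// Rodrigues' rotation formula:
//   p' = p*cos(a) + (axis x p)*sin(a) + axis*(axis . p)*(1 - cos(a))
Vec3 rotateAxisAngle(const Vec3 &p, const Vec3 &axis, double angle)
{
    double c = std::cos(angle), s = std::sin(angle);
    Vec3 cr = cross(axis, p);
    double d = dot(axis, p) * (1.0 - c);
    return { p.x*c + cr.x*s + axis.x*d,
             p.y*c + cr.y*s + axis.y*d,
             p.z*c + cr.z*s + axis.z*d };
}
```

For example, rotating (1,0,0) by 90 degrees around the z axis gives approximately (0,1,0).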
For example C++ code snippets on how to convert between these different representations, you can try browsing through the MathGeoLib codebase (in my sig).
- Always overestimate the bounding box size, so that even if the sprite increases its size, it will still be fully inside, or only partially outside, the physical bounding box, so that it does not look distracting even if sprites slightly overlap.
- Disallow altogether the game mechanic or visual effect that causes sprites to resize.
- Link the game mechanic/visual effect that resizes the sprite to the physics subsystem and disallow the sprite from changing size if it would result in an overlap, i.e. only allow a sprite resize if there is free room for it.
- Link the sprite resizing to the physics subsystem and perform a physical reaction when the sprite increases its size and collides, e.g. push its position away from the obstacle so that it no longer penetrates, or give it a velocity away from the obstacle to resolve the penetration (or do both, i.e. adjust the position *and* the velocity at the same time).
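The last option can be sketched in 2D like this (a minimal sketch with hypothetical names; a real physics subsystem would supply the contact normal and penetration depth from its collision query):

```cpp
// When a sprite resize causes a penetration of depth 'penetrationDepth'
// along the contact normal (nx, ny) (pointing away from the obstacle),
// push the sprite out and cancel any velocity into the obstacle.
struct Body2D
{
    float x, y;   // position
    float vx, vy; // velocity
};

void resolvePenetration(Body2D &body, float nx, float ny, float penetrationDepth)
{
    // Adjust position: move out along the normal so the shapes no longer overlap.
    body.x += nx * penetrationDepth;
    body.y += ny * penetrationDepth;

    // Adjust velocity: remove the component moving into the obstacle.
    float vn = body.vx * nx + body.vy * ny;
    if (vn < 0.f)
    {
        body.vx -= vn * nx;
        body.vy -= vn * ny;
    }
}
```

Doing both the position and velocity adjustment at once avoids the sprite sinking back into the obstacle on the next frame.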
Now the output from the matrix created with an angle-axis rotation is:

  36.36  |   275.61 |  -239.8  | 0.00
 153.04  |   128.61 | -1722.2  | 0.00
-331.73  | -1706.9  |  2286.80 | 0.00
   0.00  |     0.00 |     0.00 | 1.00
So am I doing something wrong? Or is this just a scaling issue?
This is very wrong. Creating a rotation matrix from an axis-angle rotation should produce an orthonormal matrix, i.e. a matrix whose column (and row) vectors all have unit length (and additionally, the column and row vectors are mutually orthogonal). The magnitudes of the columns and rows of your matrix are much greater than one.
Perhaps you passed an unnormalized direction vector to the code that creates a rotation matrix from an angle and an axis direction? For reference, here is the code from my library (though currently not used; atm I route axis-angles to quaternions to matrices).
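As a quick sanity check you can run yourself (this is a hedged sketch of mine, not the library code mentioned above): verify that every column of the upper-left 3x3 block has unit length. Your posted matrix fails this immediately.

```cpp
#include <cmath>

// Length of a 3D column vector.
double columnLength(double x, double y, double z)
{
    return std::sqrt(x*x + y*y + z*z);
}

// Returns true if all three columns of the 3x3 block have approximately
// unit length: a necessary condition for a rotation matrix.
bool hasUnitColumns(const double m[3][3], double epsilon = 1e-3)
{
    for (int c = 0; c < 3; ++c)
        if (std::fabs(columnLength(m[0][c], m[1][c], m[2][c]) - 1.0) > epsilon)
            return false;
    return true;
}
```

Plugging in the values you posted gives column lengths in the hundreds to thousands, so the check fails; the identity matrix (or any proper rotation matrix) passes.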
The thing is, I'd like to be able to draw the 'backfacing' line segments in a different/muted color. I know the obvious solution would be to sort the list and build up two index buffers - one for one color, and another index buffer for the second color. However, that requires re-building the index lists every frame. Is there a better way?
If you converted a triangle list of an object to a line list, to render a wireframe display of the same object, you can simply use the vertex normals from the triangle list for the lines as well, and use a (vertex normal) dot (camera direction) >= 0 test to determine which color to pass from the vertex shader to the pixel shader for drawing.
If that is not the case, and your line lists come from somewhere else (so you can't build a notion of front-facing vs. back-facing with normals like that), you can give each vertex in the line list a vertex color specifying what color to draw it in. This is probably better than maintaining two sets of index buffers, though if the colors change dynamically, in both cases (whether using an index buffer or a vertex color) you will have to discard-lock and re-update the full vertex/index buffer.
For best performance, the desired method is to provide the vertex shader with the appropriate data to make the color choice, so that you do not need to lock vertex or index buffers at render time each frame (only small constant buffers). Line lists are not special compared to triangle lists, so you can feed in as per-vertex attributes whatever parameters you find relevant for the computation.
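The color selection itself is just the dot-product test from the first option; here is a CPU-side sketch of the logic that would live in the vertex shader (names are mine, for illustration):

```cpp
// CPU-side sketch of the per-vertex color choice that would run in the
// vertex shader: vertices whose normal faces the camera get the front
// color, the rest get the muted back-facing color.
struct Float3 { float x, y, z; };

float dot3(const Float3 &a, const Float3 &b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// 'viewDir' is the direction from the vertex toward the camera.
Float3 pickLineColor(const Float3 &vertexNormal, const Float3 &viewDir,
                     const Float3 &frontColor, const Float3 &mutedColor)
{
    return dot3(vertexNormal, viewDir) >= 0.f ? frontColor : mutedColor;
}
```

Because the test only needs the per-vertex normal and a per-frame camera direction (a constant), no vertex or index buffer ever has to be locked for the color change.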
One of the more common topics on the math forums here at GD is 3D mathematics, geometry manipulation or primitive testing. A lot of people have written their own libraries for vector/matrix/quaternion math and geometric primitive intersection testing, but there does not seem to exist a single library that is specifically tuned for these purposes.
The result is now called MathGeoLib, which is a C++ library for games-oriented 3D mathematics and geometry manipulation:
I am not overly interested in "preaching" to people to start using this library, since I am not that big a fan of open source development in general, and I have little time to do free maintenance work or tech support on the library ("you are on your own"). Instead, I hope that the documentation and online source code will serve as a useful repository for anyone looking for code snippets on how certain intersection tests or mathematical constructs are often implemented.
Any thoughts? Spot obvious bugs? Criticism? Am I good at drawing diagrams or what?
GPGPU techniques are most often used to perform generic (non-rendering) computation, not for rendering.
Though, some research has been done on implementing rendering using GPGPU methods. For a very recent example, see the paper by Laine and Karras: High-Performance Software Rasterization on GPUs (in High-Performance Graphics 2011). Their measurements show that the traditional rendering APIs beat a CUDA-implemented renderer by a factor of 2-8x.
Other sources exist as well. Perhaps a more interesting one is this: Alternative Rendering Pipelines in CUDA. The motivation for implementing a renderer in CUDA might be to solve the order-independent transparency problem (see e.g. this), or to enable real-time raytracing.
I have no experience with GPGPU, but the argument put forward was that GPGPU will always be faster because the API layer is thinner while accessing the exact same hardware, and also because of shared memory.
In the light of what I linked to above, I don't think this statement is true at all.
If someone knows about published papers similar to Laine and Karras, please share a link. This is an interesting topic to me as well.