How to work with vertices correctly

Started by
4 comments, last by Vincent_M 12 years, 7 months ago
Hi all,


I'm new here (new to posting, not to reading :P), and I come to you with a doubt that's driving me nuts. I'm currently writing some Java/Android classes to draw entities. At this point all the methods are ready, and I've started on the "transformation" methods. And that's where my doubt comes in.

I pass the float[] array with the vertex coordinates to the vertex shader, along with the product of the camera matrix and the projection matrix. For example, in shader language:

gl_Position = uMVPMatrix * vPosition;

Imagine now that I want to translate that entity. If I multiply a translation matrix with the camera and projection matrices and pass the result to the shader, the entity is drawn in the new position, but my local vertex array is never updated with that position. Over multiple translations the entity is shown in its new positions, but I never have its real position, because I only pass a transformation matrix to the shader while the vertex array still holds the initial position.

I tried modifying the vertex array on every translation (instead of multiplying the translation matrix into camera/projection), and it works well, and I always have the updated position, but thinking about performance/CPU cost I don't think it's the best way.

Now imagine this with rotation, scale, and every other transformation you can think of. Do I have to update the vertex array on every transformation (so I know the real position/size/etc. of the entity at every moment), do I have to multiply the transformation matrices together and pass them to the shader, or should I just store the new (x, y) position, rotation angle, and scale factor, and only apply them to the vertex array when I need to know the real state of the entity?

Maybe it's an obvious question, but I'm new to OpenGL :)

Thanks in advance!

Regards




It depends on the reason why you feel that you need to know the vertex positions back on the CPU side. Generally you'll find that you don't need them at all - just send your matrices, set your state, issue your draw calls, and out comes the rendered object.
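A minimal sketch of that idea (the class and method names here are made up for illustration, not from any library): keep the entity's position/angle/scale as plain fields - that state *is* the entity's "real" position - rebuild the model matrix from it each frame for the shader, and only transform a vertex on the CPU when you actually need its world-space location.

```java
// Hypothetical sketch: keep transform state on the CPU and derive
// world-space vertex positions only on demand. The float[] vertex
// array you upload to the GPU never changes.
public class Entity2D {
    // Authoritative state -- this is the entity's real position.
    private float x, y;          // translation
    private float angle;         // rotation in radians
    private float scale = 1.0f;  // uniform scale

    public void translate(float dx, float dy) { x += dx; y += dy; }
    public void rotate(float da)              { angle += da; }
    public void setScale(float s)             { scale = s; }

    // Transform one local-space vertex to world space
    // (the same math the vertex shader does with the model matrix).
    public float[] toWorld(float lx, float ly) {
        float c = (float) Math.cos(angle), s = (float) Math.sin(angle);
        return new float[] {
            scale * (c * lx - s * ly) + x,
            scale * (s * lx + c * ly) + y
        };
    }
}
```

On Android you would build the actual uMVPMatrix from the same x/y/angle/scale fields each frame; the point is that the state lives in a handful of floats, not in a rewritten vertex array.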


Yeah, what is the reason you would want to do that? From the way you needed to explain it - yes, we all know how it works - it sounds like you are very new. There is no reason you would need those vertex positions.


All of this is for Android development. What about collisions between objects? Or if I touch the screen to interact? Maybe I just need to save them (the vertex modifications) in some particular situations?

I think the problem here is that you think your 3D rendering is a 3D rendering.
It's a 2D bitmap being drawn onto a screen =)

It doesn't have any positions or values, so no, you can't really "touch it" or "find it" per se.
However, since you (should) already know where your objects are, some vector math should help you figure out what you did touch.
Personally I'd just use bounding boxes, or something =)

Maybe like this:
1. Is the object in front of the screen?

2. Is the object in the frustum?

3. Check if the user touched the screen where the object's bounding box is.
It's probably complicated; since I don't have anything with a touchscreen, I couldn't tell you how it works.

But I'm sure there are people here who could tell you,
and I'm also pretty sure there are numerous tutorials on this aspect, since it should be a fairly common question when it comes to touchscreen devices.
Well, good luck, and I hope I understood you correctly =)
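To make step 3 above concrete, here is a minimal sketch (the class name is made up, and it assumes you have already projected the object's bounds into screen-space pixels): once you have a screen-space rectangle for the object, the touch test is just a point-in-rectangle check.

```java
// Hypothetical sketch: a 2D screen-space bounding-box test for touch input.
// Assumes the object's bounds have already been projected to screen pixels.
public class ScreenBox {
    final float minX, minY, maxX, maxY; // rectangle in screen pixels

    public ScreenBox(float minX, float minY, float maxX, float maxY) {
        this.minX = minX; this.minY = minY;
        this.maxX = maxX; this.maxY = maxY;
    }

    // True if the touch point falls inside the rectangle.
    public boolean contains(float touchX, float touchY) {
        return touchX >= minX && touchX <= maxX
            && touchY >= minY && touchY <= maxY;
    }
}
```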
Multiplying your local vertex positions by your transformation matrix on the GPU is wise. Software (CPU-side) vertex calculations require you to hold two sets of vertices in main memory - the source vertices and the final, transformed vertices - AND they are slower.
Now as to collision: if they're objects like characters, items, etc., encase them in a bounding volume like a sphere or an axis-aligned bounding box (AABB). Then all you have to do is run your collision tests against the bounding volumes instead of doing vertex-for-vertex checks. When it comes to world geometry like houses and such, you most likely want to bake all of your transformations into the geometry (which means it can't move during gameplay), or maybe use a physics engine that can handle collision detection against polygonal geometry.
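The AABB test itself is tiny - a sketch (class name made up for illustration): two axis-aligned boxes intersect exactly when their ranges overlap on every axis, so one early-out comparison per axis is all it takes.

```java
// Hypothetical sketch: axis-aligned bounding box (AABB) overlap test in 3D.
// Two AABBs intersect iff their intervals overlap on all three axes.
public class Aabb {
    final float[] min = new float[3];
    final float[] max = new float[3];

    public Aabb(float[] min, float[] max) {
        System.arraycopy(min, 0, this.min, 0, 3);
        System.arraycopy(max, 0, this.max, 0, 3);
    }

    public boolean intersects(Aabb o) {
        for (int i = 0; i < 3; i++) {
            // Separated on this axis -> no collision possible.
            if (max[i] < o.min[i] || min[i] > o.max[i]) return false;
        }
        return true;
    }
}
```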

Also, for quicker collision detection, split your maps into octrees, and make an invisible, simplified version of your maps to test collisions against. That way, there's so much less you have to check!


That's your best bet, I would think.


Good luck, man!

This topic is closed to new replies.
