# LtJax

1. ## Compute nearest-power-of-N

Shouldn't you be able to find this by looking at the highest set bit in the number? First find that bit to get the exponent, then inspect the next bit down to decide whether to floor or to ceil: increment the exponent if that bit is set. 1 << exponent should then be the closest power of two, shouldn't it? Edit: Oops, I just re-read that you need more than powers of two - anyway, the same algorithm should work with any base. First find the floored exponent, then check whether the remainder is more than half of base^exponent.
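A minimal sketch of that idea (the function name is made up, and it uses a simple loop instead of bit scanning so it works for any base):

```cpp
#include <cassert>

// Hypothetical helper: round x (> 0) to the nearest power of `base`.
// Find the largest power <= x, then pick whichever of that power and
// the next one up is closer.
unsigned nearest_power(unsigned x, unsigned base) {
    unsigned p = 1;
    while (p * base <= x)
        p *= base;                 // p = largest power of base <= x
    unsigned next = p * base;
    return (x - p < next - x) ? p : next;
}
```

For base 2 specifically, the loop can be replaced by finding the highest set bit directly, as described above.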
2. ## C++ Lockless Queue for a Threadpool

There's an excellent article on this in the book "Game Programming Gems 6". It's the very first section: "Lock-free algorithms".
3. ## Gamedev C++ utility library open-sourced.

Hello folks, I recently open-sourced the C++ game-development utility library that I've created and used in a couple of projects over the past 5 years. Without further ado, it's hosted at: http://code.google.com/p/replay

It features basic 3D math code, such as vectors, matrices and quaternions, but also implements some more advanced algorithms, such as (a robust version of) Welzl's minimal bounding sphere or minimal bounding boxes. It also has a simple class for loading, saving and manipulating images, and some more general container classes. It's all documented with Doxygen!

Give it a try! It's proven to be very useful to me, and I'm really looking forward to more peer review. Cheers, LTJ
4. ## resizing vertex buffer on the fly

Exactly, draw from a larger buffer and use only the parts you need. If you "overflow" the buffer, double its size and copy the old contents over. This is the same strategy std::vector uses. The downside is that copying takes longer with every overflow, which will make you lose a frame or two every now and then. Still, the approach has great theoretical performance, as insertion runs in amortized constant time. Alternatively, you can add more reasonably sized buffers and render them with separate draw calls. Either way, you should start with a reasonably big buffer, e.g. 1 to 4 MB.
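The doubling policy can be sketched as a capacity function (the name and the 1 MB starting size are illustrative):

```cpp
#include <cstddef>

// Returns the buffer capacity after growing to hold `needed` bytes:
// start at a reasonably big size and double on every overflow,
// like std::vector's typical growth strategy.
std::size_t grow_capacity(std::size_t current, std::size_t needed) {
    std::size_t cap = current ? current : (std::size_t(1) << 20); // 1 MB start
    while (cap < needed)
        cap *= 2; // double; copies stay amortized constant per element
    return cap;
}
```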
5. ## Real world lengths vs lengths drawn in OpenGL - the conversion factor?

The most important thing you need to set up correctly to make your scales match is the near and far planes. Actually, it's mostly the near plane, but with your setup, the far plane is way too far out to give you good depth precision. Think about it: right now it'd show you everything from 10 cm to 100 km. No one can see that far unless they're very, very high up! I personally like something around 3 cm for the near plane and about 3 km for the far plane. The FOV mostly depends on the angle from your "eyepoint" to the edges of your screen, but most games keep it between 70 and 90 degrees. (Think of the screen as a window into your 3D world.) By the way, there's a bug in your aspect calculation right now: it'll give you 1.0 instead of 4/3. You need to cast one of the operands to float _before_ doing the division.
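The aspect-ratio bug looks like this in code (a minimal sketch with a 800x600 viewport):

```cpp
// Integer division happens before the cast, so 800 / 600 is already 1:
float wrong = static_cast<float>(800 / 600);   // 1.0f
// Cast one operand first and the division happens in float:
float right = static_cast<float>(800) / 600;   // ~1.333f
```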
6. ## newb C++ tuple(TR1) questions

> Original post by Jacob Jingle
> Is there a way to append to a tuple? Go from two elements to three. Is there a way to remove part of it? Go from three elements to two.

Yes. Check out boost::fusion; it supports push/pop on heterogeneous tuple types.
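For standard tuples specifically (rather than fusion sequences), the same append operation later landed in the standard library as std::tuple_cat - a C++14 sketch, not TR1:

```cpp
#include <tuple>

// Append a char to a two-element tuple, yielding a three-element one.
// Like fusion's push_back, this builds a new tuple rather than
// modifying the old one; a tuple's size is fixed at compile time.
auto append_char(std::tuple<int, double> t, char c) {
    return std::tuple_cat(t, std::make_tuple(c));
}
```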

No, OpenGL cannot read from 'raw' framebuffers, only from textures. Another option is GL_EXT_framebuffer_object, which lets you render directly into a texture and doesn't require the context switch that pbuffer and render_texture need. Either way, if you think copying a single image is a problem on your laptop, the radial blur shader itself is probably far worse (that's one heavy shader - roughly equal to 10 copies plus shader math!).
8. ## Syncronising time and actions.

Hello, I'm pretty new to network programming. I started writing some network code based on what I read on Glenn Fiedler's website, but I feel that my lack of experience is finally catching up with me.

I have an authority-managed system right now. It's a strictly 2-player coop game, so cheating is not an issue. A player just sends serialized rigid-body data for the entities over which it has authority to the other player. That data is then smoothly blended with the receiver's data.

Now I'm trying to add "actions", like melee weapon attacks etc. In a single-player environment I'd just store a start time and calculate the results of the attack from the difference between the current time and the start time. (Is this a good idea at all?) But in this multiplayer environment I'm not sure how that will work, since there's no global concept of time: a client will just use the latest data it gets, no matter when it was sent. I probably also have to synchronize the global time at some point, but how do I do that with lag, and is it really needed? How do I keep it synchronized? Is transmitting the start time of an action a good idea at all? Thanks in advance!
9. ## Optimising Heightmap

While you are right that you could optimize your vertex data, it is probably easier and faster to use a view-dependent algorithm to render it. Break your heightmap into blocks so that the indices fit into 16 bits, and render a block with a "coarser" index buffer (drawing only every 2nd vertex) when it is further away. The only problem is that you can get artifacts when switching to lower LODs. But you can get around that, e.g. by having 15 different coarser index buffers that have additional triangles at the borders. This is a well-known technique with standard heightmaps, but it should work just as well with "curved" heightmaps.
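A sketch of picking the vertex step per block (the distance thresholds and the cap of 8 are made up):

```cpp
// Pick how many vertices to skip for a block at the given view distance:
// every vertex up close, every 2nd, 4th, ... for farther blocks, matching
// the "coarser index buffer" idea above.
unsigned lod_step(float distance) {
    unsigned step = 1;
    for (float threshold = 100.0f; distance > threshold && step < 8;
         threshold *= 2.0f)
        step *= 2;
    return step;
}
```

Each step value would select one of the precomputed index buffers, with the border-stitching variants chosen from the neighbors' steps.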
10. ## typedef a templated class?

You could use inner classes:

```cpp
template <class T> class MyTemplate;

template <class T>
struct TypedefNs {
    typedef MyTemplate<T> TypedefedTemplate;
};
```

That probably won't help you if you just want to make your code prettier. AFAIR, templated typedefs are going to be in the next C++ standard, though.
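For reference, the "templated typedef" feature mentioned did land in C++11 as alias templates - a sketch:

```cpp
#include <type_traits>

template <class T>
class MyTemplate {};

// C++11 alias template: a real templated typedef, no inner-class
// workaround needed.
template <class T>
using TypedefedTemplate = MyTemplate<T>;

static_assert(std::is_same<TypedefedTemplate<int>, MyTemplate<int>>::value,
              "the alias names exactly the same type");
```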
11. ## Splitscreen rendering optimization

Optimize your vertex processing! As tomva pointed out, you aren't processing more pixels, but you are using twice the number of vertices. So use a good vertex cache optimizer and/or simplify your models. However, this only matters if you're vertex-limited rather than fill-rate limited. If you're doing animation, make sure you only apply it once per frame (i.e. if you are using hardware acceleration for that, you might want to render into a vertex buffer)! Per-frame CPU operations can usually be done in parallel easily, because there's no data dependency. This applies to things like visibility checks (view frustum, BSP, portals, octree, whatever you've got) and scene preprocessing (realigning billboards, particles, GUI).
12. ## How to draw a polygon in OpenGL ES ?

You need to specify clearly what type of polygon you've got. The one in your picture is concave, so OpenGL cannot draw it directly - you need to triangulate it first. There are a few algorithms for this; see the Wikipedia page on polygon triangulation.

You are completely missing the translation component of the matrix. That matrix is not a 3D linear transformation but an affine one, so you need to multiply the matrix with (x y z 1) - you are currently doing (x y z 0). In other words, your glVertex calls should look more like this:

```cpp
glVertex3f(D[0][0] + D[0][3], D[1][0] + D[1][3], D[2][0] + D[2][3]);
```
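In general form, transforming an arbitrary point by that matrix looks like this (a sketch assuming the translation sits in column 3, as in the matrix above; the function name is made up):

```cpp
// Transform point (x, y, z, 1) by the upper 3x4 of matrix D. The fourth
// column is the translation, picked up by the implicit w = 1.
void transform_point(const float D[4][4], float x, float y, float z,
                     float out[3]) {
    for (int i = 0; i < 3; ++i)
        out[i] = D[i][0] * x + D[i][1] * y + D[i][2] * z + D[i][3];
}
```

Dropping the `D[i][3]` term reproduces the original bug: the (x y z 0) product, which ignores translation entirely.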
14. ## Shadow Mapping Slight Problem

I think Einar89 is correct: these just look like the usual self-shadowing artifacts ("shadow acne") - unless I'm missing what you're getting at. Unfortunately, there's no perfect solution to this. You might be able to get them down to an acceptable level by tweaking a depth offset (with glPolygonOffset) or by increasing the shadow-map resolution. Some people recommend turning off self-shadowing entirely, but you'll want the programmable pipeline for that!
15. ## Normal buffer troubles again... Strange... Any advice?

OpenGL does not support separate indices for normals. If you want to do that, you have to draw in immediate mode. You will need 8 normals, yes - but this is mostly a technical requirement when working with buffers: the array sizes have to match! When you are using flat shading, only the first normal in each quad ever gets used. You can basically "abuse" this to get face normals. You will never actually use 2 of those 8 normals, though. So what you need to do is add 2 dummy normals and rearrange the indices so that the first index in each quad points to the vertex with the normal you want to use for the whole face.
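The index rearrangement can be sketched like this (function and parameter names are made up):

```cpp
#include <algorithm>
#include <vector>

// For each quad, rotate its four indices so that `provoking[q]` - the
// vertex carrying the face normal - comes first, per the convention
// described above. The winding order is preserved by the rotation.
void rotate_quads(std::vector<unsigned>& indices,
                  const std::vector<unsigned>& provoking) {
    for (std::size_t q = 0; q < provoking.size(); ++q) {
        unsigned* quad = &indices[q * 4];
        unsigned* pos = std::find(quad, quad + 4, provoking[q]);
        if (pos != quad + 4)
            std::rotate(quad, pos, quad + 4);
    }
}
```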