

Member Since 19 Aug 2012

#5260508 Handling an open world

Posted by N1ghtDr34m3r on 04 November 2015 - 09:44 AM


Sorry for the late answer, university eats up all of my spare time. :D

Here's an example of what I mean:

I want to use a quadtree for the map, so I can traverse it easily and compute collisions and other queries.
That would be no problem if I had an idea of how to position an object like a building on the map/on that grid -> quadtree.
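To make the idea concrete, here is a rough, untested sketch of the placement rule I have in mind (all names and the max depth are invented): each object gets an axis-aligned bounding box on the ground plane and is stored in the smallest quadtree node that fully contains it; objects straddling a split line stay at the inner node.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical axis-aligned bounding box in the XZ ground plane.
struct AABB {
    float minX, minZ, maxX, maxZ;
    bool contains(const AABB& o) const {
        return o.minX >= minX && o.maxX <= maxX &&
               o.minZ >= minZ && o.maxZ <= maxZ;
    }
};

struct QuadNode {
    AABB bounds;
    std::vector<int> objects;            // ids of objects stored at this node
    std::unique_ptr<QuadNode> child[4];  // created lazily on insert

    explicit QuadNode(AABB b) : bounds(b) {}

    // Insert an object into the smallest child that fully contains it;
    // objects straddling a split line stay at the current node.
    void insert(int id, const AABB& box, int depth = 0) {
        if (depth < 8) {
            float midX = 0.5f * (bounds.minX + bounds.maxX);
            float midZ = 0.5f * (bounds.minZ + bounds.maxZ);
            AABB quads[4] = {
                {bounds.minX, bounds.minZ, midX, midZ},
                {midX, bounds.minZ, bounds.maxX, midZ},
                {bounds.minX, midZ, midX, bounds.maxZ},
                {midX, midZ, bounds.maxX, bounds.maxZ}};
            for (int i = 0; i < 4; ++i) {
                if (quads[i].contains(box)) {
                    if (!child[i]) child[i] = std::make_unique<QuadNode>(quads[i]);
                    child[i]->insert(id, box, depth + 1);
                    return;
                }
            }
        }
        objects.push_back(id);
    }
};
```

A query (collision, culling, picking) would then walk only the nodes whose bounds overlap the query region.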

#5255201 Handling an open world

Posted by N1ghtDr34m3r on 02 October 2015 - 12:31 PM

Yeah, I know that.

Sorry if my message wasn't as clear as I thought.

What I really wanted to say is that I'm interested in learning the "background programming": how an engine handles it, how far it can be pushed, where the pros/cons are, where the performance-critical sections are, how to achieve stable performance, etc.

I've already played around with Unity and Unreal, and I just don't want to click a game together. (I know you can code your own stuff; since I prefer C++ style over C#, I would stick to UE instead of Unity.)

For me, it feels like a WYSIWYG website editor, just for games. But I want to code it. I want to build something that I'm proud of, not something where other devs can be proud that their engine could handle it.



Back to topic:


Is it useful to use a "scene format" (XML or something like that) instead of hardcoding positions into the mesh vertices?

My thought was that when the mesh's vertex positions are hard-coded (offset in Blender), it could be bad due to floating-point errors.

Is that something to worry about? Or should I build something like a world editor, where I can move objects around and save them to a file as a "world description"?
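For illustration, here is a minimal sketch of what such a "world description" could look like, using a hypothetical one-line-per-object text format instead of XML (the format, field names and loader are all invented): the mesh stays centered at its own origin in Blender, and the world position lives only in this file.

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical world description, one object per line:
//   <mesh name> <x> <y> <z> <uniform scale>
struct Placement {
    std::string mesh;
    float x, y, z, scale;
};

std::vector<Placement> loadWorld(std::istream& in) {
    std::vector<Placement> result;
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty() || line[0] == '#') continue;  // skip comments
        std::istringstream ls(line);
        Placement p;
        if (ls >> p.mesh >> p.x >> p.y >> p.z >> p.scale)
            result.push_back(p);
    }
    return result;
}
```

Each placement would be turned into a model matrix at load time, so moving a building means editing a number in a text file (or having a future world editor write the same format), not re-exporting the mesh.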

#5255161 Handling an open world

Posted by N1ghtDr34m3r on 02 October 2015 - 10:05 AM

Thanks for all this interesting stuff. Yeah, spatial partitioning is something I learned about a while ago. I had just forgotten about it, but yes, it will definitely be something I implement.

Moreover, I will definitely look up software occlusion culling and predicated rendering. Thanks for that.




I would recommend using a premade engine for something like an open-world game.


Something like Unreal Engine 4 would do the job well, as it already has industry-proven LOD, frustum culling, level streaming and open-world systems built in (see their Kite demo).


You could spend years writing all this yourself; if you just want to get on with writing the game and know that the needed functionality works, going the premade route is a good idea.


If you do want to do it all yourself, good luck, because creating it and making sure it scales well could take you a long time...


Good luck! 


Mhmm... yeah, that's true. I could definitely use a premade engine. But what do I, as a programmer, learn when I let the Unreal Engine, which other programmers wrote, do all the work? And where did those programmers get their knowledge from? Another engine, maybe? Maybe from Unity3D?

#5255001 Handling an open world

Posted by N1ghtDr34m3r on 01 October 2015 - 10:15 AM

Thanks to all! That made it a bit clearer, especially the frustum culling.


I would appreciate it if I could do this. But for me, one detail is missing...

If I have a bunch of meshes, each bound to a cube, and a player with a "camera", I only have a bunch of points (the cubes) and a 4x4 matrix (the camera). How am I able to compute the frustum culling on the CPU side?
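A sketch of one standard answer (not tested against any particular engine): extract the six frustum planes directly from the combined projection * view matrix, the Gribb/Hartmann method, and test each object's bounding sphere against them. The code below assumes a row-major 4x4 array with clip = m * world; with a column-major library like GLM you'd index the transpose.

```cpp
#include <cassert>
#include <cmath>

// a*x + b*y + c*z + d >= 0 means the point is on the visible side.
struct Plane { float a, b, c, d; };

struct Frustum {
    Plane planes[6];

    // m is the row-major combined matrix, clip = m * worldPos.
    explicit Frustum(const float m[4][4]) {
        for (int i = 0; i < 3; ++i) {
            // plane pairs: (left,right), (bottom,top), (near,far)
            planes[2 * i]     = {m[3][0] + m[i][0], m[3][1] + m[i][1],
                                 m[3][2] + m[i][2], m[3][3] + m[i][3]};
            planes[2 * i + 1] = {m[3][0] - m[i][0], m[3][1] - m[i][1],
                                 m[3][2] - m[i][2], m[3][3] - m[i][3]};
        }
        for (Plane& p : planes) {  // normalize so distances are metric
            float len = std::sqrt(p.a * p.a + p.b * p.b + p.c * p.c);
            p.a /= len; p.b /= len; p.c /= len; p.d /= len;
        }
    }

    // Conservative sphere test: visible unless fully behind some plane.
    bool sphereVisible(float x, float y, float z, float radius) const {
        for (const Plane& p : planes)
            if (p.a * x + p.b * y + p.c * z + p.d < -radius) return false;
        return true;
    }
};
```

So per object you only need a precomputed bounding sphere (center + radius); the cubes you already have give you that directly.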

The link you gave me is awesome! Thanks!


But what about the logical update aspect, instead of just the rendering?

If I only render the objects within the previously calculated frustum, should I maybe also only update those objects' logic?

Or is something like "update only objects within a specific radius, even outside of the frustum" a good approach?
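A sketch of the radius idea, with invented tier names and numbers: classify each object into an update tier by its distance to the player, independent of the view direction, so things keep simulating even when the player looks away.

```cpp
#include <cassert>
#include <cmath>

// Near objects get a full logic update every frame, mid-range objects a
// cheap update at a lower rate, and everything beyond that sleeps.
enum class UpdateTier { Full, Reduced, Sleeping };

UpdateTier classify(float objX, float objZ, float playerX, float playerZ,
                    float fullRadius = 50.0f, float reducedRadius = 150.0f) {
    float dx = objX - playerX, dz = objZ - playerZ;
    float dist = std::sqrt(dx * dx + dz * dz);
    if (dist <= fullRadius) return UpdateTier::Full;
    if (dist <= reducedRadius) return UpdateTier::Reduced;
    return UpdateTier::Sleeping;
}
```

Keeping the update radius independent of the frustum avoids pedestrians and cars visibly "freezing" the moment the camera turns away from them.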

#5254945 Handling an open world

Posted by N1ghtDr34m3r on 01 October 2015 - 02:42 AM

Hi all,

In the last two years I learned a lot about 3D: OpenGL, GLSL, rendering techniques like instanced rendering, framebuffers and post-processing, but also things like ECS vs. OOP, state management, different frameworks (GLFW, SDL, SFML, ...),
physics (Bullet Physics) and so on.
Previously I made little 2D games in Java and C++.

Now I'm feeling ready to dig deeper.
My current project is a 3D open-world game. Graphics aren't the priority, which is why it will simply consist of low-poly objects with low-res textures and a pixelating post-processing shader, in a sci-fi setting.
The first idea is a city, which feels a little bit like a living world.

Therefore my goals are cut down to these points:
- animated neon lights and general lighting (banners, advertisements, street lamps, ambient lighting, etc.)
- walking pedestrians
- flying cars on multiple height levels
- moving around in first person

I think I can definitely achieve this in a few years with my current knowledge. But there is one simple point I can't wrap my head around:

- How are multiple (hundreds of) objects (buildings) handled? I know how to render anything, but I don't think I need to render everything at the same time. How can I easily determine whether something should be rendered or not? How can I calculate whether it is visible from the player's current position and view direction,
without hurting performance a lot?

My ideas:
I know about texture streaming, but is something like that possible for whole game objects? Like "game object streaming"?

I've read about so-called scene graphs, but I don't really get how they can help me. When each object's position is stored relative to its parent's position, in what way would it be simpler/faster to determine whether it's in the viewport or not?
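As far as I understand it, the scene graph alone doesn't make the test faster; the win comes when each node also stores a bounding volume for its whole subtree, so one failed test skips the entire branch. A rough sketch (the distance check stands in for a real frustum test, and all numbers are invented):

```cpp
#include <cassert>
#include <cmath>
#include <memory>
#include <vector>

// Each node stores its position relative to its parent, plus a radius
// bounding this node AND all of its descendants.
struct Node {
    float localX = 0, localZ = 0;  // position relative to parent
    float subtreeRadius = 0;       // bounds the entire subtree
    std::vector<std::unique_ptr<Node>> children;
};

// Simplified visibility: "within viewDist of the camera" stands in for a
// frustum test; the hierarchical early-out works the same either way.
int countVisible(const Node& n, float parentX, float parentZ,
                 float camX, float camZ, float viewDist) {
    float worldX = parentX + n.localX;   // resolve the relative position
    float worldZ = parentZ + n.localZ;
    float dx = worldX - camX, dz = worldZ - camZ;
    if (std::sqrt(dx * dx + dz * dz) > viewDist + n.subtreeRadius)
        return 0;                        // whole subtree rejected at once
    int count = 1;
    for (const auto& c : n.children)
        count += countVisible(*c, worldX, worldZ, camX, camZ, viewDist);
    return count;
}
```

So a far-away city district (one node with many buildings below it) costs a single test instead of hundreds.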

I hope it is understandable what I mean. :)

Please don't answer if you just want to tell me something like "GTA V had over 1000 people working on it for over 3 years full time". I'd appreciate it.

#5209869 Issues with BulletPhysics & OpenGL

Posted by N1ghtDr34m3r on 10 February 2015 - 03:09 PM

Alright. After many, many hours of testing, I've managed to solve the problem on my own.


The GLDebugDrawer class I got from the examples (GLDebugDrawer.h, GLDebugDrawer.cpp) draws with old OpenGL (using glBegin()/glEnd() instead of shaders).



My shaders use a projection matrix, a view matrix and a model matrix, all three responsible for the final vertex position in the world:

gl_Position = projection_matrix * view_matrix * model_matrix * local_vertex_position; 

When the GLDebugDrawer class uses its glBegin()/glEnd() calls, there's something like a fallback situation.

The model_matrix, defined by the specific model in its own model space, combines with the model matrix internally used by the old glBegin()/glEnd() pipeline like:

gl_Position = projection_matrix * view_matrix * model_matrix * glBegin/glEnd-Modelmatrix * local_vertex_position

So the result will look like the image above:

  • the object is rendered with its model_matrix given by the Bullet engine
  • the shapes and collision shapes are rendered with the previously uploaded model_matrix * glBegin/glEnd-Modelmatrix (so the translation/rotation is effectively applied twice)
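A tiny numeric illustration of that doubling (not the actual GLDebugDrawer code): composing the same translation matrix twice is a matrix product, so a translation of (3, 0, 7) ends up at (6, 0, 14).

```cpp
#include <cassert>
#include <cstring>

// Column-major 4x4 matrix, as OpenGL and GLM store them.
using Mat4 = float[16];

void identity(Mat4 m) {
    std::memset(m, 0, sizeof(Mat4));
    m[0] = m[5] = m[10] = m[15] = 1.0f;
}

void translation(Mat4 m, float x, float y, float z) {
    identity(m);
    m[12] = x; m[13] = y; m[14] = z;  // last column holds the translation
}

void multiply(const Mat4 a, const Mat4 b, Mat4 out) {  // out = a * b
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            out[c * 4 + r] = 0.0f;
            for (int k = 0; k < 4; ++k)
                out[c * 4 + r] += a[k * 4 + r] * b[c * 4 + k];
        }
}
```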


(Note: glPushMatrix() and glPopMatrix() in GLDebugDrawer won't help to discard the previously uploaded matrix.

I think that's because these calls don't change the shader uniforms, only the old OpenGL matrix stack.)



The solution is to discard the model_matrix by uploading an identity matrix just before the glBegin()/glEnd() draw calls. Something like:

glm::mat4 model_matrix = glm::mat4();
GLuint uniform_model_matrix = glGetUniformLocation(m_opengl->getShader("3DBasic"), "model_matrix");
glUniformMatrix4fv(uniform_model_matrix, 1, GL_FALSE, glm::value_ptr(model_matrix));

This amounts to multiplying by the identity, so inside the shader it's effectively:

gl_Position = projection_matrix * view_matrix * identity * glBegin/glEnd-Modelmatrix * local_vertex_position

Now the legacy OpenGL 1.x/2.x draw calls use ONLY their own model matrices.


Hope it's useful for other people.



Have a nice day,





#5208249 Issues with BulletPhysics & OpenGL

Posted by N1ghtDr34m3r on 02 February 2015 - 01:40 PM

Dear community,

I'm currently programming a 3D game in pure C++ with OpenGL, GLM, Assimp, SFML & Bullet Physics.

But now I'm at a point where I'm stuck, because I can't figure out the problem or find anything on the internet that points toward a fix.
I've got a problem with updating my mesh's model matrix with the world transform obtained from the btMotionState of a btRigidBody, in order to render it via shader.


The shader computes the vertex location like

gl_Position = projection_mat4 * view_mat4 * model_mat4 * vec4(position, 1.0);

Where projection is computed by glm::perspective(...), view by glm::lookAt(...), and model by the physics system, see below.



So far, I'm creating the whole physics world in a class called "PhysicsManager".
It's a test and so it creates the whole bullet physics system like that:

m_broadphase = new btDbvtBroadphase();
m_collisionConfiguration = new btDefaultCollisionConfiguration();
m_dispatcher = new btCollisionDispatcher(m_collisionConfiguration);


m_solver = new btSequentialImpulseConstraintSolver();
m_dynamicsWorld = new btDiscreteDynamicsWorld(m_dispatcher, m_broadphase, m_solver, m_collisionConfiguration);
m_dynamicsWorld->setGravity(btVector3(0.0f, -9.81f, 0.0f));

// Floor
m_groundShape = new btStaticPlaneShape(btVector3(0, 1, 0), 1);
m_groundMotionState = new btDefaultMotionState(btTransform(btQuaternion(0, 0, 0, 1), btVector3(0, 0, -1)));
btRigidBody::btRigidBodyConstructionInfo groundRigidBodyCI(0, m_groundMotionState, m_groundShape, btVector3(0, 0, 0));
m_groundRigidBody = new btRigidBody(groundRigidBodyCI);


// Model as a sphere in BulletPhysics
m_fallShape = new btSphereShape(1);
m_fallMotionState = new btDefaultMotionState(btTransform(btQuaternion(0, 0, 0, 1), btVector3(8, 50, 8)));
btScalar mass = 2;
btVector3 fallInertia(0, 0, 0);
m_fallShape->calculateLocalInertia(mass, fallInertia);
btRigidBody::btRigidBodyConstructionInfo fallRigidBodyCI(mass, m_fallMotionState, m_fallShape, fallInertia);
m_fallRigidBody = new btRigidBody(fallRigidBodyCI);


// GLDebugDrawer from the demos of the Bullet 3.X github page
glDebugDrawer = GLDebugDrawer();
glDebugDrawer.setDebugMode(btIDebugDraw::DBG_DrawAabb | btIDebugDraw::DBG_DrawWireframe | btIDebugDraw::DBG_DrawConstraints | btIDebugDraw::DBG_DrawConstraintLimits | btIDebugDraw::DBG_DrawNormals);

And I'm updating the world each frame with 

m_dynamicsWorld->stepSimulation(1.0f / 60.0f, 10);

After that, I'm updating the mesh's model matrix with

btTransform trans;
m_fallRigidBody->getMotionState()->getWorldTransform(trans);
trans.getOpenGLMatrix(glm::value_ptr(m_model));

where "m_model" is a glm::mat4 initialised as "m_model = glm::mat4()".



Then rendering is done via

glBindTexture(GL_TEXTURE_2D, mesh->tex);

GLuint uniform_model = glGetUniformLocation(p_shader, "model");

glUniformMatrix4fv(uniform_model, 1, GL_FALSE, glm::value_ptr(p_objects->at(i).m_model));
glDrawArrays(GL_TRIANGLES, 0, mesh->size);

glBindTexture(GL_TEXTURE_2D, 0);

So I bind all the stuff of the 3D mesh, draw it and unbind it.



The problem is:

Why is my rendering not aligned with the physically computed bounding boxes?
(I don't mean slightly off, like origin vs. center of mass, but nowhere near the collision box.)

Can anyone give me a hint or something like that?


Here's a picture of what I mean:





#4979027 a few questions about using libgdx :)

Posted by N1ghtDr34m3r on 11 September 2012 - 01:54 PM

Hey simondid.

1. The whole library uses OpenGL, but unlike raw OpenGL, you don't really have to "learn" OpenGL, because you can use SpriteBatch, Textures and so on. So you don't have to write much OpenGL explicitly.
(Maybe a little bit, like glClear(GL10.GL_COLOR_BUFFER_BIT), but it's not much.)

2. Yes and no. You can structure your code like you did in Slick, but you have to fit it into the libGDX framework (e.g. your main class extends 'Game').

3. When you use Screens, each one has a render method (where you can also update), show and hide methods ("what must be done if the window is not in the foreground?", etc.), and pause and resume methods, which you override or implement in your own class.

4. I don't know any, but maybe some other users do.

PS: Excuse my bad English. My English teacher can't really teach, and I'm too lazy to work on it in my free time... ^^