

Member Since 19 Oct 2009
Offline Last Active Today, 01:50 AM

Posts I've Made

In Topic: QuadTree: design and implementation question, (n+1)th topic

Yesterday, 02:41 PM

Or do you want to use the tree for a different purpose?



This tree can be used to select the objects visible to the camera, or to a point light as well. But of course I'll use the same implementation for collision detection (once I implement it :P)


QuadTree to manage its belonging objects. For example, a sub-system that manages the Listener (meaning a simulated hearing) component can check for spatial vicinity by utilizing a QuadTree; the Collision sub-system can manage a set of colliders by utilizing a QuadTree (where a collider itself refers to Collision component); [...]


So yeah, in short, it would be better to make a separate quad tree for each component type. Should it be a template class, or a class with a generic "QuadTreeElement" container plus a static cast when I query the components?
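To illustrate the template option: a minimal, non-recursive sketch of a quad tree parameterized on the stored component type (the `AABB`, `insert`, and `query` names here are hypothetical, not from any particular engine; a real tree would subdivide into child nodes):

```cpp
#include <vector>

// Axis-aligned bounding box (2D, for a quad tree).
struct AABB {
    float minX, minY, maxX, maxY;
    bool intersects(const AABB& o) const {
        return !(o.minX > maxX || o.maxX < minX ||
                 o.minY > maxY || o.maxY < minY);
    }
};

template <typename T>  // T: the component type stored in this tree
class QuadTree {
public:
    explicit QuadTree(const AABB& bounds) : bounds_(bounds) {}

    void insert(T* element, const AABB& box) {
        // A full implementation would recurse into child quadrants;
        // kept flat here to show only the typed interface.
        items_.push_back({element, box});
    }

    // Collect every element whose box intersects the query region.
    void query(const AABB& region, std::vector<T*>& out) const {
        for (const auto& it : items_)
            if (region.intersects(it.box))
                out.push_back(it.element);
    }

private:
    struct Item { T* element; AABB box; };
    AABB bounds_;
    std::vector<Item> items_;
};
```

With a template, each tree (e.g. `QuadTree<Collider>`, `QuadTree<RenderableMesh>`) hands back concretely typed pointers, so the static casts of a generic `QuadTreeElement` container disappear, at the cost of one instantiation per component type.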

In Topic: QuadTree: design and implementation question, (n+1)th topic

25 May 2016 - 02:08 PM

Thanks for your reply!


In the past I implemented the QuadTree to store the objects and not components but things changed. :)


An object itself does not have bounds. And if I attach a DirLight component to an object, what would be the AABB of the object? Or if I attach multiple components to the same object, e.g. a RenderableMesh and a Collider: the Collider can be "bigger" than the actual mesh, which means the two components' bounds differ. This is the reason why I decided to store the components and not the objects.


What about a 1:1 relation between components and quad trees? I mean creating a separate quad tree for each sortable component type. This solution has the highest memory cost but probably the fastest search. The extra memory consumption is not a big deal IMHO, it's just a couple of extra MB at most.

In Topic: How to implement Cascaded Shadow Map technique ( simply )?

25 May 2016 - 05:49 AM

Actually the Nvidia paper describes it pretty well on page 7.

If you have a directional light, you can calculate a view matrix looking from the origin along the light direction (the light position is actually irrelevant).

When you have this generic view matrix, you calculate the camera's frustum corners for each split. These corners are then transformed into light space (with the previous view matrix and a generic ortho projection matrix), and a light-space bounding box is calculated from the transformed corners. Then the crop matrix C is calculated (as shown in the Nvidia paper) and the projection matrix becomes P = Pz * C, where Pz is an ortho projection matrix with the calculated min and max z values.


So in short:

- calculate view once

- calculate proj and view*proj for each split

- render shadow map for each split with the calculated viewProj matrix

- use the shadow map textures to render the shadowed scene
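The crop fit in step two can be sketched with plain math (no GL; the `Vec3` type and the scale/offset formulas follow the crop-matrix construction in the Nvidia paper, everything else is a placeholder):

```cpp
#include <algorithm>
#include <array>
#include <cfloat>

struct Vec3 { float x, y, z; };

// Crop parameters for one split: given the split's 8 frustum corners
// already transformed into light space, fit an AABB around them and
// derive the x/y scale and offset of the crop matrix C, plus the z range
// for the ortho projection Pz.
struct Crop { float scaleX, scaleY, offsetX, offsetY, minZ, maxZ; };

Crop computeCrop(const std::array<Vec3, 8>& lightSpaceCorners) {
    Vec3 mn{ FLT_MAX,  FLT_MAX,  FLT_MAX};
    Vec3 mx{-FLT_MAX, -FLT_MAX, -FLT_MAX};
    for (const Vec3& c : lightSpaceCorners) {
        mn.x = std::min(mn.x, c.x); mx.x = std::max(mx.x, c.x);
        mn.y = std::min(mn.y, c.y); mx.y = std::max(mx.y, c.y);
        mn.z = std::min(mn.z, c.z); mx.z = std::max(mx.z, c.z);
    }
    Crop c;
    c.scaleX  = 2.0f / (mx.x - mn.x);
    c.scaleY  = 2.0f / (mx.y - mn.y);
    c.offsetX = -0.5f * (mx.x + mn.x) * c.scaleX;
    c.offsetY = -0.5f * (mx.y + mn.y) * c.scaleY;
    c.minZ = mn.z;
    c.maxZ = mx.z;
    return c;
}
```

The scale/offset pair maps the fitted box onto the [-1, 1] clip range in x and y, which is exactly what multiplying the ortho matrix by C achieves.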


But this is described better in the Nvidia paper, check it out again! You can find your answers there!

In Topic: Moving to OpenGL 3.1

19 May 2016 - 04:01 PM

Just std140 all the things. It's basically what D3D does, and look how popular it is.

I thought the same, but it consumes more memory. Probably that's not a big deal though.


The FPS "drop" (which actually does not happen, just forgot to comment an extra "test" clear :)) was just a sidenote.


I guess I'll go for the std140-way.

In Topic: Moving to OpenGL 3.1

19 May 2016 - 09:57 AM

Okay, so let's talk about the UBOs.


I think I have two different options and can't decide which one I should choose.


1) layout(std140)

This is probably the easier option because the alignments and sizes are predefined. This way I can create a UBO anywhere (e.g. before any shader program is linked) and share it across programs. I just have to follow the standard and align the data manually.
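The manual alignment can be checked at compile time with a CPU-side mirror struct. A sketch, assuming a hypothetical camera block (the member names are made up; the offsets follow the std140 rules: a mat4 occupies 64 bytes, a vec3 is aligned to 16 but only occupies 12, and a following float may fill its padding slot):

```cpp
#include <cstddef>

// CPU-side mirror of a std140 uniform block, e.g.:
//   layout(std140) uniform CameraBlock {
//       mat4 view; mat4 projection;
//       vec3 cameraPos; float nearPlane; float farPlane;
//   };
struct alignas(16) CameraBlockStd140 {
    float view[16];        // mat4: offset 0,   size 64
    float projection[16];  // mat4: offset 64,  size 64
    float cameraPos[3];    // vec3: offset 128, aligned to 16, occupies 12
    float nearPlane;       // float: offset 140, fills the vec3 padding
    float farPlane;        // float: offset 144
    float pad_[3];         // pad total size up to a multiple of 16 (160)
};

// If any of these fire, the CPU struct no longer matches the GLSL block.
static_assert(offsetof(CameraBlockStd140, projection) == 64,  "std140 offset");
static_assert(offsetof(CameraBlockStd140, cameraPos)  == 128, "std140 offset");
static_assert(offsetof(CameraBlockStd140, nearPlane)  == 140, "std140 offset");
static_assert(sizeof(CameraBlockStd140) == 160, "std140 size");
```

With the layout pinned down like this, the whole struct can be uploaded with one `glBufferSubData` call and bound to any program that declares the same block.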


2) layout(shared)

This is trickier, because I first need a program to query the parameters of the uniform block. However, when two blocks are defined identically in the shader code, their layouts match as well. So at engine startup I can compile and link a dummy shader in which every available uniform block is defined. Actually I can imagine only three: one for the camera and other global properties (view, projection, camera parameters, and so on), one for common object properties (world matrix and bone matrix array), and the last for common lighting parameters (color, intensity).


What do you think?