Community Reputation

664 Good

About alkisbkn

  1. Hi Dukus, those are some good points, thanks. I have a flat scenegraph at the moment, but intend to move to a more spatial approach soon.

    For sort bins, you suggest that I have a render queue per property batch, say, and sort the objects that are binned in it using their sort key? That may parallelise better, as at the moment I am sorting all the objects in one job.
  2. Hi Hodgman, thanks for your reply. That is how I do the culling and sorting at the moment. DrawCalls are created before the Sort job, where 1 mesh = 1 task, if that makes sense. Would it be correct to say that it'd be the Sort job's responsibility to cut the sorted DrawCall array into pieces and create N BuildCmdList tasks (where N = the maximum number of command lists)? That way I could guarantee the order of the command list building.

    @Dukus That is how I have structured my renderer (or well, am structuring it now :P). It is based on an article from Blizzard. However, 1 task per scene-view will not parallelise very well; that is why I am trying to add subtasks to each scene-view rendering process.
  3. This is a question that perhaps a seasoned engine programmer could answer for me. In a single-threaded renderer, a possible approach would be something like this:

    - Collect the visible drawable objects.
    - Create DrawCall instances containing everything the renderer needs to visually describe each object, including a sort key.
    - Sort those instances based on the sort key.
    - Go through them all and render them.

    Roughly, anyway. However, with a command-list approach (D3D12/Vulkan) I can see some problems. We want to record command lists from multiple threads; let's say we have a task scheduler that we feed rendering tasks, and a pool of command lists from which we grab the next available one. So far so good, we can record our commands. But how would the draw calls be ordered?

    Thanks!
  4. Is this for updating the slices of a 3D render target?
  5. You could do the displacement in the vertex shader and use transform feedback to read the updated vertex buffer back on the CPU side, then write that to an OBJ file. Maybe that could work?
  6. How to get system time & date?

    time.h and gettime ?
  7. Had a quick look; this may not be the issue, but why is your w set to 2 here?

    gl_Position = gMVP * vec4(Position, 2.0);

    Also, instead of (1,1,1), try:

    vec3 DirectionalLightDirection = normalize(vec3(1.0f, 0.0f, 0.0f));
  8. Sleepwalking Takes a Dark Turn

    Man that's horrible :( My grandmother used to sleepwalk a lot, but not to that extent, she would keep herself within the house.   This may be of some help to you, although I am sure you have done your research on the matter:
  9. OpenGL tutorial for opengl 1.4?

    Links are good.
  10. Logo Feedback

    Some kind of animal, maybe a raccoon.
  11. Insane comment

    So I was browsing through the code trying to fix some camera issues that we have in the game. This is what I encountered at some point, and it made me laugh a lot:

    //TODO: PUT Back IN !!!!! NAUGHTY
    return 0;//(int)Mathf.Floor (t * 100);

    (I am naughtier.)
  12. OpenGL Shadowmapping

    The transformed vertex, after the model*view*projection transformation, ends up in clip space. To translate that into NDC, you need to take the perspective divide into account, like so:

    // vertex shader:
    float4 p = mul(mvp, vertex);
    o.screenPos = p;

    // fragment shader:
    float2 screenPos = (i.screenPos.xy / i.screenPos.w) * 0.5 + 0.5;

    You ignore the z coordinate; screenPos will be in the 0..1 range. Is this what you need?

    This doesn't look right to me. You are passing a near plane of 10 (which means you will get shadow clipping when the camera is very close) and a far plane of 1000000. Also, why is the aspect ratio 2?

    I'd suggest going with an orthographic projection to begin with, instead of sharing the same projection matrix between your main camera and your light's shadow caster.
  13. OpenGL Shadowmapping

    Is the light a directional one? Where are you calculating the light's projection and view matrices?

    gluPerspectiveA(90.0, 2.0, 10.0, 1000.0 * 1000.0);

    What is the last parameter of gluPerspectiveA? If it is the far plane, it may be too big and you will lose depth precision. You should also be using an orthographic projection for your directional light, as it's considered to be very far away.

    Matrix44<double> SMVP = ( (ACTUAL_MODEL * shadowView) * ACTUAL_PROJECTION ) * ShadowbiasTex;

    The order of multiplication in OpenGL is projection * view * model; I think you have this the other way round. I'll have a better look at your code later, I am at work now.
  14. OpenGL Uniform Buffer Confusion

    Personally, if the shader does not contain the specific uniform block name (checked with glGetUniformBlockIndex), I ignore it and/or send a warning to the console.

    Moreover, my renderer is a deferred renderer, so I more or less know exactly what kind of shaders I have at any given time.
  15. OpenGL Uniform Buffer Confusion

    I do it in the following way: my UniformBuffer has a usage flag, which is "Default" or "Shared". That gives two scenarios for using the UBO:

    1) If the UniformBuffer usage is "Shared", the UBO sends an event to the ShaderManager to register this UBO with all the loaded shaders. In my engine it's guaranteed that all shaders are pre-loaded, so when you create a shared UBO it's also guaranteed that every shader will know about it. When the renderer binds a shader, it also binds the shader's registered UBOs.

    2) If the UniformBuffer usage is "Default", I manually register the UBO with whichever shader I want.

    I use the std140 layout as well. I am not sure this approach is 100% correct, though. Reading haegarr's approach makes me rethink my design a bit.