DavidTheFighter

Members
  • Content count

    3

Community Reputation

258 Neutral

About DavidTheFighter

  • Rank
    Newbie

Personal Information

  • Interests
    Design
    Programming
  1. If you're going for an "unlimited" number of lights, I'd set up a vector of point lights in C++. Something like std::vector<PointLightData>, with PointLightData being a struct holding the data for each light. You can add or remove lights from it and update its contents whenever you want. You'd then pass the vector's data to the shader storage buffer object each frame with a glBufferData() call. Because the array in GLSL is unsized, there's no hard limit on how many point lights you can add, apart from hardware limitations (which I doubt you'll reach). As long as the SSBO is backed by data uploaded from C++ via glBufferData(), it's all valid. Some C++ pseudo-code would go something like this:

     ```cpp
     struct PointLight {
         vec3 position;
         // whatever other data you want
     };

     std::vector<PointLight> scenePointLights;

     void mainLoop() {
         updatePointLights();    // do whatever logic, like physics
         uploadPointLightData(); // call glBufferData() with the data from scenePointLights
         renderScene();
     }
     ```

     And the GLSL pseudo-code:

     ```glsl
     struct PointLight {
         vec3 position;
         // whatever other data
     };

     layout(std430, binding = 0) buffer pointLights {
         PointLight pointLightData[]; // scenePointLights.data()
     };

     uniform int numPointLights; // scenePointLights.size()

     void main() {
         for (int i = 0; i < numPointLights; i++) {
             // block members are accessed directly, without the block name
             PointLight p = pointLightData[i];
             // do whatever lighting code
         }
     }
     ```

     This is obviously a little simplified, but it's the basic idea of how to do it. I'd suggest doing a bit of Googling to figure out exactly how all of these pieces work, though. The link I posted earlier shows a few more examples, too.
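     One thing worth flagging on the upload step: under std430 rules a vec3 is padded out to 16 bytes, so the CPU-side struct has to match that stride or every light after the first will be read garbled on the GPU. A minimal sketch of the packing, where PointLight, the radius field, and packLights() are illustrative names rather than anything from the post:

     ```cpp
     #include <cstddef>
     #include <cstring>
     #include <vector>

     // Hypothetical CPU-side mirror of the GLSL PointLight struct. The std430
     // pitfall: a vec3 occupies 16 bytes in an array, so the C++ struct needs a
     // fourth float (here reused as a radius) to keep the array stride in sync.
     struct PointLight {
         float position[3]; // maps to `vec3 position`
         float radius;      // fills the vec3 padding slot
     };
     static_assert(sizeof(PointLight) == 16, "stride must match std430");

     // Flatten the light list into the raw bytes glBufferData() expects.
     std::vector<unsigned char> packLights(const std::vector<PointLight>& lights) {
         std::vector<unsigned char> bytes(lights.size() * sizeof(PointLight));
         if (!bytes.empty())
             std::memcpy(bytes.data(), lights.data(), bytes.size());
         return bytes;
     }
     ```

     The actual upload would then be along the lines of glBufferData(GL_SHADER_STORAGE_BUFFER, bytes.size(), bytes.data(), GL_DYNAMIC_DRAW) once per frame, with the buffer bound to binding point 0 via glBindBufferBase().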
  2. So I've been trying to implement a multi-threaded resource system with Vulkan in my free time, where a thread can request a resource to be loaded and the request gets pushed into a queue. On another thread, the resource (as of right now, a mesh) gets loaded from a file, and I map the data to a staging buffer.

     The issue comes in where I record the command buffer that copies the data to a GPU buffer. I record a secondary command buffer with just the vkCmdCopyBuffer command and push it to a queue, to be executed from a primary command buffer on the main thread on a transfer-only queue. As far as I can tell, the staging works fine and the mesh draws perfectly, but my validation layers (VK_LAYER_LUNARG_standard_validation) keep spamming:

     "vkCmdBindIndexBuffer(): Cannot read invalid region of memory allocation 0x16 for bound Buffer object 0x15, please fill the memory before using"

     The vertex buffer binding gives me an identical message. Both buffers were created with the proper usage bits: TRANSFER_SRC for the staging buffer and TRANSFER_DST for the GPU buffer (plus the index and vertex buffer usage bits). I use Vulkan Memory Allocator from GPUOpen to handle buffer memory allocation, and I'm careful to make sure that the staging buffer is mapped properly and isn't deleted before the copy finishes.

     The validation layers stop reporting this error if I switch the copy commands to primary command buffers, even when they're recorded the same way (i.e. just changing the level parameter). But everything I've seen recommends recording secondary command buffers simultaneously on worker threads and submitting them from the main thread later. Any ideas on why my validation layers are freaking out, or did I just skip over something when reading the spec? Here's some relevant code:
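     For reference, the worker-to-main-thread hand-off described above can be sketched independently of Vulkan as a mutex-guarded queue: workers push finished recordings, and the main thread drains the queue once per frame before executing them from the primary command buffer. All names below (PendingCopy, SubmitQueue) are illustrative stand-ins, not the poster's actual code:

     ```cpp
     #include <mutex>
     #include <queue>
     #include <vector>

     // Stand-in for a recorded secondary command buffer plus any bookkeeping
     // (e.g. the staging buffer that must outlive the copy).
     struct PendingCopy { int resourceId; };

     // Worker threads push finished recordings; the main thread drains the
     // queue once per frame and executes them from a primary command buffer.
     class SubmitQueue {
     public:
         void push(PendingCopy c) {
             std::lock_guard<std::mutex> lock(mutex_);
             pending_.push(c);
         }

         // Called on the main thread: take everything recorded so far.
         std::vector<PendingCopy> drain() {
             std::lock_guard<std::mutex> lock(mutex_);
             std::vector<PendingCopy> out;
             while (!pending_.empty()) {
                 out.push_back(pending_.front());
                 pending_.pop();
             }
             return out;
         }

     private:
         std::mutex mutex_;
         std::queue<PendingCopy> pending_;
     };
     ```

     The Vulkan-specific part (each worker recording into a command pool it owns, since pools are externally synchronized) sits on top of this hand-off.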
  3. If you want an unlimited (well, not really, but still a LOT) number of lights, I'd look into shader storage buffers. You can declare an unsized array of light data and just pass a uniform specifying the number of lights. Such as:

     ```glsl
     uniform int pointLightNumber;

     struct PointLightData {
         // blah blah blah, lighting data
     };

     layout(std430, binding = 0) buffer pointLightArray {
         PointLightData pointLights[];
     };
     ```

     It does require OpenGL 4.3, however.

     Also, if you are using a deferred renderer and really want to squeeze out some performance, I'd recommend looking into tiled deferred lighting. It essentially splits the screen into a grid of tiles, and each tile does a frustum check to see if a point light is visible in that portion of the screen. If it is, the shader does the lighting calculations for it; if not, it skips it. This way you don't calculate lighting for every light in the scene when it's not visible in that portion of the screen. Although if you're not looking for extreme performance, or just want to start out simple and get a lighting system going, I wouldn't worry about this for now.

     Here's an article on storage buffers: https://www.khronos.org/opengl/wiki/Shader_Storage_Buffer_Object
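     To make the tile idea a bit more concrete, here's a sketch of the per-light step a tiled pass might start from: given a light's screen-space bounding circle in pixels, find the inclusive range of tiles it can touch, so only those tiles shade it. lightTileRange and its parameters are hypothetical; a real implementation culls against each tile's frustum in view space instead of a 2D circle:

     ```cpp
     #include <algorithm>

     // Inclusive range of tiles a light's screen-space bounding circle overlaps.
     struct TileRange { int x0, y0, x1, y1; };

     TileRange lightTileRange(float cx, float cy, float radius,
                              int screenW, int screenH, int tileSize) {
         // Grid dimensions, rounding up so edge pixels get a tile.
         int tilesX = (screenW + tileSize - 1) / tileSize;
         int tilesY = (screenH + tileSize - 1) / tileSize;

         TileRange r;
         // Clamp the circle's pixel bounds to the tile grid.
         r.x0 = std::max(0, static_cast<int>((cx - radius) / tileSize));
         r.y0 = std::max(0, static_cast<int>((cy - radius) / tileSize));
         r.x1 = std::min(tilesX - 1, static_cast<int>((cx + radius) / tileSize));
         r.y1 = std::min(tilesY - 1, static_cast<int>((cy + radius) / tileSize));
         return r;
     }
     ```

     In a compute-shader variant this runs per tile rather than per light, with each tile building a list of the light indices that pass the check before shading.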