So transform feedback seems to be the way I need to go, and it has some advantages:
- No need to encode/decode positions from textures, so it's much easier to access particle position/velocity etc.
- No need to upload the modified positions into the VBO from the CPU - everything except initialization is done on the GPU
But I also found some drawbacks:
- Using random numbers seems like a real pain
- It's hard to create multiple emitters, because transform feedback works at the vertex level and processes every particle, whether it's active or inactive - it requires completely different thinking. Is there some "discard" function to skip certain vertices?
- Collisions seem to be limited, maybe? (Passing collider arrays via uniforms - there is a limit on the maximum array length in GLSL, right? So I would need a multi-pass system that splits the collision contacts across passes based on that limit.)
- Is there some "null" fragment shader I can create to skip rendering entirely? For a multi-pass system I want to render only at the very end, but before that run several GLSL passes that operate on the particles only.
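For the random issue, the only workaround I've found so far is a hash-based pseudo-random function, since GLSL has no built-in rand(). This is the widely-circulated sin/fract hash; seeding it with something that varies per particle and per frame (here a hypothetical uTime uniform) is my assumption, not a guaranteed-good distribution:

```glsl
// Classic hash-based pseudo-random in GLSL (no built-in rand()).
// Returns a value in [0, 1). Quality is mediocre but often good
// enough for particle jitter.
float rand(vec2 co) {
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}

// Usage in the vertex shader, seeded per particle and per frame:
// float r = rand(inPos.xy + vec2(uTime));
```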
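On skipping vertices: as far as I know, `discard` exists only in fragment shaders - a vertex shader can't drop a vertex (a geometry shader could emit zero vertices, which would truly compact the stream). My current idea is to keep a life attribute and just pass inactive particles through unchanged, roughly like this sketch (attribute names and the uDt uniform are my own):

```glsl
// Transform-feedback update vertex shader sketch: inactive
// particles (life <= 0) are written back untouched instead of
// being "discarded", which isn't possible at the vertex stage.
in vec3 inPos;
in vec3 inVel;
in float inLife;        // <= 0.0 means inactive

out vec3 outPos;        // captured via transform feedback
out vec3 outVel;
out float outLife;

uniform float uDt;

void main() {
    if (inLife <= 0.0) {
        outPos  = inPos;    // inactive: pass through unchanged
        outVel  = inVel;
        outLife = inLife;
    } else {
        outVel  = inVel + vec3(0.0, -9.81, 0.0) * uDt;  // gravity
        outPos  = inPos + outVel * uDt;
        outLife = inLife - uDt;
    }
}
```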
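Regarding the "null" fragment shader: from what I've read, no fragment shader is needed at all - enabling GL_RASTERIZER_DISCARD skips the fragment stage entirely during the update passes. A sketch of one update pass on the host side (tfProgram, srcVBO, dstVBO and the attrib setup are assumed to exist already):

```c
/* Update pass with rasterization disabled: the vertex shader runs
   and transform feedback captures its outputs, but nothing is drawn. */
glEnable(GL_RASTERIZER_DISCARD);
glUseProgram(tfProgram);
glBindBuffer(GL_ARRAY_BUFFER, srcVBO);
/* ... set up vertex attribute pointers ... */
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, dstVBO);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, numParticles);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);
/* the real render pass (rasterizer discard off) happens only at the end */
```

This should also cover the multi-pass case: run several such passes ping-ponging between srcVBO and dstVBO, then render once.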
But most importantly, I have no idea how to detect neighbor particles. On the CPU I used a dynamic two-dimensional hashmap that stores the particles in a dynamic grid, with a per-cell bucket holding an index list of the particles in that cell; every particle also stores its current x/y cell on the grid, which may change once per frame.
How the hell do I integrate something like that with GLSL? I have heard about radix sort and some hash-based algorithms to access neighbors, but they seem to be very complex...
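From what I understood of those sorting-based schemes, the idea is: assign each particle a cell index, sort particles by cell on the GPU (e.g. a bitonic sort in its own passes), and store per-cell start/end indices in textures. The lookup itself would then look roughly like this sketch - all the uniform names (uCellStart, uCellEnd, uSortedPos, uGridOrigin, uCellSize, uGridDim) are my own invention, and the sort passes that fill them are assumed to have run already:

```glsl
// Uniform-grid neighbor lookup sketch. Assumes a previous pass has
// sorted the particles by cell index and filled two integer textures
// with, per cell, the first sorted index and one-past-the-last index.
uniform sampler2D  uSortedPos;  // sorted particle positions
uniform isampler2D uCellStart;  // first particle index per cell
uniform isampler2D uCellEnd;    // one past the last particle index per cell
uniform vec2  uGridOrigin;
uniform float uCellSize;
uniform ivec2 uGridDim;

ivec2 cellOf(vec2 p) {
    return ivec2(floor((p - uGridOrigin) / uCellSize));
}

void forEachNeighbor(vec2 p) {
    ivec2 c = cellOf(p);
    // visit the 3x3 block of cells around the particle's cell
    for (int dy = -1; dy <= 1; ++dy)
    for (int dx = -1; dx <= 1; ++dx) {
        ivec2 nc = c + ivec2(dx, dy);
        if (any(lessThan(nc, ivec2(0))) ||
            any(greaterThanEqual(nc, uGridDim)))
            continue;                       // outside the grid
        int start = texelFetch(uCellStart, nc, 0).r;
        int end   = texelFetch(uCellEnd,   nc, 0).r;
        for (int i = start; i < end; ++i) {
            // fetch neighbor i from uSortedPos and accumulate forces...
        }
    }
}
```

So the per-cell bucket lists from my CPU version become a sorted array plus start/end offsets - the sort replaces the dynamic hashmap.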
What about compute shaders? I heard they are now part of OpenGL 4.x. Are they some sort of CUDA-like thing, but based on GLSL only?
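From what I've read so far, yes: compute shaders are core since OpenGL 4.3, written in GLSL, and run outside the vertex/fragment pipeline on buffers directly - which would sidestep most of the transform-feedback gymnastics above. An untested sketch of the particle update as a compute shader (the Particle layout and uDt are my assumptions):

```glsl
#version 430
// One invocation per particle, reading and writing an SSBO directly -
// no vertex/fragment stages involved. Dispatched from the host with
// glDispatchCompute((numParticles + 127) / 128, 1, 1).
layout(local_size_x = 128) in;

struct Particle {
    vec2  pos;
    vec2  vel;
    float life;
    float pad;          // std430 padding assumption
};

layout(std430, binding = 0) buffer Particles {
    Particle particles[];
};

uniform float uDt;

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(particles.length())) return;   // guard partial group
    Particle p = particles[i];
    if (p.life > 0.0) {
        p.vel  += vec2(0.0, -9.81) * uDt;        // gravity
        p.pos  += p.vel * uDt;
        p.life -= uDt;
    }
    particles[i] = p;
}
```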