Corvo

Members
  • Content count

    8

Community Reputation

357 Neutral

About Corvo

  • Rank
    Newbie

Personal Information

  • Interests
    Art
    Programming
  1. It depends on the data access pattern. If most entities need position/rotation/velocity, just put them together:

         struct CommonData {
             vec3 position;
             vec3 scale;
             quat rotation;
             vec3 velocity;
         };

         struct CommonDataSystem {
             CommonData data[MAX_ENTITY_COUNT];
             CommonData& Lookup(Entity);
         };

     If only some entities need a run-time name, put those into a separate hash table:

         struct Name {
             char Value[32];
         };

         struct NameSystem {
             Hash<Entity, Name> names;
         };
  2. Consider 30000 entities where only the synced part is small (say 1500): each system still has to process its own data (some number between 0 and 30000). A scattered memory access pattern is much slower than a linear one.
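To make the contrast concrete, here is a minimal sketch (the index table and element values are illustrative): both loops compute the same result, but the second reaches each element through an index table, which at 30000 entities is the access shape that defeats the cache.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Linear pass: consecutive iterations touch consecutive cache lines,
// so the hardware prefetcher can stream the array.
float SumLinear(const std::vector<float>& data) {
    float sum = 0.0f;
    for (float v : data) sum += v;
    return sum;
}

// Scattered pass: each element is reached through an index table, so
// consecutive iterations may land on distant cache lines.
float SumScattered(const std::vector<float>& data,
                   const std::vector<std::size_t>& order) {
    float sum = 0.0f;
    for (std::size_t i : order) sum += data[i];
    return sum;
}
```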
  3. The performance gain depends on the actual data access pattern. If only a small portion (<5%) of entities move every frame, then the state sync between SystemA and SystemB is fast. After the sync, SystemB can process its component data linearly, and multithreading SystemB becomes quite simple.
  4. I prefer the "many component arrays" method. More specifically, the bitsquid method, in which each system manages its own private component data:

         struct SystemA {
             Hash<Entity, int> entityToInternalIndex;
             vec3 positions[MaxCount];
             vec3 velocities[MaxCount];
         };

         struct SystemB {
             Hash<Entity, int> entityToInternalIndex;
             mat4 worldMats[MaxCount];
             void Update(MovedList movedEntities);
         };

         void SystemB::Update(MovedList movedEntities) {
             for ((entity, newPosition) : movedEntities) {
                 if (entityToInternalIndex.find(entity)) {
                     index = entityToInternalIndex[entity];
                     worldMats[index].SetTranslation(newPosition);
                 }
             }
         }

         MainLoop() {
             movedEntities = systemA->Update();
             systemB->Update(movedEntities);
         }
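A compilable version of that pseudocode, assuming C++17; as simplifications of my own, worldMats is reduced to a plain translation per entity and SystemA integrates velocity with a fixed dt:

```cpp
#include <cassert>
#include <unordered_map>
#include <utility>
#include <vector>

using Entity = unsigned;
struct vec3 { float x, y, z; };

// SystemA owns positions/velocities and reports which entities moved.
struct SystemA {
    std::unordered_map<Entity, int> entityToInternalIndex;
    std::vector<vec3> positions;
    std::vector<vec3> velocities;

    std::vector<std::pair<Entity, vec3>> Update(float dt) {
        std::vector<std::pair<Entity, vec3>> moved;
        for (auto& [entity, i] : entityToInternalIndex) {
            positions[i].x += velocities[i].x * dt;
            positions[i].y += velocities[i].y * dt;
            positions[i].z += velocities[i].z * dt;
            moved.push_back({entity, positions[i]});
        }
        return moved;
    }
};

// SystemB keeps only a translation per entity (a stand-in for worldMats);
// it consumes the moved list and never touches SystemA's private data.
struct SystemB {
    std::unordered_map<Entity, int> entityToInternalIndex;
    std::vector<vec3> translations;

    void Update(const std::vector<std::pair<Entity, vec3>>& movedEntities) {
        for (const auto& [entity, newPosition] : movedEntities) {
            auto it = entityToInternalIndex.find(entity);
            if (it != entityToInternalIndex.end())
                translations[it->second] = newPosition;
        }
    }
};
```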
  5. Well, virtual texturing can be achieved without hardware extensions: use an extra texture as an indirection table, just like they did in Rage. Also, according to this article, it seems Doom still uses software virtual texturing: http://www.adriancourreges.com/blog/2016/09/09/doom-2016-graphics-study/
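The indirection-table idea can be sketched CPU-side; the page counts and the entry layout below are illustrative assumptions, not Rage's actual format:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Software virtual texturing in spirit: a low-resolution indirection
// table maps each virtual page to the physical cache page holding it.
struct PageEntry { uint16_t cacheX, cacheY; uint8_t mip; };

struct IndirectionTable {
    int widthPages, heightPages;      // virtual texture size in pages
    std::vector<PageEntry> entries;   // one entry per virtual page

    // uv in [0,1): find the physical page that backs this virtual page.
    // On the GPU this would be a texel fetch from the indirection texture.
    PageEntry Lookup(float u, float v) const {
        int px = static_cast<int>(u * widthPages);
        int py = static_cast<int>(v * heightPages);
        return entries[py * widthPages + px];
    }
};
```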
  6. You're right, there's one thing missing - you forgot bindless textures. Bindless textures are a different concept from sparse textures: while sparse/virtual textures are potentially large textures that are only partly allocated, bindless textures give you the same advantage as array textures. Basically, you have a texture and generate a global handle that you can use everywhere in your shaders without having to bind the texture. Before that, the texture is made resident. In your case you could generate a texture handle, save it in your global material buffer and afterwards reference it via your material attributes, or write it to a texture of your G-Buffer (not recommended). Instead of indexing into an array texture, where all textures must have the same size and format, you can use the handle directly and sample from the texture with the given UVs in a deferred way.

     EDIT: To be clear, you could use sparse textures in combination with this approach. Even if it doesn't make that much sense at first sight, you could make every single texture a sparse one. Virtual textures are nowadays mostly used for very large textures... for example, you could combine all surface textures of a scene into one large texture and make it sparse, loading/unloading small pieces of it at runtime. But this brings other problems you don't want to have. I would recommend you use bindless textures first; I have used them, they are simple to integrate if your hardware supports them, and they get the job done 100%. The performance overhead shouldn't matter much - in my experience it is similar to array textures.

     Thanks for your advice. I could use the bindless texture extension for experiments, but my target API is OpenGL ES 3.0/3.1. As far as I know, ARB_bindless_texture is not supported on most mobile devices, so in the end I have to implement either Array Texture or Virtual Texture anyway.
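The quoted suggestion - generate a handle, make the texture resident, store the handle in the material buffer - can be sketched CPU-side. This is only a sketch of the data flow: in real OpenGL the handle would come from glGetTextureHandleARB / glMakeTextureHandleResidentARB; here a plain 64-bit value stands in so the flow is testable without a GL context.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// In ARB_bindless_texture the handle is an opaque 64-bit value the
// shader can sample from directly, with no texture unit binding.
using TextureHandle = uint64_t;

struct Material {
    TextureHandle albedo;   // would be sampled directly in the shader
    TextureHandle normal;
};

struct MaterialBuffer {
    std::vector<Material> materials;   // mirrors the GPU-side SSBO/UBO

    uint32_t Add(const Material& m) {
        materials.push_back(m);
        return static_cast<uint32_t>(materials.size() - 1);
    }

    // Deferred pass: a material ID read from the G-Buffer selects the
    // handles, so any texture size/format mix works, unlike array textures.
    const Material& Resolve(uint32_t materialId) const {
        return materials[materialId];
    }
};
```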
  7. Recently, after reading some articles on deferred texturing, I decided to try it out.

     Here are some useful links on deferred texturing:

     https://forum.beyond3d.com/threads/modern-textureless-deferred-rendering-techniques.57611/
     https://mynameismjp.wordpress.com/2016/03/25/bindless-texturing-for-deferred-rendering-and-decals/
     http://www.conffx.com/Visibility_Buffer_GDCE.pdf

     Deferred texturing, short version:

     1. Write UV / material ID / vertex ID into the G-Buffer.
     2. Use the UV / material ID / vertex ID to sample textures, then do the lighting.

     Step 2 requires that all textures needed for shading are accessible. Both Array Texture and Virtual Texture can achieve this.

     Virtual Texture
     Pros:
     1. Artists can use very large textures without worrying about memory.
     2. Already used in some shipped games (Rage, Doom, Far Cry 4, Trials Fusion, etc.)
     Cons:
     1. Extra cost from the indirection table lookup.
     2. The toolset needs some changes; live level editing seems hard (requires UV re-mapping and virtual texture mipmap generation).

     Array Texture
     Pros:
     1. Implementation seems easier than Virtual Texture.
     2. Sampling from a texture array might be faster than a virtual texture (no indirection table lookup cost).
     Cons:
     1. All textures are required to be the same size (use multiple arrays or textureLod to get around this limit?)
     2. Larger memory footprint than Virtual Texture.

     There are extensions like ARB_sparse_texture and bindless textures (NV/ARB), but those extensions are not supported by all hardware.

     I'm not sure which method to start with; there have got to be some details I'm missing, so I'd like to hear more opinions on these methods.

     Thanks.
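The two steps of the short version can be modeled in a few lines of CPU code; the 2x2 one-channel texture and the flat "lighting" multiply are stand-ins of my own for real sampling and shading:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Step 1 output: per pixel, the G-Buffer stores only UV + material ID
// instead of the shaded surface attributes.
struct GBufferTexel { float u, v; uint32_t materialId; };

struct Texture2x2 { float texels[4]; };  // one channel, 2x2, row-major

// Nearest-texel sample; a real renderer would filter and use mips.
float Sample(const Texture2x2& t, float u, float v) {
    int x = u < 0.5f ? 0 : 1;
    int y = v < 0.5f ? 0 : 1;
    return t.texels[y * 2 + x];
}

// Step 2: deferred texturing + "lighting" (here just a scale). The
// material ID indexes into the set of all textures, which is exactly
// what array textures / virtual textures / bindless handles provide.
float Shade(const GBufferTexel& g, const std::vector<Texture2x2>& textures,
            float lightIntensity) {
    float albedo = Sample(textures[g.materialId], g.u, g.v);
    return albedo * lightIntensity;
}
```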
  8. I think in Far Cry 3 they calculate the final irradiance on the CPU, then upload those irradiance values to the GPU via a 3D texture. With PRT they can change the Sun's position and radiance. Also note that in Assassin's Creed: Black Flag, they just bake multiple light probes for different times of day and assume that the sun position is fixed.
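A minimal sketch of the "irradiance in a 3D texture" idea: the CPU bakes one RGB value per grid cell, and a world-space point looks up its cell. The nearest-cell lookup is a simplification; the GPU's 3D texture sampler would do trilinear filtering between neighboring cells.

```cpp
#include <cassert>
#include <vector>

struct RGB { float r, g, b; };

// CPU-baked irradiance volume, mirrored on the GPU as a 3D texture.
struct IrradianceGrid {
    int nx, ny, nz;                 // grid resolution
    std::vector<RGB> cells;         // nx*ny*nz baked irradiance values
    float cellSize;                 // world-space size of one cell

    // Nearest-cell sample at a world-space position (assumes the
    // position lies inside the grid volume).
    RGB Sample(float x, float y, float z) const {
        int ix = static_cast<int>(x / cellSize);
        int iy = static_cast<int>(y / cellSize);
        int iz = static_cast<int>(z / cellSize);
        return cells[(iz * ny + iy) * nx + ix];
    }
};
```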