Tessellator

Members
  • Content count: 281

Community Reputation

1394 Excellent

About Tessellator

  • Rank
    Member

Personal Information

  • Interests
    Art
    Design
    Programming
  1. I'm not sure if it's relevant, but I had a similar issue that came down to the UVs I was using to sample the heights from my texture. I needed linear interpolation enabled on the sampler for LOD blending reasons, which meant that at each LOD level I had to make sure I was accurately hitting the texel center for the relevant mip. I'm not at the PC with the code and shader to remind me of the details, but I do recall it being a total pain. Your comments about the issue appearing offset even when using a simple sine wave sounded awfully familiar :). Edit: Actually, never mind - it sounds like you're loading rather than sampling the data... I'm not sure in that case.
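    A minimal sketch of the texel-center math alluded to above, assuming a square power-of-two texture; the names here (texelX, baseSize, mipLevel) are illustrative:

    ```cpp
    struct Vec2 { float x, y; };

    // Compute a UV that lands exactly on a texel center for a given mip, so a
    // linear sampler reads one texel instead of blending with its neighbours.
    Vec2 TexelCenterUV(int texelX, int texelY, int baseSize, int mipLevel)
    {
        int mipSize = baseSize >> mipLevel;   // texels per side at this mip
        if (mipSize < 1) mipSize = 1;
        return { (texelX + 0.5f) / mipSize,   // +0.5 snaps to the texel center
                 (texelY + 0.5f) / mipSize };
    }
    ```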
  2. Where have the DirectX/OpenGL-specific forums gone?
  3. An alternative approach (with different positives and negatives) is to use an immediate-mode GUI (IMGUI) library such as this one: https://github.com/ocornut/imgui Here are a couple of posts that cover both approaches: Pro Qt: https://deplinenoise.wordpress.com/2017/03/05/why-qt-and-not-imgui/ Pro IMGUI: https://gist.github.com/bkaradzic/853fd21a15542e0ec96f7268150f1b62 FWIW I really like the IMGUI approach since there's so little friction to creating new bits of UI (see the sketch below). However, I use it for my own stuff, so I'd potentially have to re-evaluate if I were making a tool that was to be worked on by a larger team of coders. That said, Unity also uses an IMGUI approach. T
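    For flavour, here's a minimal per-frame snippet using dear imgui's actual API (Begin/End, SliderFloat, Checkbox, Button); context and backend setup are assumed to happen elsewhere, and the panel contents are hypothetical:

    ```cpp
    #include "imgui.h"

    void DrawDebugPanel(float* lightIntensity, bool* showWireframe)
    {
        ImGui::Begin("Debug");                        // window appears on demand
        ImGui::SliderFloat("Light intensity", lightIntensity, 0.0f, 10.0f);
        ImGui::Checkbox("Wireframe", showWireframe);  // no callbacks, no layout files
        if (ImGui::Button("Reload shaders"))
        {
            // trigger a reload here (hypothetical hook)
        }
        ImGui::End();
    }
    ```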
  4. Build a texture out of vertices

    If I understand correctly, I'd tackle this in 2 parts: 1) Store colours into a set so that each colour is stored only once; each vertex gets an index into the set for its colour. 2) Add the set of colours to a texture, converting each vertex's colour index into a texture coordinate. For (1) I'd create a simple hash function that allows you to bin the colours together to reduce compares, or use some other mechanism to create the set. For (2), that's a case of writing the colours as pixels to a texture of a given dimension, and then storing the resulting location as a UV. Sorry my answer is brief - my lunch break just ran out! T
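    A sketch of those two steps, assuming RGBA8 colours and a palette laid out as a single row of texels; all names are illustrative:

    ```cpp
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct ColorRGBA8 { uint8_t r, g, b, a; };

    static uint32_t Pack(ColorRGBA8 c)
    {
        return (uint32_t(c.r) << 24) | (uint32_t(c.g) << 16) |
               (uint32_t(c.b) << 8)  |  uint32_t(c.a);
    }

    // Step 1: dedupe vertex colours into a palette; each vertex keeps an index.
    void BuildPalette(const std::vector<ColorRGBA8>& vertexColors,
                      std::vector<ColorRGBA8>& palette,     // unique colours
                      std::vector<uint32_t>& vertexIndices) // per-vertex index
    {
        std::unordered_map<uint32_t, uint32_t> seen;        // packed -> index
        vertexIndices.reserve(vertexColors.size());
        for (ColorRGBA8 c : vertexColors)
        {
            auto it = seen.find(Pack(c));
            if (it == seen.end())
            {
                it = seen.emplace(Pack(c), uint32_t(palette.size())).first;
                palette.push_back(c);
            }
            vertexIndices.push_back(it->second);
        }
    }

    // Step 2: turn a palette index into a U coordinate for a texture that is
    // 'paletteWidth' texels wide (the colours having been written as pixels).
    static float IndexToU(uint32_t index, uint32_t paletteWidth)
    {
        return (index + 0.5f) / float(paletteWidth); // +0.5 hits the texel center
    }
    ```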
  5. Separating out classes based on API, like D3dSkybox and D3dLight, is something I'd avoid. For example, what does D3dLight have that CLight doesn't? It's a structure that has a position/orientation, colour, type etc. In terms of interfacing with D3D, all you'll be doing is setting some constants or some other buffer type - that can be abstracted efficiently at a much lower level. Abstracting buffers and entire blobs of data (buffer + state) makes porting to other platforms much easier too (and fits well with modern rendering APIs). T
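    A sketch of what that lower-level split might look like: one platform-agnostic light type, with an opaque buffer as the only API-specific piece. All types and function names here are hypothetical:

    ```cpp
    #include <cstddef>

    struct Vec3 { float x, y, z; };

    enum class LightType { Directional, Point, Spot };

    // One light type for the whole engine, instead of CLight/D3dLight pairs.
    struct Light
    {
        Vec3      position;
        Vec3      direction;
        Vec3      color;
        LightType type;
    };

    struct GpuBuffer; // opaque; implemented per API (D3D11, GL, Vulkan, ...)

    GpuBuffer* CreateConstantBuffer(size_t sizeInBytes); // per-API implementation
    void       UpdateBuffer(GpuBuffer* buffer, const void* data, size_t size);

    void UploadLight(GpuBuffer* buffer, const Light& light)
    {
        // The renderer never needs a D3D-specific light: it just copies
        // plain data into an abstracted constant buffer.
        UpdateBuffer(buffer, &light, sizeof(light));
    }
    ```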
  6. Vertex Data Structures

    Most of the time I use reasonably opaque vertex data, as Dingleberry suggests. Any time I need to do operations on the mesh though, I tend to split it up into several streams of data, e.g. positions, UVs, tangents, etc. Alternatively, I also have functions that take e.g. a Vec3* to operate on positions, plus a stride value that defines the size between each vertex. So to increment between vertices you can do pPositions = (Vec3*)((char*)pPositions + stride).
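    A sketch of that stride-based variant, operating on positions embedded in an interleaved vertex buffer; the names are illustrative:

    ```cpp
    #include <cstddef>

    struct Vec3 { float x, y, z; };

    // 'stride' is the size in bytes of a whole vertex, so stepping by it
    // moves from one vertex's position to the next.
    void OffsetPositions(Vec3* pPositions, size_t count, size_t stride, Vec3 offset)
    {
        for (size_t i = 0; i < count; ++i)
        {
            pPositions->x += offset.x;
            pPositions->y += offset.y;
            pPositions->z += offset.z;
            pPositions = (Vec3*)((char*)pPositions + stride); // next vertex
        }
    }
    ```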
  7. Use IDs or Pointers

    I use handles for pretty much every resource that needs a direct link - I have different ways of managing them, but most of them are 32-bit and store an index into an array of resources, along with 8 bits or so reserved for a "magic" ID so I can track resource lifetime (i.e. whether the resource is alive or freed). I think this might have been the first thing I read about it years ago: http://scottbilas.com/publications/gem-resmgr/. It's evolved since then but the premise is the same. I tend to use hashes when serialising if one thing needs to look up another. I've started to use a unique string index that's created at build time along with a dictionary (i.e. a list) of all strings used. That can be used for compares + easy string lookup for tools and debugging, and saves having to keep the hash + string pointer around. However, it requires some additional management that I'm not sure is worth the pay-off yet. T
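    A minimal sketch of a 32-bit handle along those lines, with an index plus a "magic" counter to detect stale handles; the 24/8 split and the names are illustrative:

    ```cpp
    #include <cstdint>

    struct Handle
    {
        static constexpr uint32_t kIndexBits = 24;
        static constexpr uint32_t kIndexMask = (1u << kIndexBits) - 1;

        uint32_t value = 0;

        uint32_t Index() const { return value & kIndexMask; }
        uint32_t Magic() const { return value >> kIndexBits; }
    };

    struct Slot
    {
        uint8_t magic = 1;     // bumped on free, so stale handles stop matching
        bool    alive = false;
        // ... the resource itself lives here or in a parallel array
    };

    bool IsValid(Handle h, const Slot* slots, uint32_t slotCount)
    {
        return h.Index() < slotCount
            && slots[h.Index()].alive
            && slots[h.Index()].magic == h.Magic();
    }
    ```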
  8. Terrain Rendering

    I'm trying out something similar to this currently: http://casual-effects.blogspot.co.uk/2014/04/fast-terrain-rendering-with-continuous.html It's nice and easy to set up, but I haven't finished the implementation of materials etc., so I can't say how good it'll be. Initial perf seems plenty fast enough. T
  9. D3D11CreateDevice freeze

    If you pass in the debug flag when creating the device, do you get any warning/error messages? I remember a bug from running a D3D9 game under the debugger (years ago!): not closing down correctly would occasionally leave the device in an unhappy state, causing it to fail during subsequent creation calls. I've not seen anything like that since though. T
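    For reference, enabling the debug layer is just a flag at creation time (error handling trimmed for brevity):

    ```cpp
    #include <d3d11.h>

    HRESULT CreateDebugDevice(ID3D11Device** outDevice,
                              ID3D11DeviceContext** outContext)
    {
        UINT flags = 0;
    #if defined(_DEBUG)
        flags |= D3D11_CREATE_DEVICE_DEBUG; // emits warnings/errors to the output
    #endif
        return D3D11CreateDevice(
            nullptr,                    // default adapter
            D3D_DRIVER_TYPE_HARDWARE,
            nullptr,                    // no software rasterizer module
            flags,
            nullptr, 0,                 // default feature levels
            D3D11_SDK_VERSION,
            outDevice,
            nullptr,                    // resulting feature level (ignored here)
            outContext);
    }
    ```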
  10. @ingramb - yes you're absolutely right of course. I've been trying to avoid a z-prepass by making use of instancing and bindless/texture arrays so that I can sort front to back as much as possible and avoid overdraw (via lots of small mesh pieces too), although I have yet to test if that's enough to skip it altogether in my case.
  11. One thing that prevents me from jumping entirely into Forward+/clustered is screen space lighting stuff (I originally had decals on that list but MJP nicely showed that clustering those works well too!). While it's perfectly possible to apply screen space AO and reflection after a forward rendering pass, you need to worry about grabbing explicit samples and also splitting ambient/direct light (or storing a ratio perhaps) so that SSAO gets applied correctly. Neither are deal breakers but it's another thing to consider. On the flipside, something that nudges us away from SSAO is maybe no bad thing ;).
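    Illustrative only (not a full shading model): the reason for the split is that AO should attenuate ambient/indirect light but not direct light, so a forward pass that outputs one combined colour can't have SSAO applied correctly afterwards:

    ```cpp
    struct Vec3 { float x, y, z; };

    static Vec3 Scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
    static Vec3 Add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

    // ssao in [0,1]: 0 = fully occluded, 1 = unoccluded.
    Vec3 ComposeLighting(Vec3 direct, Vec3 ambient, float ssao)
    {
        return Add(direct, Scale(ambient, ssao)); // AO only darkens ambient
    }
    ```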
  12. OpenGL Shaders

    I also do what Hodgman does :)
  13. Handling an open world

    There are a few things you can implement to handle large scenes:

    1) View frustum culling (as mentioned by vinterberg). With this you avoid rendering anything outside of the view frustum, as a simple way to keep draw calls down. You typically do this by having objects bounded by a simple shape (e.g. an Axis-Aligned Bounding Box or a Bounding Sphere), which is then intersected with the camera frustum. To improve this, a hierarchy or grid is often used: e.g. you first check if the bounding shape of an entire block of buildings is visible, and if so, then check the individual bounding volumes within. Look into grids, nested grids, quadtrees and octrees as structures often used to accelerate this process. (See the sketch below for the basic bounding-volume test.)

    2) Level-of-Detail (LOD) systems: Instead of drawing a full-detail version of an object at all distances, you prepare simpler versions of the object, which are used when it's far enough away from the camera that the extra detail wouldn't really be visible. You can also stop drawing small or thin items at a distance, e.g. if you look all the way down a street, the rubbish bins needn't be rendered after a few hundred meters. For very large scenes, people will sometimes use billboards or impostors (essentially quads that face the camera, very similar to particles) as the lowest level of detail. Generating the LOD models is often a manual process, but it can be automated in some cases.

    3) Occlusion culling: Being inside a city, you can make use of the fact that buildings often obscure much of what's behind them. There are a number of techniques to do this: i) You can break the level into parts and precompute which parts of the level are visible from each area (search for Precomputed Visibility Sets). This technique is fairly old-fashioned as it requires quite a bit of precomputing and assumes a mostly static scene, but it can still be handy today in huge scenarios or on lower-spec platforms. ii) GPU occlusion queries - this is where you render the scene (ideally a much simpler version of it) and then use the GPU to determine which parts you actually need to render. Googling should provide you with lots of info, including the gotchas with this approach. iii) CPU-based occlusion culling - this can be done by rasterizing a small version of the scene on the CPU into a tiny z-buffer-like array, which is then used to do occlusion queries much like the GPU version. This avoids the latency of the GPU version at the expense of more CPU cost. iv) A mix of both GPU and CPU approaches where you re-use the previous frame's z-buffer.

    There are other methods, but I think these are the most common. HTH, T
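    As a small illustration of point (1), a sphere-vs-frustum test against six planes, assuming inward-pointing plane normals; types and names are illustrative:

    ```cpp
    struct Vec3   { float x, y, z; };
    struct Plane  { Vec3 normal; float d; };   // dot(normal, p) + d >= 0 -> inside
    struct Sphere { Vec3 center; float radius; };

    static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    bool SphereInFrustum(const Sphere& s, const Plane frustum[6])
    {
        for (int i = 0; i < 6; ++i)
        {
            // Farther behind any plane than the radius -> completely outside.
            if (Dot(frustum[i].normal, s.center) + frustum[i].d < -s.radius)
                return false;
        }
        return true; // inside or intersecting (conservative)
    }
    ```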
  14. An alternative to the Sponza mesh for demos?

    I went looking for a similar thing when Sponza got a bit boring, and I quite like the Natural History Museum model from here: http://www.3drender.com/challenges/ IIRC it might require a bit of tidying up of the materials (but it's been a long time since I grabbed it).