Everything posted by Tessellator

  1. A couple random ideas: 1) If you're using indices check out glPrimitiveRestartIndex to see if that'll do it (I've only just noticed that functionality exists in GL!) 2) I remember reading/watching the presentation about Ghost Recon Wildlands and I think they clip bits of their terrain out for tunnels etc by setting a vertex position to inf or NaN (i.e. some invalid float value) - any triangle using an invalid vertex would be rejected completely. Although non-standard it apparently worked on all HW they tested; perhaps it'll work similarly for line segments? T
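     For reference, primitive restart boils down to a couple of GL calls - a minimal sketch, assuming 16-bit indices and a desktop GL 3.1+ context (buffer/shader setup omitted, names illustrative):

         glEnable(GL_PRIMITIVE_RESTART);
         glPrimitiveRestartIndex(0xFFFF);  // assumed restart value for 16-bit indices
         // Any index equal to 0xFFFF now cuts the strip, so one draw call
         // can contain many disjoint line strips:
         glDrawElements(GL_LINE_STRIP, indexCount, GL_UNSIGNED_SHORT, nullptr);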
  2. I'm not sure if it's relevant but I had a similar issue that came down to the UVs I was using to sample the heights from my texture. I needed linear interpolation enabled for the sampler for some LOD blending reasons, which meant at each LOD level I needed to make sure I was hitting the texel center accurately for the relevant mip. I'm not at the PC with the code and shader to remind me of the details, but I do recall it being a total pain. Your comments about the issue appearing offset even when using a simple sine wave sounded awfully familiar :). Edit: Actually nevermind - it sounds like you're loading rather than sampling the data... I'm not sure in that case.
  3. Where have the DirectX/OpenGL specific forums gone?
  4. An alternative approach (with different positives and negatives) is to use an Immediate Mode GUI (IMGUI) library such as this one: https://github.com/ocornut/imgui Here are a couple of posts that cover both approaches: Pro Qt: https://deplinenoise.wordpress.com/2017/03/05/why-qt-and-not-imgui/ Pro IMGUI: https://gist.github.com/bkaradzic/853fd21a15542e0ec96f7268150f1b62 FWIW I really like the IMGUI approach since there's so little friction to creating new bits of UI. However, I use it for my own stuff, so I'd potentially have to re-evaluate if I was making a tool to be worked on/in by a larger team of coders. That said, Unity also uses an IMGUI approach. T
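     To give a flavour of that low friction, a minimal dear imgui sketch (entityCount, exposure and ReloadShaders are made-up stand-ins for your own data):

         #include "imgui.h" // assumes a platform/renderer backend is already initialised

         // Called once per frame, somewhere between ImGui::NewFrame() and rendering.
         void DrawDebugUI(int entityCount, float& exposure)
         {
             ImGui::Begin("Debug Tools");
             ImGui::Text("Entities: %d", entityCount);
             if (ImGui::Button("Reload Shaders"))
                 ReloadShaders();                      // hypothetical helper
             ImGui::SliderFloat("Exposure", &exposure, 0.0f, 4.0f);
             ImGui::End();
         }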
  5. Build a texture out of vertices

     If I understand correctly, I'd tackle this in two parts: 1) Store colours into a set so that each colour is stored only once; each vertex gets an index into the set for its colour. 2) Add the set of colours to a texture, converting each vertex's colour index into a texture coordinate. For (1) I'd create a simple hash function that lets you bin the colours together to reduce compares, or use some other mechanism to create the set. For (2), it's a case of writing the colours as pixels to a texture of a given dimension, then storing each resulting location as a UV. A sketch of both steps follows below. Sorry my answer is brief, my lunch break just ran out! T
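     Something like this is what I mean - a sketch assuming RGBA8 colours packed into a uint32_t and a fixed texture width; all the names are illustrative:

         #include <cstdint>
         #include <unordered_map>
         #include <vector>

         struct Vertex { float pos[3]; uint32_t rgba; float colourUV[2]; };

         void BuildColourTexture(std::vector<Vertex>& verts, int texWidth,
                                 std::vector<uint32_t>& texels, int& texHeight)
         {
             // Step 1: build the set - the hash map does the "binning", so
             // each colour is stored exactly once.
             std::unordered_map<uint32_t, uint32_t> indexOf;
             std::vector<uint32_t> colourIndex(verts.size());
             for (size_t i = 0; i < verts.size(); ++i)
             {
                 auto ins = indexOf.insert({ verts[i].rgba, (uint32_t)texels.size() });
                 if (ins.second)
                     texels.push_back(verts[i].rgba);
                 colourIndex[i] = ins.first->second;
             }
             texHeight = ((int)texels.size() + texWidth - 1) / texWidth;

             // Step 2: turn each colour index into a texel-centre UV.
             for (size_t i = 0; i < verts.size(); ++i)
             {
                 uint32_t x = colourIndex[i] % texWidth;
                 uint32_t y = colourIndex[i] / texWidth;
                 verts[i].colourUV[0] = (x + 0.5f) / (float)texWidth;
                 verts[i].colourUV[1] = (y + 0.5f) / (float)texHeight;
             }
         }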
  6. Separating out classes based on API, like D3dSkybox and D3dLight, is something I'd avoid. For example, what does D3dLight have that CLight doesn't? It's a structure that has a position/orientation, colour, type etc. In terms of interfacing with D3D, all you'll be doing is setting some constants or another buffer type - that can be abstracted efficiently at a much lower level. Abstracting buffers and entire blobs of data (buffer + state) makes porting to other platforms much easier too (and fits well with modern rendering APIs). A sketch of the idea follows below. T
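     Roughly what I mean, as a sketch - GpuBuffer here is a stand-in for whatever wraps the real API object (ID3D11Buffer, a GL buffer, etc.):

         #include <cstddef>
         #include <cstring>
         #include <vector>

         struct Vec3   { float x, y, z; };
         struct CLight { Vec3 position; Vec3 colour; int type; }; // one API-agnostic light type

         struct GpuBuffer { std::vector<unsigned char> storage; }; // stand-in for a GPU resource

         void UpdateBuffer(GpuBuffer& b, const void* data, size_t size)
         {
             // A real version would map/update the API buffer here; all the
             // D3D/GL specifics live behind this one function.
             b.storage.resize(size);
             std::memcpy(b.storage.data(), data, size);
         }

         void UploadLights(GpuBuffer& lightCB, const std::vector<CLight>& lights)
         {
             // CLight never needs a D3dLight twin.
             UpdateBuffer(lightCB, lights.data(), lights.size() * sizeof(CLight));
         }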
  7. Vertex Data Structures

     Most of the time I use reasonably opaque vertex data, as Dingleberry suggests. Any time I need to do operations on the mesh, though, I tend to split it up into several streams of data, e.g. positions, UVs, tangents etc. Alternatively, I also have functions that take e.g. a Vec3* to operate on positions, plus a stride value that gives the size in bytes between consecutive vertices. To step from one vertex to the next you can do pPositions = (Vec3*)((char*)pPositions + stride); - a sketch follows below.
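     As a sketch of the strided version (Vec3 and the function are illustrative):

         #include <cstddef>

         struct Vec3 { float x, y, z; };

         // Operate on positions embedded in an interleaved vertex buffer
         // without knowing the full vertex layout.
         void TranslatePositions(Vec3* pPositions, size_t vertexCount,
                                 size_t strideBytes, Vec3 offset)
         {
             for (size_t i = 0; i < vertexCount; ++i)
             {
                 pPositions->x += offset.x;
                 pPositions->y += offset.y;
                 pPositions->z += offset.z;
                 // Step to the next vertex: the stride covers the whole
                 // vertex, not just the position field.
                 pPositions = (Vec3*)((char*)pPositions + strideBytes);
             }
         }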
  8. use ID or Pointers

     I use handles for pretty much every resource that needs a direct link. I have different ways of managing them, but most are 32-bit and store an index into an array of resources, along with 8 bits or so reserved for a "magic" ID so I can track resource lifetime (i.e. whether the resource is alive or freed). I think this might have been the first thing I read about it, years ago: http://scottbilas.com/publications/gem-resmgr/. It's evolved since then but the premise is the same. A sketch of the layout follows below.

     I tend to use hashes when serialising, if one thing needs to look up another. I've started to use a unique string index that's created at build time along with a dictionary (i.e. a list) of all strings used. That can be used for compares plus easy string lookup for tools and debugging, and saves having to keep the hash + string pointer around. However, it requires some additional management that I'm not sure is worth the pay-off yet. T
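     A sketch of the handle idea, with illustrative bit counts (24-bit index, 8-bit magic):

         #include <cstdint>
         #include <vector>

         struct Handle { uint32_t value; };

         struct Slot { /* resource data */ uint8_t magic; bool alive; };
         std::vector<Slot> g_slots;

         Handle MakeHandle(uint32_t index, uint8_t magic)
         {
             return { (index << 8) | magic };
         }

         Slot* Resolve(Handle h)
         {
             uint32_t index = h.value >> 8;
             uint8_t  magic = h.value & 0xFF;
             if (index >= g_slots.size()) return nullptr;
             Slot& s = g_slots[index];
             // A freed-and-reused slot gets a new magic, so stale handles fail here.
             return (s.alive && s.magic == magic) ? &s : nullptr;
         }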
  9. Terrain Rendering

     I'm currently trying out something similar to this: http://casual-effects.blogspot.co.uk/2014/04/fast-terrain-rendering-with-continuous.html It's nice and easy to set up, but I haven't finished the implementation of materials etc, so I can't say how good it'll be. Initial perf seems plenty fast enough. T
  10. D3D11CreateDevice freeze

     If you pass in the debug flag when creating the device, do you get any warning/error messages? (A sketch of enabling it follows below.)

     I remember hitting a bug years ago when running a D3D9 game under the debugger: not closing down correctly would occasionally leave the device in an unhappy state, causing it to fail during subsequent creation calls. I've not seen anything like that since, though. T
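     For reference, a sketch of creating the device with the debug layer on, so the runtime reports warnings/errors to the debugger output:

         #include <d3d11.h>
         #pragma comment(lib, "d3d11.lib")

         bool CreateDebugDevice(ID3D11Device** outDevice, ID3D11DeviceContext** outContext)
         {
             D3D_FEATURE_LEVEL level;
             HRESULT hr = D3D11CreateDevice(
                 nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                 D3D11_CREATE_DEVICE_DEBUG,   // the flag in question
                 nullptr, 0,                  // default feature levels
                 D3D11_SDK_VERSION,
                 outDevice, &level, outContext);
             return SUCCEEDED(hr);
         }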
  11. @ingramb - yes you're absolutely right of course. I've been trying to avoid a z-prepass by making use of instancing and bindless/texture arrays so that I can sort front to back as much as possible and avoid overdraw (via lots of small mesh pieces too), although I have yet to test if that's enough to skip it altogether in my case.
  12. One thing that prevents me from jumping entirely into Forward+/clustered is screen space lighting stuff (I originally had decals on that list but MJP nicely showed that clustering those works well too!). While it's perfectly possible to apply screen space AO and reflection after a forward rendering pass, you need to worry about grabbing explicit samples and also splitting ambient/direct light (or storing a ratio perhaps) so that SSAO gets applied correctly. Neither are deal breakers but it's another thing to consider. On the flipside, something that nudges us away from SSAO is maybe no bad thing ;).
  13. Shaders

     I also do what Hodgman does :)
  14. Handling an open world

     There are a few things you can implement to handle large scenes:

     1) View frustum culling (as mentioned by vinterberg). With this you avoid rendering anything outside of the view frustum, as a simple way to keep draw calls down. You typically do this by bounding each object with a simple shape (e.g. an axis-aligned bounding box or a bounding sphere), which is then intersected with the camera frustum. To improve this, a hierarchy or grid is often used: e.g. you first check whether the bounding shape of an entire block of buildings is visible, and only if so check the individual bounding volumes within. Look into grids, nested grids, quadtrees and octrees as structures often used to accelerate this process. A sketch of a simple sphere-vs-frustum test follows after this list.

     2) Level-of-detail (LOD) systems. Instead of drawing a full-detail version of an object at all distances, you prepare simpler versions which are used when the object is far enough from the camera that the extra detail wouldn't really be visible. You can also stop drawing small or thin items at a distance, e.g. if you look all the way down a street, the rubbish bins needn't be rendered after a few hundred metres. For very large scenes, people will sometimes use billboards or impostors (essentially camera-facing quads, very similar to particles) as the lowest level of detail. Generating the LOD models is often a manual process, but it can be automated in some cases.

     3) Occlusion culling. Inside a city, you can make use of the fact that buildings often obscure much of what stands behind them. There are a number of techniques for this: i) break the level into parts and precompute which parts are visible from each area (search for Potentially Visible Sets). This technique is fairly old-fashioned, as it requires quite a bit of precomputation and assumes a mostly static scene, but it can still be handy today in huge scenarios or on lower-spec platforms. ii) GPU occlusion queries: render the scene (ideally a much simpler version of it) and use the GPU to determine which parts you actually need to render. Googling should provide lots of info, including the gotchas of this approach. iii) CPU-based occlusion culling: rasterize a small version of the scene on the CPU into a tiny zbuffer-like array, which is then used to answer occlusion queries much like the GPU version. This avoids the latency of the GPU version at the expense of more CPU cost. iv) A mix of the GPU and CPU approaches, where you re-use the previous frame's zbuffer.

     There are other methods, but I think these are the most common. HTH, T
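     The sphere test mentioned in (1), as a sketch - it assumes the six frustum planes are normalised and point inwards:

         struct Plane  { float nx, ny, nz, d; };   // nx*x + ny*y + nz*z + d >= 0 means inside
         struct Sphere { float x, y, z, radius; };

         bool IsVisible(const Sphere& s, const Plane frustum[6])
         {
             for (int i = 0; i < 6; ++i)
             {
                 const Plane& p = frustum[i];
                 float dist = p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
                 if (dist < -s.radius)
                     return false;   // entirely behind one plane -> culled
             }
             return true;            // inside or intersecting all six planes
         }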
  15. An alternative to the Sponza mesh for demos?

     I went looking for a similar thing when Sponza got a bit boring, and I quite like the Natural History Museum model from here: http://www.3drender.com/challenges/ IIRC it might require a bit of tidy-up with the materials (but it's been a long time since I grabbed it).
  16. This is really great, thanks for taking time to share this.
  17. Just throwing in another idea, a slight variation on the render-key approach, that I've been trying recently:

     In my hobby engine I maintain a list of sorted material instances - they're sorted by state, shader, texture and hashed constant data. They don't need much sorting after being loaded (changing material params or streaming in new materials requires another small re-sort). I sort these much as you would with a render key, and maintain a list of indices into the material instances. Additionally, each material instance knows its own sorted index.

     When I render, rather than trying to pack lots of info into the render key, I make use of the fact that the material instances are already sorted, and simply use the material instance's sorted index + mesh data (vb/ib handles munged together) to sort the draws. I can also pack depth in at this stage, since I don't need much space for the material index (14 bits currently, and that's overkill for my needs). A sketch of a possible key layout follows below.
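     A sketch of a possible layout - the 14-bit material index is from above, the other widths and the field order are guesswork:

         #include <cstdint>

         uint64_t MakeDrawKey(uint32_t materialSortedIndex, // pre-sorted, 14 bits
                              uint32_t meshHandle,          // vb/ib handles munged, 26 bits
                              uint32_t depthBits)           // quantised depth, 24 bits
         {
             return ((uint64_t)(materialSortedIndex & 0x3FFF)    << 50) |
                    ((uint64_t)(meshHandle          & 0x3FFFFFF) << 24) |
                    ((uint64_t)(depthBits           & 0xFFFFFF));
         }
         // Sorting draws by this key batches by material first, then by
         // mesh, then front-to-back by depth.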
  18. Preserve rendering order with queues

     I do mine a little differently, as I hit the same issue. Each time I change high-level state*, the state information gets saved off into a separate buffer. The index of that state information is what goes into the command bitfield, along with any draw calls that follow. It lives higher up in the bitfield, so during sorting those states are preserved for the appropriate draw calls. (A sketch of the encoding follows below.)

     I currently use different command buffers for different bits of the pipeline, e.g. per shadow frustum, main render and post-process. This keeps the number of state changes per buffer down.

     * By high-level state I mean things like blend mode, viewport etc - changing shader, texture and vertex formats gets rolled into the regular bitfield. I could also have custom commands for things like perform-clear or generate-all-mips, since the separate state buffer can encode anything I want (and isn't restricted to 64 bits).
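     A sketch of the encoding - the field widths here are purely illustrative:

         #include <cstdint>

         struct StateBlock { /* blend mode, viewport, ... */ };

         // High bits carry an index into a side buffer of StateBlocks, so
         // sorting keeps draws grouped under the state they were recorded with.
         uint64_t MakeCommand(uint32_t stateIndex,  // into the state buffer, 10 bits
                              uint32_t shaderId,    // 12 bits
                              uint32_t textureId,   // 12 bits
                              uint32_t drawId)      // 30 bits
         {
             return ((uint64_t)(stateIndex & 0x3FF) << 54) |
                    ((uint64_t)(shaderId   & 0xFFF) << 42) |
                    ((uint64_t)(textureId  & 0xFFF) << 30) |
                    ((uint64_t)(drawId     & 0x3FFFFFFF));
         }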
  19. UE4 IBL / shading confusion

     3) Not sure about the precomputed textures - are your inputs mapped differently to his? I've used an empirical EnvironmentBRDF for playing around at home (similar to the one in http://blog.selfshadow.com/publications/s2013-shading-course/lazarov/s2013_pbs_black_ops_2_notes.pdf) and haven't tried that approach yet (it's on my list though). The 1024 samples he takes in the sample code are (I assume) for generating the reference version, so he can see how close the split-sum approximation gets.
  20. UE4 IBL / shading confusion

     Hi,

     1) As far as I can tell, the metallic parameter controls how the primary colour is used, i.e. whether it goes into diffuse (metallic == 0), into specular (metallic == 1), or a mix somewhere in between. When metallic == 1 the diffuse colour is black (0); when metallic == 0 the specular colour is assumed to be 0.04 (typical for the majority of non-metals, which tend to lie between 0.02 and 0.05 for common things). Perhaps something similar to this in shader terms:

         vec3 diffuse = mix(colour, vec3(0, 0, 0), metallic);
         vec3 specular = mix(vec3(0.04, 0.04, 0.04), colour, metallic);

     2) I *think* the cavity map is multiplied by the resulting specular value to help darken it - light from an environment map tends to look a little too bright in small creases without proper occlusion being taken into account, so this is a reasonable workaround. They no longer need a separate specular input (except for special cases) as it's handled by the single colour input and the metallic parameter.
  21. I've had these thoughts before and ultimately realized they weren't necessary. Since you're not using virtual inheritance, you're not going to be storing all your objects in one big list of IThings (which is good), unless you implement your own runtime type identification and branch on the type (at which point you might as well use virtual functions).

     Instead, embrace the fact that each of your Things is different and store them in different lists, so you don't need to know the type - see the sketch below. You can always document naming/interface schemes instead. As the others have said, just start writing code and let the compiler do the work for you. T
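     A sketch of the per-type lists (Door and Light are made-up stand-ins for your own Things):

         #include <vector>

         struct Door  { void Update() { /* ... */ } };
         struct Light { void Update() { /* ... */ } };

         struct World
         {
             std::vector<Door>  doors;
             std::vector<Light> lights;

             void Update()
             {
                 // No virtual dispatch or type branching: each list already
                 // knows exactly what it holds.
                 for (Door& d : doors)   d.Update();
                 for (Light& l : lights) l.Update();
             }
         };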
  22. Hi,   Regarding getting light values for things - I've had some success capturing my own with some of the light metering iOS apps (e.g. LightMeter by whitegoods). I doubt it's super accurate, but it does a good job illustrating how crazily different light values can be.   Although I've worked with engines that use tone mapping for some time, I've not dabbled with anything using real-world values. At home I'm starting from scratch and trying out something with realistic values. I was searching for some docs covering local and perceptual tone mappers, and found this fairly gentle paper: http://aris-ist.intranet.gr/documents/Tone%20Mapping%20and%20High%20Dinamic%20Range%20Imaging.pdf   T
  23. Generating Cube Maps for IBL

     Hi, ATI's CubeMapGen does what you want, and although it's no longer being updated, the source is available: http://developer.amd.com/resources/archive/archived-tools/gpu-tools-archive/cubemapgen/ IIRC in a recent presentation on the new Killzone, one of Guerrilla's devs said they'd modified the code to match their BRDF when doing the integration, so material roughness is treated uniformly for all light types. Oh, and this is handy for some of the background and rolling your own: http://www.rorydriscoll.com/2012/01/15/cubemap-texel-solid-angle/ Thanks, T

     * Hmmm, not so sure it was Guerrilla now... I'll have a look and edit the post if I find different.