BrentMorris

Member
  • Content count

    89
  • Joined

  • Last visited

Community Reputation

1226 Excellent

About BrentMorris

  • Rank
    Member

Personal Information

  • Interests
    Art
    Audio
    Design
    DevOps
    Production
    Programming
  1. DX11 Constant buffer and names?

    You'll notice that the VS/GS/PS SetConstantBuffers functions take a start-slot argument and an input array of constant buffer pointers. When a shader is compiled, each cbuffer in the shader is assigned a constant buffer slot. The example shader you posted makes it easy because there is only one constant buffer, which means it's assigned to slot/register 0. There are 14 constant buffer slots per shader stage, but you shouldn't need that many. I personally do not trust automatic register assignment at all! I believe it assigns them from top to bottom, but if you are paranoid like me you can set the constant buffer slot explicitly:

        cbuffer MatrixBuffer : register(b0)
        {
            matrix worldMatrix;
            matrix viewMatrix;
            matrix projectionMatrix;
        };

    where MatrixBuffer is now assigned to constant buffer slot/register 0.

    Otherwise, the mapping to variables in the constant buffer itself is based on the byte data that you map into the constant buffer. I suggest that you read this article about constant buffer packing rules. You have to make sure the byte data you map in matches the appropriate data types in your shader, and that your types meet the 4-byte alignment and 16-byte boundary rules. Feel free to ask more questions about this because it can cause wacky behavior if you aren't aware of it. For instance, if you have a constant buffer like this:

        cbuffer MatrixBuffer
        {
            float2 SomeData;
            float4 SomeOtherData;
        };

    you will need to map in 8 floats of data total: two for the opening float2, two to pad out the remaining half of the 16-byte (4-float) boundary, and then four to map to the float4. The two padding floats are required for the data to land where you expect it.
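    To see those packing rules concretely, here is a small CPU-side sketch in Python (the field values are made up) that builds the 32-byte blob you would map in for a cbuffer holding a float2 followed by a float4: two floats, eight pad bytes to reach the 16-byte boundary, then four floats.

```python
import struct

# cbuffer layout:
#   float2 SomeData;       -> offset 0,  8 bytes
#   (padding)              -> offset 8,  8 bytes (float4 must start on a 16-byte boundary)
#   float4 SomeOtherData;  -> offset 16, 16 bytes
blob = struct.pack(
    "<2f 8x 4f",         # little-endian: 2 floats, 8 pad bytes, 4 floats
    1.0, 2.0,            # SomeData (example values)
    3.0, 4.0, 5.0, 6.0,  # SomeOtherData (example values)
)

assert len(blob) == 32  # 8 floats' worth of data total
```

    If you forget the pad bytes the blob is only 24 bytes and SomeOtherData reads garbage starting mid-boundary.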
  2. Has anyone tried .NET Core yet?

    The differences don't really matter to me, except that .NET Core can run on Linux! In a cloud environment, Linux installations are also leaner and cheaper than Windows ones.
  3. Good formula for swaying grass?

    If you only care about swaying and not wind patterns in a global sense, you could probably drive the sway with the value noise that Shadertoy is fond of: https://www.shadertoy.com/view/lsf3WH   You can use it to generate a cheap approximation of Perlin noise on the GPU. It's bound to look better than a trig wave, and I believe (don't quote me on this) that it's cheaper than sampling from a gradient texture too. You could probably layer a couple of samples together to get erratic results that look like gusts. You can do all of the classic noise-combination tricks: sampling at different scales, translating the sample positions at different speeds, multiplying normalized samples together, etc. There's going to be some magical franken-combination that looks good, I think.

    Edit: I messed around with it a little out of curiosity. I think this could look pretty good. Paste this into Shadertoy:

        // Created by inigo quilez - iq/2013
        // License Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
        //
        // Value Noise (http://en.wikipedia.org/wiki/Value_noise), not to be confused with Perlin
        // Noise, is probably the simplest way to generate noise (a random smooth signal with
        // mostly all its energy in the low frequencies) suitable for procedural texturing/shading,
        // modeling and animation.
        //
        // It produces lower quality noise than Gradient Noise (https://www.shadertoy.com/view/XdXGW8)
        // but it is slightly faster to compute. When used in a fractal construction, the blockiness
        // of Value Noise gets quickly hidden, making it a very popular alternative to Gradient Noise.
        //
        // The principle is to create a virtual grid/lattice all over the plane, and assign one
        // random value to every vertex in the grid. When querying a noise value at an arbitrary
        // point in the plane, the grid cell containing the query point is determined, the four
        // vertices of that cell are found, their random values fetched and then bilinearly
        // interpolated with a smooth interpolant.
        //
        // Value    Noise 2D: https://www.shadertoy.com/view/lsf3WH
        // Value    Noise 3D: https://www.shadertoy.com/view/4sfGzS
        // Gradient Noise 2D: https://www.shadertoy.com/view/XdXGW8
        // Gradient Noise 3D: https://www.shadertoy.com/view/Xsl3Dl
        // Simplex  Noise 2D: https://www.shadertoy.com/view/Msf3WH

        float hash( vec2 p )  // replace this by something better
        {
            p = 50.0*fract( p*0.3183099 + vec2(0.71,0.113) );
            return -1.0+2.0*fract( p.x*p.y*(p.x+p.y) );
        }

        float noise( in vec2 p )
        {
            vec2 i = floor( p );
            vec2 f = fract( p );
            vec2 u = f*f*(3.0-2.0*f);
            return mix( mix( hash( i + vec2(0.0,0.0) ), hash( i + vec2(1.0,0.0) ), u.x ),
                        mix( hash( i + vec2(0.0,1.0) ), hash( i + vec2(1.0,1.0) ), u.x ), u.y );
        }

        float fbm( in vec2 p )
        {
            mat2 m = mat2( 1.6, 1.2, -1.2, 1.6 );
            float f = 0.0;
            f += 0.5000*noise( p ); p = m*p;
            f += 0.2500*noise( p ); p = m*p;
            f += 0.1250*noise( p ); p = m*p;
            f += 0.0625*noise( p );
            return f;
        }

        float norm( in float f )
        {
            return 0.5 + 0.5 * f;
        }

        void mainImage( out vec4 fragColor, in vec2 fragCoord )
        {
            vec2 p = fragCoord.xy / iResolution.xy;
            vec2 uv = p * vec2(iResolution.x/iResolution.y, 1.0);
            float time = iGlobalTime;
            uv *= 8.0;
            vec2 uv1 = uv * 0.5 + vec2(time * 0.3, time * 0.7) * 0.15;
            vec2 uv2 = uv + vec2(time, -time) * 0.23;
            float f1 = norm(fbm(uv1));
            float f2 = norm(fbm(uv2));
            float f = f1 * f2;
            fragColor = vec4( f, f, f, 1.0 );
        }

    You would just evaluate something like that at each grass position.
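    If you'd rather evaluate the sway factor per grass blade on the CPU, here is a rough Python port of the same idea (the constants mirror the shader above; the drift speeds in `sway` are just the values I used and are otherwise arbitrary):

```python
import math

def fract(v):
    # GLSL-style fract: always in [0, 1)
    return v - math.floor(v)

def hash2(x, y):
    # lattice hash from the shader; returns a value in [-1, 1)
    px = 50.0 * fract(x * 0.3183099 + 0.71)
    py = 50.0 * fract(y * 0.3183099 + 0.113)
    return -1.0 + 2.0 * fract(px * py * (px + py))

def noise(x, y):
    # bilinearly interpolated value noise with a smoothstep interpolant
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    ux = fx * fx * (3.0 - 2.0 * fx)
    uy = fy * fy * (3.0 - 2.0 * fy)
    a, b = hash2(ix, iy), hash2(ix + 1.0, iy)
    c, d = hash2(ix, iy + 1.0), hash2(ix + 1.0, iy + 1.0)
    top = a + (b - a) * ux
    bot = c + (d - c) * ux
    return top + (bot - top) * uy

def fbm(x, y):
    # four octaves, rotating/scaling by m = mat2(1.6, 1.2, -1.2, 1.6) each time
    f, amp = 0.0, 0.5
    for _ in range(4):
        f += amp * noise(x, y)
        x, y = 1.6 * x - 1.2 * y, 1.2 * x + 1.6 * y
        amp *= 0.5
    return f

def sway(x, y, t):
    # two fbm layers drifting at different speeds, multiplied together
    f1 = 0.5 + 0.5 * fbm(x * 0.5 + t * 0.045, y * 0.5 + t * 0.105)
    f2 = 0.5 + 0.5 * fbm(x + t * 0.23, y - t * 0.23)
    return f1 * f2  # stays in [0, 1]
```

    Call `sway(blade_x, blade_z, time)` and use the result to scale the bend of each blade.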
  4. Dealing With Quadtree Terrain Cracks

    1) Detect edge vertices when generating a mesh.
    2) Detect which edge vertices are 'odd'; these are the ones that mismatch with the next LOD and create holes.
    3) Get the two neighboring even-vertex edge positions.
    4) Set the odd-vertex position to the average of those two even positions.

    This should close your cracks. The coarser neighbor's edge runs in a straight line between the two even positions; the more detailed mesh just has an extra sample point at the halfway mark, so averaging snaps that point onto the line.

    Your next problem is knowing when to close the cracks on a side. The most straightforward approach is probably to regenerate a mesh whenever a neighboring mesh changes LOD. There is another approach where you store two positions per vertex (the main position and an edge-transition position) plus a bitflag mask for which edge a vertex is on (zero if it isn't on an edge). When you render the mesh you pass in a bitflag for which edges of the mesh border higher LODs. In the shader you check the vertex's edge mask against that neighboring-LOD bitmask, and if the vertex sits on an edge bordering a higher LOD you output the transition position instead. The extra vertex data isn't that bad in the grand scheme of things, and this lets you reuse the same pre-generated mesh without recalculating it every time a neighboring LOD changes; all you have to do is pass in the neighboring-higher-LOD bitmask.
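    Steps 1-4 can be sketched in a few lines of Python (the vertex layout here is a made-up simplification; it assumes the border vertices of the fine mesh are already collected in order, with even indices matching the coarse neighbor):

```python
def stitch_edge(edge):
    # edge: ordered list of (x, y, z) border vertices of the fine-LOD mesh.
    # Even indices line up with the coarse neighbor; odd indices are the
    # extra midpoints that open cracks. Snap each odd vertex onto the
    # straight line between its two even neighbors by averaging them.
    out = list(edge)
    for i in range(1, len(edge) - 1, 2):
        a, b = edge[i - 1], edge[i + 1]
        out[i] = tuple((pa + pb) * 0.5 for pa, pb in zip(a, b))
    return out
```

    For example, an edge [(0, 0, 0), (1, 5, 0), (2, 0, 0)] has its middle vertex pulled to (1.0, 0.0, 0.0), which lies exactly on the coarse neighbor's straight edge.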
  5. Why didn't somebody tell me?

    Use [Alt + Print Screen] to take a screenshot of the active window instead of everything. Especially useful if you ever need to take a specific screenshot and you have 2+ monitors. Edit to fit the topic: I was caught cropping a screenshot of a specific window out of a 3 monitor Print-Screen when I was told about it.
  6. Billboard grass rendering-visual bug

    My guess is that you are using one-bit alpha on the grass, and that the grass texture is mipmapped. The mipmap downsampling produces results that make your 1-bit alpha disappear in the higher mip levels. In the past I have fixed this with a distance-field alpha instead of a 1-bit interpreted alpha channel. As long as you are using a 1-bit alpha you can adjust your threshold to make the farther mips 'thinner' or 'thicker', but both versions tend to look bad! A thinner threshold will make grass disappear like you are seeing. Here's the paper: http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf   You use that technique but for grass instead of text. A distance field is much more tolerant of mip downsampling.
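    For illustration, here's a brute-force Python bake of that distance-field alpha from a 1-bit mask (sketch only, it's O(n^4); the `spread` parameter and the 0.5 edge value are my own knobs, not something prescribed by the paper):

```python
import math

def distance_field(mask, spread=3.0):
    # mask: 2D list of 0/1 (the 1-bit alpha). Returns alpha in [0, 1]
    # where 0.5 falls on the original silhouette edge. Brute force:
    # for each texel, find the distance to the nearest opposite texel,
    # sign it (inside positive), and remap into [0, 1] over 'spread'.
    h, w = len(mask), len(mask[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            inside = mask[y][x] == 1
            best = spread
            for v in range(h):
                for u in range(w):
                    if (mask[v][u] == 1) != inside:
                        best = min(best, math.hypot(u - x, v - y))
            signed = best if inside else -best
            out[y][x] = max(0.0, min(1.0, 0.5 + 0.5 * signed / spread))
    return out
```

    Alpha-testing this field at 0.5 reproduces the original shape, and its mip levels degrade far more gracefully than a hard 1-bit mask.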
  7. Somebody correct me if I'm wrong, but if every thread running a shader takes the same branch, it is nearly (though not entirely) the same cost as if the chosen path were compiled in with defines. If you are rendering an object with a specific set of branches that every thread will take, it may not be a big deal. If threads take different branches, you eat the cost of all branches.
  8. Circle Circle intersection help

    Those Y's look correct to me, just mirrored vertically (over the X-axis). What rendering API are you using? Your data is correct, but you might be interpreting the rendering side of it incorrectly. Most "draw circle/line" APIs work with positive Y values moving downwards instead of upwards: the top-left of the screen is (0, 0) and the bottom-left corner is (0, height). Try rendering those points at (x, imageHeight - y) and see if they match up to the circles.
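    A quick Python check of that idea; the intersection math is the standard two-circle construction, and `image_height` stands in for your render target height:

```python
import math

def circle_intersections(x0, y0, r0, x1, y1, r1):
    # classic two-circle intersection; returns 0 or 2 points (math space, +y up)
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []
    a = (r0 * r0 - r1 * r1 + d * d) / (2.0 * d)
    h = math.sqrt(max(0.0, r0 * r0 - a * a))
    mx = x0 + a * (x1 - x0) / d  # midpoint of the chord
    my = y0 + a * (y1 - y0) / d
    return [(mx + h * (y1 - y0) / d, my - h * (x1 - x0) / d),
            (mx - h * (y1 - y0) / d, my + h * (x1 - x0) / d)]

def to_screen(x, y, image_height):
    # flip into screen space, where (0, 0) is top-left and +y goes down
    return (x, image_height - y)
```

    For example, radius-5 circles at (0, 0) and (6, 0) intersect at (3, 4) and (3, -4); to_screen(3, 4, 100) gives (3, 96), which is where that point should be drawn.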
  9. I understand why the ternary operator branches on the GPU if there is any inline computation, like:

        // Branching on the (value+1)!
        float value = 3;
        value = (value < 4) ? (value) : (value + 1);

    but if the ternary operator is literally switching between two variables, should it necessarily branch?

        // Why should this branch?
        float value = 3;
        float valuePlus = value + 1;
        value = (value < 4) ? value : valuePlus;

        // It should be equivalent to:
        float value = 3;
        float valuePlus = value + 1;
        float values[2];
        values[0] = value;
        values[1] = valuePlus;
        int valueIndex = (int)(value >= 4);
        value = values[valueIndex];
        // And no branching!

    At least in my head it should just be an address offset between two values. I know this is a nitpicky branching concern, but it's an interesting mini-optimization depending on the usage. Does anyone know if GPU drivers detect and make this sort of optimization, or is this a non-concern?
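    The same bool-to-index trick in plain Python (names are mine), showing the select is equivalent to the ternary when both values are already computed:

```python
def select_branchless(cond, if_true, if_false):
    # (if_false, if_true)[1] when cond is True, [0] when False:
    # just an index offset between two ready values, no branch taken
    return (if_false, if_true)[int(cond)]

value = 3.0
value_plus = value + 1.0
result = select_branchless(value < 4.0, value, value_plus)
ternary = value if value < 4.0 else value_plus
assert result == ternary  # both pick 3.0
```

    Note the index must be 1 when the condition holds, which is why the true-case value sits in the second slot.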
  10. Procedural texture smoothing

    Can you kick that river texture up to an 8-bit texture? You could do a proper Gaussian blur on the texture and get some falloff. It will still be blocky, but at least it won't be a 1-bit abrupt dropoff.
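    A sketch of that blur in Python: a tiny separable 1-2-1 Gaussian with edge clamping (the kernel size is arbitrary; a wider kernel gives a softer falloff), turning the 1-bit river mask into smooth 0-1 coverage:

```python
def blur_mask(mask, kernel=(0.25, 0.5, 0.25)):
    # separable blur: horizontal pass, then vertical, clamping at the edges
    h, w = len(mask), len(mask[0])
    r = len(kernel) // 2

    def clamp(v, lo, hi):
        return max(lo, min(hi, v))

    tmp = [[sum(kernel[k] * mask[y][clamp(x + k - r, 0, w - 1)]
                for k in range(len(kernel)))
            for x in range(w)] for y in range(h)]
    return [[sum(kernel[k] * tmp[clamp(y + k - r, 0, h - 1)][x]
                 for k in range(len(kernel)))
             for x in range(w)] for y in range(h)]
```

    A lone river texel [[0,0,0],[0,1,0],[0,0,0]] blurs to 0.25 at its center with soft falloff around it, instead of a hard 0-to-1 step.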
  11. Procedural texture smoothing

    Really? I would at least expect soft blending over the span of a single texel from your river texture with any kind of *linear blending. The size of a river texel from your screenshot looks pretty huge, so it should make some kind of a difference! The blend range should be on par with the blending you see between the river/land outside of the helicopter. How are you blending the river texture with the other grass texture in your shader?   If this really can't be solved with a texture sampler, you might have to stream in high resolution textures for smaller areas. Depending on your needs you could go with a more sophisticated geomipmapping solution if you really need some draw distance. If not that maybe you could do some kind of percentage closer filtering, where you take a number of river texture samples around a sample position and get the average of them. It could look noisy depending on how you do it, though.
  12. Procedural texture smoothing

    It looks like that river texture is being sampled with a point texture sampler. The nearest river pixel value is returned, which gives it the blocky look. You need to use a texture sampler with filtering that returns a blend of nearby river pixel values. Try using a bilinear texture sampler. If that doesn't look good enough generate mipmaps (as mhagain mentioned) and use a trilinear/anisotropic texture sampler to make it so the texture values are blended together smoothly.
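    For reference, this is what the bilinear sampler does under the hood, as a Python sketch (texel-space coordinates, clamp addressing; the function names are mine):

```python
import math

def sample_bilinear(tex, u, v):
    # tex: 2D grid of values; (u, v) in texel coordinates.
    # Blend the four nearest texels by the fractional position.
    h, w = len(tex), len(tex[0])

    def at(x, y):  # clamp addressing at the borders
        return tex[max(0, min(h - 1, y))][max(0, min(w - 1, x))]

    x0, y0 = int(math.floor(u)), int(math.floor(v))
    fx, fy = u - x0, v - y0
    top = at(x0, y0) * (1.0 - fx) + at(x0 + 1, y0) * fx
    bot = at(x0, y0 + 1) * (1.0 - fx) + at(x0 + 1, y0 + 1) * fx
    return top * (1.0 - fy) + bot * fy
```

    Sampling halfway between a 0-texel and a 1-texel returns 0.5 instead of snapping to the nearest value the way a point sampler does.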
  13. OpenGL Vulkan is Next-Gen OpenGL

    http://vulkan-tutorial.com/assets/Khronos-Vulkan-GDC-Mar15.pdf   The events thing sounds pretty awesome. Command buffers can fire off completion events! You can query an event to see whether it has completed yet, or you can wait for an event to finish if you need to. We can finally do GPU->CPU readback without stalling the driver until it catches up! That's a whole lot more reassuring to me than "it will probably be done in ~3 frames!"
  14. I consider an intermediate level to be when you are comfortable enough with a language that you fight with concepts instead of the language that you are working with. When your thought process becomes "How does this work, and what's the best way to make it happen?" instead of "How do I do this, and why is it not compiling?". You are 'intermediate' when you can comfortably express your thoughts in code, and you are more concerned about the concepts behind making a piece of code work the way you want it to. You will know the right questions to ask to accomplish roughly anything that you want to do.   Beyond that I consider an advanced level to be when you have specific knowledge about different types of programming. These can include graphics, networking, front/back-end web development, etc. You start to learn the best practices, interesting ways of doing things, and the specifics of a field. It's best not to put a label on how "advanced" you are, because at this point it really just depends on what you know.
  15. Clipping on a wavy surface

    I don't completely follow you! Are you saying that the wave geometry is showing up on the reflection, and you don't want it to be there? Or that the reflection from the surface isn't reflecting how you think it should be? A screenshot would be helpful to illustrate the issue.