# [DX11] Progressively adapting vertex position


## Recommended Posts

I am attempting to make a simulation of something related to what is known to some people as a "Sky Guy" (those giant things outside of a used car dealership, such as http://blog.chron.com/carsandtrucks/files/legacy/Sky%20guy.gif ). Anyway, I am finding it difficult to figure out how to change the vertex positions at the top end of the object while keeping the bottom end fixed. I was thinking that I would be able to do this in the tessellation or geometry shader stages; however, I can't seem to figure out how this might be possible. Could anyone clue me in on how to do this?

##### Share on other sites
I would assign each vertex a value that corresponds to its distance from the root of the whole thing. Then use a constant value in the vertex shader to determine where to begin displacing the vertices in another direction. If a vertex's value is less than the "strobe" value, then simply transform it as normal. If it is greater than the strobe value, then rotate it by some amount in the direction you want - but you need to make sure that the rotation is performed with respect to the point where the vertex value crosses the strobe value.

That last part could either be done directly with trigonometry in the vertex shader, or you could generate a rotation matrix on the CPU for a given "strobe" value.
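A minimal vertex-shader sketch of that idea (the names here - RootDistance, StrobeValue, BendMatrix - are all illustrative, assuming the distance is authored as a per-vertex attribute and the strobe value and rotation come from a constant buffer):

```hlsl
cbuffer BendCB : register( b0 )
{
    matrix ViewProjMatrix;
    matrix BendMatrix;   // rotation to apply past the strobe point
    float  StrobeValue;  // distance from the root where the bend begins
};

struct VS_INPUT
{
    float4 Pos          : POSITION;
    float  RootDistance : TEXCOORD0; // distance from the root, stored per vertex
};

float4 VS( VS_INPUT input ) : SV_POSITION
{
    float4 position = input.Pos;

    if ( input.RootDistance > StrobeValue )
    {
        // Rotate about the point where the distance crosses the strobe value:
        // translate that point to the origin, rotate, then translate back.
        // (Assumes the tube is vertical, so distance from root == object-space y.)
        position.y -= StrobeValue;
        position = mul( float4( position.xyz, 1.0f ), BendMatrix );
        position.y += StrobeValue;
    }

    return mul( float4( position.xyz, 1.0f ), ViewProjMatrix );
}
```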

##### Share on other sites
I think I would approach this more as a physics problem than a rendering one. Treat the tube guy as cloth, for example a mass-spring system. Then fix the bottom ring of vertices and flip gravity (to make it rise). To make it move, add some small sideways forces or model turbulence.
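Not from the thread, but to make the spring-system idea concrete, here is a toy compute-shader step under assumed names (Particles, RingSize, Wind, and so on are all illustrative): each particle is pulled toward the particle one ring below it, the bottom ring is pinned, and gravity is flipped so the tube rises.

```hlsl
struct Particle
{
    float3 pos;
    float3 vel;
};

RWStructuredBuffer<Particle> Particles : register( u0 );

cbuffer SimCB : register( b0 )
{
    float  DeltaT;       // time step
    float  RestLength;   // rest length of the spring to the ring below
    float  Stiffness;    // spring constant
    uint   RingSize;     // particles per ring; the first ring is pinned
    uint   NumParticles; // total particle count
    float3 Wind;         // small sideways force for the wobble
};

[numthreads(64, 1, 1)]
void CS( uint3 id : SV_DispatchThreadID )
{
    uint i = id.x;
    if ( i >= NumParticles || i < RingSize )
        return; // out of range, or part of the fixed bottom ring

    Particle p     = Particles[i];
    Particle below = Particles[i - RingSize]; // neighbor one ring down
                                              // (a real solver would double-buffer)

    // Spring force toward the rest length, flipped gravity, plus wind.
    float3 d      = p.pos - below.pos;
    float3 spring = -Stiffness * (length( d ) - RestLength) * normalize( d );
    float3 accel  = spring + float3( 0.0f, 9.8f, 0.0f ) + Wind;

    p.vel = (p.vel + accel * DeltaT) * 0.99f; // crude damping to keep it stable
    p.pos += p.vel * DeltaT;
    Particles[i] = p;
}
```

A real cloth solver would also add springs to neighbors within each ring and run several constraint iterations, but this is enough to see the idea.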

I recall that the game "Gunstringer" has a "sky guy" boss. Maybe you can look at that and get some ideas.

Edit: Found a video of the wavy tube man in Gunstringer. Skip to 6:15.

##### Share on other sites

[quote name='Jason Z']
That last part could either be done directly with trigonometry in the vertex shader
[/quote]

Would you happen to be able to provide an example of how to do this? I have been trying to figure it out in my spare time over the last few days but I can't seem to nail this down.

##### Share on other sites
Does anyone else perhaps know of another way to do this?

##### Share on other sites

[quote name='Jason Z' timestamp='1318272900' post='4871154']
That last part could either be done directly with trigonometry in the vertex shader

Would you happen to be able to provide an example of how to do this? I have been trying to figure it out in my spare time over the last few days but I can't seem to nail this down.
[/quote]

You can also do it with a simple rotation matrix - it doesn't have to be manual trigonometry. If you have h = the height of the curving point, then you can take the vertex's object space position (x,y,z) and change it to (x,y-h,z). Then apply the desired rotation (such as a 20 degree rotation about the x-axis), which produces (x',(y-h)',z'). Then just add the value of h again to push the vertex back up to the appropriate height: (x',(y-h)'+h,z').

Note that this is working in object space, so any additional rotations or translations for the world transform can be applied afterward. This is very similar to skeletal animation, except that you are defining the bone weights dynamically based on a program-supplied constant value.
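To make the skeletal-animation analogy concrete, here is a fragment (not from the thread; BendHeight, BendMatrix and BlendRange are illustrative constant-buffer values) that blends between the original and rotated positions with a dynamically computed weight, which avoids a hard crease at the bend point:

```hlsl
// Translate-rotate-translate, exactly as described above.
float4 bent = input.Pos;
bent.y -= BendHeight;                                // move the bend point to the origin
bent = mul( float4( bent.xyz, 1.0f ), BendMatrix );  // rotate about it
bent.y += BendHeight;                                // move it back up

// Dynamic "bone weight": 0 below the bend point, ramping up to 1 above it.
float weight = saturate( (input.Pos.y - BendHeight) / BlendRange );
float4 position = lerp( input.Pos, bent, weight );
```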

(sorry for the slow response...)

##### Share on other sites

[quote name='Jason Z']
You can also do it with a simple rotation matrix - it doesn't have to be manual trigonometry. If you have h = the height of the curving point, then you can take the vertex's object space position (x,y,z) and change it to (x,y-h,z). Then apply the desired rotation (such as a 20 degree rotation about the x-axis), which produces (x',(y-h)',z'). Then just add the value of h again to push the vertex back up to the appropriate height: (x',(y-h)'+h,z').

Note that this is working in object space, so any additional rotations or translations for the world transform can be applied afterward. This is very similar to skeletal animation, except that you are defining the bone weights dynamically based on a program-supplied constant value.
[/quote]

"h" because the max height of the curving point (apex), the current height of vert or the max height of the object? I just want to be clear because I believe I may have misunderstood you (I am sure you explained it fine, this is new territory to me so I still trying to grasp everything).

I attempted it with the current height of the vertex, though this did not turn out as well as I initially thought it would - though I understand why. Disregarding object space for the moment, this ends up essentially cancelling out the rotation applied by the rotation matrix.

```hlsl
float4 position = input.Pos;
position.y -= input.InputHeight;
position = mul( float4( position.xyz, 1.0f ), RotationMatrix );
position.y += input.InputHeight;
output.Pos = mul( float4( position.xyz, 1.0f ), ViewProjMatrix );
```

[quote name='Jason Z']
(sorry for the slow response...)
[/quote]

No worries at all, it is the quality that matters. As someone who has taken a look at Hieroglyph 3 and has a copy of your book, I know that it is something I can always count on.

##### Share on other sites
From what I can tell, if your input.InputHeight variable is a constant for the whole mesh, then that code snippet should work. You are more or less trying to rotate a portion of your mesh based on its object space location (specifically its height in this case). If a vertex is above the input.InputHeight value, then you want to rotate it by an amount relative to its distance from input.InputHeight.

After just writing that paragraph I realized that a shearing matrix might work better for the effect you are trying to get - but a rotation should do fine as well. If you can post your shader code along with a screenshot, then I'm sure we can work out the kinks.
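For the shearing idea, a sketch of what that could look like inside the existing vertex shader, in place of the rotate block (BendHeight and ShearAmount are illustrative names; a shear leans the upper part over by offsetting x in proportion to how far the vertex sits above the bend point, rather than rotating it):

```hlsl
// Shear instead of rotate: lean the top over without compressing its height.
float overshoot = max( 0.0f, position.y - BendHeight ); // distance above the bend point
position.x += ShearAmount * overshoot;                  // e.g. ShearAmount = tan(lean angle)
```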

[quote]
No worries at all, it is the quality that matters. As someone who has taken a look at Hieroglyph 3 and has a copy of your book, I know that it is something I can always count on.
[/quote]
Thanks for the comment - I really appreciate it! If you have any feedback (good or bad), please let me know - I'm always interested to hear what works or doesn't on the book.

##### Share on other sites
I was using input.InputHeight as just the per-vertex height (i.e. the local y value - I could have used position.y; it was meant more as an identifier, just for general testing for the time being).

I tried making it the max height, though I just got a straight line, no matter how many sections the object has (the images below show three sections, but I have tested it with seven as well).

Both of these images are the same object, just from the front and the side, with a 20 degree x rotation.

Front: http://img818.imageshack.us/img818/1670/frontx.png
Side: http://img42.imageshack.us/img42/4139/backlm.png

My complete shader code is:

```hlsl
cbuffer MatrixCB : register( b0 )
{
    matrix WorldViewProjMatrix;
    matrix WorldMatrix;
    matrix ViewProjMatrix;
    matrix RotationMatrix;
}

cbuffer DisplayColorCB : register( b1 )
{
    float4 DisplayColor;
}

struct VS_INPUT
{
    float4 Pos : POSITION;
    float InputHeight : INPUTHEIGHT;
};

struct VS_OUTPUT
{
    float4 Pos : SV_POSITION;
};

VS_OUTPUT VS( VS_INPUT input )
{
    VS_OUTPUT output = (VS_OUTPUT)0;

    float4 position = input.Pos;
    position.y -= input.InputHeight;
    position = mul( float4( position.xyz, 1.0f ), RotationMatrix );
    position.y += input.InputHeight;

    output.Pos = mul( float4( position.xyz, 1.0f ), ViewProjMatrix );

    return output;
}

float4 PS( VS_OUTPUT input ) : SV_Target
{
    return DisplayColor;
}
```

Thank you for the help

##### Share on other sites
The way that I am envisioning it is that InputHeight should be a constant value provided in a constant buffer, along with the DisplayColor variable. Its value should range from the minimum vertex y-value to the maximum vertex y-value. Double check to make sure that the value is within this range - wherever that value falls within your height range is where the bend should occur.

Just to debug and make sure you are properly selecting the desired vertices, you could use a static branch to determine the color of each vertex: if it is above the threshold, color it red; if below, green. That should let you ensure that your value scales are all set up the way they should be. Once you are sure about the selection, applying the rotation should be easier to test out.
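A minimal sketch of that debug coloring, reusing the VS_INPUT from the shader above and assuming the threshold (BendHeight here, an illustrative name) has been moved into a constant buffer as suggested:

```hlsl
cbuffer BendCB : register( b2 )
{
    float  BendHeight; // object-space y where the bend should start
    float3 Padding;
};

struct VS_OUTPUT_DEBUG
{
    float4 Pos   : SV_POSITION;
    float4 Color : COLOR;
};

VS_OUTPUT_DEBUG VS_Debug( VS_INPUT input )
{
    VS_OUTPUT_DEBUG output = (VS_OUTPUT_DEBUG)0;
    output.Pos = mul( float4( input.Pos.xyz, 1.0f ), ViewProjMatrix );

    // Static branch on object-space height: red above the bend point, green below.
    if ( input.Pos.y > BendHeight )
        output.Color = float4( 1.0f, 0.0f, 0.0f, 1.0f );
    else
        output.Color = float4( 0.0f, 1.0f, 0.0f, 1.0f );

    return output;
}

float4 PS_Debug( VS_OUTPUT_DEBUG input ) : SV_Target
{
    return input.Color;
}
```

Once the red/green split lands where expected, swapping the color branch for the rotation path is a small change.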

### Similar Content

• By Void
Hi, I'm trying to do a comparison between a DirectInput GUID (e.g. GUID_XAxis, GUID_YAxis) and a value I get from GetProperty, e.g.:

```cpp
DIPROPRANGE propRange;

DIJoystick->GetProperty( DIPROP_RANGE, &propRange.diph );

// This will crash
if ( GUID_XAxis == MAKEDIPROP( propRange.diph.dwObj ) )
    ;
```

How should I be comparing the GUID from GetProperty?

• I have a problem with SSAO: a black area appears on the left-hand side.
```hlsl
Texture2D<uint>   texGBufferNormal : register(t0);
Texture2D<float>  texGBufferDepth  : register(t1);
Texture2D<float4> texSSAONoise     : register(t2);

float3 GetUV(float3 position)
{
    float4 vp = mul(float4(position, 1.0), ViewProject);
    vp.xy = float2(0.5, 0.5) + float2(0.5, -0.5) * vp.xy / vp.w;
    return float3(vp.xy, vp.z / vp.w);
}

float3 GetNormal(in Texture2D<uint> texNormal, in int3 coord)
{
    return normalize(2.0 * UnpackNormalSphermap(texNormal.Load(coord)) - 1.0);
}

float3 GetPosition(in Texture2D<float> texDepth, in int3 coord)
{
    float4 position = 1.0;
    float2 size;
    texDepth.GetDimensions(size.x, size.y);

    position.x = 2.0 * (coord.x / size.x) - 1.0;
    position.y = -(2.0 * (coord.y / size.y) - 1.0);
    position.z = texDepth.Load(coord);

    position = mul(position, ViewProjectInverse);
    position /= position.w;
    return position.xyz;
}

float3 GetPosition(in float2 coord, float depth)
{
    float4 position = 1.0;
    position.x = 2.0 * coord.x - 1.0;
    position.y = -(2.0 * coord.y - 1.0);
    position.z = depth;

    position = mul(position, ViewProjectInverse);
    position /= position.w;
    return position.xyz;
}

float DepthInvSqrt(float nonLinearDepth)
{
    return 1 / sqrt(1.0 - nonLinearDepth);
}

float GetDepth(in Texture2D<float> texDepth, float2 uv)
{
    return texGBufferDepth.Sample(samplerPoint, uv);
}

float GetDepth(in Texture2D<float> texDepth, int3 screenPos)
{
    return texGBufferDepth.Load(screenPos);
}

float CalculateOcclusion(in float3 position, in float3 direction, in float radius, in float pixelDepth)
{
    float3 uv = GetUV(position + radius * direction);
    float d1 = DepthInvSqrt(GetDepth(texGBufferDepth, uv.xy));
    float d2 = DepthInvSqrt(uv.z);
    return step(d1 - d2, 0) * min(1.0, radius / abs(d2 - pixelDepth));
}

float GetRNDTexFactor(float2 texSize)
{
    float width;
    float height;
    texGBufferDepth.GetDimensions(width, height);
    return float2(width, height) / texSize;
}

float main(FullScreenPSIn input) : SV_TARGET0
{
    int3 screenPos = int3(input.Position.xy, 0);

    float depth = DepthInvSqrt(GetDepth(texGBufferDepth, screenPos));
    float3 normal = GetNormal(texGBufferNormal, screenPos);
    float3 position = GetPosition(texGBufferDepth, screenPos) + normal * SSAO_NORMAL_BIAS;
    float3 random = normalize(2.0 * texSSAONoise.Sample(samplerNoise, input.Texcoord * GetRNDTexFactor(SSAO_RND_TEX_SIZE)).rgb - 1.0);

    float SSAO = 0;
    [unroll]
    for (int index = 0; index < SSAO_KERNEL_SIZE; index++)
    {
        float3 dir = reflect(SamplesKernel[index].xyz, random);
        SSAO += CalculateOcclusion(position, dir * sign(dot(dir, normal)), SSAO_RADIUS, depth);
    }

    return 1.0 - SSAO / SSAO_KERNEL_SIZE;
}
```

• I've been following this tutorial -> https://www.3dgep.com/introduction-to-directx-11/#The_Main_Function , did all the steps, and I ended up with the main.cpp you can see below.
The problem is the call at line 516
```cpp
g_d3dDeviceContext->UpdateSubresource(g_d3dConstantBuffers[CB_Frame], 0, nullptr, &g_ViewMatrix, 0, 0);
```

which is crashing the program. The very odd thing is that the first time through it works fine; it crashes the app the second time it is called...
Can someone help me understand why? 😕 I have no idea...

• Hi guys, I'm trying to learn this stuff but running into some problems 😕
I've compiled my .hlsl into a header file which contains the global variable with the precompiled shader data:
```cpp
//...
// Approximately 83 instruction slots used
#endif

const BYTE g_vs[] =
{
    68, 88, 66, 67, 143, 82, 13, 236, 152, 133, 219, 113, 173, 135, 18, 87,
    122, 208, 124, 76, 1, 0, 0, 0, 16, 76, 0, 0, 6, 0,
//....
```

And now, following the "Compiling at build time to header files" example at this msdn link, I've included the header file in my main.cpp and I'm trying to create the vertex shader like this:

```cpp
hr = g_d3dDevice->CreateVertexShader(g_vs, sizeof(g_vs), nullptr, &g_d3dVertexShader);
if (FAILED(hr))
{
    return -1;
}
```

and this is failing, entering the if and returning -1.
Can someone point out what I'm doing wrong? 😕

• Hello everyone,
After a few years of break from coding and my planet render game, I'm giving it a go again from a different angle. What I'm struggling with now is that I have created a frustum that works fine for now, at least - it does what it's supposed to do, although not perfectly. But with the frustum came very low FPS, since what I'm doing right now, just to see if the frustum worked, is recreating the vertex buffer every frame that the camera detects movement. This is of course very costly and not the way to do it. That's why I'm now trying to learn how to create a dynamic vertex buffer instead and to map and unmap the vertices; in the end my goal is to update only the part of the vertex buffer that is needed, but one step at a time ^^

So below is the code which I use to create the dynamic buffer. The issue is that I want the size of the vertex buffer to be big enough to handle bigger vertex buffers than just mPlanetMesh.vertices.size(), due to more vertices being added later when I start to do LOD and such - the first render isn't the biggest one I will need.
```cpp
vertexBufferDesc.Usage = D3D11_USAGE_DYNAMIC;
vertexBufferDesc.ByteWidth = mPlanetMesh.vertices.size();
vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vertexBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
vertexBufferDesc.MiscFlags = 0;
vertexBufferDesc.StructureByteStride = 0;

vertexData.pSysMem = &mPlanetMesh.vertices[0];
vertexData.SysMemPitch = 0;
vertexData.SysMemSlicePitch = 0;

result = device->CreateBuffer(&vertexBufferDesc, &vertexData, &mVertexBuffer);
if (FAILED(result))
{
    return false;
}
```

What happens is that the CreateBuffer call crashes with an Access Violation. When I put vertices.size() in, it works without issues, but when I try to set it to something like vertices.size() * 2 it crashes.
I googled my eyes dry tonight but can't seem to find people with the same kind of issue; I've read that the vertex buffer can be bigger if needed. What am I doing wrong here?

Best Regards and Thanks in advance
Toastmastern
• By yonisi
Hi,
I have a terrain engine where the terrain and water are on different grids, so I'm trying to render planar reflections of the terrain into the water grid, after reading some web pages and docs and also trying to learn from the RasterTek reflections demo and the small water bodies demo. What I do is as follows:
1. Create a Reflection view matrix  - Technically I ONLY flip the camera position in the Y direction (Positive Y is up) and add to it 2 * waterLevel. Then I update the View matrix and I save that matrix for later. The code:
```cpp
void Camera::UpdateReflectionViewMatrix( float waterLevel )
{
    mBackupPosition = mPosition;
    mBackupLook = mLook;

    mPosition.y = -mPosition.y + 2.0f * waterLevel;
    //mLook.y = -mLook.y + 2.0f * waterLevel;

    UpdateViewMatrix();
    mReflectionView = View();
}
```

2. I render the terrain geometry to a 512x512 sized render target by using the reflection view matrix and opposite culling (my terrain uses front culling by nature, so I'm using back culling for the reflection render pass). Let me say that I checked with the graphics debugger and the reflection render target looks "OK" at this stage (picture attached). I don't know if the fact that the terrain is shown only at the top area of the texture is expected or not, but it seems OK.

3. Render the Reflection texture into the water using projective texturing - I hope this step is OK code wise. Basically I'm sending to the shader the WorldReflectionViewProj matrix that was created at step 1 in order to use it for the projective texture coordinates, I then convert the position in the DS (Water and terrain are drawn with Tessellation) to the projective tex coords using that WorldReflectionViewProj matrix, then I sample the reflection texture after setting up the coordinates in the PS. Here is the code:
```cpp
// Send the ReflectionWorldViewProj matrix to the shader:
XMStoreFloat4x4(&mPerFrameCB.Data.ReflectionWorldViewProj,
                XMMatrixTranspose((mWorld * pCam->GetReflectedView()) * mProj));
```

```hlsl
// Setting up the projective tex coords in the DS:
Output.projTexPosition = mul(float4(worldPos.xyz, 1), g_ReflectionWorldViewProj);

// Setting up the coords in the PS and sampling the reflection texture:
float2 projTexCoords;
projTexCoords.x = input.projTexPosition.x / input.projTexPosition.w / 2.0 + 0.5;
projTexCoords.y = -input.projTexPosition.y / input.projTexPosition.w / 2.0 + 0.5;
projTexCoords += normal.xz * 0.025;

float4 reflectionColor = gReflectionMap.SampleLevel(SamplerClampLinear, projTexCoords, 0);
texColor += reflectionColor * 0.25;
```

I'll add that when compiling the PS I'm getting a warning on those divisions by input.projTexPosition.w for a possible float division by 0. I tried adding some offset or some minimum to the dividing term, but that still hasn't solved my issue.
Here is the problem itself: at relatively flat view angles I'm seeing correct reflections (or at least so it seems), but as I pitch the camera down, I'm seeing artifacts which I have no idea where they are coming from. I'm culling the terrain in the reflection render pass when it's lower than the water height (I have heightmaps for that).

Any help will be appreciated because I don't know what is wrong or where else to look.
• By thmfrnk
Hi,
I am looking for a useful command-line based texture compression tool with the rights to ship it with my application. It should have the following capabilities:

- Supports all major image formats as source files (jpeg, png, tga, bmp)
- Exports as DDS
- Compression formats BC1, BC2, BC3, BC4, BC7

I am actually using the nvdxt tool from Nvidia, but it does not support BC4 (which I need for one-channel 8-bit textures). Everything else I found wasn't really useful.
Any suggestions?
Thx

• I have been trying to create a BlendState for my UI text sprites so that they are both alpha-blended (so you can see them) and invert the pixel they are rendered over (again, so you can see them).
In order to get alpha blending you would need:

```
SrcBlend  = SRC_ALPHA
DestBlend = INV_SRC_ALPHA
```

and in order to have inverted colours you would need something like:

```
SrcBlend  = INV_DEST_COLOR
DestBlend = INV_SRC_COLOR
```

and you can't have both.
So I have come to the conclusion that it's not possible; am I right?
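For reference, writing out the fixed-function blend equation makes the conflict explicit (this is the standard D3D11 per-render-target blend, nothing specific to this post): the output merger computes a single equation per render target, with one SrcBlend/DestBlend factor pair, so the two behaviors cannot both be expressed in one pass.

```
out = src.rgb * SrcBlend + dst.rgb * DestBlend

alpha blending : out = src.rgb * src.a         + dst.rgb * (1 - src.a)
inversion      : out = src.rgb * (1 - dst.rgb) + dst.rgb * (1 - src.rgb)
```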
• By Royma
I reduced the draw calls from 8 to 1, but it runs slower, and I want to know why. Should I abandon this method, or is there any way to optimize it so that it runs more efficiently than multi-pass rendering?
Here is the gs code:

```hlsl
[maxvertexcount(24)]
void main(
    triangle DepthGsIn input[3] : SV_POSITION,
    inout TriangleStream< DepthPsIn > output
)
{
    // Emit the triangle once per render target (8 shadow cameras).
    for (uint k = 0; k < 8; ++k)
    {
        DepthPsIn element;
        element.RTIndex = k;

        for (uint i = 0; i < 3; ++i)
        {
            // Transform each corner of the input triangle into camera k's clip space.
            element.position = input[i].position + shadowBias * g_cameras[k].world[1];
            element.position = mul(element.position, g_cameras[k].viewProjection);
            element.depth = element.position.z / element.position.w;

            output.Append(element);
        }
        output.RestartStrip();
    }
}
```

• By savail
Hey,
There are a few things which confuse me regarding DirectX 11 and HLSL shaders in general. I would be very grateful for your advice!
1. Let's take for example a scene which invokes 2 totally separate pipeline render passes interchangeably. I understand I need to bind the correct shaders for each render pass, and potentially blend/depth or rasterizer state, but what about resources such as constant buffers, shader resource views and unordered access views? Assuming that the second render pass uses none of the resources used by the first pass, do I still need to unbind the resources and clean the pipeline state after the first pass? Or is it OK to leave the pipeline with unbound garbage, since anything I'd need to bind for the second pass would overwrite the contents of the appropriate register slots anyway?
2. Is it a good practice to assign register slots manually to all resources in HLSL? (A small sketch of what I mean appears after this list.)
3. I thought about manually assigning register slots for every distinct render pass, up to the maximum slot limit if necessary. For example, in 1 render pass I invoke 3 CS's, 2 VS's and 2 PS's, and for all resources used by those shaders I try to fill as many register slots as necessary, potentially reusing the same slot many times in shaders sharing the same resource. I was wondering if there is any performance penalty or gain if I bind all of my needed resources at the start of a render pass and never have to do it again until the next render pass - this means potentially binding a lot of registers and having an excessive number of bound resources for every shader that is run.
4. Is it a good practice to create a separate include file for every resource that occurs in >= 2 shader files, or is it better to duplicate the declarations? In the first case the code is, imo, easier to maintain and edit, but it might be harder to read if there are too many includes. I've come up with a compromise between these two: create a separate include file for every CB that occurs in >= 2 shader files, and a separate include file for every sampler I ever need to use. All other resources, like SRVs and UAVs, I prefer to duplicate in multiple shaders because they take much less space than a CB, for example... I'm not sure, however, if that's a good practice.
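For point 2, a small illustrative fragment of what manual slot assignment tends to look like (all names here are made up), typically kept in a shared include so that every shader in a pass agrees on the slots:

```hlsl
// Shared include (illustrative): every shader in the pass sees the same layout.
cbuffer PerFrameCB : register( b0 )
{
    float4x4 ViewProj;
    float3   CameraPos;
    float    Pad;
};

cbuffer PerObjectCB : register( b1 )
{
    float4x4 World;
};

Texture2D    DiffuseMap    : register( t0 );
SamplerState LinearSampler : register( s0 );
```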
