mmurphy

[DX11] Progressively adapting vertex position


I am attempting to make a simulation of something related to what is known to some people as a "Sky Guy" (those giant things outside of a used car dealership, such as [url="http://blog.chron.com/carsandtrucks/files/legacy/Sky%20guy.gif"]http://blog.chron.com/carsandtrucks/files/legacy/Sky%20guy.gif[/url] ). Anyway, I am finding it difficult to figure out how to change the vertex positions at the top end of the object while keeping the bottom end fixed. I was thinking I would be able to do this in the tessellation or geometry shader stages; however, I can't seem to figure out how this might be possible. Could anyone clue me in on how to do this?

I would assign each vertex a value that corresponds to its distance from the root of the whole thing. Then use a constant value in the vertex shader to determine where to begin displacing the object in another direction. If the vertex value is less than the "strobe" value, then simply transform it as normal. If it is greater than the strobe value, then rotate it by some amount in the direction you want - but you need to make sure that the rotation is performed with respect to the point where the vertex value crosses the "strobe" value.

That last part could either be done directly with trigonometry in the vertex shader, or you could generate a rotation matrix on the CPU for a given "strobe" value.
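For example, a minimal vertex shader sketch of that idea could look something like the following - this is not from any actual project, and BendHeight, BendAngle and the Height input are just placeholder names:

[code]
// Sketch of the "strobe value" idea: vertices below BendHeight pass through
// unchanged, vertices above it are rotated about the bend point.
// BendHeight, BendAngle and WorldViewProj are assumed constants.
cbuffer BendCB : register( b0 )
{
    matrix WorldViewProj;
    float  BendHeight;   // the "strobe" value: object-space height where the bend starts
    float  BendAngle;    // rotation in radians applied above the bend
};

struct VS_INPUT
{
    float4 Pos    : POSITION;
    float  Height : HEIGHT;  // per-vertex distance from the root
};

float4 VS( VS_INPUT input ) : SV_POSITION
{
    float3 p = input.Pos.xyz;

    if ( input.Height > BendHeight )
    {
        // Rotate about the x-axis, pivoting at the bend point rather than the origin.
        float s, c;
        sincos( BendAngle, s, c );
        float y = p.y - BendHeight;
        float z = p.z;
        p.y = y * c - z * s + BendHeight;
        p.z = y * s + z * c;
    }

    return mul( float4( p, 1.0f ), WorldViewProj );
}
[/code]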

I think I would approach this more as a physics problem than a rendering one. Treat the tube guy as a cloth, for example with a spring system. Then fix the bottom ring of vertices and flip gravity (to make it rise). To make it move, add some small sideways forces or model turbulence.
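Roughly, a compute-shader sketch of that idea might look like the following - every name here is invented for illustration, it glosses over constraint iteration, and it is not anything from this thread:

[code]
// Rough sketch of the spring/cloth idea: particles are stored ring by ring
// from the bottom up, the bottom ring is pinned, gravity is flipped, and a
// single distance constraint ties each particle to the one directly below it.
// A real implementation would iterate the constraints and avoid the
// read/write race on neighbouring particles.
struct Particle
{
    float3 Position;
    float3 PrevPosition;
};

RWStructuredBuffer<Particle> Particles : register( u0 );

cbuffer SimParamsCB : register( b0 )
{
    float  TimeStep;     // seconds per simulation step
    float  RestLength;   // vertical spacing between rings
    uint   RingSize;     // number of vertices in each ring
    float  Damping;      // e.g. 0.98
    float3 Wind;         // small sideways force for the waving motion
};

[numthreads(64, 1, 1)]
void CS( uint3 id : SV_DispatchThreadID )
{
    // The bottom ring stays attached to the ground.
    if ( id.x < RingSize )
        return;

    Particle p = Particles[id.x];

    // Verlet integration with inverted gravity so the tube tries to rise.
    float3 accel    = float3( 0.0f, 9.8f, 0.0f ) + Wind;
    float3 velocity = ( p.Position - p.PrevPosition ) * Damping;
    float3 newPos   = p.Position + velocity + accel * TimeStep * TimeStep;

    // Distance constraint to the particle one ring below keeps the tube connected.
    float3 below = Particles[id.x - RingSize].Position;
    float3 delta = newPos - below;
    float  len   = max( length( delta ), 0.0001f );
    newPos = below + delta * ( RestLength / len );

    p.PrevPosition  = p.Position;
    p.Position      = newPos;
    Particles[id.x] = p;
}
[/code]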

I recall that the game "Gunstringer" has a "sky guy" boss. Maybe you can look at that and get some ideas.

Edit: Found a [url="http://www.eurogamer.net/videos/twisted-pixel-shows-gunstringer-gameplay"]video[/url] of the wavy tube man in Gunstringer. Skip to 6:15.

[quote name='Jason Z' timestamp='1318272900' post='4871154']
That last part could either be done directly with trigonometry in the vertex shader
[/quote]

Would you happen to be able to provide an example of how to do this? I have been trying to figure it out in my spare time over the last few days but I can't seem to nail this down.

[quote name='mmurphy' timestamp='1318576135' post='4872430']
Would you happen to be able to provide an example of how to do this? I have been trying to figure it out in my spare time over the last few days but I can't seem to nail this down.
[/quote]

You can also do it with a simple rotation matrix - it doesn't have to be manual trigonometry. If you have h = the height of the curving point, then you can take the vertex's object-space position (x,y,z) and change it to (x,y-h,z). Then apply the desired rotation (such as a 20 degree rotation about the x-axis), which produces (x',(y-h)',z'). Then just add the value of h again to push the vertex back up to the appropriate height: (x',(y-h)'+h,z').

Note that this is working in object space, so any additional rotations or translations for the world transform can be applied afterwards. This is very similar to skeletal animation, except that you are defining the bone weights dynamically based on a program-supplied constant value.
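As a sketch of how that translate-rotate-translate collapses into one matrix (in practice you would build this on the CPU for a given h and angle and upload it as a constant; it is written as HLSL here only to show the composition, and the names are made up):

[code]
// Sketch: the translate-rotate-translate above collapsed into a single matrix.
// Row-vector convention (p' = mul(p, M)), matching the shaders in this thread.
float4x4 MakeBendMatrix( float h, float angle )
{
    float s, c;
    sincos( angle, s, c );

    // Rotation about the x-axis.
    float4x4 rotX = float4x4( 1,  0, 0, 0,
                              0,  c, s, 0,
                              0, -s, c, 0,
                              0,  0, 0, 1 );

    // Move the bend point down to the origin...
    float4x4 down = float4x4( 1,  0, 0, 0,
                              0,  1, 0, 0,
                              0,  0, 1, 0,
                              0, -h, 0, 1 );

    // ...and back up again afterwards.
    float4x4 up   = float4x4( 1, 0, 0, 0,
                              0, 1, 0, 0,
                              0, 0, 1, 0,
                              0, h, 0, 1 );

    // p' = ((p * down) * rotX) * up  ==  p * (down * rotX * up)
    return mul( mul( down, rotX ), up );
}
[/code]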

(sorry for the slow response...)

[quote name='Jason Z' timestamp='1318883500' post='4873621']
You can also do it with a simple rotation matrix - it doesn't have to be manual trigonometry. If you have h = the height of the curving point, then you can take the vertex's object-space position (x,y,z) and change it to (x,y-h,z). Then apply the desired rotation (such as a 20 degree rotation about the x-axis), which produces (x',(y-h)',z'). Then just add the value of h again to push the vertex back up to the appropriate height: (x',(y-h)'+h,z').
[/quote]

"h" because the max height of the curving point (apex), the current height of vert or the max height of the object? I just want to be clear because I believe I may have misunderstood you (I am sure you explained it fine, this is new territory to me so I still trying to grasp everything).

I attempted it with the current height of the vertex, and it did not turn out as well as I initially thought it would, though I understand why. Disregarding object space for the moment, what this ends up doing is essentially cancelling out the rotation applied by the rotation matrix.

[code]
float4 position = input.Pos;

position.y -= input.InputHeight;

position = mul( float4( position.xyz, 1.0f), RotationMatrix);
position.y += input.InputHeight;

output.Pos = mul( float4( position.xyz, 1.0f), ViewProjMatrix);
[/code]


[quote name='Jason Z' timestamp='1318883500' post='4873621']
(sorry for the slow response...)
[/quote]

No worries at all, it is the quality that matters. As someone who has taken a look at Hieroglyph 3 and has a copy of your book, I know that it is something I can always count on.

From what I can tell, if your input.InputHeight variable is a constant for the whole mesh, then that code snippet should work. You are more or less trying to rotate a portion of your mesh based on its object-space location (specifically its height, in this case). If a vertex is above the input.InputHeight value, then you want to rotate it by an amount relative to its distance from input.InputHeight.

After just writing that paragraph I realized that a shearing matrix might work better for the effect you are trying to get - but a rotation should do fine as well. If you can post your shader code along with a screenshot, then I'm sure we can work out the kinks.
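For reference, a shear along these lines is a minimal sketch of that idea (BendHeight and ShearAmount are made-up names): everything above the bend height leans over in x in proportion to how far above it sits, while the base stays put.

[code]
// Sketch of the shearing alternative. BendHeight and ShearAmount are
// hypothetical constants; vertices below BendHeight are left untouched.
float3 ShearAboveBend( float3 p, float bendHeight, float shearAmount )
{
    float above = max( p.y - bendHeight, 0.0f );
    p.x += shearAmount * above;   // lean the upper portion over without changing its height
    return p;
}
[/code]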

[quote]No worries at all, it is the quality that matters. As someone who has taken a look at Hieroglyph 3 and has a copy of your book, I know that it is something I can always count on.[/quote]
Thanks for the comment - I really appreciate it! If you have any feedback (good or bad), please let me know - I'm always interested to hear what works or doesn't in the book.

I was using input.InputHeight as just the per-vertex height (i.e. the local y value, which I could have taken from position.y; it was meant more as an identifier, just for general testing for the time being).

I tried making it the max height, though I just got a straight line no matter how many sections I have in it (the images below show three sections, but I have tested it with seven as well).

Both of these images are the same object, just from a front and a side angle, with a 20 degree x rotation.

Front: http://img818.imageshack.us/img818/1670/frontx.png
Side: http://img42.imageshack.us/img42/4139/backlm.png

My complete shader code is

[code]
cbuffer MatrixCB : register( b0 )
{
    matrix WorldViewProjMatrix;
    matrix WorldMatrix;
    matrix ViewProjMatrix;
    matrix RotationMatrix;
}

cbuffer DisplayColorCB : register( b1 )
{
    float4 DisplayColor;
}

struct VS_INPUT
{
    float4 Pos         : POSITION;
    float  InputHeight : INPUTHEIGHT;
};

struct VS_OUTPUT
{
    float4 Pos : SV_POSITION;
};

VS_OUTPUT VS( VS_INPUT input )
{
    VS_OUTPUT output = (VS_OUTPUT)0;

    float4 position = input.Pos;

    // Move the pivot to the origin, rotate, then move back up.
    position.y -= input.InputHeight;
    position = mul( float4( position.xyz, 1.0f ), RotationMatrix );
    position.y += input.InputHeight;

    output.Pos = mul( float4( position.xyz, 1.0f ), ViewProjMatrix );

    return output;
}

float4 PS( VS_OUTPUT input ) : SV_Target
{
    return DisplayColor;
}
[/code]

Thank you for the help

The way that I am envisioning it is that InputHeight should be a constant value provided in a constant buffer, along with the DisplayColor variable. Its value should lie somewhere between the minimum and maximum vertex y-values. Double-check to make sure that the value is within this range - wherever it falls within your height range is where the bend should occur.

Just to debug and make sure you are properly selecting the desired vertices, you could use a static branch to determine the color of the vertices: if a vertex is above the threshold, color it red; if below, color it green. That should let you ensure that your value scales are all set up the way they should be. Once you are sure about the selection, applying the rotation should be easier to test out.
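A throwaway debug version of that might look like the following sketch - it assumes the threshold (here called BendHeight, to avoid clashing with the per-vertex InputHeight) is provided in a constant buffer as described, and it reuses the VS_INPUT and ViewProjMatrix declarations from the shader posted above:

[code]
// Debug sketch: color vertices by which side of the threshold they fall on,
// so the vertex selection can be verified before any rotation is applied.
cbuffer BendCB : register( b2 )
{
    float BendHeight;   // the constant threshold height suggested above
};

struct VS_DEBUG_OUTPUT
{
    float4 Pos   : SV_POSITION;
    float4 Color : COLOR;
};

VS_DEBUG_OUTPUT VS_Debug( VS_INPUT input )
{
    VS_DEBUG_OUTPUT output;
    output.Pos   = mul( float4( input.Pos.xyz, 1.0f ), ViewProjMatrix );
    output.Color = ( input.Pos.y > BendHeight ) ? float4( 1.0f, 0.0f, 0.0f, 1.0f )   // above: red
                                                : float4( 0.0f, 1.0f, 0.0f, 1.0f );  // below: green
    return output;
}

float4 PS_Debug( VS_DEBUG_OUTPUT input ) : SV_Target
{
    return input.Color;
}
[/code]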

I think I have an idea of what you mean. Currently it will bend at the color change. I was hoping to have more of an arc at the curve, though, so perhaps I should have a range for the bend, and if a vertex falls in that range it would be tessellated more to make it look more curved? If so, is there a way to have the geometry shader (which I believe I would use over the tessellator, right?) target only a certain range of vertices?

[quote name='The King2' timestamp='1319181308' post='4874954']
I recommend you use soft-body physics instead of vertex adaptation. This should give you more accurate and realistic results. A good physics library with soft-body support is Bullet Physics.
[/quote]

The vertex adaptation approach is something that intrigues me; that is why I wanted to pursue it.

[quote name='mmurphy' timestamp='1319179343' post='4874949']
I think I have an idea of what you mean. Currently it will bend at the color change. I was hoping to have more of an arc at the curve, though, so perhaps I should have a range for the bend, and if a vertex falls in that range it would be tessellated more to make it look more curved? If so, is there a way to have the geometry shader (which I believe I would use over the tessellator, right?) target only a certain range of vertices?
[/quote]

You could use the geometry shader to do some tessellation, but it will not be fast at all. If you want to tessellate around the bend, I would highly recommend using the dedicated tessellation stages instead. It actually isn't too difficult to get up and running, and the tessellation factors that determine how much to subdivide a triangle can be driven by your per-vertex height parameter. You can take a look at the BasicTessellation sample in Hieroglyph3 for a minimal set of tessellation shaders, which can then be modified to suit your needs...
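If it helps, a bare-bones hull shader along these lines is roughly what the tessellation-stage route looks like. The structs and the BendHeight/BendRange constants are invented for the sketch, and the domain shader (which would apply the actual bend after tessellation) is omitted - the BasicTessellation sample mentioned above is the better reference.

[code]
// Sketch of a hull shader that tessellates triangles more heavily near the bend.
// HS_INPUT/HS_OUTPUT, BendHeight and BendRange are assumptions for illustration.
cbuffer TessCB : register( b2 )
{
    float BendHeight;   // object-space height of the bend
    float BendRange;    // vertical range around the bend to refine
};

struct HS_INPUT  { float4 Pos : POSITION; };
struct HS_OUTPUT { float4 Pos : POSITION; };

struct PatchConstants
{
    float Edges[3] : SV_TessFactor;
    float Inside   : SV_InsideTessFactor;
};

PatchConstants ConstantHS( InputPatch<HS_INPUT, 3> patch, uint patchID : SV_PrimitiveID )
{
    // More subdivision the closer the triangle sits to the bend height.
    float avgHeight = ( patch[0].Pos.y + patch[1].Pos.y + patch[2].Pos.y ) / 3.0f;
    float nearBend  = saturate( 1.0f - abs( avgHeight - BendHeight ) / BendRange );
    float factor    = lerp( 1.0f, 8.0f, nearBend );

    PatchConstants output;
    output.Edges[0] = factor;
    output.Edges[1] = factor;
    output.Edges[2] = factor;
    output.Inside   = factor;
    return output;
}

[domain("tri")]
[partitioning("fractional_odd")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(3)]
[patchconstantfunc("ConstantHS")]
HS_OUTPUT HS( InputPatch<HS_INPUT, 3> patch, uint i : SV_OutputControlPointID )
{
    HS_OUTPUT output;
    output.Pos = patch[i].Pos;
    return output;
}
[/code]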

