• ### Similar Content

• By Enerjak
    protected void menuImage_ChildClick(object sender, EventArgs e)
    {
        if (pbxPhoto.Image != null && sender is MenuItem)
        {
            MenuItem mi = (MenuItem)sender;
            pbxPhoto.SizeMode = modeMenuArray[mi.Index];
            pbxPhoto.Invalidate();
        }
    }

I'm trying to make one click handler serve multiple menus. With this code, clicking the menu buttons after the image is loaded into the picture box does nothing. Here is where I set the event handler for both menu items.

• Hi there,
I need a simple(ish) MP3 player for my C# WPF project. Sound isn't a major player in my project, but I do need to be able to play a couple of MP3s at the same time (loaded in as Content).
I'm currently using the standard MediaPlayer library just to get a placeholder in there, but it does not meet these requirements:
* Looping - no matter what I try, there's always a split-second break in the sound when it comes to looping (a few of the sounds are short tune loops, and the brief pause when looping really stands out like a sore... ear...!)
* Mixing - at times I need to play a couple of samples simultaneously, but I don't believe MediaPlayer supports this. That said, I haven't tried running more than one MediaPlayer instance at the same time.
Are there any suggestions? I've browsed NuGet, but I'm surprised I can't find anything really useful.

So far I've made an almost-perfect RTS-style mouse selection script in C#; it handles almost all of the Warcraft 3 mouse functions.

Now I want the unit to move using the NavMesh.

    navMeshAgent.SetDestination(Input.mousePosition)

This doesn't work. T.T

I'm so sorry guys, I forgot this:

    if (Physics.Raycast(Camera.main.ScreenPointToRay(Input.mousePosition), out hit, 100))

That was it, sorry again.
• By cozzie
Hi all,
As a part of the debug drawing system in my engine, I want to add support for rendering simple text on screen (aka a HUD / HUD-style overlay). From what I've read there are a few options, in short:
1. Write your own font sprite renderer
2. Use Direct2D/DirectWrite, combined with a DX11 render target/back buffer
3. Use an external library, like the DirectX Toolkit, etc.
I want to go for number 2, but articles/documentation confused me a bit. Some say you need to create a DX10 device to be able to do this, because it doesn't work directly with the DX11 device. But other articles say this was 'patched' later on and should work now.
Can someone shed some light on this and ideally provide me an example or article on  how to set this up?
All input is appreciated.
• By stale
I've just started learning about tessellation from Frank Luna's DX11 book. I'm getting some very weird behavior when I try to render a tessellated quad patch if I also render a mesh in the same frame. The tessellated quad patch renders just fine if it's the only thing I'm rendering. This is pictured below:
However, when I attempt to render the same tessellated quad patch along with the other entities in the scene (which are simple triangle-lists), I get the following error:

I have no idea why this is happening, and google searches have given me no leads at all. I use the following code to render the tessellated quad patch:
    for (unsigned int i = 0; i < scene->GetEntityList()->size(); i++)
    {
        Entity* entity = scene->GetEntityList()->at(i);
        if (entity->m_VisualComponent->m_visualType == VisualType::MESH)
            DrawMeshEntity(entity, cam, sun, point);
        else if (entity->m_VisualComponent->m_visualType == VisualType::BILLBOARD)
            DrawBillboardEntity(entity, cam, sun, point);
        else if (entity->m_VisualComponent->m_visualType == VisualType::TERRAIN)
            DrawTerrainEntity(entity, cam);
    }
    HR(m_swapChain->Present(0, 0));

Any help/advice would be much appreciated!

# DX11 (SharpDX): How to add normal mapping?

## Recommended Posts

I am trying to add a normal map to my project. I have an example with a cube.

I think I have normals in my shader. Then I set the shader resource view for the texture (not the bump map):

    device.ImmediateContext.PixelShader.SetShaderResource(0, textureView);
    device.ImmediateContext.Draw(VerticesCount, 0);

What should I do to set my normal map, and how is this done in DX11 generally? A C++ example is fine too.

##### Share on other sites

Normal mapping usually works like this:

You have a normal map: a texture containing tangent-space normal vectors in its red and green components (XY). You have to transform the sampled normal to world space (XYZ) and replace your vertex normal with it. All your subsequent lighting calculations should use this new normal vector. That's it.
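To make the decoding step concrete, here is a minimal CPU-side sketch in plain C++ (no graphics API; the function name is mine, and it assumes the common [0,1] → [-1,1] encoding with Z reconstructed from the unit-length assumption):

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Decode a two-channel (RG) normal-map sample, stored in [0,1],
// into a unit tangent-space normal. Z is reconstructed from the
// assumption that the vector has unit length and points "outward".
std::array<float, 3> DecodeNormal(float r, float g) {
    float x = r * 2.0f - 1.0f;  // remap [0,1] -> [-1,1]
    float y = g * 2.0f - 1.0f;
    float z = std::sqrt(std::max(0.0f, 1.0f - x * x - y * y));
    return {x, y, z};
}
```

A flat spot in the map is stored as (0.5, 0.5) and decodes to (0, 0, 1), i.e. the unperturbed vertex normal; in HLSL the same remap is `normalTex.Sample(...).xy * 2 - 1`.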

The trickiest part is the basis transformation from tangent space to world space, which you have to do in the pixel shader. Usually games provide tangent and binormal (also called bitangent) vectors in the vertex buffers for the mesh. Or you can store just the tangent and calculate the bitangent like this:

bitangent = cross(normal, tangent)

For calculating tangent vectors for the vertices, there is an open source library called mikktspace, written in C. You can derive a C# implementation if there is not one already available. Another option is to provide a quaternion representing the tangent space. And the final option is to calculate the tangent space inside the pixel shader from screen-space derivatives.
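For reference, here is a minimal per-triangle tangent computation in plain C++ (the classic UV-gradient method, not mikktspace itself, so it lacks mikktspace's welding and handedness guarantees; the names are mine):

```cpp
#include <array>

using Vec3 = std::array<float, 3>;
using Vec2 = std::array<float, 2>;

// Unnormalized tangent of one triangle from positions and UVs:
// solves for the surface direction in which U increases.
Vec3 TriangleTangent(const Vec3& p0, const Vec3& p1, const Vec3& p2,
                     const Vec2& uv0, const Vec2& uv1, const Vec2& uv2) {
    Vec3 e1{p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2]};  // edge p0->p1
    Vec3 e2{p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2]};  // edge p0->p2
    float du1 = uv1[0] - uv0[0], dv1 = uv1[1] - uv0[1];
    float du2 = uv2[0] - uv0[0], dv2 = uv2[1] - uv0[1];
    float r = 1.0f / (du1 * dv2 - du2 * dv1);  // assumes non-degenerate UVs
    return {(e1[0] * dv2 - e2[0] * dv1) * r,
            (e1[1] * dv2 - e2[1] * dv1) * r,
            (e1[2] * dv2 - e2[2] * dv1) * r};
}
```

In practice you average these per-triangle tangents over each vertex's adjacent triangles (and re-orthogonalize against the vertex normal) before uploading them in the vertex buffer.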

When you have your normal, tangent and binormal vectors, which are unit vectors, you can assemble the tangent-space transform matrix like so:

tangentBasis = float3x3(tangent, binormal, normal)

If you multiply the tangent-space normal vector that came from the normal map texture by this matrix, you get the world-space normal vector.
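As a quick CPU-side sanity check of that multiplication, here is the same math in plain C++ (row-vector convention, i.e. the equivalent of HLSL's mul(normalTS, tangentBasis) with the float3x3(tangent, binormal, normal) ordering; the names are mine):

```cpp
#include <array>

using Vec3 = std::array<float, 3>;

// worldNormal = n.x * tangent + n.y * binormal + n.z * normal,
// i.e. the tangent-space vector re-expressed in the world-space basis.
Vec3 TangentToWorld(const Vec3& n, const Vec3& t, const Vec3& b, const Vec3& nrm) {
    return {n[0] * t[0] + n[1] * b[0] + n[2] * nrm[0],
            n[0] * t[1] + n[1] * b[1] + n[2] * nrm[1],
            n[0] * t[2] + n[1] * b[2] + n[2] * nrm[2]};
}
```

An unperturbed sample (0, 0, 1) maps straight back to the vertex normal, which is a handy way to verify your basis ordering.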

##### Share on other sites
2 hours ago, turanszkij said:

Usually games provide tangent and binormal (also called bitangent) vectors in the vertex buffers for the mesh.

Alternatively, you can use Tangent Space Normal Mapping without Precomputed Tangents. This does not require tangents (or bitangents) stored per vertex, at the expense of doing all the calculations in the pixel shader.

Since I do not use GLSL myself, I also have an HLSL version.

Edited by matt77hias

##### Share on other sites

IT IS GOD DAMN HARD. I have to study history. I do not have that time. If I use the HLSL-side one, can I stay away from all of those calculations? If I do it, will the quality be the same?

##### Share on other sites
30 minutes ago, gomidas said:

IT IS GOD DAMN HARD. I have to study history. I do not have that time.

So why bother then?

31 minutes ago, gomidas said:

stay away from all of those calculations

If you do not want to learn why and how things work, just reuse the code.

##### Share on other sites
Just now, matt77hias said:

So why bother then?

I do not know. Seriously I do not know.

##### Share on other sites
33 minutes ago, gomidas said:

If I do it, will the quality be the same?

Same quality as what? As providing the coordinate frame yourself? It is not numerically identical, but that is not a real issue for your purposes. It is a perturbation (i.e. small-scale), so it will be fine.

##### Share on other sites

Alright, if Unreal Engine does it too, it's no problem for me.

##### Share on other sites

I think I'm getting something wrong about setting up the files.

    public Texture2D T2D_DiffuseMap = "File.png";

    device.ImmediateContext.PixelShader.SetShaderResource(0, textureView);

So will I have a Texture2D array that keeps multiple files (diffuse.png and normal.png), and then do it like: new ShaderResourceView(device, TextureData); // diffuse.png and normal.png ??? And then modify the shader file? Or am I getting it wrong? I am asking because there is no source for C#.