[SOLVED] I am supposed to use Transpose but when I do...

I have been trying many different languages revolving around DirectX, and even had a stab at OpenGL, but I finally settled on C# + SlimDX because it's been the most comfortable solution for me.

So, I am making a custom engine for a game I intend to make. It's not going to be a fully-fledged engine but more like a support framework.

----

Anyway, the question / problem I am facing right now: I am (supposedly?) running DirectX 11 and made a simple triangle to play with so I could continue making a camera class. I set up my world, view and projection matrices and a basic color shader.
As I went along, however, I started getting very strange issues. First I got the RH vs LH convention wrong, but that got sorted (I think), and then things got really wonky.

If I Matrix.Transpose my world, view and projection matrices before sending them to the shader, my screen turns completely <color of triangle>!
That is, whatever color I assign the triangle, my whole view is filled with it. Having tried to "solve" any matrix issues I might have had in my camera class etc. for about 4 hours straight, I rebuilt a lot of the cbuffer updating procedures, thinking something had gone wrong there.
On a fluke I commented out the Matrix.Transpose lines and VOILA, everything worked as it should!

So... I know I am supposed to use Matrix.Transpose with DX11, but when I do, it breaks.
When I don't, it works as intended.
How can this be?

Thanks in advance for any clues or information on the subject.
//Cadde

----
Well, actually you don't have to transpose the matrices unless your shader expects it (which is the normal case, AFAIK).

Best regards!

----
Well, it turns out that I did something wrong when setting the constant buffers after all. I thought there wasn't an Effect class in SlimDX.Direct3D11, but lo and behold, there was one. So I started using that, and now everything behaves as it should.

Still don't need to use Matrix.Transpose though... which is weird, since I am using the LH coordinate system and haven't done anything in the shader to make the switch.
I will probably find out later as I go along...

Another question as well, though. I feel that I am not in (as) complete control with effects as with the VertexShader and PixelShader classes. I much prefer to do things as low-level / non-"wrappery" as possible without it becoming a groundwork nightmare.
Is there any difference in performance between using the Effect class and using the VertexShader/PixelShader classes?

And what happened, really? I mean, the matrices I sent to the shader were fine, yet the shader misbehaved greatly and made all kinds of wonky vertex transformations. It looked like I was looking through a fisheye lens.
After I started using effects and used Effect.GetVariableByName("").AsMatrix().SetMatrix() it behaved normally.

Sorry, I don't have the code I used before, but I am thinking I was updating the wrong registers or whatever. I am new at this, so the terminology may not be right.
But I would love to have a working example of how it should be done.

[CODE]
cbuffer worldBuffer : register(b0)
{
    matrix worldMatrix;
};
cbuffer viewBuffer : register(b1)
{
    matrix viewMatrix;
};
cbuffer projectionBuffer : register(b2)
{
    matrix projectionMatrix;
};
[/CODE]

So, how would I go about sending matrices to each of these?
The reason I split them up like that is that the projection will only be set on initialization and viewport resize, the view only once per frame, and the world once per model / mesh.

----
Can you show any of the vertex / pixel shader code? It may give some clues as to why your matrices behave in the described way.

I'm not using the effects framework at the moment, so I can't help you with it.

To update a constant buffer (without the effect framework), you'll need to create a constant buffer on the application side and bind it to the desired register. The application-side buffer should be at least the same size as the one defined in the shader, and the application-side structure containing the data should match the shader's layout, so that the correct variables map to the correct constants.
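A rough sketch of the idea in SlimDX (device and context are assumed to already exist, and the names are made up; the calls are the same ones you'd use elsewhere):

[code]// Create a 64-byte constant buffer (one float4x4), default usage.
Buffer matrixBuffer = new Buffer
(
    device,
    new BufferDescription
    {
        BindFlags = BindFlags.ConstantBuffer,
        CpuAccessFlags = CpuAccessFlags.None,
        SizeInBytes = 64,
        Usage = ResourceUsage.Default,
    }
);
// Whenever the matrix changes, write it to a stream and push it to the GPU.
using (DataStream data = new DataStream(64, true, true))
{
    data.Write(worldMatrix);
    data.Position = 0;
    context.UpdateSubresource(new DataBox(0, 0, data), matrixBuffer, 0);
}
// Bind the buffer to slot 0 so it maps to register b0 in the vertex shader.
context.VertexShader.SetConstantBuffers(new[] { matrixBuffer }, 0, 1);[/code]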

It is a good idea to split the constant buffers by update frequency. It is suggested, however, that one shader shouldn't use variables from more than 4-5 different constant buffers; apparently there may be some performance penalties otherwise. So, in that sense, the view matrix and the projection matrix could live in the same buffer. You'd probably want to save some shader cycles too by providing a combined view-projection matrix, which you'll need to update every frame.
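A rough idea of the combined layout on the application side (a sketch; the name is made up):

[code]// One per-frame constant buffer instead of separate view and projection
// buffers.
struct PerFrameConstants
{
    public Matrix viewProjection; // view * projection, rebuilt once per frame
}[/code]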

Best regards!

----
By default, shaders assume that matrices in constant buffers are stored in a column-major memory layout and will emit code based on that when compiling vector/matrix multiplications. You can change or work around this in 4 ways:[list=1]
[*]By declaring the matrix variable with the "row_major" modifier in your shader
[*]By passing D3D10_SHADER_PACK_MATRIX_ROW_MAJOR when compiling the shader (sorry, I don't know what the SlimDX enum equivalent of this is)
[*]By using transpose() in your shader. In most cases this won't actually reorder the data in registers or anything like that; instead it will just cause the compiler to emit different code for the mul() intrinsic.
[*]By switching the order of the vector and matrix parameters that you pass to mul(). Normally you will do mul(vector, matrix), but if you do mul(matrix, vector) it's equivalent to calling transpose() on the matrix.
[/list]
If you don't do any of these things, then you'll need to pre-transpose your matrices when setting their value in the constant buffer. Historically the Effects framework has always handled this for you, which inevitably causes confusion when people try handling shaders manually.
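For example, taking the pre-transpose route in SlimDX, the transpose happens on the CPU right before you fill the buffer (a sketch; the variable names are placeholders):

[code]// HLSL's default layout is column-major while SlimDX matrices are row-major,
// so transpose once on the CPU before uploading.
Matrix wvp = world * view * projection; // SlimDX composes row-vector style
matrixData.Write(Matrix.Transpose(wvp));
matrixData.Position = 0;
context.UpdateSubresource(new DataBox(0, 0, matrixData), matrixBuffer, 0);[/code]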

----
OK, so I decided to make a test bed for this particular thing because, like I said, I can't come to grips with how to use the constant buffers.
It has all been written from scratch to make a proper test this time.

Program.cs
[code]using System;
using System.Diagnostics;
using System.Drawing;
using System.Windows.Forms;
using SlimDX;
using SlimDX.Direct3D11;
using SlimDX.DXGI;
using SlimDX.D3DCompiler;
using SlimDX.Windows;
using Device = SlimDX.Direct3D11.Device;
using Buffer = SlimDX.Direct3D11.Buffer;
using Resource = SlimDX.Direct3D11.Resource;

namespace SlimDX_Testbed
{
    static class Program
    {
        [STAThread]
        static void Main()
        {
            // ----------------------------------------------------------------------------------------------------
            // Form, device, render target and viewport creation.
            RenderForm form = new RenderForm("SlimDX Testbed");
            Device device;
            DeviceContext context;
            SwapChain swapChain;
            FeatureLevel[] featureLevels;
            RenderTargetView renderTarget;
            Viewport viewport;

            // Width and height of form and viewport etc.
            int width = 1600;
            int height = 900;
            form.ClientSize = new Size(width, height);
            featureLevels = new[]
            {
                FeatureLevel.Level_11_0,
                FeatureLevel.Level_10_1,
                FeatureLevel.Level_10_0,
            };
            // Create device and swap chain.
            Device.CreateWithSwapChain
            (
                DriverType.Hardware, DeviceCreationFlags.Debug,
                featureLevels,
                new SwapChainDescription
                {
                    BufferCount = 2,
                    Flags = SwapChainFlags.AllowModeSwitch,
                    IsWindowed = true,
                    ModeDescription = new ModeDescription
                    {
                        Format = Format.R8G8B8A8_UNorm,
                        Width = width,
                        Height = height,
                        RefreshRate = new Rational(60, 1),
                        Scaling = DisplayModeScaling.Unspecified,
                        ScanlineOrdering = DisplayModeScanlineOrdering.Progressive,
                    },
                    OutputHandle = form.Handle,
                    SampleDescription = new SampleDescription(1, 0),
                    SwapEffect = SwapEffect.Discard,
                    Usage = Usage.RenderTargetOutput,
                },
                out device, out swapChain
            );
            // Assign the context.
            context = device.ImmediateContext;
            // Create the render target view.
            using (Resource resource = Resource.FromSwapChain<Texture2D>(swapChain, 0))
                renderTarget = new RenderTargetView
                (
                    device, resource,
                    new RenderTargetViewDescription
                    {
                        Dimension = RenderTargetViewDimension.Texture2D,
                        Format = Format.R8G8B8A8_UNorm,
                        MipSlice = 0,
                    }
                );
            // Create the viewport.
            viewport = new Viewport(0.0f, 0.0f, width, height, 0.0f, 1.0f);
            // Assign the render targets and viewport to the context.
            context.OutputMerger.SetTargets(renderTarget);
            context.Rasterizer.SetViewports(viewport);

            // ----------------------------------------------------------------------------------------------------
            // Model
            DataStream vertexData;
            DataStream indexData;

            Buffer vertexBuffer;
            Buffer indexBuffer;
            VertexBufferBinding[] binding;

            int vertexCount = 8;
            int indexCount = 36;
            // Create the vertex data stream.
            vertexData = new DataStream(12 * vertexCount, true, true);

            // Create a 1x1x1 cube.
            // We will be using TriangleList primitive topology here.
            vertexData.Write(new Vector3(-1.0f,  1.0f,  1.0f)); // FTL 0
            vertexData.Write(new Vector3( 1.0f,  1.0f,  1.0f)); // FTR 1
            vertexData.Write(new Vector3(-1.0f, -1.0f,  1.0f)); // FBL 2
            vertexData.Write(new Vector3( 1.0f, -1.0f,  1.0f)); // FBR 3
            vertexData.Write(new Vector3(-1.0f,  1.0f, -1.0f)); // BTL 4
            vertexData.Write(new Vector3( 1.0f,  1.0f, -1.0f)); // BTR 5
            vertexData.Write(new Vector3(-1.0f, -1.0f, -1.0f)); // BBL 6
            vertexData.Write(new Vector3( 1.0f, -1.0f, -1.0f)); // BBR 7

            // Create the index data stream.
            indexData = new DataStream(sizeof(UInt32) * indexCount, true, true);
            // Assign the indices for the 12 triangles making up the cube.
            indexData.WriteRange<UInt32>
            (
                new UInt32[]
                {
                    // Front
                    0, 1, 2, // 1
                    2, 1, 3, // 2
                    // Right
                    1, 5, 3, // 1
                    3, 5, 7, // 2
                    // Back
                    5, 4, 7, // 1
                    7, 4, 6, // 2
                    // Left
                    4, 0, 6, // 1
                    6, 0, 2, // 2
                    // Top
                    4, 5, 0, // 1
                    0, 5, 1, // 2
                    // Bottom
                    2, 3, 6, // 1
                    6, 3, 7, // 2
                }
            );
            // Reset the read positions.
            vertexData.Position = 0;
            indexData.Position = 0;
            // Create the vertex buffer.
            vertexBuffer = new Buffer
            (
                device, vertexData,
                new BufferDescription
                {
                    BindFlags = BindFlags.VertexBuffer,
                    CpuAccessFlags = CpuAccessFlags.None,
                    OptionFlags = ResourceOptionFlags.None,
                    SizeInBytes = 12 * vertexCount,
                    StructureByteStride = 0,
                    Usage = ResourceUsage.Default,
                }
            );
            // Create the index buffer.
            indexBuffer = new Buffer
            (
                device, indexData,
                new BufferDescription
                {
                    BindFlags = BindFlags.IndexBuffer,
                    CpuAccessFlags = CpuAccessFlags.None,
                    OptionFlags = ResourceOptionFlags.None,
                    SizeInBytes = sizeof(UInt32) * indexCount,
                    StructureByteStride = 0,
                    Usage = ResourceUsage.Default,
                }
            );
            // Create the vertex bindings.
            binding = new[]
            {
                new VertexBufferBinding(vertexBuffer, 12, 0),
            };
            // Assign the vertex and index buffers to the rendering pipeline.
            context.InputAssembler.SetVertexBuffers(0, binding);
            context.InputAssembler.SetIndexBuffer(indexBuffer, Format.R32_UInt, 0);

            // ----------------------------------------------------------------------------------------------------
            // Shaders.
            VertexShader vertexShader;
            PixelShader pixelShader;

            InputElement[] elements;
            InputLayout layout;
            ShaderSignature inputSignature;
            // Create the vertex shader.
            using (ShaderBytecode bytecode = ShaderBytecode.CompileFromFile("ColorShader.fx", "VShader", "vs_5_0", ShaderFlags.Debug | ShaderFlags.EnableStrictness, EffectFlags.None))
            {
                inputSignature = ShaderSignature.GetInputSignature(bytecode);
                vertexShader = new VertexShader(device, bytecode);
            }
            // Create the pixel shader.
            using (ShaderBytecode bytecode = ShaderBytecode.CompileFromFile("ColorShader.fx", "PShader", "ps_5_0", ShaderFlags.Debug | ShaderFlags.EnableStrictness, EffectFlags.None))
                pixelShader = new PixelShader(device, bytecode);
            elements = new[]
            {
                new InputElement("POSITION", 0, Format.R32G32B32_Float, 0),
            };
            layout = new InputLayout(device, inputSignature, elements);
            // Set the vertex and pixel shaders to the active rendering pipeline.
            context.VertexShader.Set(vertexShader);
            context.PixelShader.Set(pixelShader);
            // Set the layout and primitive topology.
            context.InputAssembler.InputLayout = layout;
            context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;

            // ----------------------------------------------------------------------------------------------------
            // Matrices
            Matrix world = Matrix.Identity;
            Matrix view = Matrix.Identity;
            Matrix projection = Matrix.PerspectiveFovRH(ToRad(45.0f), (float)width / height, 0.0f, 10000.0f);
            Matrix viewProjection = view * projection;

            DataStream worldMatrixData;
            Buffer worldMatrixBuffer;
            DataStream viewProjectionMatrixData;
            Buffer viewProjectionMatrixBuffer;
            // Create the world matrix data stream.
            worldMatrixData = new DataStream(64, true, true);
            // Create the world matrix buffer.
            worldMatrixBuffer = new Buffer
            (
                device,
                new BufferDescription
                {
                    BindFlags = BindFlags.ConstantBuffer,
                    CpuAccessFlags = CpuAccessFlags.None,
                    OptionFlags = ResourceOptionFlags.None,
                    SizeInBytes = 64,
                    StructureByteStride = 0,
                    Usage = ResourceUsage.Default,
                }
            );
            // Create the combined view and projection matrix data stream.
            viewProjectionMatrixData = new DataStream(64, true, true);
            // Create the combined view and projection matrix buffer.
            viewProjectionMatrixBuffer = new Buffer
            (
                device,
                new BufferDescription
                {
                    BindFlags = BindFlags.ConstantBuffer,
                    CpuAccessFlags = CpuAccessFlags.None,
                    OptionFlags = ResourceOptionFlags.None,
                    SizeInBytes = 64,
                    StructureByteStride = 0,
                    Usage = ResourceUsage.Default,
                }
            );
            Buffer[] constantBuffers = new[]
            {
                worldMatrixBuffer,
                viewProjectionMatrixBuffer,
            };
            context.VertexShader.SetConstantBuffers(constantBuffers, 0, 2);

            // ----------------------------------------------------------------------------------------------------
            // Application loop.
            Stopwatch sw = new Stopwatch();

            double lastUpdate = 0.0f;
            double updateFrequency = 1.0f / 20;
            form.KeyDown += (o, e) =>
            {
                if (e.KeyCode == Keys.Escape)
                    form.Close();
            };
            MessagePump.Run
            (
                form,
                () =>
                {
                    if (!sw.IsRunning)
                        sw.Start();
                    // Update every <updateFrequency> seconds.
                    if (sw.Elapsed.TotalSeconds - lastUpdate >= updateFrequency)
                    {
                        // UPDATE()

                        // Set the view matrix.
                        // Set the "camera" (view) to 0, 0, 0 looking down the negative Z axis using 0, 1, 0 as the up vector.
                        view = Matrix.LookAtRH(new Vector3(0.0f, 0.0f, 0.0f), new Vector3(0.0f, 0.0f, -1.0f), Vector3.UnitY);
                        // Update the viewProjection matrix.
                        viewProjection = projection * view;

                        // Commented this out because it doesn't behave as it should.
                        // viewProjection = Matrix.Transpose(viewProjection);

                        // Update the matrix constant buffers.
                        viewProjectionMatrixData.Write(viewProjection);
                        viewProjectionMatrixData.Position = 0;
                        context.UpdateSubresource(new DataBox(0, 0, viewProjectionMatrixData), viewProjectionMatrixBuffer, 0);
                        lastUpdate = sw.Elapsed.TotalSeconds;
                    }

                    // RENDER()
                    // Set the world matrix.

                    // Move the cube along the z axis between -10.0f and 10.0f.
                    float z = (float)Math.Cos(sw.Elapsed.TotalSeconds) * 20;
                    world = Matrix.Translation(new Vector3(0.0f, 0.0f, z));
                    // Commented this out because it doesn't behave as it should.
                    // world = Matrix.Transpose(world);

                    // Update the matrix constant buffers.
                    worldMatrixData.Write(world);
                    worldMatrixData.Position = 0;
                    context.UpdateSubresource(new DataBox(0, 0, worldMatrixData), worldMatrixBuffer, 0);
                    context.ClearRenderTargetView(renderTarget, new Color4(1.0f, 0.0f, 0.1f, 0.2f));
                    context.DrawIndexed(indexCount, 0, 0);
                    swapChain.Present(0, PresentFlags.None);
                }
            );

            // ----------------------------------------------------------------------------------------------------
            // Cleanup.

            // Matrix constant buffers.
            if (worldMatrixBuffer != null) worldMatrixBuffer.Dispose();
            if (viewProjectionMatrixBuffer != null) viewProjectionMatrixBuffer.Dispose();
            if (worldMatrixData != null)
            {
                worldMatrixData.Close();
                worldMatrixData.Dispose();
            }
            if (viewProjectionMatrixData != null)
            {
                viewProjectionMatrixData.Close();
                viewProjectionMatrixData.Dispose();
            }
            // Shaders.
            if (inputSignature != null) inputSignature.Dispose();
            if (layout != null) layout.Dispose();
            if (pixelShader != null) pixelShader.Dispose();
            if (vertexShader != null) vertexShader.Dispose();
            // Model.
            if (indexBuffer != null) indexBuffer.Dispose();
            if (vertexBuffer != null) vertexBuffer.Dispose();
            if (indexData != null)
            {
                indexData.Close();
                indexData.Dispose();
            }
            if (vertexData != null)
            {
                vertexData.Close();
                vertexData.Dispose();
            }

            // Device etc.
            if (renderTarget != null) renderTarget.Dispose();
            if (swapChain != null) swapChain.Dispose();
            if (context != null) context.Dispose();
            if (device != null) device.Dispose();
            if (form != null) form.Dispose();
            // Happy smiley faces in yo face brah! =)
        }

        // Converts degrees to radians.
        static float ToRad(float value)
        {
            return (float)(Math.PI / 180) * value;
        }
    }
}
[/code]

ColorShader.fx
[code]cbuffer WorldMatrixBuffer : register(cb0)
{
    matrix world;
};
cbuffer ViewProjectionMatrixBuffer : register(cb1)
{
    matrix viewProjection;
};

struct VS_IN
{
    float3 position : POSITION;
};
struct PS_IN
{
    float4 position : SV_POSITION;
};

PS_IN VShader(VS_IN input)
{
    PS_IN output;
    output.position = float4(input.position, 1.0);
    output.position = mul(output.position, world);
    output.position = mul(output.position, viewProjection);
    return output;
}

float4 PShader(PS_IN input) : SV_Target
{
    return float4(1.0, 1.0, 0.0, 1.0);
}[/code]

Still not getting my view and projection matrices to update or behave properly.
If I comment out the "output.position = mul(output.position, viewProjection);" line, I get alternating dark blue and yellow as the model moves back and forth.
If I uncomment the Matrix.Transpose lines, I get the same effect except the yellow phases are shorter.

As always, I really appreciate the help here!
//Cadde

EDIT: Forgot to attach the solution.

----
Bump and update.

I converted the code in the DirectX SDK for Direct3D 11 Tutorial 7 into SlimDX code line by line. (Or at least I think I did.)
Only a few minor changes/additions needed to be made, considering C++ is by far superior to C# when it comes to sizeof() and other such "unsafe" and dangerous things... (UGH)

Either way, here is the conversion (attached file) if anyone is interested in looking at it. I experience the same problems as before... Nothing gets shown on screen when using UpdateSubresource and SetConstantBuffers.
I know the code works in C++, but it doesn't once I convert it to C# and SlimDX.

I am at a total loss here.
Thanks for any assistance!
//Cadde

EDIT:

Some additional information...

Using the SlimDX January 2012 version.
.NET 2.0.
Tried the 32 and 64 bit SlimDX DLLs for .NET 2.0.

It works when I use the Effect class. But doing it the "right" way doesn't work at all for me.
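For completeness, the Effect path that does work looks roughly like this (from memory, so treat it as a sketch; it assumes the .fx file declares a technique):

[code]// Compile the whole file as an effect and let the framework manage the constants.
using (ShaderBytecode bytecode = ShaderBytecode.CompileFromFile("ColorShader.fx", "fx_5_0", ShaderFlags.None, EffectFlags.None))
using (Effect effect = new Effect(device, bytecode))
{
    effect.GetVariableByName("world").AsMatrix().SetMatrix(world);
    effect.GetVariableByName("view").AsMatrix().SetMatrix(view);
    effect.GetVariableByName("projection").AsMatrix().SetMatrix(projection);
    // Apply the first pass of the first technique, then draw.
    effect.GetTechniqueByIndex(0).GetPassByIndex(0).Apply(context);
    context.DrawIndexed(indexCount, 0, 0);
}[/code]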

----
[quote name='Mike.Popoloski' timestamp='1331650569' post='4921660']
In your vertex buffer binding you set the stride to 0 when it should actually be sizeof(SimpleVertex), which is 20.
[/quote]

DOH!
It works, you are a god and all that.
Now I have a working example to go from to fix my other ones. Thanks!
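For anyone comparing against the attached code, the fix looks like this (a sketch; if I recall the tutorial's vertex layout correctly, SimpleVertex is a float3 position plus a float2 texture coordinate, 12 + 8 = 20 bytes):

[code]// Before: a stride of 0 makes the input assembler advance 0 bytes per vertex,
// so every index fetches the same vertex.
// binding = new[] { new VertexBufferBinding(vertexBuffer, 0, 0) };

// After: stride = sizeof(SimpleVertex) = 20 bytes.
binding = new[] { new VertexBufferBinding(vertexBuffer, 20, 0) };
context.InputAssembler.SetVertexBuffers(0, binding);[/code]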

EDIT:

OK, here is what I have learned from all this so far.
[list=1]
[*]For the projection, a near plane of 0.0f does [color=#ff0000][b]NOT[/b][/color] work.
[*]One does indeed need to transpose the matrices. How I managed to get anything useful out of not transposing them I will never know. (Matrix math is still beyond me...)
[*]It helps to pay attention when you code; as Mike pointed out, you have to set a proper stride.
[*]It was initially very unclear to me how to use constant buffers, and the answer from Mike on [url="http://stackoverflow.com/questions/4962225/setting-up-the-constant-buffer-using-slimdx"]stackoverflow[/url] never mentioned that you have to set them on the shaders using SetConstantBuffers. I could have realized this had I known anything about D3D in the first place, or by simply looking at the DX SDK examples.
[/list]
Point is, never give up, and when you get stuck make sure you don't start assuming things as I did.

Now, a quick example of how to properly update constant buffers, in case anyone else finds themselves in a similar situation:

A simple shader with 3 constant buffers:
[code]cbuffer WorldMatrixBuffer : register(b0)
{
    matrix world;
};
cbuffer ViewMatrixBuffer : register(b1)
{
    matrix view;
};
cbuffer ProjectionMatrixBuffer : register(b2)
{
    matrix projection;
};

struct VS_IN
{
    float4 position : POSITION;
};
struct PS_IN
{
    float4 position : SV_POSITION;
};

PS_IN VShader(VS_IN input)
{
    PS_IN output;
    output.position = input.position;
    output.position = mul(output.position, world);
    output.position = mul(output.position, view);
    output.position = mul(output.position, projection);
    return output;
}

float4 PShader(PS_IN input) : SV_Target
{
    return float4(1.0, 1.0, 0.0, 1.0);
}[/code]

This will take in a vertex buffer and multiply its positions by the world, view and projection matrices, each of which is defined in its own separate constant buffer.
It passes the information along down to the rasterizer, where each pixel is set to a solid yellow color.
One could gain a little performance by multiplying the view and projection matrices together before sending them to the shader and using a combined viewProjection cbuffer, reducing the number of mul() operations in the vertex shader stage. For the sake of clarity I have decided not to do that here.
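If you do want that optimization, the only changes are a combined buffer on the application side and one fewer mul() in the shader; something like this (an untested sketch):

[code]// Rebuild the combined matrix whenever the view or projection changes, then
// upload the single product. SlimDX composes row-vector style, so
// view * projection matches mul(position, viewProjection) in the shader.
Matrix viewProjection = view * projection;
using (DataStream data = new DataStream(64, true, true))
{
    data.Write(Matrix.Transpose(viewProjection));
    data.Position = 0;
    context.UpdateSubresource(new DataBox(0, 0, data), viewProjectionMatrixBuffer, 0);
}[/code]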

The registers (b0, b1 and b2) are defined so we can assign the buffers in code using their respective indexes (slots). To update and assign them you need (in code):
[list=1]
[*]A Buffer with the ConstantBuffer bind flag, Default resource usage and, in this case, no CPU access flags.
[*]A data stream to write matrix data to. A matrix is 64 bytes large (float4x4, 16 floats of 4 bytes each), so you need 64 bytes of memory allocated to write to these constant buffers.
[*]A context to call UpdateSubresource().
[/list]
Sample code:
[code]// Create the projection matrix buffer.
projectionMatrixBuffer = new Buffer
(
    device,
    new BufferDescription
    {
        BindFlags = BindFlags.ConstantBuffer,
        CpuAccessFlags = CpuAccessFlags.None,
        SizeInBytes = Marshal.SizeOf(projection),
        Usage = ResourceUsage.Default,
    }
);
// Update the projection constant buffer.
using (DataStream data = new DataStream(Marshal.SizeOf(projection), true, true))
{
    data.Write(Matrix.Transpose(projection));
    data.Position = 0;
    context.UpdateSubresource(new DataBox(0, 0, data), projectionMatrixBuffer, 0);
}[/code]

So first we create a buffer with the proper buffer description. Marshal.SizeOf() resides in System.Runtime.InteropServices and is used to determine the size of types (classes) and objects (assigned variables), which is helpful if you don't want to manually calculate the size of each constant buffer.
In this case I write the matrix directly to the stream, but if your cbuffer has more than one element in it, it may be useful to create a structure or class containing all the elements of the constant buffer and write that to the stream instead.

For example:
[code][StructLayout(LayoutKind.Sequential)]
class HerpClass
{
    public Matrix world;
    public Matrix view;
    public Matrix projection;
}
struct HerpStruct
{
    public Matrix world;
    public Matrix view;
    public Matrix projection;
}
...
int classSize = Marshal.SizeOf(typeof(HerpClass));   // Is 192 (64 * 3)
int structSize = Marshal.SizeOf(typeof(HerpStruct)); // Is 192 (64 * 3)[/code]

Adding any private variables to these classes will still count towards the total size of the class/structure, so don't do it. (No, I didn't either, by the way, if you thought so; I just want to cover this in case someone gets any ideas.)
The reason you need "[StructLayout(LayoutKind.Sequential)]" on the class is that otherwise Marshal.SizeOf will throw an ArgumentException that reads "HerpClass cannot be marshaled as an unmanaged structure; no meaningful size or offset can be computed."
Thus, using a struct is your best option here; a struct already has sequential layout by default, and for this purpose it behaves the same as the attributed class.

Yes, you can create a constructor, enabling you to use "new HerpStruct(world, view, projection);" if you so desire.
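A sketch of what that looks like:

[code]// The same struct with a convenience constructor; DataStream.Write()
// accepts it like any other value type.
struct HerpStruct
{
    public Matrix world;
    public Matrix view;
    public Matrix projection;

    public HerpStruct(Matrix world, Matrix view, Matrix projection)
    {
        this.world = world;
        this.view = view;
        this.projection = projection;
    }
}
// Usage: data.Write(new HerpStruct(world, view, projection));[/code]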

Right, moving on...
Before you render, you need to set the buffers on the vertex shader (and the pixel shader where needed; they are separate), and to do that you do this:

[code]// Set the vertex and pixel shaders to the active rendering pipeline.
context.VertexShader.Set(vertexShader);
context.VertexShader.SetConstantBuffers(new Buffer[] { worldMatrixBuffer }, 0, 1);
context.VertexShader.SetConstantBuffers(new Buffer[] { viewMatrixBuffer }, 1, 1);
context.VertexShader.SetConstantBuffers(new Buffer[] { projectionMatrixBuffer }, 2, 1);
context.PixelShader.Set(pixelShader);
context.DrawIndexed(indexCount, 0, 0);
swapChain.Present(0, PresentFlags.None);[/code]

Now, I included the VertexShader/PixelShader.Set() calls here, as well as the DrawIndexed and Present calls, for clarity.
You can do it any way you like, but the gist of it all is that you set the buffers BEFORE the draw calls.
Obviously, setting the projection for each mesh you draw is excessive and wastes precious cycles. You only need to set the projection when you change the shader or your projection changes, for example on a form resize, or when you are zooming the view or changing the view distance.
The same applies to the view matrix; that only needs to be set when the camera moves or you switch shaders.
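For example, hooking the form's resize event covers the resize case (an untested sketch, reusing names from the test bed):

[code]// Rebuild and re-upload the projection only when the client size changes.
form.Resize += (o, e) =>
{
    projection = Matrix.PerspectiveFovLH(ToRad(45.0f),
        (float)form.ClientSize.Width / form.ClientSize.Height, 0.1f, 10000.0f);
    using (DataStream data = new DataStream(64, true, true))
    {
        data.Write(Matrix.Transpose(projection));
        data.Position = 0;
        context.UpdateSubresource(new DataBox(0, 0, data), projectionMatrixBuffer, 0);
    }
};[/code]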

To summarize, then:[list=1]
[*]Create buffers in code that match the buffers in the shader.
[*]Write to the buffers in code when the world, view or projection matrices change, using a data stream, and update them in the context using UpdateSubresource.
[*]Set them on the shader using SetConstantBuffers when the shader is changed. The second argument to the function call is the slot index as defined in the shader file.
[/list]
Happy coding!
//Cadde

And once again, thanks for the assist!
