TheResolute

Draw individual pixels manually DirectX 11 C++


I want to manually determine the color of every pixel in a 1024 * 768 window at runtime using D3D11 and C++.
My plan is to use a pixel shader that reads from an array of 1024 * 768 integers (one per pixel) and uses some if statements or a switch to choose a predefined color based on the number stored at the index computed from the x and y values of the point currently being drawn on a single window-sized rectangle.
The issues are that I don't understand how to pass such an array from my C++ code to my shader every frame at render time, and that I'm not entirely sure how to determine the position of the current pixel inside the shader.


I already ran a performance test and believe my machine will be able to handle the basic calculations I intend to perform for every pixel.
Another issue is that I want to store about 800,000 instances of a "particle" structure, which contains two ints and a char, and I got a stack overflow error when I used a basic array and ran the program.
This is not my main concern, but if it's convenient, addressing this as well would be appreciated.

If it would help, my goal is to run a simulation that applies a few basic physics-based operations to half a window full of pixel-sized particles (gravity moves a particle one pixel down; if it occupies the same pixel as another particle, it moves up one pixel due to bouncing and the other moves one pixel down, etc.).
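Roughly the kind of per-particle step I have in mind, as an illustrative sketch only (the names are made up, this isn't code I already have):

#include <vector>

struct Particle { int x; int y; char type; };

// occupied[y][x] marks whether a particle already sits on that pixel
void Step(Particle &p, std::vector<std::vector<bool>> &occupied)
{
    int below = p.y + 1;
    if (below < 768 && !occupied[below][p.x])
    {
        occupied[p.y][p.x] = false; // gravity: fall one pixel
        p.y = below;
        occupied[p.y][p.x] = true;
    }
    // bouncing / pushing the other particle down would go here
}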

Thanks for any help

There are far easier ways to go about what you're doing.

You could create an image from your data CPU-side, then display it using standard Windows calls or send it to the GPU as a texture, for example. Or you could track your particles, send them up as a point list, and render them that way.
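If you do go the texture route, it would look roughly like this (a sketch only; "device", "context", and "pixelColors" are placeholder names for your own device, immediate context, and 1024*768 RGBA array):

D3D11_TEXTURE2D_DESC desc = {};
desc.Width = 1024;
desc.Height = 768;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DYNAMIC;             // the CPU rewrites it every frame
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

ID3D11Texture2D *tex = NULL;
device->CreateTexture2D(&desc, NULL, &tex);

// Each frame: copy your CPU-side color array into the texture row by row
D3D11_MAPPED_SUBRESOURCE mapped;
if (SUCCEEDED(context->Map(tex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
{
    for (UINT row = 0; row < 768; ++row)
    {
        memcpy((BYTE*)mapped.pData + row * mapped.RowPitch,
               &pixelColors[row * 1024], 1024 * sizeof(UINT));
    }
    context->Unmap(tex, 0);
}

Then you just draw one window-sized quad that samples that texture.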

Since it sounds like all your processing is CPU-side, and you aren't doing any complex graphics, it would be better not to introduce the dependency on D3D at all.

Another thing you could do is create a vertex buffer that stores all the points representing the pixels you want: define the pixels in the resolution you want, so x = 0 to 1024 and y = 0 to 768, then convert their positions into projection space before storing them in the vertex buffer, where x = -1 to 1 and y = 1 to -1. When drawing, you won't need to do any view or projection multiplications; just pass them straight from the vertex shader to the pixel shader.
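The conversion from pixel coordinates to projection space is just a linear remap, something along these lines (illustrative names only):

struct PointVertex { float x, y, z, w; };

// Maps a pixel coordinate (0..1023, 0..767) to projection/NDC space for a 1024x768 target
PointVertex PixelToProjection(int px, int py)
{
    PointVertex v;
    v.x = (px + 0.5f) / 1024.0f * 2.0f - 1.0f;  // 0..1024 -> -1..1
    v.y = 1.0f - (py + 0.5f) / 768.0f * 2.0f;   // 0..768  ->  1..-1 (y is flipped)
    v.z = 0.0f;
    v.w = 1.0f;
    return v;
}

The + 0.5f puts each point on a pixel center so it rasterizes onto exactly one pixel.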

About storing all those instances: are you storing them in an array in a constant buffer? An array in a constant buffer is limited to 4096 float4s, I think. If you need an array larger than that, you could have multiple arrays in one constant buffer, or do multiple draw calls.


My plan is to use a pixel shader that reads from an array of 1024 * 768 integers (one per pixel) and uses some if statements or a switch to choose a predefined color based on the number stored at the index computed from the x and y values of the point currently being drawn on a single window-sized rectangle.


This particular part just sounds like a regular texture lookup to me, and coding it that way will give you much better performance.
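In HLSL that lookup could be as simple as this (a sketch only; "indexTex" is an assumed name for a texture holding your per-pixel integer, bound as a shader resource in an integer format such as R32_UINT):

Texture2D<uint> indexTex : register(t0);

float4 PS(float4 pixelPos : SV_Position) : SV_TARGET
{
    // SV_Position arrives in screen space, at pixel centers (x + 0.5, y + 0.5)
    uint value = indexTex.Load(int3(pixelPos.xy, 0));
    if (value == 1) return float4(1.0f, 0.0f, 0.0f, 1.0f); // predefined colors
    if (value == 2) return float4(0.0f, 0.0f, 1.0f, 1.0f);
    return float4(0.0f, 0.0f, 0.0f, 1.0f);
}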


Another issue is that I want to store about 800,000 instances of a "particle" structure, which contains two ints and a char, and I got a stack overflow error when I used a basic array and ran the program.


Did you declare the array as a local variable?
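If so, that's almost certainly the cause: 800,000 of those structs is on the order of 10 MB, while the default stack is only about 1 MB. Putting the array on the heap avoids it, for example (illustrative names):

#include <vector>

struct Particle
{
    int  x;
    int  y;
    char type;
};

std::vector<Particle> particles(800000); // heap-allocated, so no stack overflow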


If it would help, my goal is to run a simulation that applies a few basic physics-based operations to half a window full of pixel-sized particles (gravity moves a particle one pixel down; if it occupies the same pixel as another particle, it moves up one pixel due to bouncing and the other moves one pixel down, etc.).


Like the others said, just running this simulation on the CPU will most likely be preferable. Yes, the GPU can be faster for certain types of operations, but if you need to regularly transfer data from system memory to video memory (or back, which is even worse), you're going to lose a huge amount of that performance.

I'm not very familiar with HLSL, and there are just two things that are unclear to me which I believe would let me make this work: some code that demonstrates transferring an array of data from my program to my shader (both the C++ and HLSL ends), probably using a constant buffer; and a small snippet of HLSL code for the pixel shader that shows how to determine the coordinates of the pixel currently being processed.

Thanks for your help in solving this problem.
By the way, I set the thread to notify me of updates (which it didn't do, of course).

You can send an array of data to the shaders using buffers; that's probably the way you'll want to go. You can send it through a constant buffer, but that limits the size of the array.

An example of sending an array of data (4D floats to keep it simple, since you can have a maximum of 4096 of them in a shader-side array, as far as I know) using a constant buffer:

In your app
// constant buffer structure
struct cbPerFrame
{
    XMVECTOR positions[4096]; // a maximum of 4096 4d floats
};

cbPerFrame cbPF; // You will put your data from your app into this structure (the positions in this example)
ID3D11Buffer *cbPerFrameBuffer; // You will update this D3D buffer with the "cbPF" object, so that the shaders can read from it as the constant buffer


// later in the code, at initialization time, you'll have to create a constant buffer. here's an example:
D3D11_BUFFER_DESC cbbd;
ZeroMemory(&cbbd, sizeof(D3D11_BUFFER_DESC));

cbbd.Usage = D3D11_USAGE_DEFAULT;
cbbd.ByteWidth = sizeof(cbPerFrame);
cbbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
cbbd.CPUAccessFlags = 0;
cbbd.MiscFlags = 0;

HRESULT hr = d3d11Device->CreateBuffer(&cbbd, NULL, &cbPerFrameBuffer);

// Later, when you need to update the buffer. This will update the constant buffer "cbPerFrameBuffer" with the "cbPF" object
d3d11DevCon->UpdateSubresource( cbPerFrameBuffer, 0, NULL, &cbPF, 0, 0);
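
// You also need to bind the buffer to the shader stages that read it before drawing.
// (Slot 0 here is just an assumption; it has to match the register the shader uses.)
d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerFrameBuffer );
d3d11DevCon->PSSetConstantBuffers( 0, 1, &cbPerFrameBuffer );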


In the shaders, you will define the constant buffer like this:
cbuffer cbPerFrame
{
    float4 positions[4096]; // I'll say before it's asked, you cannot make arrays in shaders dynamic ;)
};


This approach is useful for things like instancing, but I would use a regular vertex buffer or instance buffer if you're doing positions for each of your points. To get the position of the current pixel in a pixel shader, you can use the SV_Position semantic:

float4 PS(PS_INPUT input, float4 pixelPosition : SV_Position) : SV_TARGET
{
    float currentPixelX = pixelPosition.x;
    float currentPixelY = pixelPosition.y;
    // etc.
    return float4(0.0f, 0.0f, 0.0f, 1.0f); // return whatever color you compute
}


Hope that helps out a little
