

Member Since 27 Mar 2006
Offline Last Active Aug 18 2014 09:53 PM

Topics I've Started

Full Screen Quad with Texture, looks off

29 July 2014 - 03:00 AM

I'm trying to convert my software rasterizer from ddraw to DX10. At some point I'd like to use the pixel shader to draw some elements while I do others on the CPU. So I decided to just create a texture, lock it, draw directly into it, then display it with a quad. It does work, and all the programs I wrote under ddraw run just fine. Most of my old functions have the declaration function(DWORD* video_mem, int lpitch32), so all I do is pass the locked texture memory in place of video memory and the texture's stride as lpitch32. Easy enough. The problem comes in when I try to draw something like a dotted line, for example plotting a pixel every 5 pixels:


for (int x = 0, x1 = 0; x < 50; x++, x1 += 5)
    texture_buffer[x1 + y1 * lpitch32] = COLOR_MAGENTA;



What I get is a line, but some pixels appear to be either merged with others or too far apart, and I don't know what's causing it. To give you a rundown of my setup: I create a texture the size of my back buffer, which is the same size as my screen. I create a quad with (-1,-1) to (1,1) coordinates, leave it in clip space, and don't use any additional transforms in the vertex shader. My pixel shader just samples the texture. The sampler state filter is set to something like min-mag-mip. I thought it might be the filtering, but I couldn't find an option to specify no filtering.


If needed I'll post a picture of what I'm getting; just let me know which code you want me to show you.

D3D10CreateEffectFromMemory Fails, everything is correct, why?

25 July 2014 - 12:04 AM

Hi, I'm trying to use a simple HLSL effect in my rendering, so I compile my .fx with fxc.exe as directed on MSDN, and it produces a clean object file with no errors. I read it into memory using MSDN's method, but for some reason the function still comes back with E_FAIL. Here's some of my code:


ifstream is(TEXT("DX10RendererEffect.fxo"), ios::binary);
is.seekg(0, ios_base::end);
streampos pos = is.tellg();
is.seekg(0, ios_base::beg);
char* effectBuffer = new char[static_cast<size_t>(pos)];
is.read(effectBuffer, pos);

if (FAILED(hr = D3D10CreateEffectFromMemory((void*)effectBuffer, pos, 0,
        softObjPtr->pD3D10Device, NULL, &softObjPtr->pD3D10Effect))){



Also, I've created my device with the D3D10_CREATE_DEVICE_DEBUG flag (something like that), but the function doesn't report anything besides E_FAIL, which MSDN says can mean the debug layer is not installed.


I don't know if this matters, but I'm on a fresh install of Windows 7 and I just installed the Windows 8.1 SDK. It's a hassle because all the D3DX helper functions are removed, so I can't compile from file and must use a build rule in Visual Studio to compile my .fx files manually. And the D3D SDK is completely merged into the Windows SDK, so I can't tell whether the debug layer was installed or not; it should have been, right? Any suggestions?

[C#] Getting ray from mouse position

16 November 2012 - 02:59 AM

I'm trying to build a small game in Unity and I ran into this problem. I want to use an unlocked cursor, meaning the camera's look-at vector does not point exactly at the cursor. This is the default setting in Unity: instead of locking the cursor to the center of the screen and using the offset, the camera moves by the distance the user moved the cursor. This lets me use a hardware cursor, and I kind of like the feel of it, since the game I have in mind involves a lot of 3D picking. The problem is that when I try to shoot a ray from the camera through the cursor (converted to a 3D position), the direction is off by a lot at certain camera angles and very close to correct at others. I'm not sure what's causing it. This is the code I have so far (excerpt from a Unity C# script):
Ray r = Camera.main.ScreenPointToRay(mousePos);
Vector3 ray = r.direction;

I'm not exactly sure how to fix this, or even why it's happening. I suspect that the conversion of the 2D cursor to 3D space produces a wrong Z value at some camera orientations, but I don't know exactly how to fix it. Any help appreciated.

[DX10] Can't get the constant buffer to update

10 March 2012 - 07:15 PM

Hey all, to be brief: I can't get the constant buffer in the shader to update.
Here's the code:

ID3D10Buffer*   g_pConstantBuffer;

float fDelta;

D3D10_BUFFER_DESC cbDesc;
cbDesc.ByteWidth = sizeof( _SHADER_CONST_BUFFER );
cbDesc.Usage = D3D10_USAGE_DYNAMIC;
cbDesc.BindFlags = D3D10_BIND_CONSTANT_BUFFER;
cbDesc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
cbDesc.MiscFlags = 0;
HRESULT hr = pD3DDevice->CreateBuffer( &cbDesc, &InitData, &g_pConstantBuffer );
pD3DDevice->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer );

_SHADER_CONST_BUFFER* pConstData;
g_pConstantBuffer->Map( D3D10_MAP_WRITE_DISCARD, 0, ( void** )&pConstData );
pConstData->fDelta = gTimeDelta;
g_pConstantBuffer->Unmap(); // without Unmap the write never reaches the GPU
pD3DDevice->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer );

This is the way it looks in the .fx file:

cbuffer cb0
{
    float fDelta;
};

I get no errors during compilation or at runtime. I don't show it here, but I tested all the functions' HRESULTs: no failures. I have D3D error reporting enabled, but I'm not getting any messages from there either.
It appears to have written the values, but when I read them in the shader they all show 0.0000, and a little animation in the texture is not playing (it's driven by the time delta I'm trying to set).

I have gone through the DirectX samples and copied the code verbatim, so I don't know what the problem could be.

[HLSL] How to use arrays of textures in hlsl?

25 February 2012 - 05:24 PM

I'm trying to load a bunch of similar textures onto the GPU and then access them from the CPU.
So in my .fx file I have:
Texture2D letters[26];
But when it comes time to map that to the shader resource variable, which is declared like this:
ID3D10EffectShaderResourceVariable* pShaderLetters[26];

I get an error saying something like "cannot convert * to an array" (even though both are arrays).

Any suggestions?