
Member Since 22 Jul 2008
Offline Last Active Feb 16 2015 04:13 PM

Topics I've Started

compute shader resource slot allocation across multiple files

16 February 2015 - 03:40 PM

Do I need to maintain unique register slot numbers across multiple compute shader files?


CS file 1:

Texture2D tex11 : register(t0);
Texture2D tex12 : register(t1);

CS file 2:

Texture2D tex21 : register(t2);
Texture2D tex22 : register(t3);


Or can I reuse the same slots in each file:

CS file 1:

Texture2D tex11 : register(t0);
Texture2D tex12 : register(t1);

CS file 2:

Texture2D tex21 : register(t0);
Texture2D tex22 : register(t1);


without the two shaders' bindings overwriting each other?
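
To make the scenario concrete, here is roughly what happens between the two dispatches as I understand it (sketch only; SlimDX D3D11, and cs1/cs2/srvA-srvD are made-up names). The slots are state on the device context rather than on the shader file, so the question is really whether rebinding t0/t1 before each Dispatch is enough:

```csharp
// Sketch only (SlimDX D3D11); cs1, cs2, srvA..srvD are placeholder names.
// Register slots are per-stage state on the device context: whatever SRV
// is bound to t0 at Dispatch time is what that compute shader reads.
context.ComputeShader.Set(cs1);
context.ComputeShader.SetShaderResource(srvA, 0); // tex11 -> t0
context.ComputeShader.SetShaderResource(srvB, 1); // tex12 -> t1
context.Dispatch(64, 1, 1);

context.ComputeShader.Set(cs2);
context.ComputeShader.SetShaderResource(srvC, 0); // tex21 -> t0 (slot reused)
context.ComputeShader.SetShaderResource(srvD, 1); // tex22 -> t1 (slot reused)
context.Dispatch(64, 1, 1);
```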


09 October 2013 - 04:50 PM

I'm using D3D11 through SlimDX.


When I try to reload settings and reinitialize everything exactly the same way I created it in the first place, I end up getting SEH exceptions that then lead to DXGI_ERROR_DEVICE_REMOVED: Hardware device removed. (-2005270523). I have been seeing this intermittently for a while and have never been able to track it down.

I get nothing in the output window, and an SEH exception with no further information is about as useful as a poke in the eye. Microsoft says the device-removed error means the device was literally removed, which is obviously and absolutely not happening: the GPU is in a laptop, and the error is repeatable in software. GetDeviceRemovedReason just tells me it was an internal driver error...


The error also persists when I rebuild the device. I put a try/catch around SwapChain.Present, and when I get a device-removed error I rebuild the entire device, but it keeps failing until I restart the application entirely.


Anyone have any idea how I can track it down further?
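
For reference, here is the most information I know how to squeeze out of it (sketch only; SlimDX D3D11, assuming the DirectX SDK debug layer is installed). Creating the device with the Debug flag makes the runtime print detailed errors to the native debug output, and DeviceRemovedReason can be queried after a failed Present:

```csharp
// Sketch only (SlimDX D3D11). DeviceCreationFlags.Debug requires the
// SDK debug layer; enable native/unmanaged debugging to see its output.
var device = new Device(DriverType.Hardware, DeviceCreationFlags.Debug);
try
{
    swapChain.Present(0, PresentFlags.None);
}
catch (SlimDXException)
{
    // e.g. DXGI_ERROR_DEVICE_HUNG, DEVICE_RESET, DRIVER_INTERNAL_ERROR
    Result reason = device.DeviceRemovedReason;
    System.Diagnostics.Debug.WriteLine(reason.ToString());
}
```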

migrating from dx10 to 11 gives me an undebuggable error

13 December 2012 - 12:10 PM

I moved my code from DirectX 10 to 11 so I could use a compute shader to process the data on the GPU multiple times before rendering it. I made the move and got all the explicit errors/exceptions out.

Now I am running into a "vshost32.exe has stopped working" error. It gives me the option to debug, but then tells me "a debugger is attached to myprogram.vshost.exe but not configured to debug this unhandled exception. To debug this exception, detach the current debugger."

So I went digging: I set VS to break on all thrown exceptions and turned on unmanaged debugging. When it hits the exception, there is no source code available to show me where any of this is happening; it only gives me some point buried in the assembly, but the call stack always seems to end in nvwgf2um.dll:

> nvwgf2um.dll!09bbd565()
[Frames below may be incorrect and/or missing, no symbols loaded for nvwgf2um.dll]

So I dug a little deeper. This seems to happen in BF3 quite a bit, where people say that a newish release of the NVIDIA drivers is the culprit and to just roll the drivers back. I did, and it did not help; I installed the latest, and again had the same issue; I went to laptopvideo2go and installed a modded driver, and nvwgf2um.dll still crashes.

Anyone have any ideas? Am I stuck waiting for an update from NVIDIA, or did I possibly do something wrong in my code?
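
One thing I still need to rule out, since the crash involves a compute shader doing multiple passes: Windows kills any GPU workload that runs longer than the TDR timeout (2 seconds by default) and resets the driver, which can look exactly like an nvwgf2um.dll crash. A quick test is to raise the timeout temporarily (registry sketch below; this is the documented TdrDelay key, requires a reboot, and should not be left in place permanently):

```
Windows Registry Editor Version 5.00

; TdrDelay raises the GPU timeout from the default 2 seconds to 10.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:0000000a
```

If the crash goes away with a longer timeout, the fix is to split the compute work into smaller dispatches rather than to ship the registry change.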

AMD APU 6520G fails on texture Map due to size

19 September 2012 - 03:11 PM

I have a texture of some arbitrary size. Usually it's a power of two, but recently I have had to accommodate sizes like 128x455 instead of 128x512. The hardware seems to want some multiple of 32, or else the data wraps around and messes up the image, so I rearranged the sizing.

The texture itself is still only xsize * ysize (i.e. 128 x 455):

newtdesc.Width = xsize;
newtdesc.Height = ysize;
pTexture = new SlimDX.Direct3D10.Texture2D(g_pd3dDevice, newtdesc);

I Map the texture and fill it with new data as normal, but if I write the new data at the texture's actual size, it wraps whenever the size is not a multiple of 32:

DataRectangle mappedTex = pTexture.Map(0, D3D10.MapMode.WriteDiscard, D3D10.MapFlags.None);

So I have to zero-pad the new data up to a multiple of 32 for the total data size, even though the texture is smaller:

int texsizefix = (ysize + 31) / 32 * 32; // round ysize up to the next multiple of 32
Byte[] NewData = new Byte[texsizefix * xsize * 4]; // * 4 due to R8G8B8A8
// (i.e. 128x480, zero-padded on each line)

All of this works fine on several cards (a GTX 460M, a GTX 560M, and an AMD 6750M), but I just picked up a bottom-end AMD APU with a 6520G integrated GPU and it will not work: it gives me an 'attempted to read past the end of stream' error on the Map instruction, from this size difference between the texture and NewData. I tried upgrading drivers and SlimDX/DirectX versions, etc., but nothing changes it.

I know I am being a little lazy here, in that I could force all the textures to be some multiple of 32 and handle some stretching in the shaders/vertex mapping, but it would set me back significantly to recalculate everything involved with that. Am I missing something obvious that fixes the wrapping bug in SlimDX/DirectX, or is this just an issue with the new chips not having mature enough drivers?
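
A pitch-aware copy may be what I am missing here: the driver is allowed to allocate each texture row wider than Width * 4, and DataRectangle.Pitch reports the real row stride, so copying row by row at that stride would avoid both the wrapping and the zero-padding. A sketch (SlimDX D3D10, assuming NewData holds tightly packed R8G8B8A8 rows):

```csharp
// Sketch only (SlimDX D3D10): write one tight row at a time at the
// driver-reported stride instead of zero-padding the whole buffer.
DataRectangle rect = pTexture.Map(0, D3D10.MapMode.WriteDiscard, D3D10.MapFlags.None);
int rowBytes = xsize * 4; // R8G8B8A8
for (int y = 0; y < ysize; y++)
{
    rect.Data.Position = (long)y * rect.Pitch; // driver row stride, >= rowBytes
    rect.Data.Write(NewData, y * rowBytes, rowBytes);
}
pTexture.Unmap(0);
```

If the 6520G happens to report a different Pitch than the discrete cards, that would also explain why the same padded buffer works on some GPUs and not others.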

SlimDX August09 install fails

17 March 2011 - 04:53 PM

I am having trouble installing the end-user runtime for SlimDX August 2009 on a new Windows 7 x64 machine. The error it gives me is:

An error occurred during the installation of assembly 'Microsoft.VC90.CRT, version="9.0.21022.8",publicKeyToken="1fc8b3b9a1e18e3b",processorArchitecture="amd64", type="win32"'. Please refer to Help and Support for more information.

Anyone seen this?

I'm running DirectX June 2010 and .NET 4, and everything else is up to date.