
ucfchuck

Members

  • Content count: 84
  • Joined
  • Last visited

Community Reputation: 149 Neutral

About ucfchuck

  • Rank: Member
  1. Thank you. I couldn't find where I read it before, but I kept thinking that the compile used the GPU driver to optimize for that hardware. I had set it up to compile on the target machine on the first execution, output the compiled shader bytecode to an .fxo file, and just load that every time the application started thereafter (a sketch of that compile-and-cache approach is below), but it left me nervous about shipping the uncompiled, unobfuscated, and fully commented shaders.
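     For anyone following along, here is a minimal compile-and-cache sketch of the approach described above. It assumes a SharpDX-style ShaderBytecode API (the thread itself uses SlimDX, whose names differ slightly); ShaderCache, CompileOrLoadShader, the paths, and the profile string are illustrative, not from the original post. The key fact is that HLSL bytecode is hardware-independent IL: the compiler never consults the GPU driver, and each machine's driver translates the bytecode to its native ISA at shader-creation time.

     [source lang="csharp"]
     using System.IO;
     using SharpDX.D3DCompiler;

     static class ShaderCache
     {
         // Hypothetical helper: compile the HLSL source once, then reuse the
         // cached .fxo bytecode on later runs (recompile if the source changed).
         public static ShaderBytecode CompileOrLoadShader(
             string hlslPath, string fxoPath, string entryPoint, string profile)
         {
             if (File.Exists(fxoPath) &&
                 File.GetLastWriteTimeUtc(fxoPath) >= File.GetLastWriteTimeUtc(hlslPath))
             {
                 return new ShaderBytecode(File.ReadAllBytes(fxoPath));
             }

             // The blob produced here is driver-independent DXBC, so compiling
             // on a dev machine vs. the target machine yields equivalent bytecode.
             var result = ShaderBytecode.CompileFromFile(
                 hlslPath, entryPoint, profile,
                 ShaderFlags.OptimizationLevel3, EffectFlags.None);

             File.WriteAllBytes(fxoPath, result.Bytecode.Data); // assumed accessor
             return result.Bytecode;
         }
     }
     [/source]

     Shipping only the .fxo blobs should also address the obfuscation worry, since comments and most identifiers do not survive compilation unless you compile with debug flags.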
  2. I have a set of shaders that I want to run on different machines, and I want to precompile them. Does the compiler optimize the code for the graphics driver of the machine it is compiled on? Say I have an NVIDIA 750M and precompile my shaders with D3DXSHADER_OPTIMIZATION_LEVEL3 set, and then I move those shaders to run on an Intel integrated GPU. Are they as efficient on the Intel GPU when precompiled on the NVIDIA as when compiled directly on the Intel at runtime?
  3. OK, my mistake: I was declaring the output textures with the wrong element types. I was using

     [source lang="hlsl"]
     Texture2D Input1 : register(t0);
     RWTexture2D<float4> Output1 : register(u0);
     [/source]

     and moved to

     [source lang="hlsl"]
     Texture2D<int> Input : register(t0);
     Texture2D<int> ParsedData : register(t1);
     Texture2D<int> FusedData : register(t2);

     RWTexture2D<int> ParsedOut : register(u0);
     RWTexture2D<int> FusedOut : register(u1);
     RWTexture2D<float> MagOut : register(u2);
     [/source]

     and I am getting a proper ramp from the input data. Still not sure why the last one should be a float when I am using a UNORM, but it works now.
  4. Getting some further weirdness. If I pass ramped data in going from 0-256 left to right, with the textures all set up as R32_UInt, but then cast the value to float in the first shader, it will show input data. It is not correct (treating a uint value packed into 32 bits as a float very much changes the values), but it does show data at that point and is no longer all 0s:

     [source lang="hlsl"]
     float output = Input1[pos];
     Output1[threadID.xy] = output;
     [/source]

     Keeping the data as an int or uint still gives me all 0s:

     [source lang="hlsl"]
     uint output = Input1[pos];
     Output1[threadID.xy] = output;
     [/source]

     gives me all black, and

     [source lang="hlsl"]
     uint output = Input1[pos];
     if (output == 0) output = 1;
     Output1[threadID.xy] = output;
     [/source]

     gives me all white.
  5. I have a chain of compute shaders that need to process a block of data. The data comes in as 32-bit signed ints; the shaders rearrange it, combine it, and then compress it a couple of times. I have debugged most of the shaders and overridden any processing to simplify the passage of data, so really it is just passing the same data from shader to shader until it gets output to the screen. The issue is that when I write data in for the first shader, what it reads is nowhere near what is expected. Right now I'm just running test data to straighten everything out. It seems like I am using the wrong DXGI format for the textures, and I have tried every combination that made any sense to me.

     The first shader (Parse) uses InputTex; this is the texture I write to from the CPU:

     [source lang="csharp"]
     DXGI.SampleDescription sampleDescription = new DXGI.SampleDescription();
     // multisample count
     sampleDescription.Count = 1;
     sampleDescription.Quality = 0;

     _InputTextureDescription = new D3D11.Texture2DDescription();
     _InputTextureDescription.Width = _SampleCount * 4;
     _InputTextureDescription.Height = _LineCount / 4;
     _InputTextureDescription.MipLevels = _InputTextureDescription.ArraySize = 1;
     _InputTextureDescription.Format = DXGI.Format.R32_UInt;
     _InputTextureDescription.SampleDescription = sampleDescription;
     _InputTextureDescription.Usage = D3D11.ResourceUsage.Dynamic;
     _InputTextureDescription.BindFlags = D3D11.BindFlags.ShaderResource;
     _InputTextureDescription.CpuAccessFlags = D3D11.CpuAccessFlags.Write;
     if (_InputTex != null)
         _InputTex.Dispose();
     _InputTex = new D3D11.Texture2D(_Device, _InputTextureDescription);
     if (_InputTexView != null)
         _InputTexView.Dispose();
     _InputTexView = new D3D11.ShaderResourceView(_Device, _InputTex);
     [/source]

     Coming out of that shader, it writes to _ParsedLinesOutTex:

     [source lang="csharp"]
     _ParsedTextureDescription = new D3D11.Texture2DDescription();
     _ParsedTextureDescription.Width = _SampleCount;
     _ParsedTextureDescription.Height = _LineCount;
     _ParsedTextureDescription.MipLevels = _ParsedTextureDescription.ArraySize = 1;
     _ParsedTextureDescription.Format = DXGI.Format.R32_UInt;
     _ParsedTextureDescription.SampleDescription = sampleDescription;
     _ParsedTextureDescription.Usage = D3D11.ResourceUsage.Dynamic;
     _ParsedTextureDescription.BindFlags = D3D11.BindFlags.ShaderResource;
     _ParsedTextureDescription.CpuAccessFlags = D3D11.CpuAccessFlags.Write;
     if (_ParsedLinesTex != null)
         _ParsedLinesTex.Dispose();
     _ParsedLinesTex = new D3D11.Texture2D(_Device, _ParsedTextureDescription);
     if (_ParsedLinesTexView != null)
         _ParsedLinesTexView.Dispose();
     _ParsedLinesTexView = new D3D11.ShaderResourceView(_Device, _ParsedLinesTex);

     _ParsedTextureDescription.Usage = D3D11.ResourceUsage.Default;
     _ParsedTextureDescription.BindFlags = D3D11.BindFlags.ShaderResource | D3D11.BindFlags.UnorderedAccess;
     _ParsedTextureDescription.CpuAccessFlags = D3D11.CpuAccessFlags.None;
     if (_ParsedLinesOutTex != null)
         _ParsedLinesOutTex.Dispose();
     _ParsedLinesOutTex = new D3D11.Texture2D(_Device, _ParsedTextureDescription);
     if (_ParsedLinesOutTexView != null)
         _ParsedLinesOutTexView.Dispose();
     _ParsedLinesOutTexView = new D3D11.UnorderedAccessView(_Device, _ParsedLinesOutTex);
     [/source]

     That output is then copied into _ParsedLinesTex to go into the next shader, and that shader writes to _FusedLinesOutTex:

     [source lang="csharp"]
     _FusedTextureDescription = new D3D11.Texture2DDescription();
     _FusedTextureDescription.Width = _SampleCount;
     _FusedTextureDescription.Height = _LineCount;
     _FusedTextureDescription.MipLevels = _FusedTextureDescription.ArraySize = 1;
     _FusedTextureDescription.Format = DXGI.Format.R32_UInt;
     _FusedTextureDescription.SampleDescription = sampleDescription;
     _FusedTextureDescription.Usage = D3D11.ResourceUsage.Dynamic;
     _FusedTextureDescription.BindFlags = D3D11.BindFlags.ShaderResource;
     _FusedTextureDescription.CpuAccessFlags = D3D11.CpuAccessFlags.Write;
     if (_FusedLinesTex != null)
         _FusedLinesTex.Dispose();
     _FusedLinesTex = new D3D11.Texture2D(_Device, _FusedTextureDescription);
     if (_FusedLinesTexView != null)
         _FusedLinesTexView.Dispose();
     _FusedLinesTexView = new D3D11.ShaderResourceView(_Device, _FusedLinesTex);

     _FusedTextureDescription.Usage = D3D11.ResourceUsage.Default;
     _FusedTextureDescription.BindFlags = D3D11.BindFlags.ShaderResource | D3D11.BindFlags.UnorderedAccess;
     _FusedTextureDescription.CpuAccessFlags = D3D11.CpuAccessFlags.None;
     if (_FusedLinesOutTex != null)
         _FusedLinesOutTex.Dispose();
     _FusedLinesOutTex = new D3D11.Texture2D(_Device, _FusedTextureDescription);
     if (_FusedLinesOutTexView != null)
         _FusedLinesOutTexView.Dispose();
     _FusedLinesOutTexView = new D3D11.UnorderedAccessView(_Device, _FusedLinesOutTex);
     [/source]

     That is in turn copied into _FusedLinesTex, which goes into the next shader, where the data is compressed to 16 bits; that shader outputs to:

     [source lang="csharp"]
     // create texture description
     _MagnitudeTextureDescription = new D3D11.Texture2DDescription();
     _MagnitudeTextureDescription.Width = _SampleCount;
     _MagnitudeTextureDescription.Height = _LineCount;
     _MagnitudeTextureDescription.MipLevels = _MagnitudeTextureDescription.ArraySize = 1;
     _MagnitudeTextureDescription.Format = DXGI.Format.R16_UNorm;
     _MagnitudeTextureDescription.SampleDescription = sampleDescription;
     _MagnitudeTextureDescription.Usage = D3D11.ResourceUsage.Default;
     _MagnitudeTextureDescription.BindFlags = D3D11.BindFlags.ShaderResource | D3D11.BindFlags.UnorderedAccess;
     _MagnitudeTextureDescription.CpuAccessFlags = D3D11.CpuAccessFlags.None;
     if (_MagnitudeOutTex != null)
         _MagnitudeOutTex.Dispose();
     _MagnitudeOutTex = new D3D11.Texture2D(_Device, _MagnitudeTextureDescription);
     if (_MagnitudeOutTexView != null)
         _MagnitudeOutTexView.Dispose();
     _MagnitudeOutTexView = new D3D11.UnorderedAccessView(_Device, _MagnitudeOutTex);
     [/source]

     This then gets copied over to another texture and compressed again, down to 8 bits.

     The Parse shader seems to work fine, and all the data coming out when I override the output is as expected (not sure why I need to divide it down for uints, but it makes it work). Running without the override, the shader takes each line and splits it into 4 lines: the input is basically l1s1, l2s1, l3s1, l4s1, l1s2, l2s2, l3s2 ... l3sN, l4sN, l5s1, l6s1, l7s1, l8s1, l5s2 ... and it spits out l1s1, l1s2, l1s3, ... l1sN, l2s1, etc.:

     [source lang="hlsl"]
     [numthreads(1, 1, 1)]
     void Parse(uint3 threadID : SV_DispatchThreadID)
     {
         int2 pos = threadID.xy;
         int sampleNumber = (pos.x * 4) + (pos.y % 4);
         int lineNumber = (pos.y / 4);
         pos = int2(sampleNumber, lineNumber);

         uint output = Input1[pos];

         // override output to make sure data is getting passed out of here correctly
         pos = threadID.xy;
         Output1[threadID.xy] = pos.y / 4294967296.0;
     }
     [/source]

     In the rest of the shaders I basically have this, to override any processing and make sure the data is moving around correctly:

     [source lang="hlsl"]
     Output1[threadID.xy] = Input1[threadID.xy];
     [/source]

     On the Magnitude shader I have it output

     [source lang="hlsl"]
     Output1[threadID.xy] = Input1[threadID.xy] * 65536;
     [/source]

     to use just the bottom 16 bits as a UNORM and clip everything above to 1. So here, with 256 lines and using the bottom 8 bits, Output1[threadID.xy] = pos.y / 4294967296.0; gives a ramp from 0-255 as expected.

     So the shaders all work together fine and pass data from one to the next as expected, except for having to output a fraction for uints, which is weird. But when I try to write data into the texture for the first shader to use, it entirely stops making sense to me:

     [source lang="csharp"]
     private void writeDatatoTex(byte[] data)
     {
         try
         {
             // override data input with test data
             byte[] newData = new byte[data.Length];
             for (int y = 0; y < LineCount / 4; y++)
                 for (int x = 0; x < _SampleCount * 4; x++)
                 {
                     newData[(x + y * _SampleCount * 4) * 4] = 0xff;     // 0;
                     newData[(x + y * _SampleCount * 4) * 4 + 1] = 0xff; // 0;
                     newData[(x + y * _SampleCount * 4) * 4 + 2] = 0xff; // 0;
                     newData[(x + y * _SampleCount * 4) * 4 + 3] = 0xff; // (byte)(y & 0xff);
                 }
             int temprow = 0;
             DataStream mappedTexDataStream;
             // map and lock the resource
             DataBox mappedTex = _Device.ImmediateContext.MapSubresource(_InputTex, 0, D3D11.MapMode.WriteDiscard, D3D11.MapFlags.None, out mappedTexDataStream);
             texrowsize = mappedTex.RowPitch;
             if (templine == null || templine.Length != (_InputTextureDescription.Width * 4))
             {
                 templine = new byte[texrowsize];
                 rowsize = (_InputTextureDescription.Width * 4);
             }
             // if unable to write to the texture
             if (!mappedTexDataStream.CanWrite) { throw new InvalidOperationException("Cannot Write to the Texture"); }
             // write new data to the texture row by row
             for (int x = 0; x < _InputTextureDescription.Height; x++)
             {
                 Array.Copy(newData, temprow, templine, 0, rowsize);
                 mappedTexDataStream.WriteRange<byte>(templine, 0, texrowsize);
                 temprow += rowsize;
             }
             // unlock the resource
             _Device.ImmediateContext.UnmapSubresource(_InputTex, 0);
         }
         catch (Exception e)
         {
             e = e;
         }
     }
     [/source]

     Feeding 0xFF into every byte of the texture gets nothing to the shader:

     [source lang="hlsl"]
     uint output = Input1[pos];
     Output1[threadID.xy] = output;
     [/source]

     gives me all black, and if I add

     [source lang="hlsl"]
     uint output = Input1[pos];
     if (output == 0) output = 1;
     Output1[threadID.xy] = output;
     [/source]

     then the whole image is white. So every value in the texture reads as 0, even though I'm writing 0xFFFFFFFF into every sample. I tried SInt and writing 0x7FFFFFFF and still nothing. How do you properly feed data into a 32-bit texture for use in a compute shader? I have no issues doing the exact same thing with 16-bit data into a UNorm.
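     For reference, a pitch-aware version of the upload loop above, as a minimal sketch: it keeps the post's SharpDX-style MapSubresource call and R32_UInt layout but seeks to each row start by RowPitch instead of staging through a scratch line buffer (WriteUIntDataToTex and the uint[] input are hypothetical). RowPitch can legally be larger than Width * 4 bytes, so assuming tightly packed rows is a common source of skewed or truncated uploads. (The all-zeros symptom itself turned out to be the SRV/UAV element types; see post 3 above.)

     [source lang="csharp"]
     // Hypothetical pitch-aware upload: one uint per R32_UInt texel.
     private void WriteUIntDataToTex(uint[] data)
     {
         int width = _InputTextureDescription.Width;   // texels per row
         int height = _InputTextureDescription.Height;

         DataStream stream;
         DataBox box = _Device.ImmediateContext.MapSubresource(
             _InputTex, 0, D3D11.MapMode.WriteDiscard, D3D11.MapFlags.None, out stream);
         try
         {
             for (int y = 0; y < height; y++)
             {
                 // RowPitch may exceed width * 4 bytes, so seek to each row
                 // start instead of assuming rows are tightly packed.
                 stream.Position = (long)y * box.RowPitch;
                 stream.WriteRange(data, y * width, width);
             }
         }
         finally
         {
             _Device.ImmediateContext.UnmapSubresource(_InputTex, 0);
         }
     }
     [/source]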
  6. I'm using DirectX as the display for an application that has an external system connected via PCIe. The device is connected through a Northwest Logic driver for doing DMA transfers to an FPGA. I'm trying to allow the system to re-enumerate in case the link is broken. The process goes: (1) the link goes down, (2) the external system's watchdog turns off the link, (3) the PC re-enumerates the device list to remove the dead device, (4) the external system comes back online, (5) the PC re-enumerates and the link is re-established. But when the link goes down, the PC hangs and has been losing the graphics driver: I get a DXGI device removed on SwapChain.Present. The two drivers should not have anything to do with each other, and I don't get device-removed issues without breaking the PCIe link to the external system. Anyone ever encounter anything like this? Where can I find more info on recovering from losing the graphics driver, and is it even possible without restarting the application? (A rough recovery sketch is below.)
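     Recovery without an application restart is possible in principle, but only by rebuilding everything: once DXGI reports device removed, the device and every resource created from it are permanently dead. A minimal sketch of the usual pattern, in SharpDX-style C# (the thread uses SlimDX, whose names differ slightly; RecreateDeviceAndResources is a hypothetical stand-in for disposing and rebuilding the device, swap chain, views, textures, and shaders):

     [source lang="csharp"]
     try
     {
         _SwapChain.Present(1, DXGI.PresentFlags.None);
     }
     catch (SharpDX.SharpDXException ex)
     {
         if (ex.ResultCode == DXGI.ResultCode.DeviceRemoved ||
             ex.ResultCode == DXGI.ResultCode.DeviceReset)
         {
             // Ask the dead device why it was removed before tearing it down;
             // this is the DXGI_ERROR_* code worth logging.
             var reason = _Device.DeviceRemovedReason;

             // Nothing created from the old device survives: dispose the swap
             // chain, all views/textures/shaders, and the device itself, then
             // rebuild them all from scratch.
             RecreateDeviceAndResources(); // hypothetical
         }
         else
         {
             throw;
         }
     }
     [/source]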
  7. Do I need to maintain register slot numbering across multiple compute shader files? E.g.:

     cs file 1 -> tex11 = t0, tex12 = t1
     cs file 2 -> tex21 = t2, tex22 = t3

     or can I use

     cs file 1 -> tex11 = t0, tex12 = t1
     cs file 2 -> tex21 = t0, tex22 = t1

     without them overwriting each other?
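     A minimal sketch of why reuse is fine, assuming SharpDX-style bindings (csParse, csFuse, and the view names are hypothetical): register slots are per-stage binding state on the device context, not global across shaders, so two compute shaders can both declare t0/u0 as long as the right views are bound before each Dispatch. They would only collide if a stale binding were left in place between dispatches.

     [source lang="csharp"]
     var ctx = _Device.ImmediateContext;

     // First shader: its t0/u0 bindings are set just before its dispatch.
     ctx.ComputeShader.Set(csParse);
     ctx.ComputeShader.SetShaderResource(0, inputView);       // t0 for csParse
     ctx.ComputeShader.SetUnorderedAccessView(0, parsedUav);  // u0 for csParse
     ctx.Dispatch(threadGroupsX, threadGroupsY, 1);

     // Second shader: rebinding slot 0 replaces the previous views, so
     // reusing t0/u0 in the second .hlsl file does not overwrite anything.
     ctx.ComputeShader.Set(csFuse);
     ctx.ComputeShader.SetShaderResource(0, parsedView);      // t0 for csFuse
     ctx.ComputeShader.SetUnorderedAccessView(0, fusedUav);   // u0 for csFuse
     ctx.Dispatch(threadGroupsX, threadGroupsY, 1);
     [/source]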
  8. OK, so I upgraded the driver to the latest beta version, and it is now telling me:

     {DXGI_ERROR_DEVICE_HUNG: Device hung due to badly formed commands. (-2005270522)}
  9. I'm using D3D11 through SlimDX. When I try to reload settings and reinitialize everything exactly the same way I created it in the first place, I end up getting SEH exceptions that then lead to DXGI_ERROR_DEVICE_REMOVED: Hardware device removed. (-2005270523). I have been seeing this intermittently for a while and have never been able to track it down. I get nothing in the output window, and SEH exceptions are about as useful as a poke in the eye when there is no more information. Microsoft says the device-removed exception comes from the device literally being removed, which is obviously and absolutely not happening, as it is in a laptop and the error is repeatable in software. GetDeviceRemovedReason just tells me it was an internal driver error.

     And when I rebuild the device, the error persists. I put a try/catch around SwapChain.Present, and when I get a device removed I rebuild the entire device, but it keeps giving me the error until I restart the application entirely.

     Anyone have any idea how I can track it down further?
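     One way to get more than a bare SEH exception out of failures like this is the D3D11 debug layer, which prints validation messages to the debugger's native output. A minimal sketch in SharpDX-style C# (the thread uses SlimDX, whose names differ slightly; the InfoQueue part assumes the SDK debug layer is installed and native debugging is enabled in the IDE):

     [source lang="csharp"]
     // Create the device with the debug layer so API misuse shows up as
     // readable messages instead of an unexplained exception or device removal.
     var device = new D3D11.Device(
         SharpDX.Direct3D.DriverType.Hardware,
         D3D11.DeviceCreationFlags.Debug);

     // Optionally break straight into the debugger when an error or corruption
     // message is logged, so the offending call is on the stack.
     using (var infoQueue = device.QueryInterface<D3D11.InfoQueue>())
     {
         infoQueue.SetBreakOnSeverity(D3D11.MessageSeverity.Corruption, true);
         infoQueue.SetBreakOnSeverity(D3D11.MessageSeverity.Error, true);
     }
     [/source]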
  10. Debug output:

     D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: Draw cannot be invoked while a bound Resource is currently mapped. The Resource bound at Pixel Shader Resource slot (0) is still mapped. [ EXECUTION ERROR #364: DEVICE_DRAW_BOUND_RESOURCE_MAPPED ]
     D3D11: ERROR: ID3D11DeviceContext::Draw: Draw cannot be invoked while a bound Resource is currently mapped. The Resource bound at Pixel Shader Resource slot (0) is still mapped. [ EXECUTION ERROR #364: DEVICE_DRAW_BOUND_RESOURCE_MAPPED ]
     (the two errors above repeat several times)
     D3D11: CORRUPTION: ID3D11DeviceContext::Unmap: Two threads were found to be executing functions associated with the same Device at the same time. This will cause corruption of memory. Appropriate thread synchronization needs to occur external to the Direct3D API. 2420 and 2432 are the implicated thread ids. [ MISCELLANEOUS CORRUPTION #28: CORRUPTED_MULTITHREADING ]
     First-chance exception at 0x75a4c41f in WPControlPanel.exe: 0x0000087D: 0x87d.

     So it is a threading issue. I have a draw function spinning on the main thread and a separate consumer thread trying to pull frames from a USB device as fast as possible, which then loads them into the texture. This was never an issue with DX10, which seemed to just wait for one thread to let go of the resource; DX11 apparently does not tolerate multiple threads using the same device context at the same time. Hopefully this is the cause of both exceptions. I think I can double-buffer into a separate texture and then CopyResource on the draw call, with a lock to minimize stall time; a rough sketch is below.
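     A minimal sketch of that double-buffer plan, assuming SharpDX-style calls (_contextLock, _uploadTex, and _displayTex are hypothetical names: _uploadTex is the dynamic texture the consumer thread fills, _displayTex the default-usage texture the pixel shader samples). One lock serializes every ImmediateContext call, which is the external synchronization the corruption message asks for, and the render thread only holds it for a quick GPU-side copy:

     [source lang="csharp"]
     private readonly object _contextLock = new object(); // guards all ImmediateContext use

     // USB consumer thread: upload each new frame into the dynamic texture.
     void UploadFrame(byte[] frame)
     {
         lock (_contextLock)
         {
             DataStream stream;
             _Device.ImmediateContext.MapSubresource(
                 _uploadTex, 0, D3D11.MapMode.WriteDiscard, D3D11.MapFlags.None, out stream);
             stream.WriteRange(frame, 0, frame.Length); // assumes tightly packed rows
             _Device.ImmediateContext.UnmapSubresource(_uploadTex, 0);
         }
     }

     // Render thread: take the lock just long enough to snapshot the frame.
     void Draw()
     {
         lock (_contextLock)
         {
             // Argument order per SharpDX: CopyResource(source, destination).
             _Device.ImmediateContext.CopyResource(_uploadTex, _displayTex);
         }
         // ... bind _displayTex's SRV and issue the Draw calls as before ...
     }
     [/source]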
  11. Yes, the texture was created with dynamic usage. No, I did not have the debug layer enabled, but now that I do, I am getting SEH exceptions at random places: from the clear render target view, map, write, and unmap, in no specific order or consistent place, literally at random from one of those. It is hitting an exception much faster than before, though; with the debug layer on I get through about 50-100 frames before it throws the SEH exceptions and crashes. Weird that it would crash on new exceptions that didn't show up before?
  12. OK, so I'm not sure I have really narrowed it down, as the try/catch block does not catch the exception, but when I do not allow the texture to be written to, it stops crashing. The map function is as follows:

     [source lang="csharp"]
     try
     {
         // if everything is in order then send data to graphics memory.
         if (pTexture != null)
         {
             DataBox mappedTex = null;
             // assign and lock the resource
             mappedTex = g_pd3dDevice.ImmediateContext.MapSubresource(
                 pTexture, 0,
                 pTexture.Description.Height * pTexture.Description.Width * 4,
                 D3D11.MapMode.WriteDiscard, D3D11.MapFlags.None);
             // if unable to hold texture
             if (!mappedTex.Data.CanWrite) { throw new InvalidOperationException("Cannot Write to the Texture"); }
             // write new data to the texture
             mappedTex.Data.WriteRange<byte>(NewData);
             // unlock the resource
             g_pd3dDevice.ImmediateContext.UnmapSubresource(pTexture, 0);
         }
     }
     catch (Exception P)
     {
         MessageBox.Show("texheight = " + pTexture.Description.Height + " \n" +
                         "texwidth = " + pTexture.Description.Width + "\n" +
                         "data size = " + NewData.Length);
         throw;
     }
     [/source]

     Could this be some kind of threading issue with the ImmediateContext? There was also some issue with having to move from a DataRectangle to a DataBox.
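     In light of the threading diagnosis that shows up in the debug output above (post 10), a minimal sketch of the immediate fix: share one lock object between this upload path and the draw loop so the ImmediateContext is never used from two threads at once. Same SlimDX-style Map call as in the post; _d3dLock is a hypothetical name:

     [source lang="csharp"]
     // Hypothetical: a single lock shared by every thread that touches the
     // ImmediateContext (the spinning draw loop and this texture upload).
     private static readonly object _d3dLock = new object();

     if (pTexture != null)
     {
         lock (_d3dLock)
         {
             // Same Map/Write/Unmap as above, now serialized against Draw.
             DataBox mappedTex = g_pd3dDevice.ImmediateContext.MapSubresource(
                 pTexture, 0,
                 pTexture.Description.Height * pTexture.Description.Width * 4,
                 D3D11.MapMode.WriteDiscard, D3D11.MapFlags.None);
             mappedTex.Data.WriteRange<byte>(NewData);
             g_pd3dDevice.ImmediateContext.UnmapSubresource(pTexture, 0);
         }
     }
     [/source]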
  13. I moved my code from DirectX 10 to 11 so I could use a compute shader to process the data on the GPU multiple times before rendering it. I made the move and got all the explicit errors/exceptions out. Now I am running into a "vshost32.exe has stopped working"; it gives me the option to debug, but then tells me "a debugger is attached to myprogram.vshost.exe but not configured to debug this unhandled exception. to debug this exception, detach the current debugger."

     So I went digging: I set VS to break on all thrown exceptions and turned on unmanaged debugging. When it hits the exception there is no source available to show where any of this is happening, only some point buried in the assembly, but it always seems to land in nvwgf2um.dll in the call stack:

     > nvwgf2um.dll!09bbd565()
     [Frames below may be incorrect and/or missing, no symbols loaded for nvwgf2um.dll]
     nvwgf2um.dll!09b945fd()
     nvwgf2um.dll!09d30f8b()
     nvwgf2um.dll!09d12419()
     nvwgf2um.dll!09cd82ac()
     nvwgf2um.dll!09b68296()
     nvwgf2um.dll!09b65b51()
     nvwgf2um.dll!09c212f8()
     nvwgf2um.dll!0a1de315()
     nvwgf2um.dll!0a1de39f()
     kernel32.dll!763a3677()
     ntdll.dll!776c9f42()
     ntdll.dll!776c9f15()

     I dug a little deeper, and this seems to happen in BF3 quite a bit, where people say a newish release of the NVIDIA drivers is the culprit and to just roll back the drivers. So I did, and it did not help; I installed the latest, and again the same issue; then I went to laptopvideo2go and installed a modded one, and nvwgf2um.dll still crashes. Anyone have any ideas? Am I stuck waiting for an update from NVIDIA, or is it possibly something I did wrong in my code?