Showing results for tags 'DX11' in content posted in Graphics and GPU Programming.

Found 1105 results

  1. Hi, I am working on a project where I'm trying to use Forward Plus rendering on point lights. I have a simple reflective scene with many point lights moving around it. I am using an effects file (.fx) to keep my shaders in one place. I am having a problem with the compute shader code: I cannot get it to calculate the tiles and the lighting properly. Is there anyone willing to help me set up my compute shader? Thank you in advance for any replies and interest!
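A minimal host-side sketch of how a tiled light-culling dispatch is usually sized, in case it helps narrow down the compute-shader trouble. All names here (lightCullCS, depthSRV, lightIndexListUAV, screenWidth, screenHeight) are assumptions, and TILE_SIZE must match the shader's own [numthreads(16,16,1)]-style declaration:

```cpp
// One thread group per screen tile; 16x16-pixel tiles are a common choice.
const UINT TILE_SIZE = 16;
UINT tilesX = (screenWidth  + TILE_SIZE - 1) / TILE_SIZE; // round up
UINT tilesY = (screenHeight + TILE_SIZE - 1) / TILE_SIZE;

context->CSSetShader(lightCullCS, nullptr, 0);
context->CSSetShaderResources(0, 1, &depthSRV);           // scene depth for per-tile min/max
context->CSSetUnorderedAccessViews(0, 1, &lightIndexListUAV, nullptr);
context->Dispatch(tilesX, tilesY, 1);

// Unbind so the depth buffer and UAV can be used by the later shading pass:
ID3D11ShaderResourceView* nullSRV = nullptr;
ID3D11UnorderedAccessView* nullUAV = nullptr;
context->CSSetShaderResources(0, 1, &nullSRV);
context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
```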
  2. I am trying to draw a screen-aligned quad with arbitrary sizes. Currently I just send 4 vertices to the vertex shader like so:

```cpp
pDevCon->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
pDevCon->Draw(4, 0);
```

Then in the vertex shader I am doing this:

```hlsl
float4 main(uint vI : SV_VERTEXID) : SV_POSITION
{
    float2 texcoord = float2(vI & 1, vI >> 1);
    return float4((texcoord.x - 0.5f) * 2, -(texcoord.y - 0.5f) * 2, 0, 1);
}
```

That gets me a screen-sized quad. OK... what's the correct way to get arbitrary sizes? I have messed around with various numbers, but I think I don't quite get something in these relationships. One thing I tried is:

```hlsl
float4 quad = float4((texcoord.x - (xpos/screensizex)) * (width/screensizex),
                    -(texcoord.y - (ypos/screensizey)) * (height/screensizey), 0, 1);
```

where xpos and ypos are the number of pixels from the upper right corner, and width and height are the desired size of the quad in pixels. This gets me somewhat close, but not right: it comes out a bit too small, so I'm missing something. Any ideas?
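For what it's worth, a hedged sketch of the pixel-to-NDC mapping the last formula is missing: NDC spans [-1, 1] across the whole screen, so a pixel offset has to be divided by the screen size and then doubled, and Y is flipped. This assumes xpos/ypos are measured from the top-left corner (adjust the X term if they are really measured from the right):

```cpp
// Hypothetical helper: convert a pixel-space rectangle into the four NDC
// corner coordinates a quad's vertices need.
struct QuadNDC { float left, top, right, bottom; };

QuadNDC PixelRectToNDC(float xpos, float ypos, float width, float height,
                       float screenW, float screenH)
{
    QuadNDC q;
    q.left   =  (xpos / screenW) * 2.0f - 1.0f;           // x: [0,W] -> [-1,1]
    q.right  = ((xpos + width) / screenW) * 2.0f - 1.0f;
    q.top    = 1.0f -  (ypos / screenH) * 2.0f;           // y: [0,H] -> [1,-1]
    q.bottom = 1.0f - ((ypos + height) / screenH) * 2.0f;
    return q;
}
// The four corners can then be passed to the vertex shader in a constant
// buffer and selected with the same (vI & 1, vI >> 1) trick.
```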
  3. Hello! I have a problem with a reflection shader for D3D11, a linker error:

```
1>engine_render_d3d11_system.obj : error LNK2001: unresolved external symbol IID_ID3D11ShaderReflection
```

I tried to add this, as MSDN tells me, but still no luck:

```cpp
#include <D3D11Shader.h>
#include <D3Dcompiler.h>
#include <D3DCompiler.inl>
#pragma comment(lib, "D3DCompiler.lib")
//#pragma comment(lib, "D3DCompiler_47.lib")
```

I think a lot of people have done this already; what am I missing? I also found this article, http://mattfife.com/?p=470, which recommends putting the DirectX SDK headers and libs before the Windows SDK, but I am not using the DirectX SDK for this project at all; should I?
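Two commonly suggested fixes for this particular LNK2001, sketched below: the IID_* GUID symbols live in dxguid.lib rather than D3DCompiler.lib, and alternatively __uuidof avoids the extern GUID symbol entirely (bytecode/bytecodeSize here stand in for your compiled shader blob):

```cpp
// Fix 1: link the GUID library that actually defines IID_ID3D11ShaderReflection.
#pragma comment(lib, "dxguid.lib")

// Fix 2: let the compiler generate the GUID instead of referencing the extern symbol.
ID3D11ShaderReflection* reflector = nullptr;
HRESULT hr = D3DReflect(bytecode, bytecodeSize,
                        __uuidof(ID3D11ShaderReflection), (void**)&reflector);
```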
  4. Is it possible to asynchronously create a Texture2D using DirectX 11? I have a native Unity plugin that downloads 8K textures from a server and displays them to the user in a VR application. This works well, but there's a large frame drop when calling CreateTexture2D. To remedy this, I've tried creating a separate thread that creates the texture, but the frame drop is still present. Is there anything else I could do to prevent that frame drop from occurring?
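One hedged idea, since device methods (unlike context methods) are free-threaded in D3D11: create the texture together with its initial data in a single CreateTexture2D call on the worker thread, so no separate upload has to happen on the render thread afterwards. The names pixels and rowPitchBytes are assumptions standing in for the downloaded data:

```cpp
// Sketch: worker-thread creation of a fully initialized, never-rewritten texture.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = 8192;
desc.Height = 8192;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_IMMUTABLE;            // contents provided once, at creation
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

D3D11_SUBRESOURCE_DATA init = {};
init.pSysMem = pixels;                         // downloaded texel data
init.SysMemPitch = rowPitchBytes;              // bytes per row

ID3D11Texture2D* tex = nullptr;
HRESULT hr = device->CreateTexture2D(&desc, &init, &tex);
// Hand `tex` (or an SRV of it) back to the render thread once creation finishes.
```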
  5. Hi, I've been trying to implement a basic Gaussian blur using the Gaussian formula, and here is what it looks like so far:

```cpp
float gaussian(float x, float sigma)
{
    float pi = 3.14159;
    float sigma_square = sigma * sigma;
    float a = 1 / sqrt(2 * pi * sigma_square);
    float b = exp(-((x * x) / (2 * sigma_square)));
    return a * b;
}
```

My problem is that I don't quite know what sigma should be. It seems that if I provide a random value for sigma, the weights in my kernel won't add up to 1. So I ended up calling my gaussian function with sigma == 1, which gives me weights adding up to 1, but also a very subtle blur. Here is what my kernel looks like with sigma == 1:

```
[0] 0.0033238872995488885
[1] 0.023804742479357766
[2] 0.09713820127276819
[3] 0.22585307043511713
[4] 0.29920669915475656
[5] 0.22585307043511713
[6] 0.09713820127276819
[7] 0.023804742479357766
[8] 0.0033238872995488885
```

I would have liked it to be more "rounded" at the top, with a better spread, instead of wasting [0], [1], [2] on values below 0.1. Based on my experiments, the key to this is to provide a different sigma, but if I do, my kernel values no longer add up to 1, which results in a darker blur. I've found this post ... which helped me a bit, but I am really confused by the part where the author divides sigma by 3. Can someone please explain how sigma works? How is it related to my kernel size, and how can I balance my weights with different sigmas, etc.? Thanks :-)
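A sketch of the usual way out: drop the 1/sqrt(2*pi*sigma^2) factor (it is a constant that cancels anyway) and normalize by the sum of the weights you actually sampled, so any sigma yields weights that add up to exactly 1. The divide-by-3 trick is a rule of thumb: the Gaussian is negligible beyond roughly 3*sigma, so picking sigma = radius / 3 makes the kernel cover the useful part of the curve.

```cpp
#include <cmath>
#include <vector>

// Build a 1-D Gaussian kernel of (2*radius + 1) taps for an arbitrary sigma,
// normalized so the weights always sum to 1 regardless of sigma.
std::vector<float> MakeGaussianKernel(int radius, float sigma)
{
    std::vector<float> weights(2 * radius + 1);
    float sum = 0.0f;
    for (int i = -radius; i <= radius; ++i)
    {
        // The 1/sqrt(2*pi*sigma^2) factor is omitted; normalization removes it.
        float w = std::exp(-(i * i) / (2.0f * sigma * sigma));
        weights[i + radius] = w;
        sum += w;
    }
    for (float& w : weights)
        w /= sum; // renormalize: the taps now add up to exactly 1
    return weights;
}

// e.g. a 9-tap kernel with the radius/3 rule: MakeGaussianKernel(4, 4 / 3.0f);
// Larger sigma = flatter, wider curve; the normalization keeps brightness constant.
```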
  6. I'm trying to draw a circle using math:

```cpp
class coordenates
{
public:
    coordenates(float x = 0, float y = 0)
    {
        X = x;
        Y = y;
    }
    float X;
    float Y;
};

coordenates RotationPoints(coordenates ActualPosition, double angle)
{
    coordenates NewPosition;
    NewPosition.X = ActualPosition.X * sin(angle) - ActualPosition.Y * sin(angle);
    NewPosition.Y = ActualPosition.Y * cos(angle) + ActualPosition.X * cos(angle);
    return NewPosition;
}
```

But now I know that this has a problem, because I don't use the origin. Even so, I'm having trouble with how to rotate the point. These coordinates work between -1 and 1 in floating point. Can anyone advise me on how to create the circle?
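For reference, a sketch of the standard 2-D rotation about an arbitrary origin, reusing the coordenates class from the post. Note that each output component mixes sin and cos of both inputs, which the code above does not:

```cpp
#include <cmath>

// Rotate p around `origin` by `angle` radians: translate so the origin sits
// at (0,0), apply the rotation matrix, translate back.
coordenates RotateAround(coordenates p, coordenates origin, double angle)
{
    float dx = p.X - origin.X;
    float dy = p.Y - origin.Y;
    coordenates r;
    r.X = origin.X + dx * (float)cos(angle) - dy * (float)sin(angle);
    r.Y = origin.Y + dx * (float)sin(angle) + dy * (float)cos(angle);
    return r;
}

// Circle points: start at (radius, 0) relative to the center and step the angle.
// for (int i = 0; i < N; ++i) points[i] = RotateAround(start, center, i * 2.0 * 3.14159265 / N);
```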
  7. Hi all! I am trying to use a sun shafts effect via post-process in my 3D engine, but I have some artifacts on the final image (please see attached images). The effect contains the following passes: 1) depth scene pass; 2) "shafts pass", using the DepthPass texture + the RGBA back buffer texture; 3) shafts pass texture + RGBA back buffer texture. Shafts shader for pass 2:

```glsl
uniform sampler2D FullSampler;  // RGBA Back Buffer
uniform sampler2D DepthSampler;
varying vec2 tex;

#ifndef saturate
float saturate(float val)
{
    return clamp(val, 0.0, 1.0);
}
#endif

void main(void)
{
    vec2 uv = tex;
    float sceneDepth = texture2D(DepthSampler, uv.xy).r;
    vec4 scene = texture2D(FullSampler, tex);
    float fShaftsMask = (1.0 - sceneDepth);
    gl_FragColor = vec4(scene.xyz * saturate(sceneDepth), fShaftsMask);
}
```

Final shader:

```glsl
uniform sampler2D FullSampler;  // RGBA Back Buffer
uniform sampler2D BlurSampler;  // shafts sampler
varying vec4 Sun_pos;
const vec4 ShaftParams = vec4(0.1, 2.0, 0.1, 2.0);
varying vec2 Tex_UV;

#ifndef saturate
float saturate(float val)
{
    return clamp(val, 0.0, 1.0);
}
#endif

vec4 blendSoftLight(vec4 a, vec4 b)
{
    vec4 c = 2.0 * a * b + a * a * (1.0 - 2.0 * b);
    vec4 d = sqrt(a) * (2.0 * b - 1.0) + 2.0 * a * (1.0 - b);
    // TODO: check how Crysis actually does this
    //return (b < 0.5) ? c : d;
    return any(lessThan(b, vec4(0.5, 0.5, 0.5, 0.5))) ? c : d;
}

void main(void)
{
    vec4 sun_pos = Sun_pos;
    vec2 sunPosProj = sun_pos.xy;
    //float sign = sun_pos.w;
    float sign = 1.0;

    vec2 sunVec = sunPosProj.xy - (Tex_UV.xy - vec2(0.5, 0.5));
    float sunDist = saturate(sign) * saturate(1.0 - saturate(length(sunVec) * ShaftParams.y));
    sunVec *= ShaftParams.x * sign;

    vec4 accum;
    vec2 tc = Tex_UV.xy;
    tc += sunVec;
    accum = texture2D(BlurSampler, tc);
    tc += sunVec;
    accum += texture2D(BlurSampler, tc) * 0.875;
    tc += sunVec;
    accum += texture2D(BlurSampler, tc) * 0.75;
    tc += sunVec;
    accum += texture2D(BlurSampler, tc) * 0.625;
    tc += sunVec;
    accum += texture2D(BlurSampler, tc) * 0.5;
    tc += sunVec;
    accum += texture2D(BlurSampler, tc) * 0.375;
    tc += sunVec;
    accum += texture2D(BlurSampler, tc) * 0.25;
    tc += sunVec;
    accum += texture2D(BlurSampler, tc) * 0.125;

    accum *= 0.25 * vec4(sunDist, sunDist, sunDist, 1.0);
    accum.w += 1.0 - saturate(saturate(sign * 0.1 + 0.9));

    vec4 cScreen = texture2D(FullSampler, Tex_UV.xy);
    vec4 cSunShafts = accum;
    float fShaftsMask = saturate(1.00001 - cSunShafts.w) * ShaftParams.z * 2.0;
    float fBlend = cSunShafts.w;

    vec4 sunColor = vec4(0.9, 0.8, 0.6, 1.0);
    accum = cScreen + cSunShafts.xyzz * ShaftParams.w * sunColor * (1.0 - cScreen);
    accum = blendSoftLight(accum, sunColor * fShaftsMask * 0.5 + 0.5);
    gl_FragColor = accum;
}
```

Demo project: Demo Project (the post-process shaders are in Shaders/SunShaft/). What am I doing wrong? Thanks!
  8. I have some code (not written by me) that is creating a window to draw into using these:

     • CreateDXGIFactory1 to create an IDXGIFactory1
     • dxgi_factory->CreateSwapChain to create an IDXGISwapChain
     • D3D11CreateDevice to create an ID3D11Device and an ID3D11DeviceContext
     • Other code (that I don't quite understand) that creates various IDXGIAdapter1 and IDXGIOutput instances
     • Still other code (that I don't quite understand) that creates some ID3D11RenderTargetView and ID3D11DepthStencilView instances and does something with those as well (possibly loading them into the graphics context somewhere, although I can't quite see where)

     What I want to do is create a second window and draw to it as well as to the main window (all drawing would happen on the one thread, with all the drawing to the sub-window happening in one block and outside of any rendering being done to the main window). Do I need to create a second IDXGISwapChain for my new window? Do I need to create a second ID3D11Device or different IDXGIAdapter1 and IDXGIOutput interfaces? How do I tell Direct3D which window I want to render to? Are there particular D3D11 functions I should be looking for that are involved in this? I am good with Direct3D 9, but this is the first time I am working with Direct3D 11 (and the guy who wrote the code has left our team, so I can't ask him for help).
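A hedged sketch of what this usually looks like, assuming device and dxgi_factory are the existing objects: no second device or adapter is needed; each window gets its own swap chain, whose OutputWindow field is what ties rendering to that HWND, plus its own render-target view. hwndSecond is an assumption for the new window's handle:

```cpp
DXGI_SWAP_CHAIN_DESC sd = {};
sd.BufferCount = 1;
sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // width/height 0 = size to window
sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
sd.OutputWindow = hwndSecond;                      // this selects the window
sd.SampleDesc.Count = 1;
sd.Windowed = TRUE;

IDXGISwapChain* swapChain2 = nullptr;
HRESULT hr = dxgi_factory->CreateSwapChain(device, &sd, &swapChain2);

// Per-window render-target view from that swap chain's back buffer:
ID3D11Texture2D* backBuffer = nullptr;
swapChain2->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&backBuffer);
ID3D11RenderTargetView* rtv2 = nullptr;
device->CreateRenderTargetView(backBuffer, nullptr, &rtv2);
backBuffer->Release();

// Each frame, for the sub-window block:
//   context->OMSetRenderTargets(1, &rtv2, depthView2);
//   ... draw ...
//   swapChain2->Present(0, 0);
```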
  9. I have found an example here, http://xboxforums.create.msdn.com/forums/t/66208.aspx, about sharing surfaces/textures created by DX11 with an ID2D1DCRenderTarget; however, that example is based on a swap chain bound to a window (HWND). What I need is to draw a shared texture into GDI, as this is the only plugin interface in an old piece of software I have. Any ideas? Here is what I do:

```cpp
SharedSurface::SharedSurface()
{
    // Initialize the ID2D1Factory object
    D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &pD2DFactory);
    //D2D1CreateFactory(D2D1_FACTORY_TYPE_MULTI_THREADED, &pD2DFactory);

    // Initialize the ID2D1DCRenderTarget
    D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
        D2D1_RENDER_TARGET_TYPE_HARDWARE, // D2D1_RENDER_TARGET_TYPE_DEFAULT,
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED),
        0, 0,
        D2D1_RENDER_TARGET_USAGE_GDI_COMPATIBLE,
        D2D1_FEATURE_LEVEL_DEFAULT);
    HRESULT hr = pD2DFactory->CreateDCRenderTarget(&props, &pRT);

    DWORD createDeviceFlags = 0;
    createDeviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
    ID3D11DeviceContext* context;
    D3D_FEATURE_LEVEL fl;

    DXGI_SWAP_CHAIN_DESC sd;
    ZeroMemory(&sd, sizeof(sd));
    sd.BufferCount = 1;
    sd.BufferDesc.Width = width;
    sd.BufferDesc.Height = height;
    sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    sd.BufferDesc.RefreshRate.Numerator = 60;
    sd.BufferDesc.RefreshRate.Denominator = 1;
    sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT | DXGI_USAGE_SHARED;
    sd.OutputWindow = 0; // g_hWnd;
    sd.SampleDesc.Count = 1;
    sd.SampleDesc.Quality = 0;
    sd.Windowed = FALSE; // TRUE;

    hr = D3D11CreateDeviceAndSwapChain(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
        createDeviceFlags, nullptr, 0, D3D11_SDK_VERSION, &sd, &pSwapChain,
        &mDevice, &fl, &context);
    hr = m_pSwapChain->GetBuffer(0, IID_PPV_ARGS(&pBackBuffer));
}

bool SharedSurface::CreateTexture(ID3D11Device* device, UINT width, UINT height)
{
    HRESULT hr;
    D3D11_TEXTURE2D_DESC desc;
    ZeroMemory(&desc, sizeof(desc));
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX; // D3D11_RESOURCE_MISC_SHARED;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
    desc.CPUAccessFlags = 0; // D3D11_CPU_ACCESS_READ;

    if (device != nullptr)
        mDevice = device;
    hr = mDevice->CreateTexture2D(&desc, NULL, &pTexture);

    IDXGIResource* pDXGIResource = NULL;
    hr = pTexture->QueryInterface(__uuidof(IDXGIResource), (void**)&pDXGIResource);
    if SUCCEEDED(hr)
    {
        hr = pDXGIResource->GetSharedHandle(&sharedHandle);
        pDXGIResource->Release();
        if SUCCEEDED(hr)
        {
            hr = pTexture->QueryInterface(__uuidof(IDXGIKeyedMutex), (LPVOID*)&pMutex);
        }
    }
    hr = pTexture->QueryInterface(__uuidof(IDXGISurface), (void**)&pSurface);

    FLOAT dpiX;
    FLOAT dpiY;
    pD2DFactory->GetDesktopDpi(&dpiX, &dpiY);
    D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
        D2D1_RENDER_TARGET_TYPE_HARDWARE /*D2D1_RENDER_TARGET_TYPE_DEFAULT*/,
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED),
        dpiX, dpiY);
    hr = pD2DFactory->CreateDxgiSurfaceRenderTarget(pBackBuffer, &props, &pBackBufferRT);

    DXGI_SURFACE_DESC sdesc;
    D2D1_BITMAP_PROPERTIES bp;
    ZeroMemory(&bp, sizeof(bp));
    pSurface->GetDesc(&sdesc);
    bp.pixelFormat = D2D1::PixelFormat(sdesc.Format, D2D1_ALPHA_MODE_PREMULTIPLIED);
    hr = pBackBufferRT->CreateSharedBitmap(__uuidof(ID3D11Texture2D), pSurface, &bp, &pBitmap);
    return SUCCEEDED(hr);
}

void SharedSurface::Draw()
{
    pBackBufferRT->BeginDraw();
    pBackBufferRT->DrawBitmap(pBitmap);
    pBackBufferRT->EndDraw();
    pBackBufferRT->Present();
}

void SharedSurface::BindDC(HDC hdc, int width, int height)
{
    RECT rct;
    rct.top = 0;
    rct.left = 0;
    rct.right = width;
    rct.bottom = height;
    pRT->BindDC(hdc, &rct);
}

// HOW TO EXCHANGE between pBackBufferRT and pRT?
```
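A hedged, unverified sketch of one way to bridge to GDI. Two points worth noting: ID2D1RenderTarget has no Present method (EndDraw is what flushes the drawing), and a D2D render target created with D2D1_RENDER_TARGET_USAGE_GDI_COMPATIBLE exposes ID2D1GdiInteropRenderTarget, whose GetDC hands out an HDC that can be BitBlt'ed into the plugin's DC. This assumes pBackBufferRT is created with that usage flag, which the code above does not currently do; pluginHdc is an assumption for the plugin's DC:

```cpp
ID2D1GdiInteropRenderTarget* gdiRT = nullptr;
HRESULT hr = pBackBufferRT->QueryInterface(__uuidof(ID2D1GdiInteropRenderTarget),
                                           (void**)&gdiRT);

pBackBufferRT->BeginDraw();
pBackBufferRT->DrawBitmap(pBitmap);

HDC dc = nullptr;
hr = gdiRT->GetDC(D2D1_DC_INITIALIZE_MODE_COPY, &dc); // valid only between BeginDraw/EndDraw
BitBlt(pluginHdc, 0, 0, width, height, dc, 0, 0, SRCCOPY);
gdiRT->ReleaseDC(nullptr);

hr = pBackBufferRT->EndDraw();
```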
  10. Hi, I am trying to use shared textures with my rendering, but with no success. I create a texture which I share with another process, which draws on that texture. Later on I want to use that texture in my rendering loop.

```cpp
// class members - once initialized, kept static during the rendering
ID3D11ShaderResourceView* g_mTexture;
ID3D11Texture2D* mTexture;

bool MyTexture::CreateTexture(ID3D11Device* device, UINT width, UINT height, int targetWidth, int targetHeight)
{
    HRESULT hr;
    D3D11_TEXTURE2D_DESC desc;
    ZeroMemory(&desc, sizeof(desc));
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED; // D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.CPUAccessFlags = 0;

    hr = device->CreateTexture2D(&desc, NULL, &mTexture);
    if (SUCCEEDED(hr))
    {
        D3D11_RENDER_TARGET_VIEW_DESC rtDesc;
        ZeroMemory(&rtDesc, sizeof(rtDesc));
        rtDesc.Format = desc.Format;
        rtDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
        rtDesc.Texture2D.MipSlice = 0;

        D3D11_SHADER_RESOURCE_VIEW_DESC svDesc;
        ZeroMemory(&svDesc, sizeof(svDesc));
        svDesc.Format = desc.Format;
        svDesc.Texture2D.MipLevels = 1;
        svDesc.Texture2D.MostDetailedMip = 0;
        svDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        hr = device->CreateShaderResourceView(mTexture, &svDesc, &g_mTexture);
    }

    IDXGIResource* pDXGIResource;
    HRESULT hr2 = mTexture->QueryInterface(__uuidof(IDXGIResource), (void**)&pDXGIResource);
    if SUCCEEDED(hr2)
    {
        hr2 = pDXGIResource->GetSharedHandle(&sharedHandle);
        pDXGIResource->Release();
        if SUCCEEDED(hr2)
        {
            OutputDebug(L"RequestSharedHandle: w=%d, h=%d, handle=%d", width, height, sharedHandle);
            return (unsigned long long)sharedHandle;
        }
    }
    ....
}
```

The problem is using that shared handle during my rendering loop, as the texture is always black. Below are all the options I tried.

Option 1 (bare texture): I simply tried to use the mTexture object created with device->CreateTexture2D() that I shared with another process, so basically I left my legacy code untouched, with the exception of CreateTexture, where I modified the D3D11_TEXTURE2D_DESC options for the shared version. In my rendering loop I used g_mTexture, created during init by CreateShaderResourceView:

```cpp
pDeviceContext->PSSetShaderResources(0, 1, &(*it_region)->g_mTexture);
```

In the legacy version (without the shared texture) I simply mapped the texture, copied the bits, and unmapped, and it was working fine:

```cpp
D3D11_MAPPED_SUBRESOURCE mappedResource;
HRESULT hr = pDeviceContext->Map(mTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
if (SUCCEEDED(hr))
{
    BYTE* mappedData = reinterpret_cast<BYTE*>(mappedResource.pData);
    if (mappedData != NULL)
    {
        for (UINT i = 0; i < h; ++i)
        {
            if (bUpsideDown)
            {
                memcpy(mappedData, bits, w * 4);
                mappedData -= mappedResource.RowPitch;
                bits += (UINT)w * 4;
            }
            else
            {
                memcpy(mappedData, bits, w * 4);
                mappedData += mappedResource.RowPitch;
                bits += (UINT)w * 4;
            }
        }
    }
    if ((*it_region)->mTexture != NULL)
        pDeviceContext->Unmap(mTexture, 0);
}
```

Option 2 (OpenSharedResource): in this version I tried to get a handle to the shared texture using OpenSharedResource, in two variants:

```cpp
// Option 2.1 - get pTexture directly
ID3D11Texture2D* pTexture; // temp handle
HRESULT hr = mDevice->OpenSharedResource(sharedHandle, __uuidof(ID3D11Texture2D), (LPVOID*)&pTexture);

// Option 2.2 - get pTexture indirectly by using QueryInterface
IDXGIResource* sharedResource = 0;
HRESULT hr = mDevice->OpenSharedResource(sharedHandle, __uuidof(ID3D11Texture2D), (LPVOID*)&sharedResource);
hr = sharedResource->QueryInterface(__uuidof(ID3D11Texture2D), (void**)(&pTexture));
OutputDebug(L"OpenSharedResource:%d\n", hr);
```

Now, having the new temporary pTexture handle, I tried the following options (in combination with the above options for retrieving the shared pTexture):

```cpp
// Option 2.3 - copy pTexture into mTexture
mDevice->CopyResource(mTexture, pTexture);
pDeviceContext->PSSetShaderResources(0, 1, &g_mTexture);

// Option 2.4 - create a new temporary shader resource using the temporary pTexture
ID3D11ShaderResourceView* g_pTexture; // temp handle
hr = device->CreateShaderResourceView(pTexture, &svDesc, &g_pTexture);
pDeviceContext->PSSetShaderResources(0, 1, &g_pTexture);
```

Option 3 (mutex version): basically I tried all the above options using the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX flag during texture creation and the following code to acquire the mutex:

```cpp
UINT acqKey = 1;
UINT relKey = 0;
DWORD timeOut = 5;
IDXGIKeyedMutex* pMutex;
hr = pTexture->QueryInterface(__uuidof(IDXGIKeyedMutex), (LPVOID*)&pMutex);

DWORD result = pMutex->AcquireSync(acqKey, timeOut);
if (result == WAIT_OBJECT_0)
{
    // rendering using options 2.2 and 2.3
    ....
}
else
{
    // nothing here - skip frame
}

result = pMutex->ReleaseSync(relKey);
if (result == WAIT_OBJECT_0)
    return S_OK;
```

None of those solutions worked for me. Any hints?
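Two hedged observations plus a sketch. First, a D3D11_USAGE_DEFAULT texture cannot be Mapped at all, so the legacy Map path cannot work once the texture is shared with DEFAULT usage. Second, the usual consumer-side pattern is: open the handle once at startup, build the SRV from the opened texture (not from the locally created one), and guard every use with the keyed mutex, since without it there is no guarantee the other process's writes ever become visible. Both processes must agree on the key values (producer ReleaseSync(1), consumer AcquireSync(1), and back):

```cpp
// One-time setup on the consuming device (not per frame):
ID3D11Texture2D* sharedTex = nullptr;
HRESULT hr = mDevice->OpenSharedResource(sharedHandle,
                 __uuidof(ID3D11Texture2D), (void**)&sharedTex);

ID3D11ShaderResourceView* sharedSRV = nullptr;
hr = mDevice->CreateShaderResourceView(sharedTex, nullptr, &sharedSRV);

IDXGIKeyedMutex* mutex = nullptr;
hr = sharedTex->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&mutex);

// Per frame: acquire with the key the producer last released.
if (mutex->AcquireSync(1, 5) == S_OK)
{
    pDeviceContext->PSSetShaderResources(0, 1, &sharedSRV);
    // ... draw ...
    mutex->ReleaseSync(0); // hand the texture back to the producer
}
```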
  11. While I am running my DirectX 11 engine in windowed mode, I can use the snipping tool to get a screen capture of it. However, if I run it in full screen and hit the print screen button, then try to copy-paste that into Paint or Photoshop, there is nothing there: print screen isn't capturing the image for some reason, and I can't use the snipping tool to get the screen capture while it's in full screen. Is there some setting or something that's not allowing Windows to detect the game while it's in full screen? It is very important to get these captures for my portfolio, and if I capture in windowed mode, it looks really bad and low-res. Here's a link to the D3D class in my engine for reference: https://github.com/mister51213/DirectX11Engine/blob/master/DirectX11Engine/D3DClass.cpp Any help would be much appreciated, thanks!
  12. Would a 1024x16384 texture generally perform as well as a 4096x4096 texture? Memory-wise they should be the same, but perhaps having such a large dimension as 16384 would be worse anyway? I'm considering making a texture atlas like a long strip, where it would be tileable in one direction.
  13. I am having trouble with the graphics device disconnecting with the reason DXGI_ERROR_DRIVER_INTERNAL_ERROR on some of our devices after our game terminal has been running for a few hours, specifically the Intel HD 530, though a few of the other Intel-based GPUs will crash as well. The odd thing is that the same code will run indefinitely on the same computers if we put in a discrete card. It also runs without any problem on other Intel chipsets, even those that are quite a bit older. We have tried the basics, like Windows updates and getting the latest drivers from Intel. We have also verified that this is not a computer-specific problem, as the problem persists across different computers with the same build. I have tried adding ID3D11DeviceContext::Flush calls around resource creation, as suggested by https://software.intel.com/en-us/forums/graphics-driver-bug-reporting/topic/610376, with no apparent help. I have also verified that no graphics handles are being held onto for a very long time, and our GPU memory usage never gets much above 400 MB, which should be well within the ability of an integrated card to handle. We actually wrote a watchdog application to monitor that, and usually the device is removed while the memory usage is lower than normal. I'm having a hard time finding any resources that would help us find the root problem, as DXGI_ERROR_DRIVER_INTERNAL_ERROR is not a very helpful error message. We are using the DirectX 11 API, and run on a variety of Windows-based computers, including both all-in-ones and desktops. I would appreciate any help or ideas anyone has, as we haven't been able to make much forward progress even after a few weeks of intensive debugging and engine changes.
  14. My shadows (drawn using a depth buffer which I later sample from in the shadow texture) seem to be detaching slightly from their objects. I looked this up and I think it's "peter panning", and they were saying you have to change the depth offset, but I'm not sure how to do that. https://msdn.microsoft.com/en-us/library/windows/desktop/ee416324(v=vs.85).aspx Is there a fast way I can tweak the code to fix this? Should I change the "bias" perhaps, or something else? Thanks. Here is the code for the shadows, for reference:

```hlsl
// ENTRY POINT
float4 main(PixelInputType input) : SV_TARGET
{
    float2 projectTexCoord;
    float depthValue;
    float lightDepthValue;
    //float4 lightColor = float4(0,0,0,0);
    float4 lightColor = float4(0.05, 0.05, 0.05, 1);

    // Set the bias value for fixing the floating point precision issues.
    float bias = 0.001f;

    //////////////// SHADOWING LOOP ////////////////
    for (int i = 0; i < NUM_LIGHTS; ++i)
    {
        // Calculate the projected texture coordinates.
        projectTexCoord.x = input.vertex_ProjLightSpace[i].x / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;
        projectTexCoord.y = -input.vertex_ProjLightSpace[i].y / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;

        if ((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y))
        {
            // Sample the shadow map depth value from the depth texture using the
            // sampler at the projected texture coordinate location.
            depthValue = depthTextures[i].Sample(SampleTypeClamp, projectTexCoord).r;

            // Calculate the depth of the light.
            lightDepthValue = input.vertex_ProjLightSpace[i].z / input.vertex_ProjLightSpace[i].w;

            // Subtract the bias from the lightDepthValue.
            lightDepthValue = lightDepthValue - bias;

            // Compare the depth of the shadow map value and the depth of the light to
            // determine whether to shadow or to light this pixel. If the light is in
            // front of the object then light the pixel; if not, shadow this pixel, since
            // an object (occluder) is casting a shadow on it.
            if (lightDepthValue < depthValue)
            {
                // Calculate the amount of light on this pixel.
                float lightIntensity = saturate(dot(input.normal, normalize(input.lightPos_LS[i])));
                if (lightIntensity > 0.0f)
                {
                    float spotlightIntensity = CalculateSpotLightIntensity(input.lightPos_LS[i], cb_lights[i].lightDirection, input.normal);
                    //lightColor += (float4(1.0f, 1.0f, 1.0f, 1.0f) * lightIntensity) * .3f; // spotlight
                    lightColor += float4(1.0f, 1.0f, 1.0f, 1.0f) /** lightIntensity*/ * spotlightIntensity * .3f; // spotlight
                }
            }
        }
    }
    return saturate(lightColor);
}
```

https://github.com/mister51213/DirectX11Engine/blob/master/DirectX11Engine/MultiShadows_ps.hlsl
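A hedged note with a sketch: peter panning is usually the symptom of too much bias, so the first thing to try is shrinking the shader-side bias constant (for example 0.0005 instead of 0.001). If shadow acne comes back, a gentler option is moving the offset into the rasterizer state used for the depth-only shadow pass, where slope-scaled bias adds only as much offset as each triangle's depth slope requires. The numbers below are common starting points, not tuned values:

```cpp
D3D11_RASTERIZER_DESC rs = {};
rs.FillMode = D3D11_FILL_SOLID;
rs.CullMode = D3D11_CULL_BACK;
rs.DepthClipEnable = TRUE;
rs.DepthBias = 16;               // constant offset, in depth-buffer units
rs.SlopeScaledDepthBias = 1.5f;  // scales with each triangle's depth slope
rs.DepthBiasClamp = 0.0f;

ID3D11RasterizerState* shadowRS = nullptr;
device->CreateRasterizerState(&rs, &shadowRS);
// context->RSSetState(shadowRS); // bind only while rendering the shadow maps,
// and drop (or greatly reduce) the `bias` constant in the pixel shader.
```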
  15. Hi! I'm pretty new to 3D programming, using the SharpDX wrapper to build a 3D world (for testing and learning). I am adding several visible camera objects (very rudimentary models) in order to visualize different views. Let's say I have a "world" floor grid covering vectors {0,0,0 - 1,1,0}. I add a pretend camera "CAM2" object at {0.5, 1.5, -1.0}. I am looking at this world by setting the parameters for "CAM1", a worldView projection at position {0.0, 1.5, 1.5} with lookat {0.5, 0.0, 0.5} (looking down from the upper left towards the center of the floor). I would like to draw a line from the pretend camera "CAM2" model origin to the center of the floor, as projected through "CAM1"'s view. Obviously a line from "CAM1" to the lookat point would be invisible. But I can't for the life of me figure out how to apply the correct conversions to the vector end point for "CAM2". As can be seen in the snapshot, the line (green) from "CAM2" points to... well... Russia?? :-D Can anyone help? BR Per
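A hedged sanity check, written in C++ DirectXMath terms (SharpDX mirrors these calls): a world-space line needs no special conversion at all; both endpoints are plain world-space positions pushed through the same view/projection as every other model. If the line still lands in "Russia", the usual culprit is an extra or missing world matrix on the line's vertex buffer relative to the rest of the scene:

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// CAM2's model origin and the floor center, both already in world space:
XMVECTOR cam2Pos     = XMVectorSet(0.5f, 1.5f, -1.0f, 1.0f);
XMVECTOR floorCenter = XMVectorSet(0.5f, 0.0f,  0.5f, 1.0f);

// CAM1's matrices, exactly as used for the rest of the scene:
XMMATRIX view = XMMatrixLookAtLH(
    XMVectorSet(0.0f, 1.5f, 1.5f, 1.0f),   // CAM1 position
    XMVectorSet(0.5f, 0.0f, 0.5f, 1.0f),   // CAM1 look-at
    XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f));  // up
XMMATRIX proj = XMMatrixPerspectiveFovLH(XM_PIDIV4, 16.0f / 9.0f, 0.1f, 100.0f);

// Both endpoints go through the same transform (world = identity here);
// XMVector3TransformCoord performs the perspective divide, yielding NDC.
XMMATRIX viewProj = view * proj;
XMVECTOR p0 = XMVector3TransformCoord(cam2Pos, viewProj);
XMVECTOR p1 = XMVector3TransformCoord(floorCenter, viewProj);
```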
  16. Hi, so I imported some new models into my engine, and some of them show up with ugly seams or dark patches, while others look perfect (see pictures). I'm using the same shader for all of them, and all of these models have had custom UV-mapped textures created for them, which should wrap fully around them instead of using tiled textures. I have no idea why the custom UV-mapped textures are mapping correctly on some but not others. Possible causes:

     1. Am I using the wrong SamplerState to sample the textures? (I'm using SampleTypeClamp.)
     2. The original models had quads and were UV mapped by an artist in that state; then I reimported them into 3DS Max and re-exported them as all triangles (my engine's object loader only accepts triangles).
     3. Could the original model UVs just be wrong?

     Please let me know if somebody can help identify this problem; I'm completely baffled. Thanks. For reference, here's a link to the shader being used to draw the problematic models, with the shader code below. https://github.com/mister51213/DirectX11Engine/blob/master/DirectX11Engine/Light_SoftShadows_ps.hlsl

```hlsl
/////////////
// DEFINES //
/////////////
#define NUM_LIGHTS 3

/////////////
// GLOBALS //
/////////////
// Texture resources that will be used for rendering the textures on the model.
Texture2D shaderTextures[7]; // NOTE - we only use one render target for drawing all the shadows here!
// Allows modifying how pixels are written to the polygon face, for example choosing which to draw.
SamplerState SampleType;

///////////////////
// SAMPLE STATES //
///////////////////
SamplerState SampleTypeClamp : register(s0);
SamplerState SampleTypeWrap : register(s1);

//////////////
// TYPEDEFS //
//////////////
// This structure is used to describe the lights' properties.
struct LightTemplate_PS
{
    int type;
    float3 padding;
    float4 diffuseColor;
    float3 lightDirection; //(lookat?) //@TODO pass from VS BUFFER?
    float specularPower;
    float4 specularColor;
};

//////////////////////
// CONSTANT BUFFERS //
//////////////////////
cbuffer SceneLightBuffer : register(b0)
{
    float4 cb_ambientColor;
    LightTemplate_PS cb_lights[NUM_LIGHTS];
}

// Value set here will be between 0 and 1.
cbuffer TranslationBuffer : register(b1)
{
    float textureTranslation; //@NOTE - hlsl automatically pads floats for you
};

// For alpha blending textures.
cbuffer TransparentBuffer : register(b2)
{
    float blendAmount;
};

struct PixelInputType
{
    float4 vertex_ModelSpace : SV_POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float3 binormal : BINORMAL;
    float3 viewDirection : TEXCOORD1;
    float3 lightPos_LS[NUM_LIGHTS] : TEXCOORD2;
    float4 vertex_ScrnSpace : TEXCOORD5;
};

float4 main(PixelInputType input) : SV_TARGET
{
    bool bInsideSpotlight = true;
    float2 projectTexCoord;
    float depthValue;
    float lightDepthValue;
    float4 textureColor;
    float gamma = 7.f;

    /////////////////// NORMAL MAPPING //////////////////
    float4 bumpMap = shaderTextures[4].Sample(SampleType, input.tex);

    // Sample the shadow value from the shadow texture using the sampler
    // at the projected texture coordinate location.
    projectTexCoord.x = input.vertex_ScrnSpace.x / input.vertex_ScrnSpace.w / 2.0f + 0.5f;
    projectTexCoord.y = -input.vertex_ScrnSpace.y / input.vertex_ScrnSpace.w / 2.0f + 0.5f;
    float shadowValue = shaderTextures[6].Sample(SampleTypeClamp, projectTexCoord).r;

    // Expand the range of the normal value from (0, +1) to (-1, +1).
    bumpMap = (bumpMap * 2.0f) - 1.0f;

    // Change the COORDINATE BASIS of the normal into the space represented by
    // basis vectors tangent, binormal, and normal!
    float3 bumpNormal = normalize((bumpMap.x * input.tangent) + (bumpMap.y * input.binormal) + (bumpMap.z * input.normal));

    //////////////// AMBIENT BASE COLOR ////////////////
    // Set the default output color to the ambient light value for all pixels.
    float4 lightColor = cb_ambientColor * saturate(dot(bumpNormal, input.normal) + .2);

    // Calculate the amount of light on this pixel.
    for (int i = 0; i < NUM_LIGHTS; ++i)
    {
        float lightIntensity = saturate(dot(bumpNormal, normalize(input.lightPos_LS[i])));
        if (lightIntensity > 0.0f)
        {
            lightColor += (cb_lights[i].diffuseColor * lightIntensity) * 0.3;
        }
    }

    // Saturate the final light color.
    lightColor = saturate(lightColor);

    // TEXTURE ANIMATION - Sample pixel color from texture at this texture coordinate location.
    input.tex.x += textureTranslation;

    // BLENDING
    float4 color1 = shaderTextures[0].Sample(SampleTypeWrap, input.tex);
    float4 color2 = shaderTextures[1].Sample(SampleTypeWrap, input.tex);
    float4 alphaValue = shaderTextures[3].Sample(SampleTypeWrap, input.tex);
    //textureColor = saturate((alphaValue * color1) + ((1.0f - alphaValue) * color2));
    textureColor = color1;

    // Combine the light and texture color.
    float4 finalColor = lightColor * textureColor * shadowValue * gamma;

    //if(lightColor.x == 0)
    //{
    //    finalColor = cb_ambientColor * saturate(dot(bumpNormal, input.normal) + .2) * textureColor;
    //}

    return finalColor;
}
```
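A hedged sketch aimed at cause 1: a clamp sampler smears edge texels for any UV that leaves [0, 1], which can look exactly like seams or dark streaks on some models and not others (depending on how their UVs were authored). Creating and binding a wrap sampler on slot s1 (SampleTypeWrap in this shader) is a quick way to rule that in or out:

```cpp
D3D11_SAMPLER_DESC sd = {};
sd.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sd.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;  // repeat instead of smearing edge texels
sd.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sd.ComparisonFunc = D3D11_COMPARISON_NEVER;
sd.MaxLOD = D3D11_FLOAT32_MAX;

ID3D11SamplerState* wrapSampler = nullptr;
device->CreateSamplerState(&sd, &wrapSampler);
// context->PSSetSamplers(1, 1, &wrapSampler); // s1 = SampleTypeWrap in the shader
```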
  17. I am doing DXGI adapter and monitor enumeration. The second monitor connected to my computer is a Dell P2715Q, which has a 3840x2160 resolution. However, the program reports it as 2560x1440, the second available resolution. Minimal code to reproduce:

```cpp
#include "stdafx.h"
#include <Windows.h>
#include <stdio.h>
#include <tchar.h>
#include <iostream>
#include <DXGI.h>
#pragma comment(lib, "DXGI.lib")

using namespace std;

int main()
{
    IDXGIFactory1* pFactory1;
    HRESULT hr = CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)(&pFactory1));
    if (FAILED(hr))
    {
        wcout << L"CreateDXGIFactory1 failed. " << endl;
        return 0;
    }

    for (UINT i = 0;; i++)
    {
        IDXGIAdapter1* pAdapter1 = nullptr;
        hr = pFactory1->EnumAdapters1(i, &pAdapter1);
        if (hr == DXGI_ERROR_NOT_FOUND)
        {
            // no more adapters
            break;
        }
        if (FAILED(hr))
        {
            wcout << L"EnumAdapters1 failed. " << endl;
            return 0;
        }

        DXGI_ADAPTER_DESC1 desc;
        hr = pAdapter1->GetDesc1(&desc);
        if (FAILED(hr))
        {
            wcout << L"GetDesc1 failed. " << endl;
            return 0;
        }
        wcout << L"Adapter: " << desc.Description << endl;

        for (UINT j = 0;; j++)
        {
            IDXGIOutput* pOutput = nullptr;
            HRESULT hr = pAdapter1->EnumOutputs(j, &pOutput);
            if (hr == DXGI_ERROR_NOT_FOUND)
            {
                // no more outputs
                break;
            }
            if (FAILED(hr))
            {
                wcout << L"EnumOutputs failed. " << endl;
                return 0;
            }

            DXGI_OUTPUT_DESC desc;
            hr = pOutput->GetDesc(&desc);
            if (FAILED(hr))
            {
                wcout << L"GetDesc failed. " << endl;
                return 0;
            }
            wcout << L"  Output: " << desc.DeviceName
                << L" (" << desc.DesktopCoordinates.left << L"," << desc.DesktopCoordinates.top
                << L")-(" << (desc.DesktopCoordinates.right - desc.DesktopCoordinates.left)
                << L"," << (desc.DesktopCoordinates.bottom - desc.DesktopCoordinates.top)
                << L")" << endl;
        }
    }
    return 0;
}
```

Program output:

```
Adapter: Intel(R) Iris(TM) Pro Graphics 6200
  Output: \\.\DISPLAY1 (0,0)-(1920,1200)
  Output: \\.\DISPLAY2 (1920,0)-(2560,1440)
```

DISPLAY2 is reported with a low resolution. Environment: Windows 10 x64, Intel(R) Iris(TM) Pro Graphics 6200, Dell P2715Q. What can cause this behavior: DirectX restrictions, video memory, display adapter, driver, monitor? How can I fix this and get the full available resolution?
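A hedged guess at the cause, with a sketch: if the process is not DPI-aware, Windows reports virtualized desktop coordinates, and 2560x1440 is exactly 3840x2160 divided by a 150% display scale factor. Declaring DPI awareness before any enumeration should make GetDesc return physical pixels (ShellScalingApi, Windows 8.1 and later):

```cpp
#include <ShellScalingApi.h>
#pragma comment(lib, "Shcore.lib")

int main()
{
    // Opt out of DPI virtualization before any DXGI/GDI calls.
    SetProcessDpiAwareness(PROCESS_PER_MONITOR_DPI_AWARE);
    // ... then run the DXGI enumeration from the post ...
    return 0;
}
```

A manifest-based DPI-awareness declaration achieves the same thing without code changes.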
  18. Hello. DX9Ex. I have a problem with driver stability during serial renderings, which I use for image processing in memory with fragment shaders. For big bitmaps the video driver sometimes becomes unstable ("Display driver stopped responding and has recovered") and, for instance, if the media player runs video in the background, it sometimes freezes and distorts. I tried to use the following methods of IDirect3DDevice9Ex, with the purpose of giving the GPU some time between scenes, but they seem to have no notable effect in this case:

```cpp
SetGPUThreadPriority(-7);
WaitForVBlank(0);
EvictManagedResources();
```

I don't want to reinitialize the subsystem for every step, to avoid performance loss. So my question is: does a common practice exist to avoid overloading the GPU with running tasks? Many thanks in advance.
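One common practice, sketched here without guarantees for this particular driver: throttle with an event query, so the CPU cannot queue up more heavy work than the GPU has finished. Long unbroken shader batches are what trip the (by default two-second) TDR watchdog; splitting big bitmaps into smaller tiles per scene serves the same goal:

```cpp
// After submitting each heavy rendering step, block until the GPU drains it.
IDirect3DQuery9* query = nullptr;
device->CreateQuery(D3DQUERYTYPE_EVENT, &query);

query->Issue(D3DISSUE_END);
while (query->GetData(nullptr, 0, D3DGETDATA_FLUSH) == S_FALSE)
    Sleep(1); // yield, so the driver and other apps (e.g. a video player) get GPU time

query->Release();
```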
  19. DX11 Shadow Map Details (posted by Bartosz Boczula): I think I understand the idea behind shadow mapping, but I'm having problems with the implementation details. In the vertex shader I need a light position - but I don't have one! I only have a light direction; what light position should I use? I have a working camera class, with Projection and View matrices and all - how can I reuse this? I could put the camera position there, but how do I calculate the "lookAt" parameter? Is this supposed to be an orthographic or a perspective camera? And one more thing: when in the 3D pipeline does the actual write to the depth buffer happen? In the pixel shader, or somewhere earlier? Br., BB
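A hedged sketch of the usual answer for a directional light: fabricate a position by backing up along the light direction from the area you want shadowed, pair it with an orthographic projection (directional light rays are parallel), and reuse the existing camera class with those values. On the last question: the depth write happens after the pixel shader, in the output-merger stage (early-Z is a transparent optimization), using whatever depth-stencil view is bound for the pass. The bounds below are placeholders:

```cpp
#include <DirectXMath.h>
using namespace DirectX;

XMVECTOR lightDir    = XMVector3Normalize(XMVectorSet(-0.5f, -1.0f, 0.3f, 0.0f));
XMVECTOR sceneCenter = XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f);
float backupDist = 50.0f; // far enough back to cover the shadowed region

// "Light position" = scene center minus the light direction, scaled out.
XMVECTOR lightPos  = XMVectorSubtract(sceneCenter, XMVectorScale(lightDir, backupDist));
XMMATRIX lightView = XMMatrixLookAtLH(lightPos, sceneCenter, XMVectorSet(0, 1, 0, 0));

// Orthographic, not perspective: width/height must cover the shadowed area.
XMMATRIX lightProj = XMMatrixOrthographicLH(100.0f, 100.0f, 1.0f, 200.0f);
```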
  20. So last night I was messing about with some old code on a Direct3D 11.4 interface and trying out some compute stuff. I had set this thing up to send data in, run the compute shader, and then output the result data into a structured buffer. To read this data back on the CPU, I had copied the structured buffer into a staging buffer and retrieved the data from there. This all worked well enough. But I was curious to see if I could remove the intermediate copy to staging and read from the structured buffer directly using Map. To do this, I created the buffer using D3D11_CPU_ACCESS_READ and a usage of default, and to my shock and amazement... it worked (and no warning messages from the D3D debug log). However, this seems to run counter to what I've read in the documentation for D3D11_CPU_ACCESS_FLAG. The bolded part of that documentation is what threw me off: here I had a structured buffer created with default usage and a UAV (definitely bindable to the pipeline), but I was able to map and read the data. Does this seem wrong? I'm aware that some hardware manufacturers may implement things differently, but if MS says that this flag can't be used outside of a staging resource, then shouldn't the manufacturer (NVidia) adhere to that? I can find nothing else in the documentation that says this is allowed or not allowed (beyond the description for D3D11_CPU_ACCESS_READ), and the D3D debug output doesn't complain in the slightest. So what gives? Is it actually safe to do a map and read from a default-usage resource with CPU read flags?
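For contrast, a sketch of the readback path the documentation does bless, in case the DEFAULT + CPU_ACCESS_READ behavior turns out to be a single-vendor leniency: derive a staging copy of the structured buffer and Map that instead.

```cpp
// Build a staging twin of the compute shader's result buffer.
D3D11_BUFFER_DESC sbd = {};
resultBuffer->GetDesc(&sbd);              // start from the structured buffer's desc
sbd.Usage = D3D11_USAGE_STAGING;
sbd.BindFlags = 0;                        // staging resources cannot bind to the pipeline
sbd.CPUAccessFlags = D3D11_CPU_ACCESS_READ;

ID3D11Buffer* staging = nullptr;
device->CreateBuffer(&sbd, nullptr, &staging);

// GPU-side copy, then a documented CPU read.
context->CopyResource(staging, resultBuffer);
D3D11_MAPPED_SUBRESOURCE mapped = {};
if (SUCCEEDED(context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
{
    // ... read mapped.pData ...
    context->Unmap(staging, 0);
}
```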
  21. A new player of my game reported an issue: when he starts the game, it immediately crashes, before he can even see the main menu. He sent me a log file from my game, and it turns out that the game crashes when it creates a 2D render target. Here is the full "Interface not supported" error message:

```
HRESULT: [0x80004002], Module: [General], ApiCode: [E_NOINTERFACE/No such interface supported], Message: Schnittstelle nicht unterstützt
   bei SharpDX.Result.CheckError()
   bei SharpDX.Direct2D1.Factory.CreateDxgiSurfaceRenderTarget(Surface dxgiSurface, RenderTargetProperties& renderTargetProperties, RenderTarget renderTarget)
   bei SharpDX.Direct2D1.RenderTarget..ctor(Factory factory, Surface dxgiSurface, RenderTargetProperties properties)
   bei Game.AGame.Initialize()
```

Because of the log file's content, I know exactly where the game crashes:

```csharp
Factory2D = new SharpDX.Direct2D1.Factory();
_surface = backBuffer.QueryInterface<SharpDX.DXGI.Surface>();
// It crashes when calling this line!
RenderTarget2D = new SharpDX.Direct2D1.RenderTarget(Factory2D, _surface,
    new SharpDX.Direct2D1.RenderTargetProperties(
        new SharpDX.Direct2D1.PixelFormat(_dxgiFormat, SharpDX.Direct2D1.AlphaMode.Premultiplied)));
RenderTarget2D.AntialiasMode = SharpDX.Direct2D1.AntialiasMode.Aliased;
```

I did some research on this error message, and all the similar problems I found were around six to seven years old, from when people tried to work with DirectX 11 3D graphics and DirectX 10.1 2D graphics. However, I am using DirectX 11 for all visual stuff. The game runs very well on the computers of all other 2500 players, so I am trying to figure out why the source code crashes on this one player's computer. He uses Windows 7 with all Windows updates, 17179 MB of memory, and an NVIDIA GeForce GTX 870M graphics card. This is more than enough to run my game. Below, you can see the code I use for creating the 3D device and the swap chain. I made sure to use BGRA support when creating the device, because it is required when using Direct2D in a 3D game in DirectX 11. The same DXGI format is used in creating 2D and 3D content, and the refresh rate is read from the adapter in use.

```csharp
// Set swap chain flags, DXGI format and default refresh rate.
_swapChainFlags = SharpDX.DXGI.SwapChainFlags.None;
_dxgiFormat = SharpDX.DXGI.Format.B8G8R8A8_UNorm;
SharpDX.DXGI.Rational refreshRate = new SharpDX.DXGI.Rational(60, 1);

// Get proper video adapter and create device and swap chain.
using (var factory = new SharpDX.DXGI.Factory1())
{
    SharpDX.DXGI.Adapter adapter = GetAdapter(factory);
    if (adapter != null)
    {
        // Get refresh rate.
        refreshRate = GetRefreshRate(adapter, _dxgiFormat, refreshRate);

        // Create Device and SwapChain
        _device = new SharpDX.Direct3D11.Device(adapter,
            SharpDX.Direct3D11.DeviceCreationFlags.BgraSupport,
            new SharpDX.Direct3D.FeatureLevel[] { SharpDX.Direct3D.FeatureLevel.Level_10_1 });
        _swapChain = new SharpDX.DXGI.SwapChain(factory, _device,
            GetSwapChainDescription(clientSize, outputHandle, refreshRate));
        _deviceContext = _device.ImmediateContext;
    }
}
```
  22. I've been trying for hours now to find the cause of this problem. My vertex shader is passing the wrong values to the pixel shader, and I think it might be my input/output semantics. This shader takes in a prerendered texture with shadows in it, based on Rastertek tutorial 42, so the light/dark values of the shadows are already encoded in the blurred shadow texture, sampled from Texture2D shaderTextures[7] at index 6 in the pixel shader.

```hlsl
struct VertexInputType
{
    float4 vertex_ModelSpace : POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float3 binormal : BINORMAL;
};

struct PixelInputType
{
    float4 vertex_ModelSpace : SV_POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float3 binormal : BINORMAL;
    float3 viewDirection : TEXCOORD1;
    float3 lightPos_LS[NUM_LIGHTS] : TEXCOORD2;
    float4 vertex_ScrnSpace : TEXCOORD5;
};
```

Specifically, PixelInputType is causing a ton of trouble: if I switch the tags "SV_POSITION" for the first variable and "TEXCOORD5" for the last one, it gives completely different values to the pixel shader, even though all the calculations are exactly the same. The main issue is that I have a spotlight effect in the pixel shader that takes the dot product of the light-to-surface vector with the light direction to give it a falloff, which was previously working, but in this upgraded version of the shader it seems to be giving completely wrong values. (See the full vertex shader code below.) Is there some weird thing about pixel shader semantics that I'm missing? Does the order of the variables in the struct matter? I've also attached the full shader files for reference. Any insight would be much appreciated, thanks.

```hlsl
PixelInputType main(VertexInputType input)
{
    // The final output for the vertex shader
    PixelInputType output;

    // Pass through tex coordinates untouched
    output.tex = input.tex;

    // Pre-calculate vertex position in world space
    input.vertex_ModelSpace.w = 1.0f;

    // Calculate the position of the vertex against the world, view, and projection matrices.
    output.vertex_ModelSpace = mul(input.vertex_ModelSpace, cb_worldMatrix);
    output.vertex_ModelSpace = mul(output.vertex_ModelSpace, cb_viewMatrix);
    output.vertex_ModelSpace = mul(output.vertex_ModelSpace, cb_projectionMatrix);

    // Store the position of the vertex as viewed by the camera in a separate variable.
    output.vertex_ScrnSpace = output.vertex_ModelSpace;

    // Bring normal, tangent, and binormal into world space
    output.normal = normalize(mul(input.normal, (float3x3)cb_worldMatrix));
    output.tangent = normalize(mul(input.tangent, (float3x3)cb_worldMatrix));
    output.binormal = normalize(mul(input.binormal, (float3x3)cb_worldMatrix));

    // Store worldspace view direction for specular calculations
    float4 vertex_WS = mul(input.vertex_ModelSpace, cb_worldMatrix);
    output.viewDirection = normalize(cb_camPosition_WS.xyz - vertex_WS.xyz);

    for (int i = 0; i < NUM_LIGHTS; ++i)
    {
        // Calculate light position relative to the vertex in WORLD SPACE
        output.lightPos_LS[i] = cb_lights[i].lightPosition_WS - vertex_WS.xyz;
    }

    return output;
}
```

Repo link: https://github.com/mister51213/DirectX11Engine/tree/master/DirectX11Engine Light_SoftShadows_ps.hlsl Light_SoftShadows_vs.hlsl
  23. Hi, I'm on Rastertek series 42, soft shadows, which uses a blur shader and runs extremely slow. http://www.rastertek.com/dx11tut42.html He obnoxiously states that there are many ways to optimize his blur shader, but gives you no idea how to do it. The way he does it is:

     1. Project the objects in the scene to a render target using the depth shader.
     2. Draw black and white shadows on another render target using those depth textures.
     3. Blur the black/white shadow texture produced in step 2 by a) rendering it to a smaller texture, b) vertically/horizontally blurring that texture, and c) rendering it back to a bigger texture again.
     4. Send the blurred shadow texture into the final shader, which samples its black/white values to determine light intensity.

     So this uses a ton of render textures, and I just added more than one light, which multiplies the render textures required. Is there any easy way I can optimize the super-expensive blur shader that wouldn't require a whole new complicated system, like combining any of these render textures into one, for example? If you know of any easy way not requiring too many changes, please let me know, as I already had a really hard time understanding the way this works, so a super complicated change would be beyond my capacity. Thanks. For reference, here is my repo, in which I have simplified his tutorial and added an additional light: https://github.com/mister51213/DX11Port_SoftShadows/tree/MultiShadows
  24. Hi, I want to add a falloff to the shadows in this pixel shader. This should be really straightforward: just get the distance between the light position and the vertex position, and multiply it by the light intensity at the pixel being shadowed, so the light intensity will increase and the shadow will fade away towards the edges. As you can see, I get the light position from the input (it comes from the vertex shader, and was calculated as worldLightPosition - worldVertexPosition inside the vertex shader, so taking its length should give the distance between the light and the pixel). I multiplied it by 0.038, an arbitrary number, to scale it down, because it needs to be between 0 and 1 before multiplying it by the shadow color (1,1,1,1) to give a gradient. However, this does absolutely nothing, and I can't tell where it's failing. Please look at the attached files to see the full code of the vertex and pixel shaders. Any advice would be very welcome, thanks! Light_ps.hlsl Light_vs.hlsl

```hlsl
// Sample the shadow map depth value from the depth texture using the sampler
// at the projected texture coordinate location.
depthValue = shaderTextures[6 + i].Sample(SampleTypeClamp, projectTexCoord).r;

// Calculate the depth of the light.
lightDepthValue = input.lightViewPositions[i].z / input.lightViewPositions[i].w;

// Subtract the bias from the lightDepthValue.
lightDepthValue = lightDepthValue - bias;

// Compare the depth of the shadow map value and the depth of the light to determine
// whether to shadow or to light this pixel. If the light is in front of the object,
// light the pixel; if not, shadow this pixel, since an object (occluder) is casting
// a shadow on it.
if (lightDepthValue < depthValue)
{
    // Calculate the amount of light on this pixel.
    //lightIntensity = saturate(dot(input.normal, input.lightPositions));
    lightIntensity = saturate(dot(input.normal, normalize(input.lightPositions[i])));
    if (lightIntensity > 0.0f)
    {
        // Determine the final diffuse color based on the diffuse color and the amount of light intensity.
        color += (diffuseCols[i] * lightIntensity * 0.25f);
    }
}
else // shadow falloff here
{
    float4 shadowcol = (1, 1, 1, 1);
    float shadowintensity = saturate(length(input.lightPositions[i]) * 0.038);
    color += shadowcol * shadowintensity * shadowintensity * shadowintensity;
}

// Saturate the final light color.
color = saturate(color);
```
  25. Hi, after implementing skinning with a compute shader, I want to implement skinning with the vertex shader stream-out method to compare performance. The following thread is a discussion about it, and here is the recommended setup: use a pass-through geometry shader (point -> point), set up the stream-out, and set the topology to point list. Draw the whole buffer with context->Draw(); this gives a 1:1 mapping of the vertices. Later, bind the stream-out buffer as a vertex buffer, bind the index buffer of the original mesh, and draw with DrawIndexed like you would with the original mesh (or whatever draw call you had). I know the reason why a point list is used as input: when using the normal vertex topology as input, the output would be a stream of "each on its own" primitives that would blow up the vertex buffer. I assume an index buffer would then be needless? But how can you transform position and normal in one step when feeding the pseudo vertex/geometry shader with a point list? In my vertex shader I first calculate the resulting transform matrix from the bone indices (4) and weights (4), then transform position and normal with the same resulting transform matrix. Do I have to run two passes, one for transforming position and one for transforming normal? I think it could be done better? Thanks for any help.
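On the one-pass question, a hedged sketch: a single stream-out declaration can carry several attributes, so position and normal come out together in one pass; there is no need for two. The semantic names must match the pass-through geometry shader's output struct, and gsBytecode/gsBytecodeSize stand in for the compiled shader blob:

```cpp
// Each streamed vertex carries POSITION then NORMAL, packed into one buffer slot.
D3D11_SO_DECLARATION_ENTRY soDecl[] =
{
    // stream, semantic name, semantic index, start component, component count, output slot
    { 0, "POSITION", 0, 0, 3, 0 },
    { 0, "NORMAL",   0, 0, 3, 0 },
};
UINT stride = 6 * sizeof(float); // 3 floats position + 3 floats normal

ID3D11GeometryShader* soGS = nullptr;
HRESULT hr = device->CreateGeometryShaderWithStreamOutput(
    gsBytecode, gsBytecodeSize,
    soDecl, ARRAYSIZE(soDecl),
    &stride, 1,
    D3D11_SO_NO_RASTERIZED_STREAM,   // stream-out only; nothing is rasterized
    nullptr, &soGS);

// Pass: point-list topology, SOSetTargets(1, &streamOutVB, &offset), Draw(vertexCount, 0).
// Then bind streamOutVB as the vertex buffer, keep the original index buffer,
// and DrawIndexed as usual; the 1:1 vertex mapping keeps the indices valid.
```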