


About RythBlade

  1. Render To Texture only color, no texture

    The difference between those two flags is this: DXGI_FORMAT_D32_FLOAT specifies a depth buffer where the depth is stored as a 32-bit floating point value. DXGI_FORMAT_D24_UNORM_S8_UINT specifies that the depth is stored as a 24-bit unsigned normalised integer (the _D24_UNORM part), and the remaining 8 bits (the _S8_UINT part) store the stencil test value as an 8-bit unsigned integer. You'll need to have another look through your set-up of the depth-stencil state and how you create your graphics device and back buffer, to ensure you've got depth testing AND stencil testing enabled along with the back-face culling. These are some good tutorials with excellent explanations of the parameters that should help you get to grips with setting this stuff up: [url="http://www.directxtutorial.com/Tutorial11/tutorials.aspx"]http://www.directxtutorial.com/Tutorial11/tutorials.aspx[/url] This one is a more complete example but a bit harder to follow: [url="http://www.rastertek.com/tutdx11.html"]http://www.rastertek.com/tutdx11.html[/url] Hope this helps!
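To make the two layouts concrete, here's a small standalone C++ sketch (not DirectX code - the function names and the packing order are illustrative only; the real in-memory layout is up to the hardware) showing how a 24-bit UNORM depth value and an 8-bit stencil value share one 32-bit word:

```cpp
#include <cassert>
#include <cstdint>

// Largest value representable in 24 bits; UNORM maps depth in [0,1] onto [0, kDepthMax].
constexpr uint32_t kDepthMax = (1u << 24) - 1;

// Pack a [0,1] depth and an 8-bit stencil into one 32-bit word (illustrative layout).
uint32_t packDepthStencil(float depth, uint8_t stencil) {
    // Use double for the scale so depth == 1.0f lands exactly on kDepthMax.
    uint32_t d = static_cast<uint32_t>(depth * static_cast<double>(kDepthMax) + 0.5);
    return (d << 8) | stencil;
}

// Recover the normalised depth from the top 24 bits.
float unpackDepth(uint32_t packed) {
    return static_cast<float>(packed >> 8) / static_cast<float>(kDepthMax);
}

// Recover the stencil value from the bottom 8 bits.
uint8_t unpackStencil(uint32_t packed) {
    return static_cast<uint8_t>(packed & 0xFF);
}
```

Note how the 24-bit depth only has about 16.7 million distinct steps, which is why D32_FLOAT can give better depth precision while D24_UNORM_S8_UINT buys you the stencil channel instead.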
  2. Recreating your devices from scratch will take a considerable amount of time, as the majority of DirectX objects rely on the device and context and so would need to be recreated as well. I can think of a few possibilities for your problem. Like MJP says, when a GPU task takes too long your graphics driver will time out. For instance, if you attempt to render an obscene number of triangles and do heavy shader calculations on them all in one go, your graphics driver will probably die - I had this problem while writing a DirectCompute shader that sometimes took a very long time and kept killing my drivers. Also, the device object has some quirks when it's windowed. For example, in a multi-screen situation, dragging the window from one screen to the other will destroy your current graphics device object, as you've changed which physical device you want to use. The DXUT framework is really good at handling this sort of thing for you with some handy event-handler stub functions. This may also happen when you minimise the window (I can't remember if I've ever actually tried this). I also believe that if you're running full-screen and you minimise, this can cause similar problems. If you're in full-screen mode, use the alt+enter shortcut to switch back to windowed mode; then you should be able to get to Visual Studio without minimising the window - you're just moving focus to another window, which in my experience has always been fine. You could also try placing some timers around your draw calls that output to the debug window. Compare the times in situations where it works and doesn't work, to see if it's a long-running GPU task that kills the device - in that case you'd probably see a long running time, or a start time without a finish time, immediately before the crash. Hope this has been helpful!!
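As a rough illustration of the timer suggestion, here's a minimal scoped-timer sketch using std::chrono (the class name and log format are made up - in an engine you'd route the output through OutputDebugString or your own logger). Wrapping a draw call in one of these gives you a start line and a finish line, so a device-killing call shows up as a start with no matching finish:

```cpp
#include <chrono>
#include <iostream>
#include <string>

// Logs a start message on construction and the elapsed time on destruction,
// so a call that never returns leaves an orphaned "start" in the log.
class ScopedTimer {
public:
    explicit ScopedTimer(std::string label)
        : m_label(std::move(label)), m_start(std::chrono::steady_clock::now()) {
        std::cout << m_label << " start\n";
    }
    ~ScopedTimer() {
        std::cout << m_label << " finished in " << elapsedMicroseconds() << " us\n";
    }
    long long elapsedMicroseconds() const {
        auto now = std::chrono::steady_clock::now();
        return std::chrono::duration_cast<std::chrono::microseconds>(now - m_start).count();
    }
private:
    std::string m_label;
    std::chrono::steady_clock::time_point m_start;
};

// Usage sketch: { ScopedTimer t("DrawIndexed"); context->DrawIndexed(...); }
```

steady_clock is the right choice here rather than system_clock, since it can't jump backwards if the wall clock is adjusted mid-run.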
  3. For all those that are interested - I just found [url="http://www.gamedev.net/topic/610868-dx-11-shadersmaterials-design/page__hl__samplerstate"]this post[/url], which discusses a nice way to deal with this situation and is what I'm now going to do. The developer automatically sets up a set of SamplerState objects in their engine and sets them in the shaders by default. The shaders have a common include containing the register references for all of these, so that every shader automatically has access to the full set of samplers. [Edit:] Thought I'd give a quick update - this method works fantastically. All my shaders now have immediate access to any samplers they might need, and it's simplified some parts of my engine code a lot!
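One way to keep that common include and the engine-side slot numbers from drifting apart (a sketch - all names here are hypothetical, not from the linked post) is to hold a single slot table in C++ and generate the shared HLSL declarations from it, so the registers the engine binds its default samplers to are by construction the registers the shaders read from:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Hypothetical fixed sampler slots shared by the engine and every shader.
enum SamplerSlot : unsigned {
    SLOT_POINT_WRAP = 0,   // becomes register( s0 ) in the common include
    SLOT_LINEAR_WRAP,      // register( s1 )
    SLOT_LINEAR_CLAMP,     // register( s2 )
    SAMPLER_SLOT_COUNT
};

// Generate the shared HLSL include from the slot table; the engine binds its
// default sampler objects to the same slot numbers at start-up.
std::string generateSamplerInclude() {
    static const char* names[SAMPLER_SLOT_COUNT] = {
        "pointWrapSampler", "linearWrapSampler", "linearClampSampler"
    };
    std::ostringstream out;
    for (unsigned slot = 0; slot < SAMPLER_SLOT_COUNT; ++slot) {
        out << "SamplerState " << names[slot] << " : register( s" << slot << " );\n";
    }
    return out.str();
}
```

The same enum then feeds the PSSetSamplers start-slot arguments on the C++ side, so adding a new default sampler is a one-place change.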
  4. No I'm not... goddamn it! I was hoping for a quick win... Ok, yeah, it looks like you're right - I just stumbled onto [url="http://www.gamedev.net/topic/542778-samplerstate-within-shader-snt-working-in-d3d10/"]this post[/url] after reading your reply. It seems my google-fu is still not as strong as I'd hoped. Thanks very much for the help!
  5. Hi there, I'm having some trouble with an HLSL shader I've written for a deferred shader that does texture mapping onto a road in my game. I define a SamplerState at the top of my shader, which I then use to sample my textures to map onto the surface. I've defined it based on the MSDN documentation so that I shouldn't need to create and set any Samplers from within my C++ code - it should all be doable from within the shader. However, when I run it the sampler seems to be ignored. The DirectX debug layer gives the following warning:

[CODE]
D3D11: WARNING: ID3D11DeviceContext::Draw: The Pixel Shader unit expects a Sampler to be set at Slot 0, but none is bound. This is perfectly valid, as a NULL Sampler maps to default Sampler state. However, the developer may not want to rely on the defaults. [ EXECUTION WARNING #352: DEVICE_DRAW_SAMPLER_NOT_SET ]
[/CODE]

Here are the relevant code snippets from my shader:

[CODE]
// textures are passed in from my application
Texture2D widthMap : register( t0 );
Texture2D lengthMap : register( t1 );

// the sampler state defined in shader code so that my application doesn't have to
SamplerState MySampler
{
    Filter = MIN_MAG_MIP_POINT;
    AddressU = Wrap;
    AddressV = Wrap;
};

// pixel shader which samples the texture
POut PShader(PIn input)
{
    .....
    output.colour = float4(lengthMap.Sample(MySampler, float2(input.texCoord.y, 0.0f)));
    ......
    return output;
}
[/CODE]

Ignore the dodgy texture coordinate - the premise for my texture sampling is to "build" the actual texture out of two textures which define the u colour and the v colour. When both the u and the v are filled in I use the colour; otherwise that pixel is set to black. I'm using it to generate the road lines, so that I can define any road-line layout I want based on two very small textures. I've dug through a few of the DirectX samples and I can see them declaring the SamplerState at the top just like I have, and they seem to have no such problems.

I've also tried declaring a SamplerState for each texture I want to sample, setting the "Texture" field within each state to the target texture. I changed it to the current version as this is how the DirectX samples seem to do it. This problem is also present everywhere I sample a texture in my deferred shaders! I've got no idea what I've missed. I can't see any setting that tells DirectX to use whatever the shader itself supplies - as far as I was aware, declaring it in my shader should work fine. I can post more examples of my shader files if needed. Has anyone got any suggestions? Thanks very much!!
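For anyone landing on this post first: as the follow-ups in items 3 and 4 above explain, SamplerState state blocks like this are only honoured by the Effects framework; with a plain compiled shader you create and bind the sampler from the application side. A minimal sketch of the equivalent C++ (error handling omitted; `device` and `context` stand in for your existing ID3D11Device and ID3D11DeviceContext):

```cpp
// Describe a sampler equivalent to the state block in the shader above.
D3D11_SAMPLER_DESC desc = {};
desc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
desc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
desc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
desc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
desc.ComparisonFunc = D3D11_COMPARISON_NEVER;
desc.MinLOD = 0.0f;
desc.MaxLOD = D3D11_FLOAT32_MAX;

ID3D11SamplerState* sampler = nullptr;
device->CreateSamplerState(&desc, &sampler);

// Bind it to slot s0 for the pixel shader; in HLSL declare
// "SamplerState MySampler : register( s0 );" with no state block.
context->PSSetSamplers(0, 1, &sampler);
```

This is a fragment, not a runnable program - it needs an initialised D3D11 device, and the sampler should be Release()d during shutdown.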
  6. DirectX 11 Jerky rendering

    Ok, I think I've tracked down more of my problem - I had quite a lot of debug messages printing. As soon as I removed them, the stammering reduced further. I've now compiled it under release mode and it's barely noticeable. I'm still unsure why it suddenly became an issue for me, though, considering that in my past experience it's not been much of a problem...
  7. DirectX 11 Jerky rendering

    Hi Hodgman! Thanks for the advice. I think I ended up discounting VSync because I thought it was pausing elsewhere in the application, but I've just switched it on and it's definitely alleviating the problem. In full-screen mode I now only see a jerk every few seconds rather than almost constantly - but it's still quite bad in windowed mode! Although I know of VSync, I've never had any problems that meant I had to worry about it. The precursor to this engine was a sample framework I put together with no VSync, and all the animations were constant-step. It ran at a few hundred fps and never had any trouble with stuttering animations. I've read a bit about vsync and I think I understand what it's all about - but I don't understand why it's suddenly a problem for me when other projects have been fine with no vsync and didn't display any sort of visible stammer. I've had them all running fine on this machine with an identical set-up. Yeah, there is an anti-virus app running in the background and it could possibly be causing me a problem... but I can't switch it off - it's all in German! I don't think I should need to worry about Windows thread-scheduling stuff too much. Threading Building Blocks works with a thread-pool idea: it sets up a thread for each available hardware thread and just keeps pushing tasks into them.
  8. DirectX 11 Jerky rendering

    Hi folks, thanks for the responses!! @mhagain I'm not aware of any timers that I'm using. I'm running it as fast as it can go and my animation is a constant step per frame - so the higher the frame rate, the faster the animation. Might DXUT be trying to impose a timer of some sort? @Hornsj3 My animation is based on a "centred camera", sort of like you'd find in a 3D modelling package. It specifies a vertical rotation (from the positive y axis) and a horizontal rotation (from the positive x axis), along with a radius (the camera's distance from the origin). My cameras are implemented as a wrapper around a basic camera to give it some specialised functionality. For completeness I've included the animation code and the basic and centred camera class implementations. The 'updateCameraVectors()' method in the centred camera class is where I do the rotational calculations; the set-rotation functions clip the angle to between 0 and 360 degrees for the horizontal rotation and 0 and 180 degrees for the vertical rotation.
Here's the animation code:

[CODE]
ICameraManager* cameras = taskData->m_pRenderingService->getCameraManager();
ICentredCamera* camera = cameras->getCentredCamera();

// horizontal rotation animation
camera->setHorizontalRotation(camera->getHorizontalRotation() + 0.01);

// camera zoom animation
if(camera->getRadius() >= 30.0f)
{
    camera->setRadius(10.0f);
}
else
{
    camera->setRadius(camera->getRadius() + 0.01f);
}
[/CODE]

And here is the centred camera class:

[CODE]
#include "CentredCameraWrapper.h"

namespace GEngine_Graphics
{
    CentredCameraWrapper::CentredCameraWrapper(Camera* cameraToWrap)
    {
        GEngineNonZeroAssert(cameraToWrap, L"No camera passed to CentredCameraWrapper constructor", L"No Camera");
        m_pWrappedCamera = cameraToWrap;
        m_horizontalRotation = 0.0f;
        m_verticalRotation = D3DX_PI / 2.0f;
        m_radius = 1.0f;
    }

    CentredCameraWrapper::~CentredCameraWrapper()
    {
    }

    void CentredCameraWrapper::setMatrices()
    {
        m_pWrappedCamera->setMatrices();
    }

    D3DXVECTOR3* CentredCameraWrapper::getCameraPosition()
    {
        return m_pWrappedCamera->getCameraPosition();
    }

    D3DXMATRIXA16* CentredCameraWrapper::getViewMatrix()
    {
        return m_pWrappedCamera->getViewMatrix();
    }

    D3DXMATRIXA16* CentredCameraWrapper::getProjectionMatrix()
    {
        return m_pWrappedCamera->getProjectionMatrix();
    }

    void CentredCameraWrapper::setAsActive()
    {
        VectorUtilities::setVector(m_pWrappedCamera->getCameraTarget(), 0.0f, 0.0f, 0.0f);
        VectorUtilities::setVector(m_pWrappedCamera->getCameraUp(), 0.0f, 1.0f, 0.0f);
        updateCameraVectors();
    }

    /*
    sets the horizontal rotation of the camera
    Parameter list
    rotation: the horizontal rotation of the camera
    */
    void CentredCameraWrapper::setHorizontalRotation(float rotation)
    {
        m_horizontalRotation = fmod(rotation, 2.0f * (float)D3DX_PI);
        updateCameraVectors();
    }

    /*
    sets the vertical rotation of the camera
    Parameter list
    rotation: the vertical rotation of the camera
    */
    void CentredCameraWrapper::setVerticalRotation(float rotation)
    {
        m_verticalRotation = rotation;
        if (m_verticalRotation > D3DX_PI)
        {
            m_verticalRotation = D3DX_PI;
        }
        else if (m_verticalRotation < 0.0f)
        {
            m_verticalRotation = 0.0f;
        }
        updateCameraVectors();
    }

    void CentredCameraWrapper::setRadius(float radius)
    {
        m_radius = radius;
        // do not let the radius equal 0! If the camera position is the same as its look-at position the view matrix will break!
        if (m_radius <= 1.0f)
        {
            m_radius = 1.0f;
        }
        updateCameraVectors();
    }

    float CentredCameraWrapper::getHorizontalRotation()
    {
        return m_horizontalRotation;
    }

    float CentredCameraWrapper::getVerticalRotation()
    {
        return m_verticalRotation;
    }

    float CentredCameraWrapper::getRadius()
    {
        return m_radius;
    }

    /*
    recalculates and updates the camera vectors
    */
    void CentredCameraWrapper::updateCameraVectors()
    {
        D3DXVECTOR3 forward(sinf(m_horizontalRotation), cosf(m_verticalRotation), cosf(m_horizontalRotation));
        D3DXVec3Normalize(&forward, &forward);
        VectorUtilities::setVector(
            m_pWrappedCamera->getCameraPosition(),
            m_radius * forward.x,
            m_radius * forward.y,
            m_radius * forward.z);
    }
}
[/CODE]

Finally, here's the basic camera that the centred camera wraps:

[CODE]
// LCCREDIT start of 3DGraph1
#include "Camera.h"

/*----------------------------------------------------------------------------\
* Initialisation and Clean up                                                 |
*----------------------------------------------------------------------------*/
#define DEFAULT_CAMERA_FOV D3DX_PI / 4.0f
#define DEFAULT_CAMERA_NEAR_PLANE 1.0f
#define DEFAULT_CAMERA_FAR_PLANE 1000.0f
#define SCREEN_WIDTH 800
#define SCREEN_HEIGHT 600

namespace GEngine_Graphics
{
    /*
    Constructs and initialises a camera object
    */
    Camera::Camera()
    {
        m_cameraPosition = D3DXVECTOR3( 0.0f, 0.0f, -1.0f );
        m_cameraTarget = D3DXVECTOR3( 0.0f, 0.0f, 0.0f );
        m_cameraUp = D3DXVECTOR3( 0.0f, 1.0f, 0.0f );
        m_nearPlane = DEFAULT_CAMERA_NEAR_PLANE;
        m_farPlane = DEFAULT_CAMERA_FAR_PLANE;
        m_fieldOfView = DEFAULT_CAMERA_FOV;
        m_aspectRatio = (float)SCREEN_WIDTH / (float)SCREEN_HEIGHT;
        setMatrices();
    }

    /*
    Destructs a camera object
    */
    Camera::~Camera()
    {
    }

    /*----------------------------------------------------------------------------\
    * Transformation pipeline code                                                |
    *----------------------------------------------------------------------------*/
    /*
    Sets the view and projection transformation matrices.
    */
    void Camera::setMatrices()
    {
        setViewMatrix();
        setProjectionMatrix();
    }

    /*
    Sets the view transformation matrix.
    */
    void Camera::setViewMatrix()
    {
        D3DXMatrixLookAtLH( &m_viewMatrix, &m_cameraPosition, &m_cameraTarget, &m_cameraUp );
    }

    /*
    Sets the projection transformation matrix.
    */
    void Camera::setProjectionMatrix()
    {
        D3DXMatrixPerspectiveFovLH( &m_projectionMatrix, m_fieldOfView, m_aspectRatio, m_nearPlane, m_farPlane );
    }

    D3DXMATRIXA16* Camera::getViewMatrix() { return &m_viewMatrix; }
    D3DXMATRIXA16* Camera::getProjectionMatrix() { return &m_projectionMatrix; }
    D3DXVECTOR3* Camera::getCameraPosition() { return &m_cameraPosition; }
    D3DXVECTOR3* Camera::getCameraTarget() { return &m_cameraTarget; }
    D3DXVECTOR3* Camera::getCameraUp() { return &m_cameraUp; }
    float Camera::getNearClipPlane() { return m_nearPlane; }
    float Camera::getFarClipPlane() { return m_farPlane; }
    float Camera::getFieldOfView() { return m_fieldOfView; }
    void Camera::setNearClipPlane(float value) { m_nearPlane = value; }
    void Camera::setFarClipPlane(float value) { m_farPlane = value; }
    void Camera::setFieldOfView(float value) { m_fieldOfView = value; }
}
// end of 3DGraph1
[/CODE]

Any tips? Thanks very much!
[Edit] I'll give the higher-resolution timer a try and see what happens when I time the different sections of code. If it's a performance thing, I expect to find a group of functions that oddly take ages every now and again. Hopefully the timer can show this.
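One thing worth double-checking in updateCameraVectors(): the y component uses cos(vertical), but the x/z components aren't scaled by sin(vertical), so the normalised direction isn't a standard spherical mapping and the vertical angle will behave oddly away from the equator. For comparison, a conventional spherical parameterisation in plain C++ (axis conventions assumed to match the description above - vertical measured down from +Y, horizontal around it):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Standard spherical coordinates for a centred camera. The direction is already
// unit length by construction, so no separate normalise step is needed before
// scaling by the radius.
Vec3 centredCameraPosition(float horizontal, float vertical, float radius) {
    return Vec3{
        radius * std::sin(vertical) * std::sin(horizontal),
        radius * std::cos(vertical),
        radius * std::sin(vertical) * std::cos(horizontal)
    };
}
```

With this form, vertical = pi/2 puts the camera on the equator and vertical near 0 or pi moves it smoothly towards the poles, independently of the horizontal angle.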
  9. Hi folks, I'm still in the process of debugging this problem but I've got no idea what the cause really is. I have a simple DirectX 11 simulation running - 3 balls that spin. It mostly runs incredibly fast, as you'd expect - however, every second or two it seems to stick or jump, varying from quite small to very large jumps. I'm not aware of any extra work being done in between these frames - generally, after initialisation the workload is constant. I just wanted to brainstorm a few potential causes. The only thing I thought it might be is that I'm writing a parallel game engine that uses a deferred shader and deferred DirectX contexts. My engine is broken up into tasks using the Intel Threading Building Blocks API: I have an input, asset, update and graphics task. The update task relies on both the input and asset tasks, and the graphics task relies on the update task. I've profiled in sampling and instrumentation mode in case there's some mystery blocking function, but nothing stands out. DXUT seems to think I'm running at a consistent 500 frames per second - but I added a few getTickCount() calls and I can see spikes appearing in the frame time. Any help or ideas with this would be fantastic!!
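One cheap way to make those spikes visible, given that an average-FPS counter will happily hide them: record per-frame times and compare the mean against the worst frame. A small sketch (the names are mine, not DXUT's):

```cpp
#include <algorithm>
#include <vector>

struct FrameStats {
    double averageMs; // mean frame time - roughly what an FPS counter reports
    double worstMs;   // the spike the FPS counter hides
};

// Summarise a recorded list of per-frame times in milliseconds.
FrameStats analyseFrameTimes(const std::vector<double>& frameMs) {
    FrameStats stats{0.0, 0.0};
    for (double ms : frameMs) {
        stats.averageMs += ms;
        stats.worstMs = std::max(stats.worstMs, ms);
    }
    if (!frameMs.empty()) {
        stats.averageMs /= static_cast<double>(frameMs.size());
    }
    return stats;
}
```

A run that averages 2 ms per frame (500 fps) can still contain a 50 ms hitch, which is exactly the "consistent 500 frames per second but visible jumps" symptom described above.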
  10. Hi folks, I've been looking for a quick and easy way to load in a model of a person and animate them for a simulator I'm working on. Before I forget - I'm using DirectX 11. I've been looking at the SDKMesh class in DXUT as it seems the fastest, simplest way for me to achieve this, but I'm a little confused over the vertex format stuff. I've looked at a few different examples using the class, and each has a different, seemingly global, vertex format, yet they all use the SDKMesh class. E.g. the old multi-animation example only has position and texture coordinates, but in the DXUT tutorials they add the normals as well. I'm new to dealing with meshes loaded from file in DirectX - I usually hard-code spheres for examples. Does the SDKMesh class deal with the vertex format separately, or is the vertex format embedded in the .sdkmesh file so that you have to mirror it in the code? I notice that the Intel multi-animation example using Threading Building Blocks includes some animation data as part of its vertex format, and yet in the folder structure I can see an sdkmesh file - presumably the actual mesh - and an sdkmesh_anim file - which I assume is the animation data. I'm just wondering how they all deal with this. This isn't something I want to program myself at the moment, nor do I have the time - my uni deadline is fast approaching :s Basically I need a way to call load on something to get a mesh, call animate on it to make it move, and then I'll move the object around the world as it animates. Lazy, I know, but animation is not the theme of my project - I'd just like it in there for completeness and an added facet of realism. Sorry it's a bit disjointed - I'm not really sure how to ask for what I need :s Hopefully you'll understand. Thanks very much!!
  11. No worries, that's why I put in all the detail [img]http://public.gamedev.net//public/style_emoticons/default/smile.png[/img] Those are some good tutorials! I'd also have a look through the tutorials available on [url="http://www.directxtutorial.com/Tutorial11/tutorials.aspx"]directxtutorial.com[/url]; they give some nice descriptions of what everything does, an explanation of the parameters, and why they've done it that way. You have to pay for some of them, but I did perfectly fine with the free ones.
  12. Hi! I see this thread is getting a bit old now, so I was wondering what you settled on and how the project went. I'm in the same situation you were. I've already done a successful project using raw DX11, but it didn't require any sort of real GUI implementation. I'm about to start the main development on a large university project that is going to require quite a lot of GUI work, so whatever I use will be built up from scratch. I'm confident I could abstract DXUT or raw DX11 and develop these things myself, but I'm wary of the time overhead. I guess I'm wondering whether you used DXUT, and what performance overhead there was in some of the libraries. Also, how restrictive was it? How intuitive would it be to add further GUI elements? Has anyone else got experience with both DXUT and raw DX11 who could give me a comparison? Thanks very much
  13. No worries! Thought I'd better make sure I document my solution, as I know how helpful these boards are! Update - PIX also suffers the same problems! Make sure you make similar alterations in the NVidia control panel for PIX as we did above: either add a new profile for the PIX executable or modify the global settings. Make sure you have closed PIX while making these changes - or at least restart it after you've made them. Note that your application will run fine while running your experiment, but when you attempt to inspect the rendering and debug the pixels afterwards, it will constantly fail, as PIX isn't able to create the feature level 11 devices the way the application did. I assume this will be the same for all of the DX utilities, as I can't see them in the profile list in the NVidia control panel.
  14. I've found some more information on this problem. Like I said earlier, some machines - especially laptops - have a second low-power graphics chip to minimise power consumption when high performance isn't needed. NVidia's Optimus technology is in charge of choosing between them on NVidia hardware, and that is what was interfering with my ability to create a feature level 11 device. Optimus is apparently triggered whenever DirectX, DXVA or CUDA calls are made. You can also create an application profile via the NVidia Control Panel which targets an executable and enables/disables features according to what you set. Most games register with NVidia and add themselves to the profile database, which is distributed as part of Optimus. The DirectX SDK has several of these profiles, which is why the samples can see and make use of the GeForce card and why I couldn't. I'm not sure about Optimus triggering itself when it detects DirectX... as then I wouldn't have had this problem in the first place. It seems a temperamental utility at present. So anyway - I've added my own profile to the NVidia control panel to enable the graphics card when my application starts, and reset the global settings back to auto-selecting the graphics adapter (just so I don't forget to switch it back off and then wonder where all my battery is going...), and everything works fine. I've found a link to the [url="http://www.nvidia.com/object/LO_optimus_whitepapers.html"]white paper[/url] explaining this - pages 15, 16 and 17 are the ones to read on this issue. Thanks again for your help with this matter!!