Icebone1000

DX11
[DX11] UpdateSubresource - What am I missing?

13 posts in this topic

I'm trying to change the mesh in a vertex buffer using UpdateSubresource. The layout is the same, everything is the same, I'm just trying to change the vertices (from a ball to a cube, let's say). Since everything is the same, I thought a call to UpdateSubresource followed by IASetVertexBuffers would be fine, but nothing happens; the ball is still being rendered... So I'm probably missing some concepts. No warnings or errors are displayed in the output. Some code: the buffer I create at first, which works fine:
D3D11_BUFFER_DESC buffdesc = {0};
buffdesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
buffdesc.ByteWidth = iSizeOfpV_p; //sizeof( vposnormaltex )*24
buffdesc.Usage = D3D11_USAGE_DEFAULT;

D3D11_SUBRESOURCE_DATA subres = {0};
subres.pSysMem = pV_p; //vertex array (ball)

if( pDevice_p->CreateBuffer( &buffdesc, &subres, &pVBuff_p ) != S_OK ) return E_FAIL;

Then I create the input layout for the input-assembler stage; this works fine as well.
//Create the input layout (based on the desc passed as param)
D3DX11_PASS_DESC passdesc = {0};
pEPass_p->GetDesc( &passdesc );

donne( pInputLayout_p );

if( pDevice_p->CreateInputLayout( InputElmDesc_p, NumElm,
                                  passdesc.pIAInputSignature, passdesc.IAInputSignatureSize,
                                  &pInputLayout_p ) != S_OK ){

	for( UINT i=0; i<NumVB; i++ ) donne( pVBuff_p[i] );
	for( UINT i=0; i<NumIB; i++ ) donne( pIBuff_p[i] );

	return E_FAIL;
}

//Bind the vertex buffers to the IA
//(note: the offsets argument should point to NumVB offsets, one per buffer)
UINT offset = 0;
pDIContext_p->IASetVertexBuffers( 0, NumVB, pVBuff_p, strides, &offset );

//Set the input layout on the input assembler
pDIContext_p->IASetInputLayout( pInputLayout_p );

//Set the primitive type
pDIContext_p->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST );
Now I'm trying to just update the vertex buffer so I can display other stuff. I'm doing this:
//Update the vertex buffer with new data:
D3D11_BOX bla; ZeroMemory( &bla, sizeof(D3D11_BOX) );
pDIContext_p->UpdateSubresource( pVBuff_p, 0, &bla, pV_p, 0, 0 );

//Bind the vertex buffer to the input assembler
UINT offset = 0;
pDIContext_p->IASetVertexBuffers( 0, 1, &pVBuff_p, &strides[0], &offset );
pVBuff is the same vertex buffer used for the ball; it's not a bare ID3D11Resource interface, it's an ID3D11Buffer interface. Like I said, nothing happens. What is going on?
---
Please run this in a debugger with a debug device, then step through the call to UpdateSubresource() and see what debug messages occur. That should tell us what the problem is.

And your D3D11_BOX is all zeros, which should mean copy nothing. To copy the entire contents you should pass NULL for the box.
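For reference, a minimal sketch of that whole-buffer update, reusing the variable names from the original post (the pitches can be left at 0, since a buffer has only one row and one slice):

//NULL box: write the whole subresource with no offset; the source
//must supply at least ByteWidth bytes.
pDIContext_p->UpdateSubresource( pVBuff_p, 0, NULL, pV_p, 0, 0 );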
---
Quote:
Original post by DieterVW
Please run this in a debugger with a debug device, then step through the call to UpdateSubresource() and see what debug messages occur. That should tell us what the problem is.

And your D3D11_BOX is all zeros, which should mean copy nothing. To copy the entire contents you should pass NULL for the box.


1. I'm already running with a debug device.
2. NULL for the D3D11_BOX results in an access violation.
Should I set the box fields to 1? I just tried that; still nothing happens.
-edit-
Just checked the SDK docs for the box:
A box that defines the portion of the destination subresource to copy the resource data into. Coordinates are in bytes for buffers and in texels for textures. If NULL, the data is written to the destination subresource with no offset. The dimensions of the source must fit the destination.

So why am I getting an access violation when passing NULL? Is it because my cube has fewer vertices than the ball? How should I set up this box for the cube?
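As an aside: with a NULL box, UpdateSubresource writes the destination's full ByteWidth and reads that many bytes from the source, so a cube array smaller than the ball buffer would be read past its end, which matches the access violation. And a zeroed box is empty (right == left), which is why nothing copies. A sketch of a box covering only the cube's bytes, with pCubeVerts and iCubeBytes as placeholder names:

//For buffers the box is measured in bytes along X; the Y and Z
//extents must span exactly [0,1).
D3D11_BOX box;
box.left  = 0;           //byte offset where the write starts
box.right = iCubeBytes;  //one past the last byte to write
box.top   = 0; box.bottom = 1;
box.front = 0; box.back  = 1;
pDIContext_p->UpdateSubresource( pVBuff_p, 0, &box, pCubeVerts, 0, 0 );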
---
Maybe the D3D debug layer has a bug. Try running without the debug layer and see if NULL works then.
---
Very weird. I tested a bunch of times: sometimes it works, most of the time it doesn't, but when it does work, it keeps working for as long as I don't quit the application. If I quit and restart, it will (probably) stop working again...
This happens on both the debug and normal layers, whether or not the box is NULL, and whether or not the row pitch is zero...

By the way, can someone explain what should go in the row pitch, since my buffer is a vertex buffer?
---
Just curious, are you doing this for some sort of smooth vertex animation or morphing?

And this behavior sounds a bit more like the use of uninitialized memory. Are you saying that your app always starts out not working, then randomly starts working and stays working until you restart?
---
I'm doing this because I want to display any number of meshes, and all of them have the same layout/format. Isn't that a reason to use UpdateSubresource?

And no, it doesn't start working mid-run; either it works or it doesn't, never both in the same execution.

I already asked something like this before. The thing is, since there's a limit on how many buffers you can use, you have to reuse the same buffer to display lots of stuff. That's all I'm trying to do: reuse the buffer with a new mesh whenever I want. Since it's the same format/layout, I'm trying to use UpdateSubresource (and not Map).

I'm also not putting everything into one buffer at once because I want each mesh to have its own world transformation...

Is there any mistake in this concept? That's how I understand the pipeline... (Displaying just one mesh is easy; now I want an entire scene.)
---
I would use UpdateSubresource only to modify a small part of a buffer while maintaining the data surrounding it, for instance deforming part of a mesh or splatting a texture. Using UpdateSubresource may cause the driver to allocate temporary space to store your data, due to dependencies on the resource from pending commands in the GPU command buffer. Updating the same location in a resource several times a frame in this way will result in worst-case performance. UpdateSubresource can perform very well if the driver can schedule the transfer immediately, that is, if there are no pending commands (at least two frames' worth) that require the data you're trying to overwrite. There's not much you can do about the first. For the second, you can make the determination based on your algorithm and how long ago the data was last used, or keep track of dependencies by using a query to determine when the GPU is finished with a task.
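A rough sketch of that query idea, assuming an event query is issued right after the last draw that reads the region:

D3D11_QUERY_DESC qd = { D3D11_QUERY_EVENT, 0 };
ID3D11Query* pQuery = NULL;
pDevice_p->CreateQuery( &qd, &pQuery );
// ...issue the draw calls that read the buffer region...
pDIContext_p->End( pQuery );
// Later, before overwriting the region: S_OK with done == TRUE means
// the GPU has executed past End(), so the region is safe to reuse.
BOOL done = FALSE;
if( pDIContext_p->GetData( pQuery, &done, sizeof(done), 0 ) == S_OK && done ){
	//safe to overwrite the region
}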

You can create quite a number of vertex buffers, but creating a couple of large ones and then packing them with data is certainly best. You don't want to upload the same data more than once if that's at all possible; you want to limit the amount of data that has to be moved to the GPU each frame in order to maximize performance. If you have to stream objects in and out, then you'll have to work out a way to consolidate or fill holes in the resource.

What you're describing here sounds like a good fit for using Map with a dynamic vertex buffer. Dynamic buffers are designed to provide the fastest path for getting large chunks of new data to the GPU. They also avoid the need for temporary storage allocation and let the GPU decide the best scheduling for the transfer. Recommendations vary depending on many factors, so I'll just suggest a few. But the key here is to use Map in append mode.

Simple option:
Once data is on the GPU it can stay there, since we're not worried about running out of space. Allocate a very large dynamic buffer (created as in the sketch after these options) and just append new models to it as they are needed. This approach means you won't use DISCARD, since that would cause previously written data in the buffer to be lost. Drawing will require you to know the offset of each model in the buffer.

More complicated:
Objects need to be streamed in and have a variable lifetime on the GPU. In this case I would use the dynamic vertex buffer to get data to the GPU, and then do a GPU copy of the new data into a very large default resource in order to keep it long term. Drawing will still require you to know the offsets into the vertex buffer for each model. The same offsets can also be used to keep track of free spaces as they open up. Depending on how much streaming you end up doing, you will probably have to consolidate the data occasionally to prevent fragmentation. Make sure that the dynamic buffer is large enough to contain all the data you want to stream over the course of at least 2-4 frames, so that you don't have to call DISCARD too often.
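A minimal sketch of creating such a dynamic buffer, with maxBytes as an assumed budget; note the dynamic usage and CPU write flag, which the default-usage buffer in the original post doesn't have:

//One large dynamic vertex buffer that models get appended into.
D3D11_BUFFER_DESC desc = {0};
desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
desc.ByteWidth      = maxBytes;               //assumed total budget
desc.Usage          = D3D11_USAGE_DYNAMIC;    //required for Map()
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; //required for Map()
ID3D11Buffer* pBigVB = NULL;
if( pDevice_p->CreateBuffer( &desc, NULL, &pBigVB ) != S_OK ) return E_FAIL;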
---
Dude, is this Map so much more complex than the old Lock, or am I getting dumber with experience?

Didn't the old Lock method return a pointer to a pointer to the data, so you could update the data just by changing the address pointed to (since it was a pointer to a pointer, not a pointer to the data)?

Now you just can't do that? Do I have to update all the data pointed to, one element at a time? I hope I'm wrong (probably).

I mean, I can't do this:

//Wrong: this only overwrites the pData member of the local struct;
//the buffer's memory is never touched.
D3D11_MAPPED_SUBRESOURCE newdata_map;
ZeroMemory( &newdata_map, sizeof(D3D11_MAPPED_SUBRESOURCE) );
if( pDIContext_p->Map( pVBuff_p, 0, D3D11_MAP_WRITE_DISCARD, NULL, &newdata_map ) != S_OK ) return E_FAIL;
newdata_map.pData = pV_p;
pDIContext_p->Unmap( pVBuff_p, 0 );



nor this (this copies just index [0] of the array, right?):

//Only copies the first element; the rest of the array is never written.
D3D11_MAPPED_SUBRESOURCE newdata_map;
ZeroMemory( &newdata_map, sizeof(D3D11_MAPPED_SUBRESOURCE) );
if( pDIContext_p->Map( pVBuff_p, 0, D3D11_MAP_WRITE_DISCARD, NULL, &newdata_map ) != S_OK ) return E_FAIL;
vposnormaltexmaterial* pData_SRC = (vposnormaltexmaterial*)pV_p;
vposnormaltexmaterial* pData_DST = (vposnormaltexmaterial*)newdata_map.pData;
*pData_DST = *pData_SRC;
pDIContext_p->Unmap( pVBuff_p, 0 );



Only this seems to work:

//This works: copy the whole vertex array into the mapped memory.
D3D11_MAPPED_SUBRESOURCE newdata_map;
ZeroMemory( &newdata_map, sizeof(D3D11_MAPPED_SUBRESOURCE) );
if( pDIContext_p->Map( pVBuff_p, 0, D3D11_MAP_WRITE_DISCARD, NULL, &newdata_map ) != S_OK ) return E_FAIL;
memcpy( newdata_map.pData, pV_p, iRowPitch_aka_Width );
pDIContext_p->Unmap( pVBuff_p, 0 );




Am I doing what it's supposed to do? (I'm using DISCARD now to make my life easier.)
Isn't that much less efficient than the Lock method?
---
The pointer returned when calling Map() is the location of the resource on the CPU side. You should memcpy or write your vertex data directly to this memory (making sure not to exceed the resource size) and then Unmap it.

Mapping a dynamic resource many times a frame with D3D11_MAP_WRITE_NO_OVERWRITE will not incur any additional cost. In this scenario you just want to append data to the buffer. Data already in the buffer is still usable by the GPU and is unaffected by the call to Map (no locking involved). Of course, you have to keep track of where the current 'end' of the buffer is. Mapping with D3D11_MAP_WRITE_DISCARD means that a new buffer will be allocated for you (also no locking involved).

Dynamic resources have a CPU-side space and a GPU-side space. The CPU-side resource makes mapping them cheap for writing new data. Later on, when you go to use the data, the driver will figure out that it needs to do a transfer to the GPU and will do so in the most efficient way possible. The driver will also ensure that it transfers only the data that is actually needed to draw the model. So essentially the driver keeps track of which parts of the dynamic buffer are valid and manages a few things for you.

The transfer is one-way; calling DISCARD invalidates everything in the buffer and essentially resets the driver's information about the resource.
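A hedged sketch of that append pattern, with pBigVB/maxBytes from the earlier sketch, pNewVerts/newBytes as placeholder names, and curOffset assumed to persist between calls:

//Append newBytes of vertex data to the end of the dynamic buffer.
//NO_OVERWRITE promises not to touch data the GPU may still be reading;
//DISCARD hands us a fresh buffer once we run out of room.
D3D11_MAP mapType = D3D11_MAP_WRITE_NO_OVERWRITE;
if( curOffset + newBytes > maxBytes ){
	mapType   = D3D11_MAP_WRITE_DISCARD; //start over in a fresh buffer
	curOffset = 0;
}
D3D11_MAPPED_SUBRESOURCE mapped;
if( pDIContext_p->Map( pBigVB, 0, mapType, 0, &mapped ) != S_OK ) return E_FAIL;
memcpy( (BYTE*)mapped.pData + curOffset, pNewVerts, newBytes );
pDIContext_p->Unmap( pBigVB, 0 );
UINT vbOffset = curOffset; //where this model starts, needed for drawing
curOffset += newBytes;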
---
The d3d11.h header has a set of #defines at the top indicating all of the limits/bounds of the API. It contains D3D11_REQ_RESOURCE_SIZE_IN_MEGABYTES_EXPRESSION_A_TERM, which is 128 MB. You might be able to create something larger, but that depends on whether the driver wants to let you.
---
DieterVW, just to double-check: the best way to render a scene is to have big dynamic vertex/index buffers per input layout that are filled on the fly each frame, as opposed to storing the geometry on a per-mesh basis, right?
---
There are several things involved in answering the question of what is best.

Packing several models with the same layout into a single buffer, whether it is dynamic, immutable, or default, should yield some gains, since the buffer can stay bound to the pipeline across more draw calls and it fragments memory less (see the sketch below).

I only recommend dynamic buffers for model data that needs changing on a relatively frequent basis. You do not want to upload the same model data to the GPU every frame, so data that will persist for a while should be moved out of the dynamic buffer and into a default buffer.
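To illustrate drawing two meshes packed into one buffer (the vertex counts are placeholders, and SetWorldMatrix stands in for however per-mesh constants are set):

//Both meshes share one vertex buffer; each Draw starts at its mesh's
//base vertex, with the per-mesh world matrix set between the calls.
UINT stride = sizeof(vposnormaltexmaterial), offset = 0;
pDIContext_p->IASetVertexBuffers( 0, 1, &pBigVB, &stride, &offset );

SetWorldMatrix( ballWorld );              //hypothetical helper
pDIContext_p->Draw( ballVertexCount, 0 ); //ball occupies the front

SetWorldMatrix( cubeWorld );
pDIContext_p->Draw( cubeVertexCount, ballVertexCount ); //cube follows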

