
BornToCode

Member Since 17 Apr 2004
Offline Last Active Yesterday, 09:25 PM
-----

#5108896 DirectX11 Swap Chain Format

Posted by BornToCode on 13 November 2013 - 12:10 AM

 


So what about Vista or Windows 7 with DirectX 10/11, where flip sequential might not be available, or when using, say, discard as the presentation model?

It will use an additional BitBlt() to copy the DX surface to an intermediate DWM surface, which is slower. But the same principle holds: if the swap-chain format matches the screen format, this operation will be faster. When using the BitBlt mode, you also have more options for swap-chain formats.

 

This is not completely true for Windows 7. As long as your back buffer matches your front buffer and your swap chain is set to full screen, it will not BitBlt; it will flip the back buffer with the front buffer. The only times it BitBlts are when the swap chain is in windowed mode or when the swap chain's back buffer does not match the front buffer's width/height and format.
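The rule described above can be modeled as a small decision function. This is an illustrative sketch of the logic only, not actual DWM code; the struct and names are made up for the example.

```cpp
#include <string>

// Hypothetical description of a buffer; format is a stand-in for a DXGI format.
struct BufferDesc {
    int width, height;
    std::string format;   // e.g. "R8G8B8A8_UNORM"
    bool fullscreen;
};

enum class PresentPath { Flip, Blit };

// Model of the behavior above: Windows 7 only flips when the swap chain is
// full screen AND the back buffer matches the front buffer's size and format;
// otherwise it falls back to a blit.
PresentPath presentPath(const BufferDesc& back, const BufferDesc& front) {
    bool matches = back.width  == front.width  &&
                   back.height == front.height &&
                   back.format == front.format;
    return (back.fullscreen && matches) ? PresentPath::Flip : PresentPath::Blit;
}
```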




#5108407 Fullscreen Direct3D 11 multi device (gpu), multi swap chain per device rendering

Posted by BornToCode on 10 November 2013 - 11:41 PM

One way you can handle it is to call ChangeDisplaySettingsEx to change each monitor's screen resolution, then create a borderless window that is the same size as the display setting. This simulates exactly what DirectX does when you go full screen. At my job we have to handle four monitors in full screen, and that is how I did it.


#5099647 interpolating scalar instead of vector

Posted by BornToCode on 08 October 2013 - 11:54 AM

If you go down that route, it will interpolate across vertices instead of pixels. That basically means all the pixels within the same vertex bounds will have the same value.
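The difference can be sketched in plain C++: a smoothly interpolated scalar is blended with barycentric weights per pixel, while a flat (per-vertex) attribute gives every pixel of the triangle the provoking vertex's value. The struct and weights here are illustrative, not any particular shader's.

```cpp
struct Vert { float value; };

// Smooth: the scalar varies per pixel, weighted by barycentrics (w0,w1,w2).
float interpolated(Vert a, Vert b, Vert c, float w0, float w1, float w2) {
    return w0 * a.value + w1 * b.value + w2 * c.value;
}

// Flat: every pixel of the primitive gets the provoking vertex's value,
// so there is no variation across the triangle.
float flat(Vert a, Vert /*b*/, Vert /*c*/, float, float, float) {
    return a.value;
}
```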




#5098118 Improving Graphics Scene

Posted by BornToCode on 01 October 2013 - 01:11 PM

There are a lot of things that can be done in order to increase performance. The best optimization is to not draw at all :). What I basically mean by that is the less you draw, the better. One way to handle that is to use a scene-partitioning algorithm; depending on the type of game, there are many out there you can choose from. Another thing you can look into is LOD (level of detail): objects closer to the camera are drawn at full detail, while things in the distance are drawn with a lower amount of detail. The same concept can be applied to textures in the case of mipmapping. Those are just a few of the things you can do. Also make sure you use the correct data structures for what you are trying to do.
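The LOD idea above boils down to picking a mesh level from the camera distance. A minimal sketch, with made-up thresholds:

```cpp
// Distance-based LOD selection: nearer objects get the most detailed mesh
// (level 0). The cutoff distances are illustrative, not from the post.
int selectLod(float distance) {
    if (distance < 50.0f)  return 0;  // full detail
    if (distance < 150.0f) return 1;  // medium detail
    return 2;                         // low detail
}
```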

Hope that kind of gives you an idea. Happy coding :)




#5079680 API Wars horror - Will it matter?

Posted by BornToCode on 22 July 2013 - 04:06 PM

Most of the APIs are the same, but they do have some differences; for example, there is no such thing as pixel buffer objects in DirectX. Just wanted to throw my two cents in there. But at the end of the day, if you want to write a GL API layer that looks very similar to DirectX 11, it is not impossible, as this is something I am currently doing right now.




#5076025 How to make a map editor with cubes

Posted by BornToCode on 07 July 2013 - 09:21 PM

Build the editor into the engine, using the same rendering code for gameplay and editing, as well as the code that loads/saves maps.

Do not build the editor into the engine. Use the engine as the back end for the editor.




#5075506 How to make a map editor with cubes

Posted by BornToCode on 05 July 2013 - 12:16 PM

You know the grid size, so all you need to save is the information for each tile of the grid that is filled. I do not see what the confusion is.
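A minimal sketch of that idea: with a known grid size, only the filled tiles need to be stored, as a sparse map from (x, y) to a tile id. All names here are illustrative.

```cpp
#include <map>
#include <utility>

// Sparse tile storage: empty cells are never stored, so saving the map only
// writes the grid dimensions plus the filled entries.
struct TileMap {
    int width = 0, height = 0;
    std::map<std::pair<int,int>, int> filled;  // (x,y) -> tile id

    void set(int x, int y, int id) { filled[{x, y}] = id; }
    int  get(int x, int y) const {
        auto it = filled.find({x, y});
        return it == filled.end() ? 0 : it->second;  // 0 = empty
    }
};
```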




#5075318 I can't see anything?

Posted by BornToCode on 04 July 2013 - 03:04 PM

Run it through PIX if you are on Windows 7, or use the built-in graphics debugger in VS2012 Pro or above, to debug and see exactly what is getting set on the GPU side.




#5074018 HLSL5 tex2dproj equivalent?

Posted by BornToCode on 29 June 2013 - 05:49 PM

You can do the projection yourself by dividing each component by w, then use the resulting values as the UV coordinate in texture.Sample.
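The divide itself is trivial; here it is as a plain C++ stand-in for the HLSL (struct names are illustrative):

```cpp
struct Float4 { float x, y, z, w; };
struct Float2 { float x, y; };

// tex2Dproj equivalent: perform the perspective divide manually, then feed
// the result to Sample, e.g. tex.Sample(samp, projectUV(p)) in HLSL terms.
Float2 projectUV(Float4 p) {
    return { p.x / p.w, p.y / p.w };
}
```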




#5072121 Using D3D9 Functions and, HLSL

Posted by BornToCode on 22 June 2013 - 07:52 PM

Another thing to know is that resource sharing across devices is not possible: resources on an IDirect3DDevice9 cannot be shared with ID3D11Device resources. The only thing you can do is copy between them.




#5070112 skybox - issue with not working

Posted by BornToCode on 16 June 2013 - 12:47 AM

Do not start cursing at the system. That is why you have PIX; why don't you debug it in PIX to see where you are going wrong?




#5069452 how do i know the vertex coordinate of the mesh

Posted by BornToCode on 13 June 2013 - 10:36 AM

You can grab the FVF and test it with & D3DFVF_XYZ to see if the FVF has vertex position information. If it does, then that data usually resides at the beginning of the vertex memory block when you lock it for access.
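The check is a single bit test. A minimal sketch, with the D3DFVF_XYZ flag value (0x002, as defined in the d3d9 headers) reproduced locally so the snippet stands alone:

```cpp
#include <cstdint>

// Value of the position flag from the d3d9 FVF definitions.
constexpr uint32_t D3DFVF_XYZ = 0x002;

// True when the FVF declares untransformed position data, which then sits
// at the start of each vertex in the locked buffer.
bool hasPosition(uint32_t fvf) {
    return (fvf & D3DFVF_XYZ) != 0;
}
```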




#5067889 Index Buffer Object vs. Index Array

Posted by BornToCode on 06 June 2013 - 02:02 PM

You use the index buffer so that not only do you save memory by reusing vertices, you can also use index buffers as a way to split the mesh by material ID: if you have a mesh that uses multiple materials, you can break each section into its own index buffer while referencing the same vertex buffer. For example, say you wanted to draw a quad; you would basically need 6 vertices, but with an index buffer you only need 4 vertices, and the index buffer references which of those vertices to use to create the quad.

 

Wut? Whether the index data is stored in a buffer object or in an array in system memory, they'll both consume exactly the same amount of memory.

 

 

>>> My question is: what are (if any) the performance differences between storing index data in an array and using it to call glDrawElements?

The buffer version *should* be quicker (unless you are using an integrated graphics card that uses system memory instead of its own dedicated GDDR5 RAM). In general, prefer vertex buffer objects to the older vertex-array approach.

 

I think you are confused; the OP is not asking about storing the index data in the vertex buffer. He is talking about splitting his mesh into submeshes.




#5067757 Index Buffer Object vs. Index Array

Posted by BornToCode on 05 June 2013 - 08:29 PM

You use the index buffer so that not only do you save memory by reusing vertices, you can also use index buffers as a way to split the mesh by material ID: if you have a mesh that uses multiple materials, you can break each section into its own index buffer while referencing the same vertex buffer. For example, say you wanted to draw a quad; you would basically need 6 vertices, but with an index buffer you only need 4 vertices, and the index buffer references which of those vertices to use to create the quad.
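The quad arithmetic above can be checked directly. A sketch assuming a position-only vertex (12 bytes) and 16-bit indices; the winding of the index list is illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vertex { float x, y, z; };  // 12 bytes, position only

// Unindexed quad: two triangles, 6 vertices, 2 of them duplicates.
size_t unindexedBytes() { return 6 * sizeof(Vertex); }

// Indexed quad: 4 unique corners plus 6 small indices that reference them.
size_t indexedBytes() {
    std::vector<Vertex>   verts(4);                     // 4 unique corners
    std::vector<uint16_t> indices = {0, 1, 2, 2, 1, 3}; // two triangles
    return verts.size() * sizeof(Vertex) + indices.size() * sizeof(uint16_t);
}
```

With this layout the indexed version is 60 bytes against 72 unindexed, and the gap grows quickly for real meshes where most vertices are shared by several triangles.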




#5066685 Multi-threaded PBO Question

Posted by BornToCode on 01 June 2013 - 09:40 AM

I did a similar system like that in the past. In my background thread, I just decode the frame and store it in a buffer, then send an event to my main thread letting it know a buffer is available. That frame gets sent to the main thread along with the movie handle, which contains an array of PBOs, and the frame gets inserted into one of the PBOs. When it comes time to render, I figure out which frame the movie is on, grab the correct PBO, and apply that to the image and draw. Once the frame changes, I mark that PBO index as invalid, and when the thread sends the next event that PBO will be used again. So to summarize: you have two threads, one doing the decoding and the main thread uploading the decoded data to the PBOs; then in your render function you just draw from the correct PBO.
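The slot-recycling part of that scheme can be modeled without any GL at all. This is an illustrative sketch only: the vectors stand in for mapped PBOs, and real code would add a mutex/event between the two threads.

```cpp
#include <array>
#include <cstdint>
#include <utility>
#include <vector>

// One decoded frame waiting to be uploaded/drawn; 'valid' marks it in flight.
struct FrameSlot {
    std::vector<uint8_t> pixels;  // stands in for a mapped PBO
    bool valid = false;
};

struct FrameRing {
    std::array<FrameSlot, 3> slots;

    // Decoder thread: place a decoded frame in the first free slot and
    // return its index (this is what the event would signal to the main
    // thread). Returns -1 when every slot is still in flight.
    int push(std::vector<uint8_t> frame) {
        for (int i = 0; i < (int)slots.size(); ++i) {
            if (!slots[i].valid) {
                slots[i].pixels = std::move(frame);
                slots[i].valid = true;
                return i;
            }
        }
        return -1;
    }

    // Render thread: draw from a slot, then mark it invalid so the decoder
    // can reuse it on the next event.
    bool consume(int i) {
        if (i < 0 || !slots[i].valid) return false;
        // ... upload slots[i].pixels to the texture and draw ...
        slots[i].valid = false;
        return true;
    }
};
```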





