
Jason Z


#5111214 PIX Crash

Posted by Jason Z on 21 November 2013 - 09:48 PM

Which OS are you targeting and/or developing on?  Like Ravyne and MJP mentioned, the Graphics Debugger in VS2013, which is aimed at Store Apps, can be used to debug desktop apps too.  Depending on which platform you are targeting you will have to jump through different hoops, but in the end you will get a usable debugger.




#5111010 Design help on software rasterizer / renderer

Posted by Jason Z on 21 November 2013 - 09:55 AM

I would recommend implementing the shader portion in C++, as throwing your own language compiler into the mix doesn't seem like a useful addition in the beginning.  You could always add script-based shader programs later on.  Regarding your second question, you need to look at how D3D handles the stage boundaries - there are specific rules for each stage's input and output data.  You need to be able to mimic that in C++.

 

There are many ways to do that.  You could just accept a vector of attributes at the input of each stage, and map to those inputs based on the 'register' location within the vector.  Or you could use a template to define what your input and output structures will look like, and then have your shader program key off of members in your template parameters.  Does that make sense about how to get started?
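
For example, a rough sketch of the template approach might look something like this - the structure layouts and names here are purely illustrative, not from any particular codebase:

#include <vector>

// Illustrative input/output layouts for one stage of the pipeline.
struct VSInput  { float position[3]; float uv[2]; };
struct VSOutput { float position[4]; float uv[2]; };

// A stage is parameterized on its input and output vertex layouts,
// mirroring the fixed data contract D3D defines between stages.
template <typename TIn, typename TOut>
class IVertexShader
{
public:
    virtual ~IVertexShader() = default;
    virtual TOut Execute( const TIn& input ) const = 0;
};

// A concrete "shader program" keys off the members of the template
// parameters, much like an HLSL shader keys off its semantics.
class PassthroughVS : public IVertexShader<VSInput, VSOutput>
{
public:
    VSOutput Execute( const VSInput& in ) const override
    {
        VSOutput out{};
        out.position[0] = in.position[0];
        out.position[1] = in.position[1];
        out.position[2] = in.position[2];
        out.position[3] = 1.0f;
        out.uv[0] = in.uv[0];
        out.uv[1] = in.uv[1];
        return out;
    }
};

// The front end of the rasterizer then just runs the stage over its input.
std::vector<VSOutput> RunVertexStage( const IVertexShader<VSInput, VSOutput>& vs,
                                      const std::vector<VSInput>& vertices )
{
    std::vector<VSOutput> results;
    results.reserve( vertices.size() );
    for ( const VSInput& v : vertices )
        results.push_back( vs.Execute( v ) );
    return results;
}

The vector-of-attributes approach is the same idea, except the input and output types become a vector indexed by 'register' slot instead of named struct members.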




#5109093 Render To Texture: WTF results

Posted by Jason Z on 13 November 2013 - 08:12 PM

The alpha value that ends up in the scene is either based on your clear color (which I think you mentioned is alpha = 0) or the result of a drawing operation, in which the color (including the alpha) would be included in the pixel shader output.  So unless you are clearing it to alpha = 1, you should expect the value to be zero anywhere you didn't render something.  You can enable/disable alpha blending in the output merger's blend state - it is definitely still turned on if you get transparent pixels when alpha is 0!
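
As a sketch of what that looks like, assuming you already have a device, context, and render target view (the function name here is just illustrative):

#include <d3d11.h>

// Sketch: explicitly disable blending and clear alpha to 1, so nothing you
// render (or leave cleared) ends up transparent when the texture is used later.
void ConfigureOpaqueOutput( ID3D11Device* pDevice,
                            ID3D11DeviceContext* pContext,
                            ID3D11RenderTargetView* pRTV )
{
    D3D11_BLEND_DESC blendDesc = {};
    blendDesc.RenderTarget[0].BlendEnable           = FALSE;
    blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* pNoBlend = nullptr;
    if ( SUCCEEDED( pDevice->CreateBlendState( &blendDesc, &pNoBlend ) ) )
    {
        pContext->OMSetBlendState( pNoBlend, nullptr, 0xFFFFFFFF );
        pNoBlend->Release();  // the pipeline holds its own reference
    }

    // Clear the render target with alpha = 1 instead of 0.
    const float clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
    pContext->ClearRenderTargetView( pRTV, clearColor );
}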


In this one, the cube looks like it's leaning downwards: http://tinypic.com/view.php?pic=2r59qwy&s=5#.UoOE1vnwmt8
And in this one, the cube looks like it is squished (view from the side of the quad): http://tinypic.com/view.php?pic=2hxpykx&s=5#.UoOF4vnwmt8

Those two images actually look correct to me.  Remember what you are doing here - you render the scene into one image, then you take that image and draw it in another coordinate space.  This is akin to having a television in your room and moving around to change the angle you are viewing it from.  Even though the television picture isn't changing, the shape it appears to have from each vantage point will change and distort.




#5107555 Multiple shaders in the same HLSL file (no effect framework)

Posted by Jason Z on 06 November 2013 - 04:58 PM

The constants used by each individual shader should be independent of one another.  For example, if you have 10 constants, and only 5 of them are shared among shaders and the other five are mutually exclusive, then you would get less than 10 (but greater than or equal to 5) constants required for each shader.

 

With that said, I would highly recommend against doing this.  You don't gain anything by putting them into one file, and it adds complexity when you want to dig through a shader.  This is somewhat analogous to putting all of the classes in your C++ code into one header and one CPP file - the same arguments against doing this apply to your idea here.

 

Instead I would group all of the shaders that will be used together (i.e. one pipeline configuration) into a single file.  This is pretty easy to manage, and if you have shared shaders (for example, multiple pipeline configurations using the same vertex shader) then you can always put that shared shader into its own file and use an include mechanism to bring it into the overall file.
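
As a rough sketch of what that looks like on the compile side, each shader in the file is simply compiled by entry point - the file and entry point names here are made up for illustration:

#include <d3dcompiler.h>
#pragma comment( lib, "d3dcompiler.lib" )

// Sketch: one file per pipeline configuration, with each shader compiled by
// entry point.  "Lighting.hlsl", "VSMain" and "PSMain" are illustrative names.
HRESULT CompilePipelineShaders( ID3DBlob** ppVSByteCode, ID3DBlob** ppPSByteCode )
{
    ID3DBlob* pErrors = nullptr;

    HRESULT hr = D3DCompileFromFile( L"Lighting.hlsl", nullptr,
                                     D3D_COMPILE_STANDARD_FILE_INCLUDE,
                                     "VSMain", "vs_5_0", 0, 0,
                                     ppVSByteCode, &pErrors );
    if ( SUCCEEDED( hr ) )
    {
        hr = D3DCompileFromFile( L"Lighting.hlsl", nullptr,
                                 D3D_COMPILE_STANDARD_FILE_INCLUDE,
                                 "PSMain", "ps_5_0", 0, 0,
                                 ppPSByteCode, &pErrors );
    }

    if ( pErrors )
        pErrors->Release();

    return hr;
}

A shared vertex shader in its own file can then be pulled in with a normal #include directive, and passing D3D_COMPILE_STANDARD_FILE_INCLUDE lets the compiler resolve that include relative to the source file.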




#5102628 Vertex Buffer

Posted by Jason Z on 19 October 2013 - 08:36 AM

This gets the standard answer: It depends.  You need to try it out both ways and see which one works better for your situation.  If you are just using this for your particular laptop, then your driver, GPU, CPU, memory speed, etc...  all play into the equation.  You are essentially trading many buffer setting operations for many buffer data copy operations, and that may or may not help in a given system.

 

My advice is to design your renderer so that you can swap in and out the implementation easily - don't back yourself into a corner by hardcoding your rendering routines!




#5102006 Can't pass enumerated adapter in D3D11CreateDeviceAndSwapChain

Posted by Jason Z on 16 October 2013 - 07:10 PM

If you are passing an enumerated adapter, then you should use D3D_DRIVER_TYPE_UNKNOWN for the driver type.  On the other hand, if you are passing nullptr for the adapter, then you can pass the desired driver type as you have shown above.
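
Roughly like this, as a sketch - the swap chain description and feature levels are assumed to be set up elsewhere, and the function name is just illustrative:

#include <d3d11.h>
#pragma comment( lib, "d3d11.lib" )

// Sketch: with a non-null adapter the driver type must be D3D_DRIVER_TYPE_UNKNOWN.
HRESULT CreateDeviceOnAdapter( IDXGIAdapter* pAdapter,
                               const DXGI_SWAP_CHAIN_DESC& swapDesc,
                               IDXGISwapChain** ppSwapChain,
                               ID3D11Device** ppDevice,
                               ID3D11DeviceContext** ppContext )
{
    const D3D_FEATURE_LEVEL levels[] = { D3D_FEATURE_LEVEL_11_0 };

    return D3D11CreateDeviceAndSwapChain(
        pAdapter,                    // the enumerated adapter...
        D3D_DRIVER_TYPE_UNKNOWN,     // ...so the driver type must be UNKNOWN
        nullptr, 0,                  // no software module, no creation flags
        levels, 1,                   // requested feature levels
        D3D11_SDK_VERSION,
        &swapDesc,
        ppSwapChain, ppDevice, nullptr, ppContext );
}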

 

You can check out the RendererDX11::Initialize(...) function from Hieroglyph 3 for an example. 

 

Does that help your initialization call?




#5100389 How do you implement batching in DirectX?

Posted by Jason Z on 10 October 2013 - 07:39 PM

The mesh data all has to be in a single Input Assembler configuration, and then the draw call can span multiple meshes.  This typically means that you need to allocate one large buffer and fill it with pre-transformed vertex data, or create an array of transformation matrices and pass it through a constant buffer.  The performance will depend on what else you are doing, so the best way is to try both implementations and see which one works best in your configuration.  It may also turn out that multiple draw calls are simply faster!
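
As a sketch of the "one large buffer" option, you could pre-transform each mesh on the CPU and pack the results into a single dynamic vertex buffer - the names and vertex layout here are purely illustrative:

#include <d3d11.h>
#include <DirectXMath.h>
#include <vector>
using namespace DirectX;

struct Vertex { XMFLOAT3 position; };

// Sketch: every mesh is transformed on the CPU and packed into one dynamic
// vertex buffer, so a single Draw call can cover all of them.  Assumes
// pBatchVB was created with D3D11_USAGE_DYNAMIC, D3D11_CPU_ACCESS_WRITE and
// enough room for all of the meshes.
void FillBatchBuffer( ID3D11DeviceContext* pContext,
                      ID3D11Buffer* pBatchVB,
                      const std::vector<std::vector<Vertex>>& meshes,
                      const std::vector<XMFLOAT4X4>& worldMatrices )
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if ( FAILED( pContext->Map( pBatchVB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped ) ) )
        return;

    Vertex* dst = static_cast<Vertex*>( mapped.pData );
    for ( size_t m = 0; m < meshes.size(); ++m )
    {
        XMMATRIX world = XMLoadFloat4x4( &worldMatrices[m] );
        for ( const Vertex& v : meshes[m] )
        {
            XMStoreFloat3( &dst->position,
                           XMVector3Transform( XMLoadFloat3( &v.position ), world ) );
            ++dst;
        }
    }

    pContext->Unmap( pBatchVB, 0 );
}

The matrix-array variant keeps the vertex data static and instead uploads the per-mesh matrices to a constant buffer, with the vertex shader indexing into that array.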




#5098165 Improving Graphics Scene

Posted by Jason Z on 01 October 2013 - 06:52 PM

I don't think there is a general answer to this question actually - it really depends on the type of scene you are going to be rendering.  In many cases, I think having good, high-quality textures created by an artist is probably the single biggest thing you can do.  After that, the way that you light the scene is probably going to be the next most important thing.  If your lighting solution can create realistic shadows and semi-plausible global illumination estimation, then you will be pretty much as good as you can get without spending serious amounts of time and effort.

 

So that is my general list: 1. Textures, 2. Shadows, 3. GI approximation.  If you can nail all three, I think you will be doing pretty well.




#5095039 Direct3D 11 Questions

Posted by Jason Z on 18 September 2013 - 04:08 PM


I understand what you are saying and I just got the resource to create based on your suggestions.  My reference material is a book titled "Practical Rendering & Computation with Direct3D 11" by Zink, Pattineo and Hoxley and in their chapter on D3D11 resources they have a table where they describe various GPU/CPU access for each usage type.  For the staging type they put GPU/CPU access as full read/write.  This led me to assume I could bind a buffer with CPU read/write access to the pipeline.  However based on what you are saying (and what I just read through again on MSDN) a staging only has CPU read/write and must copy from another buffer (in my case default probably) which can bind to the pipeline.  I set my bind flag to 0 and the resource created properly.

Actually, MJP is the Pettineo and I'm the Zink from your list of authors :)  Can you please specify more precisely where in the book something is unclear about the resource usage?  If it would help others, then I can add a mention of something like that on the Hieroglyph 3 codeplex page.

 

Regarding the mouse detection, how will you set the boolean value from the pixel shader?  Are you writing to a render target and then you would evaluate that render target later?  I would think that you could achieve this more efficiently with occlusion queries, although both techniques would trigger a pipeline stall and force a CPU/GPU synch point for each test.  Even so, I think the query would give you what you want without too much trouble - take a look at them!
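
As a sketch of the query approach - the actual draw call is left out, and spinning on GetData like this is exactly the CPU/GPU sync point I mentioned:

#include <d3d11.h>

// Sketch: draw the object under the cursor with an occlusion query active,
// then read back how many samples passed the depth test.
bool WasObjectVisible( ID3D11Device* pDevice, ID3D11DeviceContext* pContext )
{
    D3D11_QUERY_DESC desc = {};
    desc.Query = D3D11_QUERY_OCCLUSION;

    ID3D11Query* pQuery = nullptr;
    if ( FAILED( pDevice->CreateQuery( &desc, &pQuery ) ) )
        return false;

    pContext->Begin( pQuery );
    // ... issue the draw call(s) for the object being tested here ...
    pContext->End( pQuery );

    UINT64 samplesPassed = 0;
    while ( pContext->GetData( pQuery, &samplesPassed, sizeof( samplesPassed ), 0 ) == S_FALSE )
    {
        // Wait until the GPU has reached the query.
    }

    pQuery->Release();
    return samplesPassed > 0;
}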




#5094255 AppendStructuredBuffer / ConsumeStructuredBuffer element count

Posted by Jason Z on 15 September 2013 - 11:03 AM

Unfortunately no, there isn't an intrinsic that you could use in this manner.  However, you can use the CopyStructureCount method to copy the value into a constant buffer which you could then use.
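
A minimal sketch of that, assuming you already have the UAV of your append/consume buffer (the function name is just illustrative):

#include <d3d11.h>

// Sketch: copy the hidden counter of an append/consume buffer into a small
// buffer that can then be bound as a constant buffer (or copied to a staging
// buffer if you want to read it back on the CPU).
ID3D11Buffer* CopyAppendCount( ID3D11Device* pDevice,
                               ID3D11DeviceContext* pContext,
                               ID3D11UnorderedAccessView* pUAV )
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = 16;   // constant buffers are sized in multiples of 16 bytes
    desc.Usage     = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;

    ID3D11Buffer* pCountCB = nullptr;
    if ( FAILED( pDevice->CreateBuffer( &desc, nullptr, &pCountCB ) ) )
        return nullptr;

    // Writes the current structure count into byte offset 0 of the destination.
    pContext->CopyStructureCount( pCountCB, 0, pUAV );
    return pCountCB;
}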

 

However, this will likely have little benefit to you during the execution of a compute shader in which you are actively appending or consuming against a buffer.  Since many threads are working on the buffer in parallel, the count will likely change in non-deterministic fashion depending on the underlying hardware and workload size.




#5093899 Beginning with cube maps

Posted by Jason Z on 13 September 2013 - 05:17 PM

You can take a look at the single pass environment mapping chapter of the D3D10 book: Programming Vertex, Geometry, and Pixel Shaders.  It actually contrasts cube mapping with sphere mapping and dual paraboloid mapping, so you can get lots of information out of it.  If you have any questions, feel free to ask here and I'm sure you will get some answers.

 

I hope that helps!




#5093446 Win32 C++ error

Posted by Jason Z on 11 September 2013 - 07:04 PM

 

Does this not define it?

LRESULT CALLBACK WndProc( HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam );

To be perfectly clear on the terminology, that line of code declares the function, and the function body is what defines it.
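
In other words, you need both the declaration and a body somewhere - a minimal example:

#include <windows.h>

// Declaration: tells the compiler the function's name and signature.
LRESULT CALLBACK WndProc( HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam );

// Definition: the body the linker actually needs.  With only the declaration
// above, you get an "unresolved external symbol" error for WndProc.
LRESULT CALLBACK WndProc( HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam )
{
    switch ( message )
    {
    case WM_DESTROY:
        PostQuitMessage( 0 );
        return 0;
    }

    return DefWindowProc( hwnd, message, wParam, lParam );
}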




#5091174 Atomic functions in SM5

Posted by Jason Z on 02 September 2013 - 08:01 PM

Unfortunately I haven't tried to use this in person, so I can't give you an unqualified answer.  However, when I was doing some research on how these instructions worked way back when D3D11 first came out, I seem to recall using the resource name with bracket notation to specify the address.

 

This is also going out on a limb, but I also seem to recall an AMD presentation about order independent transparency that creates a linked list for each pixel in a render target.  The implementation depended on exactly this type of synchronized updating of a resource.  If you search for the presentation (from GDC perhaps?) there is sample code in the slides that shows how they authored it.  Sorry I can't give direct experience, but I think you will be able to find what you need there...

 

About the globallycoherent keyword - the atomic intrinsics will work with or without it.  I think it only deals with the thread synchronization intrinsics, but again I'm not speaking from direct use...

 

I hope that helps!




#5091170 suggestions for generic optional drawground routine in game engine

Posted by Jason Z on 02 September 2013 - 07:49 PM

Maybe it is me, but I am having a hard time following your question (it's probably me...).  In general, I would treat your ground rendering methods in the same way that you treat any other object in your scene.  There should be a way to encapsulate one rendering operation into an object, and that object should be able to configure the pipeline for whatever technique will be used.  Whether it uses splatting, megatexture, or anything else, your scene and your rendering code shouldn't care - all of that detail should be held in your terrain object.

 

If you do that, then the implementation details are completely abstracted away and isolated to a single class.  So if you wanted to switch to a brand new terrain rendering technique that you saw at SIGGRAPH 2020, you would just code up a new class that does it, drop it in as a replacement for your existing terrain class, and you are done!
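
As a sketch of what I mean (the class names are purely illustrative):

#include <d3d11.h>

// The scene only holds generic render objects, and each terrain technique
// hides behind the same interface.
class IRenderObject
{
public:
    virtual ~IRenderObject() = default;

    // Configure the pipeline (shaders, states, resources) and issue the draw
    // calls for this object, whatever technique it uses internally.
    virtual void Render( ID3D11DeviceContext* pContext ) = 0;
};

class SplattedTerrain : public IRenderObject
{
public:
    void Render( ID3D11DeviceContext* pContext ) override
    {
        // Bind the splatting shaders, blend weights and tile textures, then draw.
    }
};

class MegatextureTerrain : public IRenderObject
{
public:
    void Render( ID3D11DeviceContext* pContext ) override
    {
        // Bind the virtual texture lookup pipeline instead, then draw.
    }
};

The scene just iterates its IRenderObject pointers and calls Render(), so swapping terrain techniques becomes a one-line change at creation time.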




#5091167 ID3D11ShaderResourceView save to file

Posted by Jason Z on 02 September 2013 - 07:43 PM

Have you taken a look at the DirectXTK library?  I believe this method is already implemented for you, or at least it can provide you with a reference implementation to start from...
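
If I remember correctly, the ScreenGrab module in DirectXTK provides SaveWICTextureToFile / SaveDDSTextureToFile, so something roughly like the following should work - treat the exact signature as an assumption and check the header:

#include <d3d11.h>
#include <wincodec.h>      // GUID_ContainerFormatPng
#include "ScreenGrab.h"    // from the DirectX Tool Kit

// Sketch: pull the underlying resource out of the SRV and hand it to the
// tool kit's save function.
HRESULT SaveSRVToPng( ID3D11DeviceContext* pContext,
                      ID3D11ShaderResourceView* pSRV,
                      const wchar_t* fileName )
{
    ID3D11Resource* pResource = nullptr;
    pSRV->GetResource( &pResource );

    HRESULT hr = DirectX::SaveWICTextureToFile( pContext, pResource,
                                                GUID_ContainerFormatPng, fileName );
    pResource->Release();
    return hr;
}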





