
MJP

Member Since 29 Mar 2007

#4969884 Do You Have Any Favorite Computer Graphics Books?

Posted by MJP on 15 August 2012 - 11:45 AM

RTR 3rd edition is fantastic, I'd consider it a must-have for any graphics programmer. The other book that immediately comes to mind is Physically Based Rendering.

Also Khrom, the 3rd edition had a lot of cool material added by Naty Hoffman. So you might want to consider picking it up, even if you have the 1st edition.


#4968968 How to get the input slot of a bound resource ?

Posted by MJP on 13 August 2012 - 12:34 AM

There is no direct way to achieve what you want. If you want you can make some external mapping system that keeps track of all currently bound slots for any particular texture, and use that. Or you can ask the device for all currently bound shader resources for a particular shader stage, and look for a particular texture.


#4968587 Map and instancing questions

Posted by MJP on 11 August 2012 - 07:27 PM

I see now. Are you using D3D11? If you are, you can store your per-instance vertex data in a structured buffer. Otherwise you can store it in one or more typed buffers with the appropriate format. Then you can use SV_InstanceID and SV_VertexID in your vertex shader to fetch the appropriate vertex data, by computing index = SV_InstanceID * NumVertices + SV_VertexID.


#4968201 Reading texture pixel data

Posted by MJP on 10 August 2012 - 03:03 PM

Not all resources can be accessed by the CPU. It depends on the flags you passed when you created the resource, specifically the "Usage" and "CPUAccessFlags" members. There's a chart in the D3D11_USAGE documentation that shows the valid combinations of these flags.

The D3DX texture loader will load your texture using D3D11_USAGE_IMMUTABLE, which gives you the fastest possible GPU performance for reading the texture. If you don't need to access the texture on the GPU, then what you can do instead is pass a D3DX11_IMAGE_LOAD_INFO structure through the pLoadInfo parameter, and specify that you want D3D11_USAGE_STAGING and D3D11_CPU_ACCESS_READ. Then you should be able to Map your texture and read it on the CPU. If you need to read the texture on the GPU as well, you can load a separate version of the texture that uses D3D11_USAGE_IMMUTABLE.

If you enable the debug version of the runtime, you should get error messages in the debug output telling you if you make a mistake like this. You can turn it on by passing the D3D11_CREATE_DEVICE_DEBUG flag when creating your device.


#4968169 Lambert BRDF question

Posted by MJP on 10 August 2012 - 01:03 PM

That 1/pi factor is a normalization factor, which is used to enforce energy conservation. Energy conservation requires that incoming energy >= outgoing energy, and in the case of reflectance this ratio of outgoing energy / incoming energy is called the directional-hemispherical reflectance. For a given lighting direction, you can compute the directional-hemispherical reflectance by integrating over the hemisphere of all possible outgoing directions (where the outgoing direction is your "view" vector), and for each direction computing your BRDF(l, v) times the cosine of the angle between your view vector and the surface normal. In the case of Lambert the BRDF is constant, and when you integrate cos(theta) over the hemisphere you end up with a value of pi. Thus you divide by pi to bring your ratio down to 1.0.

Some people like to implicitly assume that their light intensities are already divided by pi and drop the factor from the shader, but if you do this you need to be careful to multiply other BRDFs by pi (for instance, normalized Blinn-Phong or Cook-Torrance). Otherwise your diffuse will be too bright relative to your specular.

There's a more in-depth explanation of this in Real-Time Rendering in Chapter 7.5, if you're interested. I can also go through the math in more detail if you'd like.


#4967939 Map and instancing questions

Posted by MJP on 09 August 2012 - 05:18 PM

You need to use one buffer for all of your instance data, otherwise you won't be able to draw all of your instances in a single draw call (which kinda defeats the purpose). What you can do is put all of your instance data into a shared array in CPU memory, then whenever that changes you can update the entire instance data buffer with Map and DISCARD.


#4966803 Problem with UV-Coord

Posted by MJP on 06 August 2012 - 02:44 PM

Your UVs are incorrect. You probably just want to set them up such that (0, 0) is the top left and (1, 1) is the bottom right. So the UVs for your vertices should be

(1, 1)
(0, 1)
(1, 0)
(0, 0)


#4966425 Deferred rendering tutorials for DirectX 11?

Posted by MJP on 05 August 2012 - 12:52 PM


There's at least one book with an example implementation. Practical Rendering and Computation with DirectX 11.


Yes, I have heard that it is supposed to be good. But as a student with no means, those 50 € seem like a lot of money!


You have to pay for the book, but the code is free.

Aside from that, there's also Andrew Lauritzen's excellent sample on tile-based deferred rendering with compute shaders. I also have yet another sample on my blog that implements both tile-based deferred rendering and light-indexed deferred rendering using compute shaders. However, these last two samples are definitely more advanced, and assume that you understand the basics of deferred rendering.


#4966230 Video Card

Posted by MJP on 04 August 2012 - 07:46 PM

I doubt that your laptop actually has separate display outputs for each GPU. More likely your laptop has a power-saving feature where the weaker integrated GPU is used for most things, and the dedicated GPU only gets used for certain applications that require it. Check your performance and energy-saving configurations.


#4965903 why would you need a different shader doing the same for static/skinned/insta...

Posted by MJP on 03 August 2012 - 12:24 PM

Dynamic shader linkage can be used to solve certain cases of shader permutation problems, but not all of them. For instance you can't use it to change which inputs a shader uses, so you couldn't use it to solve skinning or instancing since those require different vertex shader inputs. Plus the API and shader syntax for dynamic linkage are pretty awkward to use, which I would suspect has contributed to the fact that almost nobody is using it.


#4965899 fatal error C1083: Cannot open include file: 'D3DX11async.h': No such...

Posted by MJP on 03 August 2012 - 12:17 PM

It's in the Include folder of the DirectX SDK install directory. Did you install the DirectX SDK?


#4965656 DirectX10 + HLSL Constantbuffer problem

Posted by MJP on 02 August 2012 - 04:08 PM

Your problem is that constant buffer sizes need to be multiples of 16 bytes, which in your case means that you need to round up the size from 28 to 32 bytes. You can round it up with integer math by doing ((size + 15) / 16) * 16.

Either way, if you enable the debug layer like Radikalizm suggests (which you do by passing D3D10_CREATE_DEVICE_DEBUG when creating your device), the runtime will tell you what the problem is.


#4965609 SpriteBatch with SharpDX

Posted by MJP on 02 August 2012 - 12:17 PM

I haven't seen anything, but Shawn Hargreaves (former XNA developer) did make a version of SpriteBatch for native D3D11. I'd imagine it shouldn't be too hard to port that to SharpDX.


#4965260 PIX and shader model 5.0

Posted by MJP on 01 August 2012 - 11:56 AM

FYI PIX will debug vertex, geometry, and pixel shaders, but it won't debug hull, domain, or compute shaders. You can use vendor-specific tools to do that if you need to, although I'll warn you that Parallel Nsight requires a remote debugging target to debug shaders.


#4965257 Tips on abstracting rendering interfaces for multiple renderers?

Posted by MJP on 01 August 2012 - 11:46 AM

I've made a platform agnostic renderer using your method and abstract base classes, I found that it was a giant pain managing all the platform defines to make sure that the proper helper structures get included


We have our structures in one header file, with one other header file that includes the right header based on the platform. I can't imagine why you'd need more than that.

and I found that it was really difficult to abstract around all of the strange features of each renderer using the compile time solution.


How does compile-time polymorphism at all limit you in terms of your ability to abstract out higher-level features? You can do all of the same things you can do with abstract base classes (if not more), the only difference is you don't eat a virtual function call every time you need to do something. I mentioned dealing with the small, low-level building blocks of a renderer but you can also have different platform implementations of higher-level features.

Why do you prefer compile time to abstract base classes, and


Like I already mentioned, I prefer not having virtual function calls and indirections all over the place.

how do you handle platform scaling, like D3D11 feature levels or OGL levels?


I don't, because I don't care about them. I mainly deal with consoles, which obviously skews my preferences quite a bit.



