

Quat

Member Since 15 Sep 2003
Offline Last Active Jul 01 2015 08:35 PM

#5199360 Radiance Question

Posted by Quat on 20 December 2014 - 09:53 PM

Okay, I got time to review the radiometry terms again.  In the figure below I have drawn a spherical light source.  Are my explanations correct?  Figure (a) was hard for me to reason about.  I found that I wanted to think of intensity as radiance, but I had to consider that every area on the sphere light can emit photons in the set of directions defined by w.

 

 

[Attached figure: radiance.jpg]

 

Now, assuming the above is correct, back to real-time graphics.  When we define a point light source that emits photons equally in every direction, we specify its radiance magnitude, say I_0.  Even though we think of radiance as a ray of light, it is really a thin cone.  So when the ray hits a surface, the photons in the ray have "spread out" based on the inverse square of the distance, so to compute the irradiance at the surface we do E = I_0/d^2 in our shader to get the irradiance from the point light source.
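For concreteness, a minimal HLSL-style sketch of that computation (lightPos, surfacePos, and I0 are assumed shader inputs, not names from any particular engine):

float d = distance(lightPos, surfacePos); // distance from the point light to the surface point
float E = I0 / (d * d);                   // inverse-square falloff: E = I_0 / d^2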

 

Now we apply some BRDF to find the outgoing radiance that will reach the eye; call it O for outgoing.  So O is a ray leaving the surface and reaching the eye.  But shouldn't O be attenuated based on the distance between the lit surface point and the eye?  As the distance increases, the photons will again "spread out" and cause less stimulus to a sensor in the eye.




#5190003 General Programmer Salary

Posted by Quat on 29 October 2014 - 01:43 PM


What if (heaven forbid) the company folds, and I find myself looking for a new job? I will get low-balled by every company out there on the basis of my previous salary. In addition to that risk, I feel that they're essentially asking me to take a pay cut for the company, which wouldn't even be out of the question if I felt like it would be appreciated, but I don't think they see it that way. Lastly, we are a small company, but our overall costs run in the millions of dollars per year, and so even if the company is not doing well, I hardly think that a $15k salary bump for one employee is going to affect things very much.

 

First, I would just wait until you finish school.  Looking for a job while going to school won't be fun.

 

Second, don't worry.  Every time I've switched jobs, I've gotten a significant pay bump.  When you apply for a job, use market rates from salary.com and glassdoor.com in your area to determine what you should get based on your experience.




#5147682 dllexport global variable

Posted by Quat on 17 April 2014 - 10:57 AM

This paper gives a way to force high-performance rendering when working with the Optimus driver:

 

http://developer.download.nvidia.com/devzone/devcenter/gamegraphics/files/OptimusRenderingPolicies.pdf

 

It says to use:

 

extern "C" {
    _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}

 

I'm not too familiar with dllexport.  Does this need to be in a header file or just any .cpp file (like app.cpp)?
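For what it's worth, here is a minimal sketch of where such an export is typically placed: in any single .cpp file that is compiled into the executable itself (not into a DLL, and not in a header, which would risk multiple definitions). The file name app.cpp is just illustrative:

// app.cpp (any one translation unit of the .exe)
#include <windows.h>   // for DWORD

extern "C" {
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}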

 

 




#5019998 FFT Water Shaders

Posted by Quat on 10 January 2013 - 02:37 PM

The NVIDIA DX11 SDK has a compute shader implementation (OceanCS).  It follows the paper by Tessendorf.




#5009638 Icosahedron Tessellation

Posted by Quat on 11 December 2012 - 07:30 PM

Assuming your icosahedron is centered at the origin of some coordinate system, then in that coordinate system you can project a vertex onto the unit sphere by normalizing it. To get a point on a sphere of radius R, just scale the unit vector by R.
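A minimal sketch of that projection using DirectXMath (the function name is just illustrative):

#include <DirectXMath.h>

// v is an icosahedron vertex expressed in the coordinate system the icosahedron is centered in.
DirectX::XMVECTOR ProjectOntoSphere(DirectX::FXMVECTOR v, float R)
{
    // Normalizing projects the vertex onto the unit sphere; scaling by R gives a sphere of radius R.
    return DirectX::XMVectorScale(DirectX::XMVector3Normalize(v), R);
}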


#5009624 Icosahedron Tessellation

Posted by Quat on 11 December 2012 - 07:11 PM

You can build the triangle list for the icosahedron and use 3 control points. Each input primitive is a triangle, and its initial vertices are the control points. Then tessellate each triangle and project the tessellated vertices back onto the unit sphere. The more you tessellate, the better the sphere approximation.
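A minimal HLSL domain-shader sketch of the projection step; the hull shader, the PatchTess/HullOut/DomainOut structs, and gWorldViewProj are assumed to exist elsewhere, and the names are just illustrative:

[domain("tri")]
DomainOut DS(PatchTess patchTess,
             float3 bary : SV_DomainLocation,
             const OutputPatch<HullOut, 3> tri)
{
    DomainOut dout;

    // Interpolate the three icosahedron control points with the barycentric coordinates.
    float3 posL = bary.x * tri[0].PosL + bary.y * tri[1].PosL + bary.z * tri[2].PosL;

    // Project the tessellated vertex back onto the unit sphere.
    posL = normalize(posL);

    dout.PosH = mul(float4(posL, 1.0f), gWorldViewProj);
    return dout;
}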


#5006708 Expensive Post Processing Techniques

Posted by Quat on 03 December 2012 - 12:27 PM

SSAO with bilateral blur should do it.


#4906153 ssao noise/halo reduction

Posted by Quat on 25 January 2012 - 12:01 PM

Those artifacts look like you are getting a whole bunch of self-intersection (points on the same plane as P are counted as occluding P). Even without blurring, the interiors of your walls and ground plane should be white (there are no occluders). You can scale your ambient occlusion by how much "in front" of the pixel P you are shading the random sample point Q is:

float s = max(dot(n, normalize(q - p)), 0.0f);

This is from the Game Programming Gems 8 book (the chapter by the StarCraft guy).
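As a rough sketch, that weight might be folded into the usual SSAO sample loop something like this (SAMPLE_COUNT, GetRandomSamplePoint, and OcclusionFunction are illustrative placeholders for whatever your shader already uses):

float occlusion = 0.0f;
for (int i = 0; i < SAMPLE_COUNT; ++i)
{
    float3 q = GetRandomSamplePoint(p, i);              // offset sample point near p
    float  s = max(dot(n, normalize(q - p)), 0.0f);     // ~0 for samples in the plane of p
    occlusion += s * OcclusionFunction(length(q - p));  // distance-based falloff
}
occlusion /= SAMPLE_COUNT;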


#4862109 UpdateSubresource

Posted by Quat on 15 September 2011 - 09:50 AM

I noticed the Effects11 library uses UpdateSubresource to update constant buffers:

D3DX11INLINE void CheckAndUpdateCB_FX(ID3D11DeviceContext *pContext, SConstantBuffer *pCB)
{
    if (pCB->IsDirty && !pCB->IsNonUpdatable)
    {
        // CB out of date; rebuild it
        pContext->UpdateSubresource(pCB->pD3DObject, 0, NULL, pCB->pBackingStore, pCB->Size, pCB->Size);
        pCB->IsDirty = FALSE;
    }
}

My question is: Is it better to use UpdateSubresource, or to make the constant buffer dynamic and Map it?
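For comparison, a minimal sketch of the dynamic-buffer alternative, reusing the names from the snippet above and assuming the buffer was created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE:

D3D11_MAPPED_SUBRESOURCE mapped;
if (SUCCEEDED(pContext->Map(pCB->pD3DObject, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
{
    memcpy(mapped.pData, pCB->pBackingStore, pCB->Size); // copy the CPU-side backing store
    pContext->Unmap(pCB->pD3DObject, 0);
}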


#4824133 AMD Fusion

Posted by Quat on 16 June 2011 - 10:48 AM

I started watching some of the AMD Fusion 2011 summit videos today.

So it sounds to me like Fusion is a hybrid CPU/GPU chip.

It sounds like they have removed the overhead of switching back and forth between "compute" mode and normal graphics rendering mode.

They say "a pointer is a pointer," and that you can pass a pointer allocated in C++ code over to GPU code and the GPU can dereference it directly. So it sounds like this is a unified memory architecture?

What do you think about this architecture? Do you think NVIDIA will follow it?

Do you think this gives the best graphics, or is it more optimized for general applications that want to easily use the compute power of the GPU?


#425605 octrees for ray tracing

Posted by Quat on 25 November 2006 - 12:50 PM

Hi, I have a triangle-based ray tracer which just traces each triangle one by one. It is very slow for all but simple meshes, so I want to add an octree to eliminate lots of wasteful tests. I just wanted to know if my algorithm is correct before I start working on the code.

Here it is: Build one AABB that contains the whole scene. Subdivide the box into 8 octants, and sort the triangles amongst them. Repeat recursively until a leaf box contains no more than some fixed number of triangles. So I guess this would be a "leafy" octree, where internal nodes just store an AABB, but the leaf nodes store a collection of triangles.

Then, once the data structure is built, I shoot a ray at the root box. If it intersects it, I test against each of its children's boxes. Then I continue recursively down each child box the ray intersects until I hit the leaf boxes, and then do ray-triangle intersection tests. So as I recurse, I will miss entire boxes and therefore eliminate many tests with one AABB/ray test.
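A rough C++ sketch of the build and traversal described above, under the assumption that AABB, Triangle, Ray, AABB::Octant, AABB::Intersects, RayIntersectsBox, and RayIntersectsTriangle are hypothetical helpers supplied by the existing ray tracer:

#include <memory>
#include <vector>

struct Node
{
    AABB box;
    std::unique_ptr<Node> children[8];       // all null for a leaf node
    std::vector<const Triangle*> triangles;  // only filled in at the leaves
    bool IsLeaf() const { return children[0] == nullptr; }
};

std::unique_ptr<Node> Build(const AABB& box, std::vector<const Triangle*> tris, size_t maxTrisPerLeaf)
{
    auto node = std::make_unique<Node>();
    node->box = box;
    if (tris.size() <= maxTrisPerLeaf)       // in practice, also cap the recursion depth
    {
        node->triangles = std::move(tris);   // leaf: store the triangles
        return node;
    }
    for (int i = 0; i < 8; ++i)
    {
        AABB octant = box.Octant(i);         // hypothetical helper: the i-th child box
        std::vector<const Triangle*> subset;
        for (const Triangle* t : tris)
            if (octant.Intersects(*t))       // a triangle may fall into several octants
                subset.push_back(t);
        node->children[i] = Build(octant, std::move(subset), maxTrisPerLeaf);
    }
    return node;
}

bool Trace(const Node& node, const Ray& ray, float& tHit, const Triangle*& hitTri)
{
    if (!RayIntersectsBox(ray, node.box))
        return false;                        // one AABB test prunes the whole subtree
    bool hit = false;
    if (node.IsLeaf())
    {
        for (const Triangle* tri : node.triangles)
        {
            float t;
            if (RayIntersectsTriangle(ray, *tri, t) && t < tHit)
            {
                tHit = t;
                hitTri = tri;
                hit = true;
            }
        }
        return hit;
    }
    for (const auto& child : node.children)
        if (Trace(*child, ray, tHit, hitTri))
            hit = true;
    return hit;
}

The caller would initialize tHit to a large value (e.g., FLT_MAX) before the first call so the nearest intersection wins.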

