Member Since 15 Sep 2003
Offline Last Active May 04 2016 03:02 PM

#5275470 Basic TCP/IP Question

Posted by Quat on 12 February 2016 - 05:07 PM

I'm new to network programming and have to write a tool that runs on Windows and communicates with a Linux box over the network.  The Linux box has a TCP/IP server set up in C++ with Boost.


My tool in Windows needs to connect.  For the Windows side, I am writing the tool in C# and looked at this tutorial: http://www.codeproject.com/Articles/10649/An-Introduction-to-Socket-Programming-in-NET-using


Basically, the linux box is going to send packets with "event data" to the Windows client at certain times.  What's the best way for the client to wait for incoming data?  The tutorial above uses a while loop to send/receive data over the network stream.  But is just looping and continuously polling for a packet the right design?
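The usual alternative to polling is to let the read call block: the thread sleeps inside the OS until data arrives, so no CPU is burned spinning. A minimal sketch of the idea in C++ with POSIX sockets (using a `socketpair` as a stand-in for the real TCP connection; the function and demo names are made up for illustration — in C#, `NetworkStream.Read` blocks the same way):

```cpp
#include <cstring>
#include <string>
#include <thread>
#include <sys/socket.h>
#include <unistd.h>

// Waits for one message on a connected socket. recv() blocks until the
// peer sends, so the client does not need a busy polling loop.
std::string WaitForEvent(int fd) {
    char buf[256];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);  // thread sleeps here
    return n > 0 ? std::string(buf, n) : std::string();
}

// Demo: a socketpair stands in for the TCP connection, and a thread
// plays the role of the Linux box sending an event at some later time.
std::string Demo() {
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0) return "";
    std::thread server([&] {
        const char* msg = "event";
        send(fds[1], msg, std::strlen(msg), 0);
    });
    std::string got = WaitForEvent(fds[0]);  // client waits without spinning
    server.join();
    close(fds[0]);
    close(fds[1]);
    return got;
}
```

If the tool also has UI work to do, the blocking read typically lives on its own receive thread (or you use async reads), rather than blocking the main loop.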

#5259182 [D3D12] Descriptor heaps and memory (de)allocation

Posted by Quat on 26 October 2015 - 02:33 PM

 But when designing a "general" (if that even exists) engine, you cannot really know the number of descriptors your user will need...


I think you can for the most part.  If you have fixed level sizes (arena map, race track), you pretty much know at load time what resources you have (number of objects, materials, textures, etc.), so you can size your descriptor heap appropriately.  You can then add a fixed maximum count to reserve room for dynamic objects that will be inserted/removed on the fly.  Descriptors don't cost much memory, so it wouldn't be a big deal to over-allocate some extra heap space.  For more advanced scenarios, you can reuse heap space that you aren't using anymore.
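The load-time sizing is just arithmetic over the level's known content plus the dynamic reserve. A back-of-envelope sketch (the counts and the one-view-per-item scheme are illustrative assumptions, not from the post):

```cpp
#include <cstdint>

// Sizes a descriptor heap from counts known at level-load time, plus a
// fixed reserve for objects spawned on the fly. Assumes one view per
// item for simplicity; multiply by views-per-item in a real engine.
uint32_t SizeDescriptorHeap(uint32_t objects, uint32_t materials,
                            uint32_t textures, uint32_t dynamicReserve) {
    return objects + materials + textures + dynamicReserve;
}
```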


For a level editor type application, I'd imagine you could grow heaps sort of the way vectors grow.


Another problem I have is: when you no longer require a given buffer, you no longer need its associated descriptor. So the best solution would be to reuse its location within the descriptor heap. But this requires me to implement some kind of advanced memory allocation algorithm, which I would like to avoid if possible...


I don't think it needs to be too advanced.  Just keep track of "free" descriptors and whenever you need a new descriptor, pull the next free one.  A lot of particle systems use a similar "recycling array" like this. 
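The "recycling array" idea can be sketched in a few lines: hand out heap slots in order, and push freed slots onto a free list so the next allocation reuses them. The class and member names here are illustrative, not part of any D3D12 API:

```cpp
#include <cstdint>
#include <vector>

// Tracks which descriptor-heap slots are in use. Allocate() prefers
// recycled slots; only when the free list is empty does it grow into
// untouched heap space.
class DescriptorIndexAllocator {
public:
    explicit DescriptorIndexAllocator(uint32_t capacity) : mCapacity(capacity) {}

    // Returns a free heap index, reusing freed slots first.
    uint32_t Allocate() {
        if (!mFreeList.empty()) {
            uint32_t idx = mFreeList.back();
            mFreeList.pop_back();
            return idx;
        }
        return mNext++;
    }

    // Marks a slot free so a later Allocate() can hand it out again.
    void Free(uint32_t idx) { mFreeList.push_back(idx); }

private:
    uint32_t mCapacity;
    uint32_t mNext = 0;
    std::vector<uint32_t> mFreeList;
};
```

Because all descriptors in a heap are the same size, reusing a slot never fragments anything — which is why nothing more advanced than a free list is needed.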


Imagine I have a set of drawable objects, each of them having their own world transform. I would use a constant buffer (one for each object) to pass the associated matrices to the vertex shader. Allocating them is easy, as I only need to use the location right after the last descriptor I allocated on the heap. But when I delete one of these objects, is it my responsibility to make sure that the space that is no longer used will be reused for the next descriptor?


I didn't really follow your question.  The way I assume you would do it is: allocate the cbuffer memory, then allocate CBVs that live in a heap and reference subsets of that cbuffer memory.  If an object is deleted, it would be easiest to just flag that cbuffer memory region and CBV as free so they can be reused the next time an object is created.

#5258015 Ray-triangle intersection on scaled model

Posted by Quat on 19 October 2015 - 05:27 PM

Do the ray/triangle test in the local space of the mesh.

#5254874 DirectX12's terrible documenation -- some questions

Posted by Quat on 30 September 2015 - 03:05 PM

Though it's still a bit unclear to me when one would pick a CBV over a SRV as SRVs can reference buffers which presumably can be accessed in shaders. And if they can be accessed in shaders then it must be in a similar manner?


This is a good question.  Yes you can put data you would typically put in a constant buffer in a structured buffer and then bind an SRV to the structured buffer and index it in your vertex shader.  You would have to profile and see if one performs more optimally.  


In the D3D11 days, I assumed constant buffers were distinguished in that they were designed for changing a small number of constants often (per draw call), and so had special optimizations for this usage pattern, whereas a structured buffer would not be changed by the CPU very often and would be accessed more like a texture.


I'm not sure if future hardware will continue to make a distinction or if it is all the same.

#5199360 Radiance Question

Posted by Quat on 20 December 2014 - 09:53 PM

Okay, I got time to review the radiometry terms again.  In the figure below I have drawn a spherical light source.  Are my explanations correct?  Figure (a) was hard for me to reason about.  I found that I wanted to think of intensity as radiance, but I had to consider all areas on the sphere light that can emit photons in the set of directions defined by ω.





Now assuming the above is correct, going back to real-time graphics mode.  When we define a point light source that emits photons equally in every direction, we specify its radiance magnitude, say I_0.  Even though we think of radiance as a ray of light, it is really a thin cone.  So when the ray hits a surface, the photons in the ray have "spread out" based on the inverse square of the distance, so we compute the irradiance at the surface as E = I_0/d^2 in our shader.
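The falloff as stated is a one-liner; a sketch just to pin down the formula (function name is illustrative):

```cpp
// Inverse-square falloff: irradiance E at distance d from a point
// source of magnitude I0, matching the E = I_0/d^2 in the post.
float Irradiance(float I0, float d) {
    return I0 / (d * d);
}
```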


Now we apply some BRDF to find the outgoing radiance that will reach the eye, call it O for outgoing.  So O is a ray leaving the surface and reaching the eye.  But shouldn't O be attenuated based on the distance between the lit surface point and the eye?  As the distance increases, the photons will again "spread out" and cause less stimuli to a sensor in the eye.

#5190003 General Programmer Salary

Posted by Quat on 29 October 2014 - 01:43 PM

What if (heaven forbid) the company folds, and I find myself looking for a new job? I will get low-balled by every company out there on the basis of my previous salary. In addition to that risk, I feel that they're essentially asking me to take a pay cut for the company, which wouldn't even be out of the question if I felt like it would be appreciated, but I don't think they see it that way. Lastly, we are a small company, but our overall costs run in the millions of dollars per year, and so even if the company is not doing well, I hardly think that a $15k salary bump for one employee is going to affect things very much.


First, I would just wait until you finish school.  Looking for a job and going to school won't be fun. 


Second, don't worry.  Every time I've switched jobs, I've gotten a significant pay bump in doing so.  When you apply for the job, use market rates from salary.com and glassdoor.com for your area to determine what you should get based on your experience.

#5147682 dllexport global variable

Posted by Quat on 17 April 2014 - 10:57 AM

This paper gives a way to force high-performance rendering when working with the Optimus driver:


It says to use:


extern "C" {
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}


I'm not too familiar with dllexport.  Does this need to be in a header file or just any .cpp file (like app.cpp)?



#5019998 FFT Water Shaders

Posted by Quat on 10 January 2013 - 02:37 PM

NVIDIA DX11 SDK has a compute shader implementation (OceanCS).  It follows the paper by Tessendorf. 

#5009638 Icosahedron Tessellation

Posted by Quat on 11 December 2012 - 07:30 PM

Assuming your icosahedron is centered about some coordinate system, then in that coordinate system you can project the vertex onto the unit sphere by normalizing it. To get a point on a sphere with radius R, just scale the unit vector by R.
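The normalize-and-scale step described above can be sketched as follows (the `Vec3` struct and function name are made up for illustration):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Projects a tessellated icosahedron vertex onto a sphere of radius R
// centered at the origin: normalize the position to land on the unit
// sphere, then scale the unit vector by R.
Vec3 ProjectToSphere(Vec3 v, float R) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { R * v.x / len, R * v.y / len, R * v.z / len };
}
```

In a D3D11 pipeline this would typically run in the domain shader, after the tessellator emits the new vertices.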

#5009624 Icosahedron Tessellation

Posted by Quat on 11 December 2012 - 07:11 PM

You can build the triangle list for the icosahedron and use 3 control points. Each input primitive is a triangle and the initial vertices are the control points. Then tessellate each triangle, and project the tessellated vertices back onto the unit sphere. The more you tessellate, the better the sphere approximation.

#5006708 Expensive Post Processing Techniques

Posted by Quat on 03 December 2012 - 12:27 PM

SSAO with bilateral blur should do it.

#4906153 ssao noise/halo reduction

Posted by Quat on 25 January 2012 - 12:01 PM

Those artifacts look like you are getting a whole bunch of self-intersection (points on the same plane as P are wrongly counted as occluding P). Even without blurring, the interiors of your walls and ground plane should be white (there are no occluders). You can scale your ambient occlusion by how much "in front" your random sample point Q is of the pixel P you are shading:

float s = max(dot(n, normalize(q - p)), 0.0f);

This is in the Game Programming Gems 8 book, by the StarCraft guy.
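The HLSL one-liner above translates directly to C++; a sketch just to make the behavior concrete (struct and function names are illustrative):

```cpp
#include <algorithm>
#include <cmath>

struct V3 { float x, y, z; };

// C++ version of: float s = max(dot(n, normalize(q - p)), 0.0f);
// Returns 1 when Q is directly "in front" of P along the normal n, and
// 0 when Q lies in P's tangent plane or behind it — which is exactly
// what suppresses the coplanar self-occlusion on walls and floors.
float OcclusionScale(V3 n, V3 p, V3 q) {
    V3 d{q.x - p.x, q.y - p.y, q.z - p.z};
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    float dot = (n.x * d.x + n.y * d.y + n.z * d.z) / len;
    return std::max(dot, 0.0f);
}
```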

#4862109 UpdateSubresource

Posted by Quat on 15 September 2011 - 09:50 AM

I noticed the Effects11 library uses UpdateSubresource to update constant buffers:

D3DX11INLINE void CheckAndUpdateCB_FX(ID3D11DeviceContext *pContext, SConstantBuffer *pCB)
{
    if (pCB->IsDirty && !pCB->IsNonUpdatable)
    {
        // CB out of date; rebuild it
        pContext->UpdateSubresource(pCB->pD3DObject, 0, NULL, pCB->pBackingStore, pCB->Size, pCB->Size);
        pCB->IsDirty = FALSE;
    }
}

My question is: Is it better to use UpdateSubresource, or to make the constant buffer dynamic and Map it?

#4824133 AMD Fusion

Posted by Quat on 16 June 2011 - 10:48 AM

I started watching some of the AMD Fusion 2011 summit videos today.

So it sounds to me like Fusion is a hybrid CPU/GPU chip.

It sounds like they have removed the overhead from switching back and forth between "compute" mode and normal graphics rendering mode.

They say "a pointer is a pointer" and you can pass a pointer allocated in the C++ code over to GPU code, and the GPU can just dereference it directly. So it sounds like this is a unified memory architecture?

What do you think about this architecture? Do you think NVIDIA will follow it?

Do you think this gives the best graphics, or is it more optimized for general applications that want to easily use the compute power of the GPU?

#425605 octrees for ray tracing

Posted by Quat on 25 November 2006 - 12:50 PM

Hi, I have a triangle-based ray tracer which just traces each triangle one by one. It is very slow for all but simple meshes, so I want to add an octree to eliminate lots of wasteful tests. I just wanted to know if my algorithm is correct before I start working on the code. Here it is:

Build one AABB that contains the whole scene. Subdivide the box into 8 octants, and sort triangles amongst them. Repeat recursively until a leaf box contains no more than some fixed number of triangles. So I guess this would be a "leafy" octree, where internal nodes just store an AABB, but the leaf nodes store a collection of triangles.

Then once the data structure is built, I shoot a ray at the root box. If it intersects, I test against each of its children's boxes, then continue recursively down each child box the ray intersects until I hit the leaf boxes--then do ray-triangle intersection tests. So as I recurse, I will miss entire boxes and therefore eliminate many tests with one AABB/ray test.
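The per-node check that makes the whole traversal pay off is the ray/AABB test. A sketch of the standard slab method (struct and function names are illustrative; this simple form assumes the ray direction is nonzero on every axis):

```cpp
#include <algorithm>
#include <cfloat>

struct AABB { float min[3], max[3]; };

// Slab-method ray/AABB intersection: clip the ray against the pair of
// parallel planes on each axis and intersect the resulting t-intervals.
// The ray hits the box iff the final interval is non-empty and reaches
// forward of the origin (tFar >= 0).
bool RayHitsBox(const float o[3], const float d[3], const AABB& b) {
    float tNear = -FLT_MAX, tFar = FLT_MAX;
    for (int i = 0; i < 3; ++i) {
        float t0 = (b.min[i] - o[i]) / d[i];
        float t1 = (b.max[i] - o[i]) / d[i];
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar, t1);
    }
    return tNear <= tFar && tFar >= 0.0f;
}
```

During traversal, a `false` here prunes the node's entire subtree, which is where the speedup over testing every triangle comes from.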