
Member Since 12 May 2010
Offline Last Active Sep 23 2016 09:52 AM

#5169296 How many APIs are for drawing text?

Posted by on 26 July 2014 - 08:25 AM

In a slightly more generalized answer: if you can attach PIX, it will help track down any specific DX API call used to perform an operation or render (in frame-capture mode it gives you a breakdown of the DX calls in a frame, along with before/after views of the state and the rendered frame). It will also help in debugging any issues you might have (though unfortunately MS saw fit to break PIX with a certain Win7 patch...). The various GPU vendors also provide similar free tools (Nsight for nVidia, GPU PerfStudio for AMD, and GPA for Intel).


The game might possibly call D3DPERF_SetOptions() to disable PIX, but it's very easy to NOP/remove.

#5125284 how to traverse bvh on gpgpu (for ray tracing)?

Posted by on 21 January 2014 - 02:58 AM

Not explicitly for ray tracing, but this awesome article from nVidia might be very useful; for actual ray tracing, have a look at this nVidia research paper. You might also want to check whether Ingo Wald has any publications on the subject (you can find most of his papers here).
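To make the traversal idea concrete, here is a minimal CPU sketch of the kind of stack-based loop those articles build on (the node layout and names are my own assumptions, not taken from the papers): the BVH lives in a flat array, inner nodes store child indices, and leaves store a primitive index. A GPU version runs essentially the same loop per thread with a small local stack.

```cpp
#include <algorithm>
#include <vector>

struct AABB { float mn[3], mx[3]; };
struct BVHNode { AABB box; int left, right, prim; }; // prim >= 0 -> leaf

// Classic slab test: intersect the ray against the box's three axis slabs.
inline bool hitAABB(const AABB& b, const float o[3], const float invD[3])
{
    float t0 = 0.0f, t1 = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float tNear = (b.mn[a] - o[a]) * invD[a];
        float tFar  = (b.mx[a] - o[a]) * invD[a];
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false; // slab intervals don't overlap -> miss
    }
    return true;
}

// Collect the primitives of every leaf whose box the ray touches.
std::vector<int> traverseBVH(const std::vector<BVHNode>& nodes,
                             const float o[3], const float d[3])
{
    float invD[3] = { 1.0f / d[0], 1.0f / d[1], 1.0f / d[2] };
    std::vector<int> hits;
    int stack[64];
    int sp = 0;
    stack[sp++] = 0; // push the root node
    while (sp > 0) {
        const BVHNode& n = nodes[stack[--sp]];
        if (!hitAABB(n.box, o, invD)) continue;
        if (n.prim >= 0)
            hits.push_back(n.prim); // leaf: record the candidate primitive
        else {
            stack[sp++] = n.left;   // inner node: push both children
            stack[sp++] = n.right;
        }
    }
    return hits;
}
```

The papers above mostly discuss how to make exactly this loop GPU-friendly (persistent threads, traversal order, stackless variants), so the interesting differences start once this basic version works.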

#5120162 XMVector3Project implementation

Posted by on 30 December 2013 - 05:05 PM

Why exactly can't you use DXMath? (Or the older, now-deprecated XNAMath, or the even older D3DX* math functions; each has a project/unproject pair.)


Anyways, you can check out the source for D3DXVec3Project here; it's the older, scalar version and should be a little easier to understand and copy into an existing project. All you need are matrix-by-matrix and matrix-by-vector multiplies.
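As a rough scalar sketch of what the project call boils down to (with hypothetical Vec3/Mat4 types; row-major, row-vector-times-matrix convention): transform the point by the combined world*view*projection matrix, divide by w, then map the [-1,1] NDC range onto viewport pixels and the [0,1] depth range.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };

Vec3 project(const Vec3& p, const Mat4& wvp,
             float vpX, float vpY, float vpW, float vpH,
             float zNear, float zFar)
{
    // row vector * matrix, with homogeneous w starting at 1
    float x = p.x * wvp.m[0][0] + p.y * wvp.m[1][0] + p.z * wvp.m[2][0] + wvp.m[3][0];
    float y = p.x * wvp.m[0][1] + p.y * wvp.m[1][1] + p.z * wvp.m[2][1] + wvp.m[3][1];
    float z = p.x * wvp.m[0][2] + p.y * wvp.m[1][2] + p.z * wvp.m[2][2] + wvp.m[3][2];
    float w = p.x * wvp.m[0][3] + p.y * wvp.m[1][3] + p.z * wvp.m[2][3] + wvp.m[3][3];
    x /= w; y /= w; z /= w;                    // perspective divide -> NDC
    return { vpX + (x + 1.0f) * 0.5f * vpW,    // NDC x in [-1,1] -> pixels
             vpY + (1.0f - y) * 0.5f * vpH,    // y flipped: screen origin is top-left
             zNear + z * (zFar - zNear) };     // NDC z in [0,1] -> depth range
}
```

With an identity matrix, the point (0, 0, 0.5) and an 800x600 viewport, this lands at the viewport center (400, 300) with depth 0.5, which is a handy sanity check against whatever library version you compare it to.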



#5118723 How to make tile texture

Posted by on 22 December 2013 - 09:26 AM

You can do this using "wrap" for the texture addressing mode; it causes the texture unit to take texture u/v addresses mod 1.0, so, essentially, 11.25 maps to 0.25 in the texture. See the example here.
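As scalar code, the wrap mode amounts to keeping only the fractional part of the coordinate (a sketch of the math, not the actual hardware path):

```cpp
#include <cmath>

// Wrap addressing: keep the fractional part of the coordinate, so every
// whole-number step repeats the texture.
float wrapCoord(float u)
{
    return u - std::floor(u); // always in [0, 1)
}
```

So wrapCoord(11.25f) gives 0.25f, and negative coordinates wrap too: wrapCoord(-0.75f) is also 0.25f.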

#5117929 Material Layering

Posted by on 18 December 2013 - 01:44 PM

From the material layering video in the Inside UE4 series, it seems that they use a masking texture and blend all the materials in one pass, using each of the four channels of the mask as a blend weight (though I get the impression that some of it may be done "offline"/at startup and baked once into a final composite material).


You might find MJP's slides on Ready At Dawn's material layering/compositing system of interest as well.

#5111298 Should I use a Lua wrapper class in my engine?

Posted by on 22 November 2013 - 10:04 AM

If you can use LuaJIT, you might find it easier to write C wrappers for all the class functions and invoke them through the FFI.
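The C-wrapper approach looks roughly like this (the class and function names are made up for illustration): each method gets a plain extern "C" function that LuaJIT's FFI can call directly after a matching ffi.cdef declaration.

```cpp
// A C++ class we want to expose to LuaJIT.
class Player {
public:
    explicit Player(int hp) : m_health(hp) {}
    int  health() const     { return m_health; }
    void damage(int amount) { m_health -= amount; }
private:
    int m_health;
};

// Flat C wrappers with unmangled names, callable through the FFI.
extern "C" {
    Player* Player_Create(int hp)             { return new Player(hp); }
    void    Player_Destroy(Player* p)         { delete p; }
    int     Player_GetHealth(const Player* p) { return p->health(); }
    void    Player_Damage(Player* p, int n)   { p->damage(n); }
}
```

On the Lua side you'd ffi.cdef the same four prototypes (treating Player as an opaque `typedef struct Player Player;`) and call them straight from Lua.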


If you want to avoid all the C wrappers, the FFI can bind directly to the class members (see this). It's a bit brittle, though; however, if you are targeting only a single compiler+platform, it's trivial* to generate a script to encode the symbols and bind everything in the FFI. Not only is that faster (dev-wise) and less cluttered, but you get LuaJIT with all its awesome features :) (if you factor your code properly, you can even parse a C or C++ header directly in Lua, meaning you only need to update a single point).


I went the C++ template and macro magic route (not as heavily as Hodgman) and found it horrible to work with (by "work with" I just mean adding new functions, enums, classes/objects, etc.). I've been meaning to try the above using the FFI (I already use LuaJIT), but unfortunately I haven't had the time yet...



*the cdef-generation part; decoration of the symbols might be a little more tricky, especially with MSVC

#5093793 Optimizing Ray-Sphere Intersections

Posted by on 13 September 2013 - 10:09 AM

I've made a few ray tracers in the past, but they were normally focused on small aspects, taking shortcuts for many things. Recently, however, I needed to make one for a uni project and decided to go all out. So after setting up my math libs and getting all the groundwork into place (most importantly, a 4DOF FPS camera), it came time to trace a few spheres to test the ray casting and get ready to implement a BVH/BSP scheme.


The first thing I tried was this snippet of code (taken from this wonderful set of articles):

Vector dst = vCenter - r.GetOrigin();
float b = dst.dot(r.GetDirection());
float c = b * b - dst.dot(dst) + fRadiusSquared;

if(c < 0)
	return false; // negative discriminant: the ray misses the sphere

float d = std::sqrtf(c), r1 = b - d, r2 = b + d;
if(r2 < 0)
	return false; // both roots behind the ray origin

if(r1 < 0)
	fDistance = r2; // origin inside the sphere: use the far root
else
	fDistance = r1;
return true;

it works, but far off objects come out very "fuzzy" and jagged, clearing up as you get closer:



After trying quite a few variants (such as this), all of which suffered the same problem, I suspected a problem in my math code; however, using the following code (adapted from here, based on the unoptimized quadratic formula, which massively slows down rendering):

	//Compute A, B and C coefficients
	Vector o = r.GetOrigin() - vCenter;
	float c = o.length_squared_fast() - fRadiusSquared;
	if(c < 0)
		return false; // ray origin inside the sphere; treated as a miss here

	float a = r.GetDirection().dot(r.GetDirection());
	float b = 2 * r.GetDirection().dot(o);

	//Find discriminant
	float disc = b * b - 4 * a * c;
	// if discriminant is negative there are no real roots, so return
	// false as ray misses sphere
	if (disc < 0)
		return false;

	// compute q as described above
	float disc_sqrt = sqrtf(disc);
	float q = (b < 0) ? (-b - disc_sqrt) / 2.0f : (-b + disc_sqrt) / 2.0f;

	// compute t0 and t1
	float t0 = q / a;
	float t1 = c / q;

	// make sure t0 is smaller than t1
	if (t0 > t1)
		std::swap(t0, t1);

	// if t1 is less than zero, the object is in the ray's negative direction
	// and consequently the ray misses the sphere
	if (t1 < 0)
		return false;

	// if t0 is less than zero, the intersection point is at t1
	if (t0 < 0)
		fDistance = t1;
	// else the intersection point is at t0
	else
		fDistance = t0;
	return true;

it comes out correctly.


and for the life of me I can't figure out why the optimized variants seem to degrade so badly...

Anybody got any hints/advice as to what's going wrong here? Is this just a side effect of approximating it geometrically?

#5071598 Better to have separate shaders for each graphical option, or pass constants...

Posted by on 20 June 2013 - 03:23 PM

Use the preprocessor to generate the variants at compile time, keeping things together while removing unneeded runtime overhead (when you generate the shaders depends on your pipeline, but in effect you can generate every variant before distribution and index them via a hash or fingerprint).

#ifdef ENABLE_SHADOWS
//my code with shadows...
#else
//my code without shadows
#endif

You can read up more on the preprocessor here.
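For the indexing side, one simple scheme (the flag names here are illustrative, not from any particular engine) is to give each preprocessor option a bit, compile one shader per bitmask value, and look variants up by mask at runtime:

```cpp
// Each preprocessor option gets one bit in the variant mask.
enum ShaderOption : unsigned {
    OPT_SHADOWS     = 1u << 0,
    OPT_NORMAL_MAPS = 1u << 1,
    OPT_FOG         = 1u << 2,
};

// The same mask is used at build time (to choose which #defines to pass to
// the shader compiler) and at runtime (to pick the matching compiled blob).
unsigned variantMask(bool shadows, bool normalMaps, bool fog)
{
    return (shadows    ? OPT_SHADOWS     : 0u)
         | (normalMaps ? OPT_NORMAL_MAPS : 0u)
         | (fog        ? OPT_FOG         : 0u);
}
```

With n options you get 2^n variants, so in practice you'd only enumerate the combinations your materials actually use.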

#5070823 distance field landscape demo (with attachment)

Posted by on 18 June 2013 - 06:01 AM

I get about 7-10fps on my 660Ti when viewing far-away stuff; when viewing only close stuff it bumps up to 35-44fps (at 1920x1080). It might be useful to include a list of controls in a readme or similar (it took me a while to figure out that dragging with the right mouse button handles the camera movement...).

#5045177 Interesting effects for particle physics...

Posted by on 21 March 2013 - 03:24 AM

There was a presentation for a Command & Conquer game floating around on PS, in which they discussed how they designed the system. Unfortunately, I cannot find the matching keywords right now. I hope somebody remembers this presentation as well and provides a link to it!

Was it possibly this one? http://www.2ld.de/gdc2007/EverythingAboutParticleEffectsSlides.pdf


The UE4 presentation from SIGGRAPH 2012 also had some pretty cool stuff on particle vector fields and other particle-related work using compute shaders.

#5013768 Fonts in OpenGL (possible options)

Posted by on 23 December 2012 - 03:39 PM

Give freetype-gl a look-through; it's simple and supports quite a few font features (as well as signed distance fields). It also has some code-reuse features, so you can use the same code to render primitives and 2D-textured quads.

#5001099 Raytracing Artifacts

Posted by on 14 November 2012 - 10:28 PM

I've seen something very similar to this; it turned out to be precision issues with shadow rays. That is, the ray was intersecting the surface it was shot from, even though it should start from the surface; adding a small offset/bias fixed the problem. It might be a similar issue in your case.
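The fix is just a nudge of the shadow-ray origin along the surface normal so the ray can't immediately re-hit the surface it started on (a sketch with a minimal hypothetical vector type; the bias constant needs tuning to your scene's scale):

```cpp
struct V3 { float x, y, z; };

// Offset the shadow-ray origin along the normal to avoid self-intersection.
V3 offsetShadowOrigin(const V3& hitPoint, const V3& normal)
{
    const float kBias = 1e-4f; // tune to your scene's scale
    return { hitPoint.x + normal.x * kBias,
             hitPoint.y + normal.y * kBias,
             hitPoint.z + normal.z * kBias };
}
```

An alternative with the same effect is to keep the origin as-is and reject intersections with t below some small epsilon.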

#4988632 Unreal 4 voxels

Posted by on 10 October 2012 - 01:42 AM

You should give their SIGGRAPH 2012 presentation a read, as well as Cyril Crassin's papers (from which the technique is derived). From what I understand of the technique, any voxel may inject or remove light based on the properties it stores and how you treat those properties.

#4982855 Cone Tracing and Path Tracing - differences.

Posted by on 23 September 2012 - 03:04 AM

If you want a fast and proven technique for real-time diffuse indirect lighting you should look into light propagation volumes as proposed by Kaplanyan and Dachsbacher

I wouldn't call it easy to implement, but it's still a lot easier than the cone tracing technique

I'd recommend a more up-to-date iteration of LPV, which includes corrections and annotations for the original paper, plus a DX10.1 demo with source :)

#4965211 Cone Tracing and Path Tracing - differences.

Posted by on 01 August 2012 - 08:30 AM

The paper about it isn't detailed and it's missing some info.

Just FYI, the first part, relating to the voxelization, was released as a free chapter of the OpenGL Insights book; you can find it here (according to Crassin's Twitter page, the full source will be released soon, probably to the git repo here).