
# CSG, fisheye and spotlights.


One way of constructing solids is to use a method named constructive solid geometry (or CSG for short). I added two simple CSG operators - intersection and subtraction - that both take two surfaces and return the result.

In the above image, the surface that makes up the ceiling is created by subtracting a sphere from a plane. Of course, much more interesting examples can be created. For example, here is the surface created by taking a sphere and subtracting three cylinders from it (each cylinder points directly along an axis).
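The two operators boil down to simple inside/outside logic. Here's a minimal sketch in Python of that boolean logic using signed distances (my raytracer actually works on ray-surface hits rather than distance fields, so treat this purely as an illustration; the function names are made up):

```python
import math

def sphere_sdf(p, center, radius):
    # Signed distance from point p to a sphere's surface:
    # negative inside, zero on the surface, positive outside.
    return math.dist(p, center) - radius

def intersect(d_a, d_b):
    # A point is inside the intersection only if it is inside both solids.
    return max(d_a, d_b)

def subtract(d_a, d_b):
    # Inside A but outside B: negate B's distance and intersect.
    return max(d_a, -d_b)

# The origin is inside a unit sphere centred on it...
a = sphere_sdf((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0)   # -1.0
# ...but carving out a smaller sphere excludes it again.
b = sphere_sdf((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), 0.5)   # -0.5
print(subtract(a, b))  # 0.5: outside the carved solid
```

Subtraction keeps the points that are inside the first surface but outside the second, which is exactly how the ceiling above is carved out of the plane by the sphere.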

One problem with the camera implementation was that it couldn't be rotated. To try to work around this, I used a spherical-to-cartesian conversion to generate the rays, which has the side effect of producing images with "fisheye" distortion.
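As a rough illustration of the idea (hypothetical Python, not the actual C# code): each pixel is treated as a pair of yaw/pitch angles, and those spherical coordinates are converted to a cartesian direction. Every ray gets equal *angular* spacing rather than equal spacing on an image plane, which is where the fisheye curvature comes from.

```python
import math

def fisheye_ray(px, py, width, height, fov=math.pi):
    # Map the pixel to yaw and pitch angles spanning the field of view...
    yaw = (px / width - 0.5) * fov
    pitch = (py / height - 0.5) * fov * (height / width)
    # ...then convert the spherical coordinates to a cartesian direction.
    return (math.sin(yaw) * math.cos(pitch),
            -math.sin(pitch),
            math.cos(yaw) * math.cos(pitch))
```

The centre pixel looks straight down the z axis, and every direction comes out unit-length by construction.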

The above left image also demonstrates a small amount of refraction - something that I've not got working properly - through the surface. The above right image is the result of the intersection of three cylinders aligned along the x,y and z axes.

To try and combat the fisheye distortion, I hacked together a simple matrix structure that could be used to rotate the vectors generated by the earlier linear projection. The result looks a little less distorted!
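The fix amounts to multiplying each generated ray direction by a rotation matrix before tracing it. A sketch of a rotation about the vertical axis (Python for brevity; the axis choice and handedness here are assumptions, not necessarily what my matrix structure does):

```python
import math

def rotate_y(v, angle):
    # Rotate a direction vector about the vertical (Y) axis by expanding
    # the 3x3 rotation matrix multiplication by hand.
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = v
    return (c * x + s * z, y, -s * x + c * z)
```

Rotating the forward direction (0, 0, 1) by a quarter turn swings it onto the x axis, as you'd hope.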

The final addition to the raytracer before calling it a day was a spotlight. The spotlight has an origin, like the existing point light - but it adds a direction (in which it points) and an angle to specify how wide the beam is. In fact, there are two angles - if you're inside the inner one, you're fully lit; if you're outside the outer one, you're completely in the dark; if you're between the two then the light's intensity is blended accordingly.
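The blending can be sketched as follows (hypothetical Python; a linear blend between the two cone angles is assumed, and a real implementation might prefer a smoother falloff curve):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def spotlight_intensity(direction, to_point, inner, outer):
    # direction: unit vector the spotlight points along.
    # to_point:  unit vector from the light towards the shaded point.
    # inner/outer: half-angles (radians) of the fully-lit and dark cones.
    angle = math.acos(max(-1.0, min(1.0, dot(direction, to_point))))
    if angle <= inner:
        return 1.0   # inside the inner cone: fully lit
    if angle >= outer:
        return 0.0   # outside the outer cone: completely dark
    return (outer - angle) / (outer - inner)  # blend in between
```

Points along the spotlight's axis get full intensity, and the intensity ramps down to zero as the angle crosses from the inner cone to the outer one.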

In the above screenshot, a spotlight is shining down and backwards away from the camera towards a grid of spheres.

If you're interested in an extremely slow, buggy, and generally badly written raytracer, I've uploaded the C# 3 source and project file. The majority of the maths code has been pinched and hacked about from various sites on the internet, and there are no performance optimisations in place. I do not plan on taking this project any further.

Building and running the project will result in the above image, though you may well wish to put the kettle on and make yourself a cup of tea whilst it's working. [smile]

This is really cool. Especially liking the CSG.

Hi Ben,

In my hobby project I'll need to texture map a sphere soon. I had a little look at the source code (Sphere.cs) to see how you did the mapping from polar coords to texture coords.

If I understand correctly, you more or less map it as if it were a cylinder, so that the texture will appear to pinch together at the top and bottom of the sphere, is that right?

Peter

Quote:
 If I understand correctly, you more or less map it as if it were a cylinder, so that the texture will appear to pinch together at the top and bottom of the sphere, is that right?
Absolutely correct; it uses simple trigonometry to convert the cartesian coordinates of the struck point on the sphere's surface to polar coordinates, which it then divides by the requisite multiple of Pi to normalise to the 0..1 range.
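For anyone wanting the same mapping, here's a sketch of the idea in Python (variable names are mine, not those in Sphere.cs):

```python
import math

def sphere_uv(p, center, radius):
    # Move the hit point into the sphere's local frame and normalise.
    x = (p[0] - center[0]) / radius
    y = (p[1] - center[1]) / radius
    z = (p[2] - center[2]) / radius
    # Longitude: atan2 gives -pi..pi, so divide by 2*pi and recentre.
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)
    # Latitude: acos gives 0..pi, so divide by pi.
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi
    return (u, v)
```

Because u spans the full equator at every latitude, the texture pinches together at the poles, exactly as you describe.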

There are techniques for reducing the pinching, such as this one which requires that the 2D texture is distorted as a preprocessing step.

This series inspired me to stop being lazy and try rewriting my own in C#. It's so much less painful (except for writing all the maths code again) than it was in C++ that it's not funny.

I think I'm going to try implementing something I didn't get around to last time, like depth of field or fuzzy reflections.

I wanted to try CSG last time, interesting to see that you made it work. Nice job [smile]

Pictar

Great work mate. Turned out very.... feature-full [smile]

I too like the CSG.

Quote:
 Pictar
Looks good! [smile] I really couldn't think of a decent way to render soft shadows/soft reflections/depth of field effects, so I look forwards to seeing what you come up with!

ben, any plans to handle indexes (indices?) of refraction?

also, why not make the number of subsections it divides into configurable, to allow as many processors as the user wants to work on rendering?

Quote:
Original post by benryves
Quote:
 Pictar
Looks good! [smile] I really couldn't think of a decent way to render soft shadows/soft reflections/depth of field effects, so I look forwards to seeing what you come up with!

Well, the ideas I currently have are not really so great [wink] They all involve casting out tons of rays or jittering.

For fuzzy reflections the rays don't bounce off the surface in a straight line, but are jittered slightly. Whether this jittering will be random or uniform, I don't know. From my experience it needs a lot of rays to not look grainy.
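A rough sketch of what I mean (hypothetical Python; the fuzz factor and the way the offset is applied are guesses, not settled choices):

```python
import math
import random

def jittered_reflect(incident, normal, fuzz, rng=random.Random(0)):
    # Ordinary mirror reflection: r = i - 2 (i . n) n
    d = sum(a * b for a, b in zip(incident, normal))
    r = [i - 2.0 * d * n for i, n in zip(incident, normal)]
    # Perturb each component by a small random offset, then renormalise
    # so the result is still a unit direction.
    r = [c + fuzz * rng.uniform(-1.0, 1.0) for c in r]
    length = math.sqrt(sum(c * c for c in r))
    return [c / length for c in r]
```

With fuzz set to zero you get the plain mirror bounce back; averaging many jittered bounces is what smears the reflection out, and as I said it takes a lot of rays before it stops looking grainy.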

For the "area shadows" shown, well, the filename gave it away - those are pretty fake. There are just nine lights positioned close to each other [sad] It might be possible to do it instead by randomly jittering the point that shadow testing is performed from, perhaps based on the surface normal.

I guess the area in which the point can be jittered would be obtained using the size of the light and the distance from it, but I don't really see how exactly it would work. Maybe like this:
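Something along these lines, maybe (hypothetical Python; the `occluded` callback stands in for whatever shadow-ray routine the raytracer already has, and the light is treated as a simple jittered volume):

```python
import random

def soft_shadow(point, light_center, light_radius, occluded,
                samples=16, rng=random.Random(0)):
    # Average many hard shadow tests against jittered positions around
    # the light; occluded(a, b) is assumed to return True when something
    # blocks the segment from a to b.
    lit = 0
    for _ in range(samples):
        jitter = [rng.uniform(-light_radius, light_radius) for _ in range(3)]
        sample = [c + j for c, j in zip(light_center, jitter)]
        if not occluded(point, sample):
            lit += 1
    return lit / samples
```

Points whose view of the light is partly blocked end up with a fraction between 0 and 1, which is the penumbra; the bigger the light radius relative to its distance, the softer the edge.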

And depth of field can be done without too much effort, if I recall correctly. The first ray is cast, and the depth value it returns is used to manipulate the creation of the others. You simply offset the subrays outside of the actual pixel and it magically becomes blurry. There's probably some (hiss) mathematics in there somewhere, too.
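Roughly like this (hypothetical Python; the blur strength and the tap pattern are made up):

```python
def depth_of_field_offsets(pixel, focal_depth, hit_depth, strength=0.5,
                           taps=((-1, -1), (1, -1), (-1, 1), (1, 1))):
    # The further the first ray's hit is from the focal plane, the
    # further outside the pixel the extra rays are offset, so
    # out-of-focus areas average over a wider region and blur.
    blur = abs(hit_depth - focal_depth) * strength
    x, y = pixel
    return [(x + dx * blur, y + dy * blur) for dx, dy in taps]
```

At the focal plane the blur radius collapses to zero and all the subrays land on the original pixel, so in-focus geometry stays sharp.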

Quote:
Original post by MrEvil
Well, the ideas I currently have are not really so great [wink] They all involve casting out tons of rays or jittering.
Going by the noisy quality of the depth of field effects in POV-Ray, I assumed some sort of jittering trick was being used, but didn't understand how. Your explanations make sense!

As for soft shadows; rather than projecting a line back from the surface point to the light I assumed one could project a cone, but then I wasn't sure how you'd then calculate its intersection with the other surfaces to find out how much of its area was blocked.

Quote:
Original post by elfprince13
ben, any plans to handle indexes (indices?) of refraction?
There is some code in there to attempt that (each material has a RefractiveIndex property), but values other than 1 don't work very well (lots of noise) and if two surfaces interact with each other then it's completely wrong (it only handles the ray moving in or out of "air").

Quote:
 also, why not make the number of subsections it divides into configurable, to allow as many processors as the user wants to work on rendering?
The Raytrace() method in MainInterface.cs has a variable at the top - int ThreadCount = 3; - that controls how many threads are created. 3 seemed a decent compromise between my two cores and splitting the image into a top/middle/bottom so you could see the more interesting stuff that was being rendered sooner.
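For reference, the banding itself is trivial; here's a sketch of splitting the rows evenly between a configurable number of workers (Python, not the actual C#):

```python
def split_rows(height, thread_count):
    # Divide the image rows into contiguous bands, one per worker thread.
    # Any remainder is spread one extra row at a time over the first bands.
    base, extra = divmod(height, thread_count)
    bands, start = [], 0
    for i in range(thread_count):
        end = start + base + (1 if i < extra else 0)
        bands.append((start, end))
        start = end
    return bands
```

With three threads on a 480-row image you get the top/middle/bottom split described above; bump the count up and each band just gets thinner.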

Quote:
Original post by benryves
As for soft shadows; rather than projecting a line back from the surface point to the light I assumed one could project a cone, but then I wasn't sure how you'd then calculate its intersection with the other surfaces to find out how much of its area was blocked.

I do like the cone idea, if it could be made to work. It may be easier to use cones where possible, and fall back to jittering where necessary (assuming you can figure out when cones can be intersected with). It could also be interesting to intersect with cones for antialiasing too.

It could also be interesting to play with global illumination. I did this in my C++ attempt, tracing photons from the light, marking where they landed, and interpolating between them. It worked OK if you used enough samples, but it was slow.

One thing I'd really like to do is volumetric light/shadows (e.g. for seeing cones of light in the dust, etc.) but I have no idea on how I would do it. Probably something iterative.
