Member Since 24 Jan 2013

Topics I've Started

Scenes with large and small elements

21 October 2014 - 11:47 AM



At the moment I'm working on a simple simulation of an Earth-orbiting spacecraft. This involves centring the camera on my spacecraft, which orbits a few hundred kilometres above Earth, while simultaneously rendering the planet itself out to a distance of around 12,000 km. Of course, it's unfeasible to render such a scene using metres as my base unit: I'd have to specify the spacecraft's position millions of metres from the centre of Earth, and using such massive coordinates in Direct3D seems to cause precision problems.


So, my question is: how would you approach the problem of rendering small scenes embedded in much larger ones? Should I render the Earth first followed by the spacecraft using different units each time? If you know of any useful articles or tutorials, that would be great :)
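One common approach (my own sketch, not something from this thread) is to keep world positions in double precision on the CPU and send the GPU only camera-relative offsets, so the huge translation cancels in double precision before the narrowing conversion to float. The struct names here are illustrative:

```cpp
#include <cassert>

// Sketch: store positions in doubles, subtract the camera position in
// double precision, and only then convert to float for the GPU.
struct DVec3 { double x, y, z; };
struct FVec3 { float x, y, z; };

FVec3 to_camera_relative(const DVec3& world, const DVec3& camera)
{
    // The subtraction happens in double, so the result is small and
    // survives the narrowing conversion with full float precision.
    return { float(world.x - camera.x),
             float(world.y - camera.y),
             float(world.z - camera.z) };
}
```

At a 12,000 km offset a float only resolves about one metre, so converting absolute positions to float first loses sub-metre detail, while the camera-relative offset keeps it exactly.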



n-dimensional noise generation

04 October 2014 - 05:43 AM

Hi everyone!


I'm trying to make a Perlin noise program but I'm having a bit of trouble coming up with a good noise function. I'm trying to create 2D noise to start with. I'm using std::default_random_engine from <random> to fill a 2D array with noise, like so:

// Generate random floats between 0 and 1
std::default_random_engine gen(0);
std::uniform_real_distribution<float> line(0.0f, 1.0f);

// Fill the array image[][] using the random generator
void fill_noise()
{
    for (unsigned y = 0; y < yres; y++)
        for (unsigned x = 0; x < xres; x++)
        {
            float shade = line(gen);
            image[x][y] = Vec3(shade); // RGB are the same to give a grayscale value
        }
}

This works fine but I'd like to create a noise function which gives me the same result for a given x and y coordinate in the image every time I call it, i.e. something like this:

float noise_2d(int x, int y)
{
    // Generate a random number between 0 and 1 which is always the same for a given x, y
}

I hope that's clear enough! So how can I use <random> to create such a function? Thanks a lot :)
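One way to get a deterministic value per coordinate (a sketch of mine, not from the post; the hash constants are arbitrary) is to hash (x, y) into a seed and construct a freshly seeded engine on each call:

```cpp
#include <cstdint>
#include <random>

// Deterministic 2D noise: hash the coordinates into a seed, then draw a
// single uniform float. The same (x, y) always yields the same value.
float noise_2d(int x, int y)
{
    // Simple coordinate hash (the multipliers are arbitrary large primes).
    std::uint32_t seed = static_cast<std::uint32_t>(x) * 73856093u
                       ^ static_cast<std::uint32_t>(y) * 19349663u;
    std::mt19937 gen(seed);
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    return dist(gen);
}
```

Constructing an engine per call is slow compared with a real gradient-noise hash; for production Perlin noise a permutation table is the usual choice, but this shows the repeatability idea.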

Stretched pixels in windowed mode

04 June 2014 - 05:15 PM

Hi all,


I've created a simple program using DX9 in windowed mode. It uses a simple diffuse shader to render a mesh in a 640x480 window. The program seems to work fine but there's a strange array of stretched pixels which distort the image. Here's a shot:


[Attached image: pix1.png]


If you can't see them, here's the same image after edge-detect:


[Attached image: pix2.png]


Is there anything I'm overlooking that needs to be done to render in windowed mode? (I've been doing fullscreen rendering until now.) Thanks!
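For reference, one common cause of this symptom (an assumption on my part, not confirmed from the screenshots) is a backbuffer whose dimensions don't match the window's client area, so the image is stretched on present. A sketch of matching them when filling in the DX9 present parameters:

```cpp
// Hypothetical fragment: size the backbuffer to the client area so no
// stretching occurs on present in windowed mode.
RECT rc;
GetClientRect(hWnd, &rc);

D3DPRESENT_PARAMETERS pp = {};
pp.Windowed         = TRUE;
pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
pp.BackBufferWidth  = rc.right - rc.left;   // client area, not window size
pp.BackBufferHeight = rc.bottom - rc.top;
pp.BackBufferFormat = D3DFMT_UNKNOWN;       // use the desktop format in windowed mode
pp.hDeviceWindow    = hWnd;
```

Note that the client area of a 640x480 window is smaller than 640x480 once borders and the title bar are subtracted, which is exactly the mismatch that produces rows of duplicated or stretched pixels.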

Recalculating terrain normals

31 May 2014 - 04:42 AM

Hi everyone,


I'm trying to get a terrain shader working and I'm having problems recalculating the normals. Here's my shader at the moment:




I'm using a technique I found in a paper to compute the terrain normal from the heightmap data that sets the height of each vertex in a 2D plane. The problem is that the shading is still totally flat and the normals still point straight up, as they do in the original plane. Can anyone see why? Should I be doing the normal calculation in the pixel shader instead? (I tried that too, and it doesn't work either.)
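Since the shader source itself isn't shown here, for comparison this is one standard way (not necessarily the paper's method) to derive a normal from four neighbouring heightmap samples with central differences; all names are illustrative:

```cpp
#include <cmath>

// Central-difference normal for a height field y = h(x, z) sampled on a
// grid with spacing `cell`. hL/hR are the heights one cell to the
// left/right in x; hD/hU are one cell back/forward in z.
struct Vec3 { float x, y, z; };

Vec3 heightmap_normal(float hL, float hR, float hD, float hU, float cell)
{
    // The unnormalised normal of a height field is (-dh/dx, 1, -dh/dz).
    Vec3 n = { (hL - hR) / (2.0f * cell),
               1.0f,
               (hD - hU) / (2.0f * cell) };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```

If the normals always come out as (0, 1, 0), a common culprit is sampling the same texel for all four neighbours (e.g. a texel-size constant of zero), which makes both differences vanish.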




Here's how the terrain looks:


[Attached image: tershade.jpg]

Confused about monte carlo path tracing

08 May 2014 - 07:37 AM

Hi everyone,


I've tried to update my ray tracer to handle Monte Carlo path tracing rather than standard ray tracing. I've got it working, but I'd like to check my algorithm over as I'm unsure about a few things. Here's what I'm doing:


Base rendering routine: for each pixel (looping over y and x) and for each sample (e.g. 100 rays/pixel), shoot a ray through a random point in the pixel, add up the results, then divide by the number of samples.
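That per-pixel loop might look like the following sketch, where trace() is a stand-in for the real ray-trace call:

```cpp
#include <random>

// Stand-in for the real trace; returns a constant here so the average is
// easy to check. In the real renderer this would shoot a ray through (u, v).
float trace(float /*u*/, float /*v*/) { return 0.5f; }

// Average n_samples jittered rays for one pixel, as described above.
float render_pixel(int n_samples, std::mt19937& gen)
{
    std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
    float sum = 0.0f;
    for (int s = 0; s < n_samples; ++s)
    {
        float u = jitter(gen), v = jitter(gen); // random point inside the pixel
        sum += trace(u, v);
    }
    return sum / static_cast<float>(n_samples);
}
```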


My trace function does the following:


- For each initial pixel ray, assign an attenuation vector with the value (1, 1, 1) which will be decreased as the ray propagates

- If there's no intersection, return a background colour

- If the intersected shape has an emissive colour, return that colour with 100% probability

- Define the following:


float kd = shape->material.diffuseColour.sum(), ks = shape->material.specularColour.sum(), kt = shape->material.transmitColour.sum();


...where '.sum()' adds together all the components of the vector


- Choose whether to spawn a diffuse, specular or transmitted ray like this:


diffuseAttenuation = attenuation*shape->material.diffuseColour; //Per-component multiplication

diffuseProbability = kd/(kd + ks + kt);

if (diffuseAttenuation.findmax() > cutoff && random(0, 1) < diffuseProbability) //'.findmax()' returns largest component of vector


- find a random (cosine weighted) direction

- spawn a recursive ray in this direction (its attenuation vector is now equal to diffuseAttenuation)

- multiply it by the current diffuse colour and return.





specularAttenuation = attenuation*shape->material.specularColour;

specularProbability = ks/(ks + kt);

if (specularAttenuation.findmax() > cutoff && random(0, 1) < specularProbability)


- spawn a recursive ray in a perfectly reflected direction (its attenuation vector is now equal to specularAttenuation)

- multiply it by the specular colour and return





transmitAttenuation = attenuation*shape->material.transmitColour;

if (transmitAttenuation.findmax() > cutoff)


- spawn a recursive ray in a perfectly refracted direction (its attenuation vector is now equal to transmitAttenuation)

- multiply it by the transmission colour and return
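The ray-type selection described above can be sketched like this (Vec3 and the other names are stand-ins). A single uniform draw over [0, kd + ks + kt) reproduces the cascaded probabilities kd/(kd + ks + kt), then ks/(ks + kt), used in the post:

```cpp
#include <random>

// Pick one of the three scatter types by luminance-weighted probability.
struct Vec3
{
    float x, y, z;
    float sum() const { return x + y + z; }
};

enum class RayKind { Diffuse, Specular, Transmit, Absorbed };

RayKind choose_ray(const Vec3& diffuse, const Vec3& specular,
                   const Vec3& transmit, std::mt19937& gen)
{
    float kd = diffuse.sum(), ks = specular.sum(), kt = transmit.sum();
    float total = kd + ks + kt;
    if (total <= 0.0f)
        return RayKind::Absorbed; // nothing to scatter

    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    float r = u(gen) * total;
    if (r < kd)      return RayKind::Diffuse;
    if (r < kd + ks) return RayKind::Specular;
    return RayKind::Transmit;
}
```

Note the post also gates each branch on findmax() > cutoff; that attenuation cutoff is a separate termination test and is omitted from this sketch.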






...OK, so here's what I want to ask about:


- I've seen some sites/papers say that I should divide each ray by its probability, e.g. the colour returned by a diffuse ray should be divided by diffuseProbability, etc... - should I be doing this?

- Are there any things you can see wrong with my algorithm? I don't have any books on this, so I'm just going by what I can glean from papers and various university PDFs, none of which seem to tell the full story.




P.S. Here's a picture it rendered, which took about 45 minutes at 500 samples per pixel. Is this the kind of speed I should expect with this type of path tracing?