

s.howie

Member Since 10 Nov 2012
Offline Last Active Jan 27 2014 01:41 AM

Posts I've Made

In Topic: Graphics Programming Exercises

09 January 2014 - 08:48 PM

Thank you, everyone, for your replies.

 

@Rld_

 

Thank you for the time you took to outline a sequential set of exercises that build one on top of the other. That's fantastic (and there is a lot of work there to keep me busy)!

 

@studentTeacher

 

Thanks for the inspirational anecdote of your own learning experience. I do find I learn better when I am exploring my own interests, so I will take this on board.


In Topic: Graphics Programming Exercises

08 January 2014 - 04:11 PM

Thank you both for your replies. They are greatly appreciated.

As far as reading materials go, I am very happy with the books and tutorials I have already sourced.

What I am interested in finding is a set of problems to exercise my grasp on the knowledge I've gained from the reading materials.

Some example exercises may be:

- Write a program that renders a cylinder that meets a specific set of parameters (easy with a framework, harder without).
- Write a program that renders a pyramid that meets a specific set of parameters.
- Write a Phong shader.
- etc.
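As a rough sketch of how the cylinder exercise might begin without a framework (function and parameter names here are just my own illustration), the side-wall vertices can be generated by hand as triangles before any rendering API is involved:

```javascript
// Illustrative sketch: generate the side-wall vertices of a cylinder
// (two triangles per segment) as a flat [x, y, z, ...] array.
function cylinderSideVertices(radius, height, segments) {
  const verts = [];
  for (let i = 0; i < segments; i++) {
    const a0 = (i / segments) * 2 * Math.PI;
    const a1 = ((i + 1) / segments) * 2 * Math.PI;
    const x0 = radius * Math.cos(a0), z0 = radius * Math.sin(a0);
    const x1 = radius * Math.cos(a1), z1 = radius * Math.sin(a1);
    // Two triangles forming one quad of the cylinder wall
    verts.push(
      x0, 0, z0,       x1, 0, z1,       x0, height, z0,
      x0, height, z0,  x1, 0, z1,       x1, height, z1
    );
  }
  return verts;
}
```

The caps and normals would be a natural follow-up step, and the same vertex array could later be fed to a WebGL buffer.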

In Topic: Graphics Programming Exercises

07 January 2014 - 03:16 PM

@d4n1

 

Thanks for your reply.

 

I understand the advantages of utilising a framework such as ThreeJS. However, in this case, I am not trying to make a game. Rather, I am learning graphics programming. To this end, I want to go closer to the metal. :)


In Topic: How was this achieved?

13 April 2013 - 05:30 AM

@Khatharr

 

Thanks for the further explanation.

 

I assumed that depth testing would usually be performed on graphics hardware. However, the skill set and tools I have immediately at my disposal won't allow me to program for it (I planned to play around with the idea in JavaScript, and my graphics card isn't supported by WebGL). Hence the inefficient CPU rendering of the image. Essentially, my plan was to overlay the water image onto the environment by comparing the two depth buffers (rendered as images). I understand this is not what really happens when a scene is rendered to the screen.

 

However, your comment has helped me further understand the concept, and for that I am grateful. Your description of how they would not clear the depth buffer before drawing the water, but instead copy in the depth buffer of the environment for comparison against the depths of the water plane, drove it home for me.

 

Your insights into the rendering process have given me a great appreciation of how a scene is rendered. I find it hard to believe how many calculations must be going on under the hood to do a test for each pixel on the screen against each triangle in the scene, each and every frame. It does all this and still runs at 60fps!

 

Mind = Blown

 

Obviously there must be optimisation strategies to only do tests that make sense (like spatial partitioning?), and maybe not recalculating depths for an object that has not moved if the camera is static (though this wouldn't help keep a solid frame rate whilst the camera is moving). And many other things I wouldn't even think of.

 

Thank you for all your time in trying to make sure I understand the concept. It has been very valuable. But I feel as if I am only becoming more and more curious ;P


In Topic: How was this achieved?

13 April 2013 - 03:12 AM

@Khatharr

 

Thank you for that succinct explanation. I feel I am starting to understand, at a very high level of abstraction, what is going on.

 

I'm inspired to play around with the concept to see if I can get a similar effect. I may cheat, though, and render a still image. I'll render both the environment model's and the water plane's depth buffers as images, then do a pixel-by-pixel comparison to see which pixel I should choose to draw into the combined image.
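That pixel-by-pixel plan might look something like this (a rough sketch, assuming smaller depth values mean closer to the camera; all names are my own illustration):

```javascript
// Illustrative sketch: composite two layers by comparing their depth buffers.
// depthA/depthB are parallel arrays of per-pixel depths; colorA/colorB hold
// the corresponding pixel colours. The nearer pixel wins, like a depth test.
function compositeByDepth(depthA, colorA, depthB, colorB) {
  const out = new Array(depthA.length);
  for (let i = 0; i < depthA.length; i++) {
    out[i] = depthA[i] <= depthB[i] ? colorA[i] : colorB[i];
  }
  return out;
}
```

In practice the colour buffers would come from canvas ImageData rather than plain arrays, but the comparison per pixel is the same idea.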

 

Thanks again.

