s.howie

Members
  • Content count

    12
  • Joined

  • Last visited

Community Reputation

138 Neutral

About s.howie

  • Rank
    Member
  1. Game-dev collective

      The dishwasher?
  2. Graphics Programming Exercises

    Thank you, everyone, for your replies.

    @Rld_ Thank you for the time you took to outline a sequential set of exercises that build on one another. That's fantastic (and there is plenty of work there to keep me busy)!

    @studentTeacher Thanks for the inspirational anecdote about your own learning experience. I do find I learn better when I am exploring my own interests, so I will take this on board.
  3. Graphics Programming Exercises

    Thank you, both, for your replies. They are greatly appreciated.

    As far as reading materials go, I am very happy with the books and tutorials I have already sourced. What I am interested in finding is a set of problems to exercise my grasp of the knowledge I've gained from those materials. Some example exercises might be:

    - Write a program that renders a cylinder that meets a specific set of parameters (easy with a framework, harder without).
    - Write a program that renders a pyramid that meets a specific set of parameters.
    - Write a Phong shader.
    - etc.
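    As a rough sketch of what the cylinder exercise might involve without a framework, here is one way to generate the side-wall geometry in plain JavaScript, ready to upload to a WebGL buffer. The function name and parameters are illustrative, not from the thread.

    ```javascript
    // Generate the side-wall triangles of a cylinder centred on the y-axis,
    // as interleaved x/y/z positions ready for gl.bufferData. Two triangles
    // per segment; the end caps are left as a follow-up exercise.
    function cylinderPositions(segments, radius, height) {
      const positions = [];
      for (let i = 0; i < segments; i++) {
        const a0 = (i / segments) * 2 * Math.PI;
        const a1 = ((i + 1) / segments) * 2 * Math.PI;
        const x0 = Math.cos(a0) * radius, z0 = Math.sin(a0) * radius;
        const x1 = Math.cos(a1) * radius, z1 = Math.sin(a1) * radius;
        positions.push(
          x0, 0, z0,       x1, 0, z1,       x0, height, z0,
          x0, height, z0,  x1, 0, z1,       x1, height, z1
        );
      }
      return new Float32Array(positions);
    }
    ```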
  4. Graphics Programming Exercises

    @d4n1 Thanks for your reply.

    I understand the advantages of utilising a framework such as ThreeJS. However, in this case, I am not trying to make a game. Rather, I am learning graphics programming, and to that end I want to go closer to the metal. :)
  5. Graphics Programming Exercises

    Hi all,

    I've had an interest in graphics programming for some time, but I've been intimidated by its perceived complexity. As I am getting older (30 this year) and time is passing me by, I've decided to overcome these fears, and for the last month I have been studying the WebGL graphics library and pipeline. (I chose WebGL simply because I have web dev experience, so it is the most accessible graphics library for me.)

    I've bought a few books on the topic, and I'm slowly marching my way through them. But these books are more an introduction to the graphics library, imparting a basic theoretical understanding of the WebGL pipeline and the WebGL API. This is great knowledge to have, but I crave an additional stream of learning that is more practical in nature.

    When learning a new programming language I like to do exercises such as those found on Project Euler. This way I can learn about the language whilst putting it into practice solving interesting problems. Could anyone recommend some good graphics programming exercises that I could tackle to turn my theoretical book knowledge into practical experience?

    Any suggestions are greatly appreciated.
  6. How was this achieved?

    @Khatharr Thanks for the further explanation.

    I assumed that depth testing would usually be performed on graphics hardware. However, the skill set and tools I have immediately at my disposal won't allow me to program for it (I planned to play around with the idea in JavaScript, and my graphics card isn't supported by WebGL). Hence the inefficient CPU rendering of the image. Essentially, my plan was to overlay the water image onto the environment by comparing the two depth buffers (rendered as images). I understand this is not what really happens when a scene is rendered to the screen.

    However, your comment has helped me further understand the concept, and for that I am grateful. Your description of how they would not clear the depth buffer before drawing the water, but instead copy in the depth buffer of the environment for comparison against those of the water plane, drove it home for me.

    Your insights into the rendering process have given me a great appreciation of how a scene is rendered. I find it hard to believe how many calculations must be going on under the hood to do a raycast for each pixel on the screen against each triangle in the scene, each and every frame. It does all this and runs at 60fps!

    Mind = Blown

    Obviously there must be optimisation strategies to only do tests that make sense (like spatial partitioning?), and maybe not recalculating depths for an object that has not moved if the camera is static (though this wouldn't help keep a solid framerate whilst the camera is moving). And many other things I wouldn't even think of.

    Thank you for all your time in trying to make sure I understand the concept. It has been very valuable. But I feel as if I am only becoming more and more curious ;P
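    The CPU-side plan described here (comparing two depth images and keeping whichever pixel is nearer) might look roughly like the following sketch. All names are illustrative assumptions: one float depth value per pixel, RGBA byte colours, and smaller depth meaning closer to the camera.

    ```javascript
    // Composite a water image over an environment image by depth comparison.
    // envColor/waterColor are RGBA byte arrays (4 bytes per pixel);
    // envDepth/waterDepth hold one depth value per pixel, smaller = closer.
    function compositeByDepth(envColor, envDepth, waterColor, waterDepth, out) {
      for (let i = 0; i < envDepth.length; i++) {
        // Pick whichever surface is closer to the camera at this pixel.
        const src = waterDepth[i] < envDepth[i] ? waterColor : envColor;
        for (let c = 0; c < 4; c++) {
          out[i * 4 + c] = src[i * 4 + c];
        }
      }
      return out;
    }
    ```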
  7. How was this achieved?

    @Khatharr Thank you for that succinct explanation. I feel I am starting to understand, at a very high level of abstraction, what is going on.

    I'm inspired to play around with the concept to see if I can get a similar effect. I may cheat, though, and render an image: I'll render both the environment model and water plane depth buffers as images, then do a pixel-by-pixel comparison to see which pixel I should choose to draw into the combined image.

    Thanks again.
  8. How was this achieved?

    @Khatharr Thanks for your explanation and the links.

    My understanding of what I am reading on z-buffering is that the depth value is the distance from the camera. If the depth value alone were used to determine where the water was rendered, would it not render in a plane that faces the camera, rather than a plane facing up the y-axis?

    Perhaps I am coming at this from the wrong angle. I am thinking of the water as a flat texture rather than as a plane that has also been rendered and has its own z-buffer. If the latter were the case, then this plane's depth buffer could be shifted to shift the water height, and the two depth buffers could be checked against each other, rendering the pixel closest to the camera.

    Am I getting close?
  9. How was this achieved?

    @L. Spiro Thanks for your reply. My 5-year-old brain is struggling to understand all of what you said, but I think I have a tentative grasp on the basic idea.

    What I took away from your explanation was as follows:

    1) When the 2D image of the environment is generated, a depth buffer is returned along with the RGB value of each pixel.
    2) This depth information per pixel can then be used to somehow mask the water effect.

    I got lost where you mention "reversing the actual height of each pixel".

    I have a few assumptions on how step 2 is achieved:

    1) I assume that the depth buffer stores, for each pixel, the distance from the camera, and that the height of the pixel could then be found from this depth value and the angle and position of the camera.
    2) Once the height of a pixel is known, I assume that when the water texture is rendered, each pixel is checked against the corresponding pixel height of the environment image, and is only rendered where that height is equal to or below the water height.
    3) Knowing next to nothing about graphics programming, I'm assuming these operations, being per pixel, are pushed to the GPU by a shader.

    Thanks again for shedding some light on what is going on under the hood. Please let me know if I have horribly misunderstood your explanation.
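    Assumptions (1) to (3) above could be sketched as a WebGL fragment shader (GLSL embedded in a JavaScript string) that back-projects each pixel of the environment's depth texture to world space and discards water fragments where the environment rises above the water level. The uniform and varying names here are hypothetical, not from the thread, and the depth texture is assumed to store window-space depth in [0, 1].

    ```javascript
    // A minimal water-mask fragment shader, for a full-screen water pass.
    const waterMaskFS = `
      precision highp float;
      uniform sampler2D uDepth;      // environment depth, packed in [0,1]
      uniform mat4 uInverseViewProj; // inverse of (projection * view)
      uniform float uWaterHeight;    // current water level in world units
      varying vec2 vUv;

      void main() {
        float depth = texture2D(uDepth, vUv).r;
        // Back-project from normalized device coordinates to world space.
        vec4 ndc = vec4(vUv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
        vec4 world = uInverseViewProj * ndc;
        world /= world.w;
        // Only shade water where the environment lies at or below the level.
        if (world.y > uWaterHeight) discard;
        gl_FragColor = vec4(0.1, 0.3, 0.6, 0.5); // placeholder water colour
      }
    `;
    ```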
  10. How was this achieved?

    I watched this video that demonstrates some graphics tech Obsidian are developing for Project Eternity:

    http://youtu.be/AUleDEFkUtE?t=1m48s

    I'm really curious how they achieved 3D dynamic water levels and lighting on a 2D image.

    If anyone could explain it to me like I am 5, I would greatly appreciate it.
  11. Object pooling and managed languages

    For those interested in the topic, I asked the same question on the JavaScript subreddit and had some interesting responses. I am posting the link below for the sake of closure:

    http://www.reddit.com/r/javascript/comments/12z99j/object_pooling_best_practices/
  12. Object pooling and managed languages

    I am considering implementing object pooling in my game. The game is written in a managed language (JavaScript), and the goal of implementing object pooling is to reduce garbage collection by reusing objects.

    However, my worry is that object pooling would make things fragile. The danger I am particularly concerned about is that it is possible to ignorantly keep a reference to an object and manipulate it after it has been returned to the pool. This could cause bugs that would be very difficult to track down. For example, imagine I have a pool of vectors and the following happens:

    - ClassA gets a vector from ClassB.
    - ClassB returns the vector sent to ClassA back to the pool.
    - ClassC gets a vector from ClassB (the same object as stored by ClassA) and stores it.

    The issue here is that one can never be sure that an object received from a pool will not be returned and recycled without one's knowledge. I'm having difficulty deciding whether the risk is worth the reward, or if there is a way to mitigate it. Any advice would be greatly appreciated.
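    The scenario above can be reproduced with a minimal pool in a few lines. The VectorPool class and its acquire/release methods are hypothetical names used only to illustrate the stale-reference hazard:

    ```javascript
    // A deliberately minimal object pool, to illustrate the hazard only.
    class VectorPool {
      constructor() { this.free = []; }
      acquire(x, y) {
        const v = this.free.pop() || { x: 0, y: 0 };
        v.x = x; v.y = y;
        return v;
      }
      release(v) { this.free.push(v); }
    }

    const pool = new VectorPool();
    const a = pool.acquire(1, 2); // "ClassA" holds on to this reference
    pool.release(a);              // "ClassB" returns it to the pool...
    const c = pool.acquire(9, 9); // ...and "ClassC" is handed the same object
    console.log(a === c);         // true: any write through `a` now corrupts `c`
    ```

    One common mitigation, at least in debug builds, is to stamp each pooled object with a generation counter that acquire increments; holders record the counter when they receive the object and validate it before each use, so stale references can be detected rather than silently corrupting state.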