Community Reputation

122 Neutral

About draktheas

  1. draktheas

    SSAO woes

    OK, based on the other thread that MJP posted a link to (thanks, MJP), I have attempted my own version of SSAO. I am getting very strange artifacts, and I am sure it has to do with my far-frustum-corner calculation. Any help would be appreciated. Here is the RenderMonkey project: Thanks, Drak
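As a point of comparison, one common way to build the view-space far-plane corners is directly from the vertical FOV, aspect ratio, and far distance. The sketch below is a guess at the usual construction, not the RenderMonkey project's actual code; the handedness and corner order are assumptions that have to match your projection setup:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Compute the four view-space far-plane corners from a vertical FOV (radians),
// aspect ratio (width/height), and far-plane distance. Assumes a left-handed
// view space with +z into the screen; negate z for right-handed. The corner
// order (BL, BR, TL, TR) is an assumption -- it must match your vertex order.
void FarFrustumCorners(float fovY, float aspect, float farZ, Vec3 out[4])
{
    const float halfH = std::tan(fovY * 0.5f) * farZ; // half-height at far plane
    const float halfW = halfH * aspect;               // half-width at far plane
    out[0] = { -halfW, -halfH, farZ }; // bottom-left
    out[1] = {  halfW, -halfH, farZ }; // bottom-right
    out[2] = { -halfW,  halfH, farZ }; // top-left
    out[3] = {  halfW,  halfH, farZ }; // top-right
}
```

In the SSAO pass the interpolated corner direction is typically scaled by the linear depth (depth / farZ) to reconstruct view-space position, so an error here shows up as exactly the kind of warped artifacts described above.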
  2. draktheas

    SSAO woes

    Harry, if you figure out what the problem is please let us know what you did to solve it. I have a similar problem with my implementation of SSAO. Thanks, Drak
  3. draktheas

    Deferred shading - Materials attributes

    Wolf, can you elaborate on your "Light indexed renderer or Light Pre-pass Renderer" a little? There are no details about it in your blog, and your blog appears to be the only Google result for those terms. Any concrete info would be greatly appreciated. Drak
  4. draktheas

    Overview of HDR

    Are there any papers, blogs, or other articles that talk broadly about the considerations for HDR? Things like LogLuv compression, light sources, gamma, etc.? I can find plenty of information about the basics of implementing HDR, such as different tone mapping algorithms and full-screen effects that enhance the look of HDR. But there seems to be a lot more to it than just rendering to a floating-point buffer and doing some post-processing. I am looking for information from a whole-system point of view. Also, any information on supporting both HDR and LDR in the same engine would be welcome. Thanks for any help, Drak
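As one concrete example of the kind of encoding mentioned above, here is a sketch of a LogLuv-style transform, assuming linear Rec.709 input: chromaticity stays low dynamic range while a log2 term absorbs the wide luminance range. Real 32-bit LogLuv packings also quantize the result into 8-bit channels; that packing step is omitted here.

```cpp
#include <cmath>

struct Color  { float r, g, b; };
struct LogLuv { float u, v, logLum; }; // CIE u'v' chroma + log2 luminance

LogLuv Encode(Color c)
{
    // Linear Rec.709 RGB -> CIE XYZ
    float X = 0.4124f * c.r + 0.3576f * c.g + 0.1805f * c.b;
    float Y = 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;
    float Z = 0.0193f * c.r + 0.1192f * c.g + 0.9505f * c.b;
    float d = X + 15.0f * Y + 3.0f * Z + 1e-10f;  // guard against division by zero
    return { 4.0f * X / d, 9.0f * Y / d, std::log2(Y + 1e-10f) };
}

Color Decode(LogLuv e)
{
    float Y = std::exp2(e.logLum);
    float X = 9.0f * Y * e.u / (4.0f * e.v);
    float Z = Y * (12.0f - 3.0f * e.u - 20.0f * e.v) / (4.0f * e.v);
    // CIE XYZ -> linear Rec.709 RGB
    return { 3.2406f * X - 1.5372f * Y - 0.4986f * Z,
            -0.9689f * X + 1.8758f * Y + 0.0415f * Z,
             0.0557f * X - 0.2040f * Y + 1.0570f * Z };
}
```

This is one of the "system" decisions the post is asking about: the encoding dictates whether hardware blending and filtering still work on the HDR buffer (they generally do not on a LogLuv target).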
  5. draktheas

    Image class for raytracing advices

    The way I have done it, and seen it done many times, is that in addition to width and height, the base image class has a pixel depth or pitch, which tells the user of the class how many bits or bytes each pixel takes, plus a format, which tells the user what layout the buffer is in. Typically you have multiple formats even within the same image size. For example, you may have a 24-bit image that is represented as R8G8B8 or R6G6B6A6. I would recommend staying away from a per-pixel GetPixel() call, as it will be the bottleneck in your system. It is better to have a GetData() method that returns a void* or char* to the raw data and then let the user of the class choose how they want to iterate over it. This way performance and ease of use can be balanced on a case-by-case basis. By providing a pointer to the raw data, a pixel depth, and a format, you are giving the user of the class all of the information they need to optimize for their particular situation. Drak
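The interface described above might look something like this minimal sketch (the class and format names are illustrative, not from any particular engine):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative format enum -- extend with the layouts your raytracer needs.
enum class PixelFormat { R8G8B8, R8G8B8A8 };

inline std::size_t BytesPerPixel(PixelFormat f)
{
    switch (f) {
        case PixelFormat::R8G8B8:   return 3;
        case PixelFormat::R8G8B8A8: return 4;
    }
    return 0;
}

class Image {
public:
    Image(std::size_t width, std::size_t height, PixelFormat format)
        : m_width(width), m_height(height), m_format(format),
          m_pitch(width * BytesPerPixel(format)),
          m_data(m_pitch * height) {}

    std::size_t Width()  const { return m_width; }
    std::size_t Height() const { return m_height; }
    std::size_t Pitch()  const { return m_pitch; }  // bytes per row
    PixelFormat Format() const { return m_format; }

    // Raw access: callers iterate using the pitch/format info above,
    // avoiding a per-pixel accessor call in hot loops.
    std::uint8_t*       GetData()       { return m_data.data(); }
    const std::uint8_t* GetData() const { return m_data.data(); }

private:
    std::size_t m_width, m_height;
    PixelFormat m_format;
    std::size_t m_pitch;
    std::vector<std::uint8_t> m_data;
};
```

A caller that knows the image is R8G8B8 can then walk row by row with `GetData() + y * Pitch()` and touch three bytes per pixel, with no virtual or bounds-checked call in the inner loop.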
  6. We are having some problems with our depth-buffer-based post-process effects, and it is mostly due to transparent (alpha-blended) objects in the scene. This seems like quite a common problem with post-process effects, and I wanted to see if there is an easy way to solve it. Basically, I want alpha objects to write their z values to the depth buffer as if they were opaque. Any suggestions? Thanks, Drak
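One workaround often suggested for this is a depth-only pass in which alpha objects are alpha-tested instead of blended: fragments above a coverage threshold write z exactly like opaque geometry, and the rest are discarded. The sketch below illustrates the idea against a toy 1D software depth buffer; the Fragment struct and the 0.5 threshold are illustrative, not from any API:

```cpp
#include <algorithm>
#include <vector>

struct Fragment { int x; float depth; float alpha; };

// Depth-only pass: alpha-blended fragments are alpha-TESTED here, so any
// fragment above alphaRef writes z as if it were opaque. On real hardware
// this corresponds to disabling color writes and blending, enabling depth
// writes, and enabling an alpha test (or clip/discard in the shader).
void DepthOnlyPass(std::vector<float>& depthBuf,
                   const std::vector<Fragment>& frags,
                   float alphaRef = 0.5f)
{
    for (const Fragment& f : frags) {
        if (f.alpha < alphaRef)
            continue; // nearly transparent texels leave depth untouched
        depthBuf[f.x] = std::min(depthBuf[f.x], f.depth); // less-equal z write
    }
}
```

The trade-off is that the post-process effect then treats the whole alpha surface as solid at the threshold, which is usually acceptable for foliage-style cutouts but wrong for smooth glass-like transparency.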