About RealtimeSlave
  1. Sorry for the late reply, but I was not able to be here over the last few days.

     At least in the referenced McIntosh12 paper, MAX seems to be used for the union. But I still don't know how they compute the sample offsets for the 3x3 flood fill pass. Do they just sample all neighbours around each center pixel? Or is there again a sample offset array and a scaling by the COC factor involved?

     Are you really sure this is the case? The wording they use does not really fit the algorithm above. They say "Downscale COC target k times (k = tile count)". If we downscale a renderbuffer of arbitrary dimensions "k times", how can we end up with "k tiles"? Maybe the wording is just wrong. How is k actually chosen? Does this also mean that later, when looking up min/max COC from this downscaled buffer, the shader has to map the current half-res x,y position to the downscaled x,y position?

     Afaik the purpose of a bilateral filter is to preserve edges, so why should they give "blurry" pixels more weight when generating the half-res input? Honestly, it does not make any sense to me. I would guess they really just want a good-quality downscaled half-res input, and they use an edge-preserving bilateral filter to avoid intensity leakage artifacts.

     As above, not sure this is correct =/

     Cool man! I might take a look once I have more details on the original algorithm.
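To make my reading of the flood fill concrete, here is a CPU sketch under the assumption that "union via MAX" means each output pixel takes the highest-luminance colour of its 3x3 neighbourhood. The Rec. 709 luminance weights and the clamped border handling are my own assumptions, not something stated in the paper or the slides:

```python
def luminance(rgb):
    """Rec. 709 luma weights; my assumption for picking "the brightest" colour."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def flood_fill_3x3(image, width, height):
    """image: row-major list of (r, g, b) tuples.
    Each output pixel takes the brightest colour of its 3x3 neighbourhood,
    approximating a boolean union of overlapping bokeh shapes via MAX."""
    out = []
    for y in range(height):
        for x in range(width):
            best = image[y * width + x]
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    # Clamp to the border; the real pass may handle edges differently.
                    sx = min(max(x + dx, 0), width - 1)
                    sy = min(max(y + dy, 0), height - 1)
                    candidate = image[sy * width + sx]
                    if luminance(candidate) > luminance(best):
                        best = candidate
            out.append(best)
    return out
```

With this reading, a single bright bokeh pixel dilates into its whole 3x3 neighbourhood, which would fill the holes between sparse taps.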
  2. Hello, I found a discussion here https://www.gamedev.net/forums/topic/675214-dof-near-field-bleeding/ about the DoF implementation in CryEngine 3 (slides: https://de.slideshare.net/TiagoAlexSousa/graphics-gems-from-cryengine-3-siggraph-2013), but unfortunately the thread was already closed. I have some questions, and hopefully someone is still around who can answer them:

     1. Slide 36: I guess this describes how to compute the 49 sample offsets for the 7x7 kernel, i.e. how to compute the final morphed radius and angle for a sample relative to the current center pixel? So this could be done once on the CPU: compute radius and angle for all 49 samples and upload a uniform array with the x,y offsets derived from the angle and radius for each position?

     2. Slide 37: How exactly does the flood-fill pass work? What are the offsets for taking 3x3 samples, or do we just sample the 3x3 pixels around each current center pixel? And do we take the maximum/brightest color of the 9 samples to approximate the boolean union? If yes, how do we compute which of the 9 RGB colors is "the brightest"?

     3. Slide 41: Honestly, I don't get what this "tile count" stuff is. What tiles are they talking about? Where does the tile count come from? I guess we have some R8G8 renderbuffer which we want to fill with something. Our input renderbuffer contains the full-res COC as an A8 or float texture (I don't know what format is proposed to store the COC values)? What exactly are we doing when downscaling tile-count times? What is the downscale shader doing during each pass? Does it start with the full-res or the half-res input mentioned in the slides?

     4. Slide 42: What does this custom bilateral filter look like? Is there an implementation available? Does this filter downscale only color, or also COC? Does "weight samples with far COC" mean each sample color is multiplied with the COC of the sample (taken from the half-res input)? What does "pre-multiply layer with far COC" mean? Is "layer" the half-res color input, or the full-res input before the bilateral downscale? Do we AGAIN multiply colors with COC although we already do this when we "weight samples"?

     5. Slide 44: How does the upscale bilateral filter work? For each full-res pixel, do we take 4 taps from half-res and do some magic based on the COC values? We do this "somehow" in a bicubic-quality manner? And then blend based on far COC? Is far COC between 0.0 and 1.0?

     OK, a lot of questions, I know ;( But help is really appreciated!
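To illustrate what I mean in question 1, here is a hedged CPU-side sketch of precomputing the 49 offsets once so they can be uploaded as a uniform array and scaled by the per-pixel COC in the shader. The concrete square-to-disc morph used here (keep each tap's angle, use the square's Chebyshev radius so the corners land on the circle) is purely my assumption about what slide 36 describes:

```python
import math

def make_kernel_offsets(taps_per_side=7, max_radius=1.0):
    """Precompute x,y offsets for a taps_per_side x taps_per_side grid of
    samples morphed into a disc, intended to be computed once on the CPU."""
    offsets = []
    n = taps_per_side
    for j in range(n):
        for i in range(n):
            # Grid position -> [-1, 1] square coordinates (tap centers).
            u = 2.0 * (i + 0.5) / n - 1.0
            v = 2.0 * (j + 0.5) / n - 1.0
            # Square-to-disc morph (assumption): keep the tap's angle and use
            # the square's Chebyshev radius, so the square's corners end up on
            # the circle instead of poking outside it.
            r = max(abs(u), abs(v))
            angle = math.atan2(v, u)
            offsets.append((max_radius * r * math.cos(angle),
                            max_radius * r * math.sin(angle)))
    return offsets
```

The shader would then only have to do `uv + offsets[k] * coc` per tap.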
  3. I am also not registered to any newsletter. So they probably hacked the gamedev.net database, which contains the users' email addresses, because this is the only spam/phishing I have received in the last 3 months...
  4. Hello, this is the second time I got this mail. Nearly all links in the mail point to http://cts.vresp.com/u?******************* Is it spam/phishing or your official newsletter? Thanks
  5. Light rays through windows

     Thanks for your replies! Yep, that is exactly what I meant with the provided link. Don't trust the still images; when moving through the rays it does not look like a "cheap" transparent mesh approach :) Great! I'll try that. Thanks
  6. Hello! Does anybody know how the light ray effect below is achieved? It does not seem to be a post-processing god ray effect; instead it has the following characteristics:

     - Even if the window/light source is not visible, the rays are still there (so it does not work as described here: light scattering)
     - When walking through the rays, it looks like real volumetric foggy light (no cheap textured light cones)
     - There are also some dust particles flying in the rays, but I am mainly interested in the basic idea, not in the particles

     Are there any recent "state of the art" docs/tutorials/papers on this topic? What is the correct term for this effect? I use the OpenGL 3.2 core profile. Help is really appreciated!
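For anyone searching later: this effect is commonly called "volumetric lighting" or "light shafts", and the standard approach is to ray-march through the fog volume, testing each sample against the light's shadow map so the shafts persist even when the window is off-screen. A minimal CPU sketch of that idea, where is_lit() is a hypothetical stand-in for the shadow-map lookup and the constant density is an assumption:

```python
def march_light_shaft(ray_origin, ray_dir, steps, step_size, is_lit, density=0.05):
    """Accumulate single-scattered light along one view ray.
    is_lit(point) stands in for a shadow-map test
    (True = the point is visible from the light)."""
    scattered = 0.0
    transmittance = 1.0
    for i in range(steps):
        t = (i + 0.5) * step_size
        p = tuple(o + d * t for o, d in zip(ray_origin, ray_dir))
        absorb = density * step_size
        if is_lit(p):
            # In-scattering at this sample, attenuated by the fog in front of it.
            scattered += transmittance * absorb
        # Beer-Lambert-style attenuation towards the next sample.
        transmittance *= 1.0 - absorb
    return scattered
```

In a real renderer this loop runs in a fragment shader (often at half resolution with a bilateral upsample), and the result is added to the scene colour.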