
Necrolis

Member
  • Content count

    76

Community Reputation

1464 Excellent

About Necrolis

  • Rank
    Member

Personal Information

  • Interests
    Art
    Audio
    Design
    DevOps
    Programming
  1. What do with compute shaders?!

    I'm surprised no one has mentioned particle simulation yet; nVidia has a nice demo available as part of GameWorks (there is also a compute-based water simulation in the samples collection). Compute is also great for parallel sorting, though for a game that is a little less applicable unless you sort and then consume the data purely on the GPU. (A minimal dispatch sketch is below.)
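    Roughly, the host side of a compute-driven particle update looks something like the D3D11 sketch below; the names (UpdateParticles, pParticleCS, pParticleUAV) and the thread-group size are placeholders of mine, not taken from the nVidia sample:

    #include <d3d11.h>

    // Assumes a compute shader declared with [numthreads(256,1,1)] that updates a
    // RWStructuredBuffer<Particle> bound at register u0.
    const UINT PARTICLE_COUNT    = 65536;
    const UINT THREADS_PER_GROUP = 256;

    void UpdateParticles(ID3D11DeviceContext* pContext,
                         ID3D11ComputeShader* pParticleCS,
                         ID3D11UnorderedAccessView* pParticleUAV)
    {
        pContext->CSSetShader(pParticleCS, nullptr, 0);
        pContext->CSSetUnorderedAccessViews(0, 1, &pParticleUAV, nullptr);

        // One thread per particle, rounded up to whole thread groups.
        UINT groupCount = (PARTICLE_COUNT + THREADS_PER_GROUP - 1) / THREADS_PER_GROUP;
        pContext->Dispatch(groupCount, 1, 1);

        // Unbind so the buffer can be read as an SRV when the particles are drawn.
        ID3D11UnorderedAccessView* nullUAV = nullptr;
        pContext->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
    }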
  2. nVidia is finally putting Crassin's work to use with their voxel-powered GI (VXGI), but other than that, the trend seems to be to supplement conventional polygonal rendering with certain voxel-based techniques.
  3. In a slightly more generalized answer: if you can attach PIX, it will help track down the specific DX API call used to perform an operation or render, since its frame capture mode gives you a breakdown of the DX calls in a frame along with a before-and-after of the state and the rendered frame. It will also help in debugging any issues you might have (though unfortunately MS saw fit to break PIX with a certain Win7 patch...). The various GPU vendors also provide similar free tools (NSight for nVidia, GPUPerfStudio for AMD and GPA for Intel). The game might possibly call D3DPERF_SetOptions() to disable PIX, but it's very easy to NOP/remove; see the snippet below.
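    To make the D3DPERF_SetOptions() point concrete, this is roughly all a game has to do to block PIX (the function is a real d3d9 export; wrapping it in a helper is just for illustration), which is why patching it out is so trivial:

    #include <d3d9.h>   // D3DPERF_* is exported from d3d9.dll (link d3d9.lib)

    void BlockPixAttach()
    {
        // Passing 1 tells the D3D9 runtime to refuse profiler/PIX attachment;
        // NOP-ing this single call (or forcing the argument to 0) re-enables PIX.
        D3DPERF_SetOptions(1);
    }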
  4. Computer Graphics Stackexchange

    Committed   <offtopic> If Stack Exchange isn't the most annoying implementation of single-sign-on ever deployed to customers... </offtopic>   I bound mine to a Google account and haven't had a problem since.
  5. Unreal Engine 4

    They are releasing the Elemental Demo with UE4.1 (for free, see here); however, I still have not been able to figure out whether it contains the SVOGI impl., or if it's been converted to LightMass or the LPV tech they settled on for dynamic GI (the Fable: Legends blog had a post on this in the past week).
  6. Unreal Engine 4

    It's actually a metric ton of operator overloading done across multiple classes from what I can tell; it's really weird to look at, but it makes sense visually since the code closely matches the nesting hierarchy of the elements (a rough sketch of the pattern is below). The shaders are very well structured and nicely commented, so it should be a (relatively) simple task to add in new models; with the way they segregate a lot of it, it should be easier than starting from scratch, as you get to reuse a lot of the common infrastructure. The same goes for the C++ code, and for customizing the BRDF (which already has quite a lot of options built in).
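    As a rough illustration of that pattern (a made-up mini example of the technique, not the actual UE4 widget classes): operator[] and operator+ hand back the container they were invoked on, so the shape of the C++ expression mirrors the widget tree:

    #include <memory>
    #include <string>
    #include <vector>

    // Hypothetical widget types, purely to show the overloading style.
    struct Widget { virtual ~Widget() = default; };

    struct Label : Widget
    {
        std::string Text;
        explicit Label(std::string text) : Text(std::move(text)) {}
    };

    struct Panel : Widget
    {
        std::vector<std::shared_ptr<Widget>> Children;

        // operator[] adds a child and returns the panel, so lookups can be chained.
        Panel& operator[](std::shared_ptr<Widget> child)
        {
            Children.push_back(std::move(child));
            return *this;
        }
    };

    // operator+ splices one panel's children into another, again returning the target.
    Panel& operator+(Panel& target, const Panel& extra)
    {
        target.Children.insert(target.Children.end(), extra.Children.begin(), extra.Children.end());
        return target;
    }

    // Usage: the nesting of the expression mirrors the hierarchy being built.
    Panel BuildUI()
    {
        Panel toolbar;
        toolbar[ std::make_shared<Label>("File") ]
               [ std::make_shared<Label>("Edit") ];

        Panel root;
        root[ std::make_shared<Label>("Title") ] + toolbar;
        return root;
    }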
  7. Emulating SetGammaRamp in a shader

    It gets mapped back to a WORD before being used to fill the D3DGAMMARAMP (the reason for the size mapping is that there is also a palette-based software renderer, but it ignores the gamma ramp -.-). As for the off-by-one error, that's probably my fault somewhere along the line, so thanks for catching that.

    As for how it gets mapped: it's literally cast to a WORD, as the range mapping is done inside GenGammaRamp (I folded in the range value, fMaxGamma, because I'm only concerned with the D3D gamma; originally this was a parameter):

    double GammaTable[256];
    D3DGAMMARAMP ramp;
    GenGammaRamp(myGammaValue, myContrastValue, GammaTable);
    for(int i = 0; i < 256; i++)
    {
        WORD Gamma = (WORD)GammaTable[i];
        ramp.red[i] = ramp.green[i] = ramp.blue[i] = Gamma;
    }
    pD3ddevice->SetGammaRamp(0, D3DSGR_NO_CALIBRATION, &ramp);

    Ah, so it is just a straight-off "scaled-index". Originally I had tried using "Out.Color = pow(In.Color, gGamma / 2.2)", but I had no clue how to add in the contrast, and it also washed out the colors very quickly as opposed to the original ramp.

    I'm already using 1D LUTs to emulate 8-bit palettized color, so technically I should be able to remap the palette LUT to account for the gamma if I understand this correctly (a sketch of folding the ramp into the palette LUT is below), though I think it's probably best to first have it working with the double LUT. Your note about the texel centering reminds me that I didn't do this for my palette LUTs, so that fixes something else as well.
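    On that last point, a minimal sketch of what folding the ramp into the 256-entry palette LUT could look like before uploading it; the function name, the RGBA palette layout and the 1/65535 normalisation are assumptions on my part, not from the actual code:

    // Pre-applies the generated gamma table to an 8-bit RGBA palette so the
    // existing palette LUT shader needs no changes. Assumes the table maps
    // [0,255] -> [0,65535] per channel, as GenGammaRamp above produces.
    void ApplyGammaToPalette(const double* pGammaTable,       // 256 entries
                             const unsigned char* pPaletteIn, // 256 * RGBA, 8-bit
                             float* pPaletteOut)              // 256 * RGBA, float LUT texture
    {
        for(int i = 0; i < 256; i++)
        {
            for(int c = 0; c < 3; c++) // leave alpha untouched
            {
                unsigned char v = pPaletteIn[i * 4 + c];
                pPaletteOut[i * 4 + c] = (float)(pGammaTable[v] / 65535.0);
            }
            pPaletteOut[i * 4 + 3] = pPaletteIn[i * 4 + 3] / 255.0f;
        }
    }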
  8. Unreal Engine 4

    It's actually good: it means faster processing than include guards (which they also use here and there, but they make the mistake of using a double-underscore prefix, which, to be pedantic, you shouldn't ever do, since such names are reserved; see the example below). Spelunking around the UE4 source is quite interesting to say the least, especially some of the nifty "don't do this cause the debug layer explodes" comments.
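    For anyone wondering about the pedantry: identifiers containing a double underscore (or starting with an underscore followed by a capital letter) are reserved for the compiler and standard library, so the first guard below is technically off limits, while the second form and #pragma once are both fine:

    // Bad: __MY_HEADER_H__ is a reserved identifier.
    #ifndef __MY_HEADER_H__
    #define __MY_HEADER_H__
    // ...
    #endif

    // Fine: no leading/double underscores.
    #ifndef MY_HEADER_H
    #define MY_HEADER_H
    // ...
    #endif

    // Also fine (non-standard, but universally supported), and faster to process:
    #pragma once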
  9. I've been trying to figure out a way to map a gamma ramp generation function I have (obtained through RE) to an HLSL/GLSL function/operator, in an effort to emulate the "look and feel" of an older game I fiddle with in my spare time. However, I'm failing to get anywhere because I'm not sure how the gamma ramp set by IDirect3DDevice9::SetGammaRamp gets used when outputting a pixel. What I'm looking for is: if I have the RGB tuple "x", what operations are performed on x's channels using the ramp that yield the final pixel rendered to the back buffer?

    The ramp generation looks like so, if it helps in any way:

    void GenGammaRamp(long dwGamma, double fContrast, double* pRamp)
    {
        double fGamma = (double)dwGamma;
        double fFractGamma = 0.01 * fGamma;
        double fGammaPercent = 100.0 / fGamma;
        double fRampEnd = 255.0;
        double fMaxGamma = 65535.0;
        double fGammaFactor = 1.0 / fRampEnd;
        // Blends a power curve (exponent 100/gamma) with a gamma-scaled linear ramp,
        // weighted by fContrast; note the loop fills indices 0..254 only.
        for(double fRamp = 0.0; fRamp < fRampEnd; fRamp += 1.0)
        {
            double fGammaStep = fGammaFactor * fRamp * fMaxGamma * fFractGamma;
            fGammaStep = fGammaStep > fMaxGamma ? fMaxGamma : fGammaStep;
            double fFinalGamma = (pow(fGammaFactor * fRamp, fGammaPercent) * fMaxGamma * (100.0 - fContrast) + fGammaStep * fContrast) * 0.01;
            pRamp[(int)fRamp] = fFinalGamma;
        }
    }

    (The values get converted back to 8/16/32-bit integers just before they are sent off to the driver.)
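    (For anyone landing here later: as far as I can tell the answer turned out to be a plain per-channel "scaled-index" lookup, conceptually something like the sketch below; the exact rounding the driver/display hardware performs when squeezing the 16-bit entry back down to 8 bits is an assumption on my part.)

    // Conceptual only: how an 8-bit back buffer value x appears to be remapped
    // on scan-out. ramp[] holds the 256 16-bit entries set via SetGammaRamp.
    unsigned char ApplyRampToChannel(unsigned char x, const unsigned short* ramp)
    {
        return (unsigned char)(ramp[x] >> 8);
    }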
  10. DirectX 12 Announced

    there is also an MSDN blog post up on DX12: http://blogs.msdn.com/b/directx/archive/2014/03/20/directx-12.aspx
  11. Unreal Engine 4

    I really like the fact that they decided to open source this on GitHub (it seems the repo isn't public yet, even though their site claims so...); I love spelunking through AAA engines, and the tutorials look pretty great as well. What I can't understand is if/where you are able to download the UE4 UDK without paying the fee (as the registration says you can continue to use it even with a cancelled sub, you just won't get updates), i.e. if I just want to bugger around and don't plan on releasing anything, am I still due for a once-off $20 payment?

    EDIT: I think I get the GitHub thing now; it seems you need to register through the UE portal, then link your existing GitHub account to it. For some reason I thought the page was showing people how to sign up to GitHub...
  12. Without seeing what you are doing, the best advice is to just point you to some "best practice" material: here are the slides to a great talk from nVidia at Steam Dev Days on speeding up your OpenGL code (the video is on YouTube if you want the audio guide). In particular, pay attention to the buffer management portion and probably the draw indirect stuff; a minimal draw-indirect sketch is below.
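    To give a flavour of the draw-indirect part, a minimal sketch of batching draws through glMultiDrawElementsIndirect (GL 4.3+); the buffer handling here (a plain glBufferData upload rather than the persistent mapping the talk recommends) and the names are my own, not from the slides:

    #include <glad/glad.h>  // or any loader exposing a core 4.3+ context

    // Layout mandated by the GL spec for indirect indexed draws.
    struct DrawElementsIndirectCommand
    {
        GLuint count;
        GLuint instanceCount;
        GLuint firstIndex;
        GLuint baseVertex;
        GLuint baseInstance;
    };

    void DrawBatches(GLuint indirectBuffer, const DrawElementsIndirectCommand* cmds, GLsizei numDraws)
    {
        // Upload the command list; with GL 4.4 you could instead keep this buffer
        // persistently mapped (glBufferStorage + GL_MAP_PERSISTENT_BIT) and skip the copy.
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
        glBufferData(GL_DRAW_INDIRECT_BUFFER,
                     numDraws * sizeof(DrawElementsIndirectCommand), cmds, GL_DYNAMIC_DRAW);

        // A single API call issues every draw in the batch; the per-draw parameters
        // come from the bound indirect buffer rather than the CPU.
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr, numDraws, 0);
    }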
  13. Not explicitly for ray tracing, but this awesome article from nVidia might be very useful; for actual ray tracing have a look at this nVidia research paper. You might also want to check if Ingo Wald has any publications on the subject (you can find most of his papers here).