tuccio

Member

  • Content Count: 11
  • Community Reputation: 540 Good

About tuccio

  • Rank: Member

Personal Information

  • Role: Programmer
  • Interests: Programming
  1. tuccio

    Injection LPV Geometry Shader

    I'm not sure about the error messages you're getting, but I'll point out a few things I've noticed, even though they won't fix those. The reason people draw a point list rather than a quad to sample the RSM is that in LPV the sampling needs to happen in the vertex shader: if you want the whole NxN RSM sampled, you need NxN vertices, while a quad only has 4. You need to sample in the vertex shader because the geometry shader needs the VPL position to decide which cell of the grid to write to. As with 2D render targets, SV_Position decides the x and y you're writing to, but to pick the z of the grid you need to write your cell's z (the actual integral index, not normalized to [0, 1], treating the 3D texture like an array of 2D textures) to SV_RenderTargetArrayIndex in the geometry shader. In fact, if you didn't need to write SV_RenderTargetArrayIndex, you wouldn't need the geometry shader at all.
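    The pipeline described above can be sketched roughly like this. All names, constants, and the grid mapping here are placeholders of mine, not from any particular implementation; on SM 5.0 only the geometry shader can write SV_RenderTargetArrayIndex, which is why the GS stage exists at all:

```hlsl
// Sketch of LPV injection: draw RSM_SIZE * RSM_SIZE points with a null
// vertex buffer; SV_VertexID addresses the RSM texel.
#define RSM_SIZE  256
#define GRID_SIZE 32

Texture2D<float4> RSMPosition : register(t0); // world-space VPL positions
Texture2D<float4> RSMFlux     : register(t1);

cbuffer Grid : register(b0)
{
    float3 GridMin;   // AABB min corner of the grid in world space
    float  CellSize;
};

struct VSOut
{
    float4 pos   : SV_Position; // x, y select the texel within the slice
    uint   slice : SLICE;       // integral z cell index, not in [0, 1]
    float4 flux  : FLUX;
};

VSOut InjectVS(uint id : SV_VertexID)
{
    uint2 texel = uint2(id % RSM_SIZE, id / RSM_SIZE);
    float3 world = RSMPosition[texel].xyz;

    int3 cell = (int3)floor((world - GridMin) / CellSize);

    VSOut o;
    // Point positioned at the cell center in NDC (y flip omitted for brevity).
    o.pos   = float4(((float2)cell.xy + 0.5) / GRID_SIZE * 2.0 - 1.0, 0, 1);
    o.slice = cell.z;
    o.flux  = RSMFlux[texel];
    return o;
}

struct GSOut
{
    float4 pos     : SV_Position;
    float4 flux    : FLUX;
    uint   rtIndex : SV_RenderTargetArrayIndex; // selects the 3D texture slice
};

[maxvertexcount(1)]
void InjectGS(point VSOut input[1], inout PointStream<GSOut> stream)
{
    GSOut o;
    o.pos     = input[0].pos;
    o.flux    = input[0].flux;
    o.rtIndex = input[0].slice; // the only reason the GS is needed
    stream.Append(o);
}
```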
  2. tuccio

    Mesh library for 3D SDF

    Well I hoped to find something solid and working because I'm a bit in a hurry.
  3. Not sure if this is the place to ask, but I'm looking for a library that computes signed distance fields for triangle meshes. The only one I've found is OpenVDB, but apparently it doesn't support VC++ as a compiler, and I need that. Any tips?
  4. tuccio

    Injection Step of VPLs into 3D Grid

    Two passes won't accomplish much: in each pass a cell propagates only to its immediate neighbors, which is too local to call global illumination. You will need a number of passes that depends on the 3D texture resolution, and for that you can use the sets of 3D textures you described with ping-pong buffering (first pass you read from A and write to B, then read from B and write to A, and so on). This is quite an expensive procedure, and to reduce the number of iterations the Crytek paper introduces cascaded LPVs.
  5. tuccio

    Injection Step of VPLs into 3D Grid

    The 3D texture in LPV represents an actual AABB in world space; in your example it's described by its min/max corners, and you're trying to calculate the cell you need to write to (i.e. the cell where the VPL is located). The 0.5 * normal shift might be confusing: it's a bias to reduce bleeding. You can find most of the information you need here
  6. tuccio

    Injection Step of VPLs into 3D Grid

    You mean you're only using the flux? How do you decide which cell of the 3D grid you're going to write to (basically your SV_Position and SV_RenderTargetArrayIndex) if you don't use the position from the RSM? You can get it from the depth or from the position texture; you don't need both of them. You also need the normal to create the cosine lobe: you're probably using zonal harmonics coefficients, but you need to rotate the cosine lobe so that it faces your normal.
  7. I think you should divide v by w right after the multiplication on line 25, like: `v = mul(v, InvViewProjection); v.xyz = (v.xyz / v.w) - CamPosition.xyz;`
  8. Is the shader debugger supposed to work at all in either Nsight or VS? From what I read it doesn't seem to be the case for Nsight, but it should work in VS/DXCap.
     I did the mentioned Windows 10 update after reading the thread, since before the update DXCap wasn't working at all for me: "capture" was fast and resulted in a "playback failed" when analyzing the log in VS. I don't think it captured anything at all, even though it did output something to the log, judging by its size.
     After the update DXCap seems to work: the capture is now slow (as, from what I read, I guess it's supposed to be), and I can open the logs and see the calls and the pixel history. But when I try to launch the shader debugger on a pixel shader I get this message: "The Graphics Diagnostics engine failed to build the shader trace because it did not find modified pixels. Shader debugging is not available." When I try it on a vertex shader, all the inputs are set to 0.
     Also, frame capture from the VS graphics debugger hangs and crashes with an access violation, so it doesn't seem to be an option either.
  9. tuccio

    Unity-like transform component

    Yes, non-uniform scaling. I'm not sure about this either; if this is the best you can do, it's not enough. Edit: OK, this makes it clearer (quote from here)
  10. tuccio

    Unity-like transform component

    I think so too; I should probably just drop it. My goal was to have a transform I can update from physics, but even Unity just uses either transforms or a physics component. So whatever, thank you.
  11. tuccio

    Unity-like transform component

    I'm not sure what you mean. I just have a transformation stored as two vectors for scaling and position and one quaternion for rotation; I don't have any dependency for that. I want each transform to have one parent or none, thus forming a hierarchy of transformations. I can easily compute the world matrix for a node in the hierarchy by multiplying the (T * R * S) 4x4 matrix of each transform along the hierarchy. What I'd like, and am not sure how to implement, are the same methods Unity's transform component has, which allow reading and writing the global rotation, translation, and scaling of any "node" in the hierarchy. To me this means I have to somehow decompose the transform matrix (obtained by multiplying the ancestor matrices) into its T, R, S components, but that doesn't seem like a task I want to do every time I reposition something in the hierarchy. I'm curious whether it can be done more efficiently, or how Unity provides this decomposition, if anybody has an idea about it.
  12. I'd like to implement a transform component just like Unity's, i.e. a transform hierarchy that allows rotation, translation, and non-uniform scaling, stored as quaternions and vectors. The task shouldn't be too hard, but there's one aspect I'm not sure about. Unity allows programmers to access world position, rotation, and scaling individually, and I'm wondering what a good way to implement this is. My naive solution would be to compute the 4x4 transform matrix through the hierarchy and decompose it somehow (polar decomposition or singular value decomposition should do the job, I think, but I don't know much about this yet). However, this way I would not make use of the decomposition I already have for each transform in the hierarchy, so I wonder if there's a way to do it faster.