
51mon

Member
  • Content Count

    548
Community Reputation

342 Neutral

About 51mon

  • Rank
    Advanced Member

Personal Information

  • Role
    Voxel Artist
  • Interests
    Art
    Programming


  1. It occurred on both dev & release builds. It happened close to a deadline, so I haven't done a very detailed investigation yet. In this case the high number of CS dispatches was due to some unoptimized setup on the art side, so it could be "fixed". However I'm planning to do a closer investigation soon: run some tests, swap in an ATI card, etc. I just wanted to check beforehand in case other people had an opinion. So cheers for the reply
  2. 51mon

    What I need to develop a game in c++??

    A lot of people like Unreal & Unity. If you use these you can develop a lot with a GUI, i.e. you move stuff around in an editor instead of typing code. If you are one person you should go for an engine that offers easy solutions to the things your game will need. Maybe try both of these engines and see which one you like the most.
  3. Hey, I'm working on an engine. Whenever we run dispatch calls they are much more expensive than draw calls on the driver side on NVIDIA cards (checked with Nsight). Running 5000 draw calls costs maybe the same as a few hundred dispatches. Is this normal?
  4. I think there is a mistake in the pseudo code above; you should use float3 result = float3(Noise3d(coord.xyz + offset1.xyz), Noise3d(coord.xyz + offset2.xyz), Noise3d(coord.xyz + offset3.xyz)). You can also look into curl noise. It's often used for 3D and has some benefits (see the sketch below).
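     A minimal sketch of curl noise under that setup, assuming the same scalar Noise3d(float3) as in the pseudo code above (the offset values and the finite-difference step are my own placeholders): build a 3-channel vector potential from offset noise evaluations, then take its curl. The curl of a smooth field is divergence-free, which is what gives the fluid-like, non-converging motion.

     static const float3 offset1 = float3( 0.00,  0.00,  0.00);
     static const float3 offset2 = float3(31.41, 59.26, 53.58);
     static const float3 offset3 = float3(97.93, 23.84, 62.64);

     // Vector potential: three decorrelated noise channels
     float3 NoiseVec(float3 p)
     {
         return float3(Noise3d(p + offset1), Noise3d(p + offset2), Noise3d(p + offset3));
     }

     float3 CurlNoise(float3 p)
     {
         const float e = 0.01; // finite-difference step; tune to the noise frequency

         // Central differences of the potential along each axis
         float3 dPdx = NoiseVec(p + float3(e, 0, 0)) - NoiseVec(p - float3(e, 0, 0));
         float3 dPdy = NoiseVec(p + float3(0, e, 0)) - NoiseVec(p - float3(0, e, 0));
         float3 dPdz = NoiseVec(p + float3(0, 0, e)) - NoiseVec(p - float3(0, 0, e));

         // curl = (dPz/dy - dPy/dz, dPx/dz - dPz/dx, dPy/dx - dPx/dy)
         return float3(dPdy.z - dPdz.y, dPdz.x - dPdx.z, dPdx.y - dPdy.x) / (2.0 * e);
     }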
  5. Hey, I'm dealing with ribbons following the shape of multiple spline segments. It's straightforward to compute the direction at any point along the spline. However the ribbon also has a flat shape, and I'm struggling to find a way to compute the angle of the ribbon in the plane perpendicular to the direction. To illustrate what I mean, here's a piece of code that almost worked:

     float3x3 rotMtxFromSpline;
     rotMtxFromSpline[1] = normalize(splineDir);
     rotMtxFromSpline[0] = normalize(cross(float3(1, 0, 0), rotMtxFromSpline[1]));
     rotMtxFromSpline[2] = cross(rotMtxFromSpline[0], rotMtxFromSpline[1]);
     // Rotate rotMtxFromSpline[0] in the rotMtxFromSpline[0]-rotMtxFromSpline[2]
     // plane to align with the float3(0, 0, 1) dir
     rotMtxFromSpline[0] = normalize(dot(rotMtxFromSpline[0], float3(0, 0, 1)) * rotMtxFromSpline[0] +
                                     dot(rotMtxFromSpline[2], float3(0, 0, 1)) * rotMtxFromSpline[2]);
     rotMtxFromSpline[2] = cross(rotMtxFromSpline[0], rotMtxFromSpline[1]);

     The problem with this code is that when the spline segment becomes perpendicular to the (0,0,1) direction, the orientation switches from one side to the other very easily. The approach above is a kind of global approach, and I'm wondering if there's a way to attach some info to each spline segment to remedy the issue. Anyhow, I wanted to post this question in case anyone has had a similar problem that they solved, or maybe someone knows of a web resource dealing with this issue? Thanks!
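     One common remedy for exactly this flipping problem (a standard technique, not from the post above) is a rotation minimizing / parallel transport frame: instead of deriving every frame from a global reference axis, carry the previous segment's frame forward and rotate it by the minimal rotation between consecutive tangents. A rough sketch, with hypothetical tangents[]/frames[] buffers, run once per segment when building the ribbon:

     // Rodrigues' rotation formula: rotate v about a unit-length axis by angle
     float3 RotateAboutAxis(float3 v, float3 axis, float angle)
     {
         float c = cos(angle);
         float s = sin(angle);
         return v * c + cross(axis, v) * s + axis * dot(axis, v) * (1.0 - c);
     }

     // Seed the first normal (normal0) with any vector perpendicular to tangents[0].
     float3 prevT = normalize(tangents[0]);
     float3 prevN = normal0;
     for (int i = 1; i < numSegments; ++i)
     {
         float3 t = normalize(tangents[i]);
         float3 axis = cross(prevT, t);
         float s = length(axis);
         if (s > 1e-6) // rotate only if the tangent actually changed direction
         {
             float angle = atan2(s, dot(prevT, t));
             prevN = RotateAboutAxis(prevN, axis / s, angle);
         }
         frames[i].tangent  = t;
         frames[i].normal   = prevN;
         frames[i].binormal = cross(t, prevN);
         prevT = t;
     }

     The per-segment info to store is then just the transported normal (or a single twist angle relative to any reference frame), which avoids the global (0,0,1) reference entirely.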
  6. I want to change the sampling behaviour to the equivalent of SampleGrad(coord, ddx(coord.y).xx, ddy(coord.y).xx). I was just wondering if it's possible without explicit shader code, e.g. with some flags or so?
  7. Thanks for the replies! SH could be an option and it did cross my mind. It's 2D indeed. The domain when using SH is usually a sphere; in my case I want to use it on a quad. Would that have an impact? I understand that spherical coords can be regarded as a square, I'm just wondering if the domain has any impact & if there are other basis functions better suited to an actual quad. @JoeJ your suggestion looks similar to a Fourier series, which also crossed my mind. The fact that sin/cos operations are expensive on GPUs made me a little less keen. The general idea of treating the problem as some sort of curve is good though. I could use something like a power function, which could be encoded in 4 params - uv intensity multipliers & uv exponents - given that I pass on the actual colour of one of the corners (on the other hand this approach would only be able to depict gradients; a rough sketch below). I'm not 100% sure of how much detail I need to encode in the quad, but preferably as much as possible for as little cost
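     For what it's worth, one possible reading of that 4-parameter power-function idea (my own sketch; the post doesn't specify how the terms combine, and all names are hypothetical):

     // params.xy = uv intensity multipliers, params.zw = uv exponents;
     // cornerColour is the separately passed corner value.
     float3 DecodeGradient(float2 uv, float4 params, float3 cornerColour)
     {
         float intensity = params.x * pow(uv.x, params.z)
                         + params.y * pow(uv.y, params.w);
         return cornerColour * intensity;
     }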
  8. Hey, I want to try shading particles by computing a "small" number of samples, e.g. 10, in the VS. I only need to compute the intensity of the light, so essentially it's a single piece of data in 2 dimensions. Now I want to compress this data, pass it on to the PS and decompress it there (the particle is a single quad and the data is passed through interpolators). I will accept a certain amount of error as long as there are no hard edges, i.e. it stays blurred. The compressed data has to be small and compression/decompression fast. Does anyone know of a good way to do this? Maybe I could do something Fourier-based, but I'm not sure what basis functions to use. Thanks
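     A minimal sketch of one Fourier-style option (my illustration, not an established scheme from the thread): fit the VS samples to four low-order 2D cosine basis functions, pack the coefficients into a single float4 interpolator, and evaluate in the PS. Low-order cosines are smooth, so the error shows up as blur rather than hard edges; the cost is two cos calls per pixel.

     // c = basis coefficients fitted in the VS (e.g. by least squares over the
     // ~10 samples) and passed through one float4 interpolator.
     float EvalLightIntensity(float2 uv, float4 c)
     {
         // Basis: 1, cos(pi*u), cos(pi*v), cos(pi*u)*cos(pi*v)
         float cu = cos(3.14159265 * uv.x);
         float cv = cos(3.14159265 * uv.y);
         return c.x + c.y * cu + c.z * cv + c.w * cu * cv;
     }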
  9. I think it can be used in many different contexts. Right now I'm going to use it for refraction: to only copy the part of the backbuffer that's covered by the object's BB.
  10. Hi, I want to find an optimized way to compute a bounding rectangle in screen space from a bounding box in 3D. The best approach I've considered so far is to operate in homogeneous coords and project the BB vertices to screen space. For vertices behind the camera I find the line-intersection point with the XY plane. Then I use Liang–Barsky to compute the bounds in screen space. I would also capitalize on optimization opportunities wherever I can find them, e.g. not looking for intersection points when two connected BB vertices are behind the camera. So I just wanted to check if anyone knows of a better idea, maybe something you've used? Maybe there are methods specifically aimed at finding the BR from a BB? (A sketch of the projection step follows below.) Thanks!
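     For reference, a sketch of the plain corner-projection step described above, with a conservative near-plane guard instead of the Liang–Barsky edge clipping from the post (which gives tighter bounds for the behind-camera case). All names here are mine:

     // Returns (minX, minY, maxX, maxY) in NDC.
     // Assumes the mul(row-vector, matrix) convention for viewProj.
     float4 ScreenRectFromBB(float3 corners[8], float4x4 viewProj)
     {
         float2 mn = float2( 1e30,  1e30);
         float2 mx = float2(-1e30, -1e30);
         for (int i = 0; i < 8; ++i)
         {
             float4 clip = mul(float4(corners[i], 1.0), viewProj);
             if (clip.w <= 0.0)
             {
                 // Corner behind the camera: fall back to full-screen bounds
                 // rather than clipping the box edges against the near plane.
                 return float4(-1.0, -1.0, 1.0, 1.0);
             }
             float2 ndc = clip.xy / clip.w;
             mn = min(mn, ndc);
             mx = max(mx, ndc);
         }
         return float4(mn, mx);
     }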
  11. 3D view direction reflected about the normal (aka reflection direction), plus the surface roughness (3D cubemap lookup, plus mipmap level == 4D lookup table). http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf Got ya! I was wondering if there was a way to index the view and reflection vector somehow so we don't have to make the v=n=r assumption, but true, the roughness is also a dimension
  12. A simple reflection map is not physically based. A complex IBL system can be. E.g. instead of just taking an environment map and texturing it on there to get a gloss effect (which will intuitively look good, but is not going to match the real world), if you ray-traced that environment as a surrounding sphere and integrated a thousand different rays using a physically-based BRDF and importance sampling, then your results would come close to matching the real world. The key is using a physically-based BRDF to represent the surface, as 'shading' is all about surfaces. Modern games do do the above, but with some clever approximations. The result of importance sampling that sphere can be done ahead of time and stored in a big look-up table... The problem is that it's many-dimensional and would be way too big, so the common approximation basically stores it as a 4D cube array plus a 2D error correction table. The final implementation looks like simple reflection mapping, but the inner workings are based on physically plausible BRDFs, ray-tracing and importance sampling (or unbiased Monte Carlo sampling if you like), and most importantly, in many cases the results get quite close to the ground truth, similar to the hideously expensive true ray tracing renderers but at a fraction of the cost.

     I have a question about this: 4D - which 4 dimensions does this lookup use for input? Are there any papers describing this method? Thanks!
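     To make the two posts above concrete, here's roughly what the final lookup looks like in the split-sum approximation from the Karis notes linked above: a prefiltered environment cubemap indexed by reflection direction, with roughness mapped to the mip level (the "4D" part), combined with a 2D BRDF LUT indexed by (NdotV, roughness). Resource names here are hypothetical:

     TextureCube  EnvCube;       // prefiltered per-roughness mip chain
     Texture2D    BrdfLut;       // 2D "error correction" table: scale & bias
     SamplerState LinearSampler;

     float3 AmbientSpecular(float3 n, float3 v, float roughness, float3 specColor, float numMips)
     {
         // Reflection direction + roughness-as-mip: the 4D lookup
         float3 r = reflect(-v, n);
         float3 prefiltered = EnvCube.SampleLevel(LinearSampler, r, roughness * numMips).rgb;
         // 2D correction term indexed by (NdotV, roughness)
         float2 ab = BrdfLut.SampleLevel(LinearSampler, float2(saturate(dot(n, v)), roughness), 0).rg;
         return prefiltered * (specColor * ab.x + ab.y);
     }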
  13. Hey, I work on a particle system with the following transformation to align the particles with the view point:

     vY = particleToCameraVec   (in world space)
     vX = Vec3( vY.y, -vY.x, 0 )
     vZ = cross( vX, vY )

     These 3 vectors form the orthogonal basis that the particle sprites are transformed with. As you can see, vX is aligned to the XY plane. The benefit of this setup is that the particles appear to preserve their relative orientation even when the camera rotates. This all works fine except when viewing the particle system along the Z axis. When panning across the emitter the particle orientation gets rapidly flipped (as vY.x & vY.y are insignificant compared to vY.z). I've tried to work out the math so that the orientation behaves better when panning along the XY plane while looking in the Z direction, but none of my attempts have been fully satisfactory yet. Does anyone know how to solve this? Or maybe you would have done it differently altogether? Thanks!
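     One alternative worth mentioning (a standard technique, not from the post): build the basis with the branchless "orthonormal basis from a unit vector" construction (Duff et al., JCGT 2017). It has no singular view direction, though by itself it doesn't keep orientation stable while panning; pairing it with frame-to-frame reuse of the previous basis is a common remedy. A sketch:

     // n = normalized particle-to-camera vector; b1/b2 span the sprite plane.
     void BuildBasis(float3 n, out float3 b1, out float3 b2)
     {
         float s = n.z >= 0.0 ? 1.0 : -1.0;
         float a = -1.0 / (s + n.z);
         float b = n.x * n.y * a;
         b1 = float3(1.0 + s * n.x * n.x * a, s * b, -s * n.x);
         b2 = float3(b, s + n.y * n.y * a, -n.y);
     }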
  14. Sure, I'm fully aware of the limited size of the cache, and you can also run methods during asset loading/conversion that attempt to optimize the mesh for cache hits. We currently miss the cache 10% of the time, by the way, which doesn't seem too bad. Of course, concentrating only on the vertex-per-face ratio would be stupid; I do not disagree with that, and it's well demonstrated in your example. However I don't see a problem in considering it as well. I fully understand that different kinds of meshes will have different averages. On the other hand, if it turns out that all or many of the objects in a scene have an unusually high ratio, this would indicate that things can be optimized. With this thread I was mainly interested in hearing people's opinions on this matter - whether people have rough ideas of where the numbers should be, e.g. you usually have a budget for how many verts a main character should have, and if the real number diverges considerably from it the asset gets iterated on. For example, when I sorted our meshes by ratio, some high-poly meshes were over 2.5; to me that is way higher than what you should usually expect. From what I've gathered from the replies so far, it has not been a concern for others. If you're modelling in a way where the ratio is naturally pretty low, that is of course not an issue
  15. On the GPU there's this thing called the post-transform vertex cache. Before the vertex shader processes a new vertex, the GPU first checks if it's in the cache already. It's much faster to fetch a vertex from the cache than to run the vertex shader. So if there's a lot of sharing of vertices between faces, many more vertices are fetched from the cache and the GPU works faster. So I disagree!