


Community Reputation

399 Neutral

About eyyyyyyyyy

  1. eyyyyyyyyy

    OpenGL: World Position From Depth

    Oh, so that explains it then. Thank you!
  2. eyyyyyyyyy

    OpenGL: World Position From Depth

    I can't believe this - it was something so simple: my glDepthRangef was set to 0.1f and 1000.0f; setting it to 0.0f and 1000.0f fixed the position reconstruction, argh! Thanks for the help anyway! :)
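For reference, the mapping at work in that fix can be sketched in plain Python (values illustrative, not the project's code; note that glDepthRangef clamps its arguments to [0, 1], so (0.1f, 1000.0f) behaves as (0.1, 1.0)):

```python
# Sketch of why a non-zero glDepthRange near value breaks the usual
# "d * 2 - 1" remap: the sampled window depth is no longer in [0, 1].

def ndc_to_window(z_ndc, near=0.1, far=1.0):
    """glDepthRange mapping: NDC z in [-1, 1] -> window z in [near, far]."""
    return near + (z_ndc * 0.5 + 0.5) * (far - near)

def window_to_ndc_naive(d):
    """The shader's remap; only correct when glDepthRange is (0, 1)."""
    return d * 2.0 - 1.0

def window_to_ndc_correct(d, near=0.1, far=1.0):
    """Inverts ndc_to_window for an arbitrary depth range."""
    return (d - near) / (far - near) * 2.0 - 1.0

z_ndc = 0.5
d = ndc_to_window(z_ndc)        # 0.1 + 0.75 * 0.9 = 0.775
print(window_to_ndc_naive(d))   # ~0.55 -- wrong, should be 0.5
print(window_to_ndc_correct(d)) # 0.5   -- round-trips correctly
```

With the default glDepthRange(0, 1), the naive and correct remaps coincide, which is why the one-line fix works.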
  3. eyyyyyyyyy

    OpenGL: World Position From Depth

    screenUVs are just 0-1 UVs for a screen-space quad, like the usual deferred setup. When I use an explicit world-space position buffer everything renders perfectly, so I am sure it is reading from the correct UVs. I tried computing the inverse view-projection on the CPU and it didn't make any difference. It is so weird, because it looks almost okay until the camera is moved far away!
  4. Hi, I am having a lot of trouble trying to recover world-space position from depth. I swear I have managed to get this to work before in another project, but I have been stuck on this for ages.

     I am using OpenGL and a deferred pipeline. I am not modifying the depth in any special way, just whatever OpenGL does, and I have been trying to recover the world-space position with this (I don't care about performance at this time, I just want it to work):

         vec4 getWorldSpacePositionFromDepth(sampler2D depthSampler, mat4 proj, mat4 view, vec2 screenUVs)
         {
             mat4 inverseProjectionView = inverse(proj * view);
             float pixelDepth = texture(depthSampler, screenUVs).r * 2.0f - 1.0f;
             vec4 clipSpacePosition = vec4(screenUVs * 2.0f - 1.0f, pixelDepth, 1.0);
             vec4 worldPosition = inverseProjectionView * clipSpacePosition;
             worldPosition = vec4(worldPosition.xyz / worldPosition.w, 1.0f);
             return worldPosition;
         }

     Which I am sure is how many other sources do it... But the positions seem distorted and get worse as I move the camera away from the origin, which of course then breaks all of my lighting... Please see the attached image for the difference between the depth-reconstructed world-space position and the actual world-space position. Any help would be much appreciated! K
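One way to sanity-check the depth side of this on the CPU is a minimal pure-Python sketch of the z round trip, using the standard GL perspective matrix terms (my own sketch, not the project's actual code). If this round-trips, the remaining suspects are the depth range and the matrices being fed to the shader:

```python
# Round-trip test for the depth part of the reconstruction, using the
# z row of a standard GL perspective projection (near n, far f).

def project_z(z_eye, n=0.1, f=1000.0):
    """Eye-space z (negative in front of the camera) -> NDC z in [-1, 1]."""
    return (f + n) / (f - n) + (2.0 * f * n) / ((f - n) * z_eye)

def unproject_z(z_ndc, n=0.1, f=1000.0):
    """NDC z -> eye-space z; the algebraic inverse of project_z."""
    return (2.0 * f * n) / ((f - n) * z_ndc - (f + n))

z_eye = -42.0
d_window = project_z(z_eye) * 0.5 + 0.5     # assumes glDepthRange(0, 1)
z_back = unproject_z(d_window * 2.0 - 1.0)  # the shader's d * 2 - 1 remap
print(abs(z_back - z_eye) < 1e-6)           # True
```

Note how strongly non-linear `project_z` is: most of the [-1, 1] NDC range is spent near the camera, which is why small depth errors blow up at distance.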
  5. Hello! I have been trying to implement a technique to mask areas of rain from occluders. The obvious choice for this is bog-standard orthographic shadow mapping.

     However, I hit a snag when it comes to the sides of buildings. Due to precision error and whatnot, some completely vertical sides of buildings get marked as in shadow! This isn't ideal (see shadow_issues.jpg). I had then thought maybe I could offset the normals in the shader, but then, duh, it doesn't work for flat planes, as the normals point upwards.

     In the STALKER GDC 2009 presentation they say:

     Use shadowmap to mask pixels invisible to the rain
     - Draw only static geometry (so that means they are not using a separate pre-made mesh)
     - Snap shadowmap texels to world space <- what does this mean? I think this is important...
     - Use jittering to hide shadowmap aliasing and simulate wet/dry area border.

     Any thoughts on how to solve this would be much appreciated! Thanks
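One common reading of "snap shadowmap texels to world space" (an assumption on my part, not spelled out in the presentation) is to round the orthographic shadow camera's origin to whole-texel increments in light space, so the rasterized texel grid stays fixed in the world as the view moves and edges don't shimmer:

```python
# Hedged sketch: snap an ortho shadow camera origin to the texel grid.
# ortho_width is the width of the ortho volume in world units and
# shadow_map_size the resolution of the map; both names are my own.

def snap_to_texel(light_space_xy, ortho_width, shadow_map_size):
    """Snap a light-space XY origin to whole shadow-map texels."""
    texel_world = ortho_width / shadow_map_size  # world units per texel
    return tuple(round(c / texel_world) * texel_world
                 for c in light_space_xy)

print(snap_to_texel((10.37, -3.14), ortho_width=64.0,
                    shadow_map_size=1024))  # (10.375, -3.125)
```

The same idea is used to stabilize cascaded shadow maps; it doesn't fix the vertical-wall acne by itself, but it makes the jittering trick they mention much less visible.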
  6. Awesome, thank you very much - I think that covers everything I need to know :)
  7. Currently I am not sure how big the scenes are going to be - I'm trying to research how I am going to handle shadows before deciding, haha! But they have the potential to be quite long, as the levels are going to be designed missions with different environments etc., but all from the standard city-builder perspective (mine isn't a builder game, but the perspective is the same). There may be around 20 dynamic objects. I had thought about doing a big static shadow map and updating it with dynamic objects, but that seems excessively large, as I was hoping to target mobile platforms! It just seems so overkill to have dynamic object shadows, but alas, I cannot think of a better way of doing it :(
  8. Hi all, I have been trying to plan out a system which will have static pre-baked lightmaps for static objects, combined with real-time shadows from dynamic objects. I'm not worried about the mathematical blending between the static and dynamic parts, as I think I have that figured out - my actual problem is how to handle the dynamic objects. I want the dynamic objects to receive and cast shadows.

     The only way I can think of doing this is to render the entire scene and do real-time shadow mapping - but that totally defeats the purpose of doing lightmapping for the shadows, surely? I still have to incur the cost of all of that rendering!! The reasons I think I have to do it that way are:
     - How can a dynamic object receive a static lightmap shadow? (I don't think it can?) So I have to do a full shadow mapping pass.
     - How can I cast accurate shadows from dynamic objects? I would have to use the shadow mapping technique again too!

     Assuming I have to use full shadow mapping, I thought of a very basic solution:
     - render the entire scene into the shadow depth buffer
     - create a temp copy
     - render dynamic objects into the buffer and then use that
     - restore the copy for the next frame (so we do not have to re-render the static scene, only the dynamic objects)

     Which is all fine and dandy, unless the scene is too large and I have to use something like cascaded shadow maps, which would then mean I would have to re-render per frame? Which could be too much for mobile? My scene is going to be city-builder style/angle, so the shadows are going to break down quite quickly if I want the entire view to have shadows (and if I just make the entire shadow volume the size of the map, it is going to look terrible).

     Argh! I don't know what to do! How do other engines solve this? Unity 5 doesn't blend the lightmaps with dynamic shadows, so you get this dual-shadow artifact which looks awful and is what I am trying to avoid. I also do not know how they do their shadows for dynamic objects without incurring the cost of re-rendering the scene. I have no idea how Unreal Engine does this either. I hope you understand what my problem is; it's a little difficult to explain clearly! I guess I really have two problems here... Thanks
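For what it's worth, one common way to avoid the dual-shadow artifact described above (a general technique, not Unity's or Unreal's actual implementation) is to treat both the baked lightmap term and the dynamic shadow term as occlusion factors in [0, 1] and combine them with min() rather than multiplying, so overlapping shadows don't darken twice:

```python
# Sketch of min-based shadow combination. 0 = fully shadowed,
# 1 = fully lit; both inputs are occlusion factors for the same light.

def combined_shadow(lightmap_factor, dynamic_factor):
    """Take the darker of the baked and dynamic shadow factors."""
    return min(lightmap_factor, dynamic_factor)

print(combined_shadow(0.3, 0.3))  # 0.3 (multiplying would give 0.09)
print(combined_shadow(1.0, 0.5))  # 0.5 (dynamic shadow on a lit texel)
```

Multiplying the two factors stacks them where a dynamic shadow falls inside a baked one, which is exactly the double-darkening being complained about.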
  9. eyyyyyyyyy

    Foliage Collision

    Thank you all for your ideas - plenty to work with! The link from tonemgub has a really useful description in it for a simpler approach. I'm currently not sure how much foliage will be in the application I want to make, so that will affect what I can do with it and how realistic I can afford to make it... And yeah kalle_h, I think I will need to do that anyway for the simpler systems. Doing sphere collision is quick and easy in a shader :) As long as I can get it looking vaguely good, that should be enough haha! Thanks again guys :) Hopefully I can start working on it when I have some time :P
  10. eyyyyyyyyy

    Foliage Collision

    Hi everyone, does anyone know how the foliage collision/bending works in CryEngine? It's a nice effect which I want to recreate in my own applications. I'm not talking about wind or just simulated movement; I'm talking about the physical bending when the player/object moves into the foliage. I have a few ideas how they might have implemented it, like doing CPU-side simulation and then uploading the matrices, but this seems awfully expensive if you have lots of collisions going on, and I was hoping to get this effect on mobile... It just needs to look vaguely accurate, so I don't mind any shortcuts! I'm currently gathering research on this topic so I can think about the implementation later :) Thanks
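The "sphere collision in a shader" idea mentioned in the reply above can be sketched roughly like this (my own hedged sketch of a cheap vertex push-out; the names and the straight-to-the-edge falloff are my choices, not CryEngine's):

```python
# Push a foliage vertex horizontally out of a collider sphere, the kind
# of per-vertex math that ports directly to a vertex shader with the
# sphere passed in as a uniform.
import math

def bend_vertex(vx, vy, vz, cx, cy, cz, radius):
    """Push vertex (vx, vy, vz) horizontally out of the sphere at (cx, cy, cz)."""
    dx, dz = vx - cx, vz - cz           # horizontal offset only (y is up)
    dist = math.hypot(dx, dz)
    if dist >= radius or dist == 0.0:
        return (vx, vy, vz)             # outside the collider: untouched
    push = (radius - dist) / dist       # scale to land on the sphere edge
    return (vx + dx * push, vy, vz + dz * push)

print(bend_vertex(1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 2.0))  # (2.0, 0.5, 0.0)
```

In practice you would also weight the push by vertex height (so roots stay planted) and smooth it over time, but the core test is just this sphere distance check.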
  11. eyyyyyyyyy

    scientific programming advice

    Wow, thanks for the responses!

    @Alvaro - Thanks for the suggestions; that gives me something to look at. I have just finished the first year of my programming undergraduate degree, so a PhD would be way off anyway! I just want to use my time effectively...

    @Buster2000 - Yeah, London does seem like the place where it all happens; a lot of VFX programmer jobs are there too...

    @Buckeye - Interesting comment, thanks :) See, the thing is, I find the scientific field fascinating, but I don't think I'm cut out to do advanced things like nuclear engineering - translating it into a program would be so cool, though. I'm experimenting a little with OpenCL, because I think GPU compute languages could be really useful for research, as they could give great performance on relatively cheap hardware (just a thought; of course I may be totally wrong).

    @Glass_Knife - I guess I should look at some actual hardware<-->software programming like the Raspberry Pi, because it looks cool but it seemed to only be used as a media center!!! I'll have to investigate. And yes, research and development sounds awesome; I'm currently headed more towards the VFX side of research and development, but it just seems epic! :D

    Overall, it seems that I won't need a PhD, which was honestly my main concern. It could be something to do later in life, but I really want to give industry a go for a while (placement year next year, so hopefully I'll have something by then!!!)
  12. Hi, I'm currently learning C++ and I have been focusing on the computer graphics area (raytracers/pathtracers, that kind of thing), which I have enjoyed thoroughly, but I am still thinking about other career avenues (could be for the immediate or long-term future, not sure yet) and I have no idea where to start with the scientific area. Another thing is that I don't have a scientific background; I only did Maths and Physics at A-Level, and I'm not sure I could commit to a PhD in science without going into industry first.

     Would anyone be able to give me some advice on the following?
     - Can someone without a PhD work in the scientific area as a programmer for a research team?
     - If so, what skills or projects should candidates have? (Linux, Python and things like Matlab have been suggested to me before.)

     I have been trying to think of some projects, but I haven't had any ideas. With graphics, there are a lot of cool projects you can do to learn the elements, like raytracers, pathtracers and real-time engines... And then I look at things in science like folding@home and think: nope, can't do that! Finally, are there science jams/hackathons in the UK that people regularly go to? (I'm based in the South-West, so closer would be preferable!) Thanks :D
  13. @Aressera - Whoa, that AABB code looks great, thank you! This is the main area slowing mine down now, as it is a rather naive octree test system and still has to test a lot of triangles. I don't suppose I could see your equivalent of a Vector4 implementation? I see you are accessing the components directly (.x, .y etc.), so are you using a union? I have to access my __m128 as .m128_f32[3] (for x, as it annoyingly reverses the order in which it stores the floats), which doesn't help with code portability, but I was unsure of the impact of using a union to access the elements.

     @Matias - That link looks interesting; I think there are a fair number of mispredictions and loading waits potentially happening in my code, so I'll check it out :)
  14. Oops, my bad - it was a problem with my AABB checks; my SIMD version wasn't working correctly. I feel like such a mug, urgh! (Thanks Krypt0n.) It's now roughly the same speed as non-SIMD; this is probably because I'm temporarily extracting the __m128 as a float[4] and performing the scalar comparisons, which I know is very costly for performance (but at least it damn works). I just need to work out a good way of doing SIMD AABB-ray checks :D Either way, thanks for the help guys - the tips on memory and costly indirections were really useful! (I hope I didn't waste your time; I should have checked my AABB earlier :/ )
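The ray/AABB test that usually maps well to SIMD is the slab method, since its core is just per-axis min/max, which has direct intrinsics. A scalar sketch of the algorithm (my own illustration; an SSE version would run the per-axis math with _mm_min_ps/_mm_max_ps on four boxes or four rays at once):

```python
# Scalar slab-method ray/AABB intersection test.

def ray_aabb(origin, inv_dir, box_min, box_max):
    """True if the ray origin + t*dir (t >= 0) hits the box.
    inv_dir is 1/dir per component, precomputed once per ray."""
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1 = (lo - o) * inv             # entry/exit for this axis's slab
        t2 = (hi - o) * inv
        tmin = max(tmin, min(t1, t2))   # latest entry across slabs
        tmax = min(tmax, max(t1, t2))   # earliest exit across slabs
    return tmin <= tmax

print(ray_aabb((0, 0, 0), (1, 1, 1), (1, 1, 1), (2, 2, 2)))  # True
print(ray_aabb((0, 0, 0), (1, 1, 1), (3, 1, 1), (4, 2, 2)))  # False
```

Because the whole test is branch-free min/max arithmetic, it avoids the float[4] extraction and scalar comparisons described above.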
  15. Okay, so I got some comparisons between my non-SIMD and SIMD code:

     Implicit sphere-based scene (so little memory access):
     NO-SIMD: 15s
     SIMD: 11.5s (yay, slight improvement!)

     Large tri-based scene (a tank, so lots of memory access):
     NO-SIMD: 6.6s
     SIMD: 600 seconds !!!!!!!!!!

     Both of them show similar stall areas, but very simply, my SIMD version (everything is the same size) makes everything implode on tri-based scenes and takes significantly longer in stall-prone areas, which is something I really don't get! Non-SIMD uses 4 floats x, y, z, w and the SIMD version uses __m128. Both produce the same images, so I know the code is accurate - it's just very, very slow.

     @Ohforf sake - Uh oh, I'll have a look at removing the indirection. With your suggestion of basically batching rays, would the compiler then auto-optimise, and what would a standard operation look like? So like:

         // 4-ray batch
         ray1 = ray1 + anotherRay;
         ray2 = ray2 + anotherRay;
         ray3 = ray3 + anotherRay;
         ray4 = ray4 + anotherRay;

     and the addition operation is just (rather than fancy SIMD):

         ray1.x = ray1.x + anotherRay.x;
         ray1.y = ray1.y + anotherRay.y;
         etc.

     Then I would assume that if the rays do something completely different to each other (like one reflects and the other continues onwards or something), I would batch them up separately? (This approach would be better for GPUs too, I guess?)
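My reading of the batching suggestion (an assumption, not the replier's actual code) is a structure-of-arrays layout: keep the four rays' x components together, the y components together, and so on, so each per-component loop is a straight contiguous run that a C/C++ compiler can auto-vectorize into a single SIMD add. Plain Python here just to show the data layout, not the performance:

```python
# SoA sketch: a 4-wide ray batch stored as separate component arrays.

def add_ray_batch(xs, ys, zs, ox, oy, oz):
    """Add one offset ray (ox, oy, oz) to every ray in the SoA batch."""
    return ([x + ox for x in xs],   # each loop touches one contiguous array
            [y + oy for y in ys],
            [z + oz for z in zs])

xs, ys, zs = [0.0, 1.0, 2.0, 3.0], [0.0] * 4, [1.0] * 4
print(add_ray_batch(xs, ys, zs, 10.0, 0.0, -1.0))
# ([10.0, 11.0, 12.0, 13.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0])
```

And yes, when rays in a batch diverge (one reflects, one terminates), they are typically masked out or re-sorted into fresh coherent batches, which is the same coherence problem GPUs face with warps.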

Important Information

By using GameDev.net, you agree to our community Guidelines, Terms of Use, and Privacy Policy.
