elect

Member

  • Content Count: 4
  • Joined
  • Last visited

Community Reputation: 122 Neutral

About elect

  • Rank: Newbie

Personal Information

  • Interests: Programming
  1. Hi, we are having problems with our current mirror reflection implementation. At the moment we do it very simply: for the i-th frame we calculate the reflection vectors given the view point and some predefined points on the mirror surface (positions and normals). Then, using a least-squares algorithm, we find the point with the minimum distance from all these reflection vectors. This becomes our virtual view point (with the right orientation). After that, we render offscreen to a texture with the OpenGL camera placed at the virtual view point, and finally we use the rendered texture on the mirror surface. So far this has always been fine, but now we have stronger constraints on accuracy. What are our best options given that:
     - the scene is dynamic: the mirror and parts of the scene can change continuously from frame to frame
     - we have about 3k points (with normals) per mirror, calculated offline with a CAD program (such as Catia)
     - all the mirrors are perfectly spherical (with different radii vertically and horizontally) and always convex
     - a scene can have up to 10 mirrors
     - it should also be fast enough for VR (HTC Vive) on the fastest desktop GPUs
     Looking around, some papers talk about deriving a caustic surface offline, but I don't know whether that suits my case. Another paper uses acceleration structures to detect the intersections between the reflection vectors and the scene and then adjusts the corresponding texture coordinates; this looks the most accurate, but also very heavy from a computational point of view. Other than that, I couldn't find anything up to date or exhaustive on the subject. Can you help me? Thanks in advance.
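     For reference, a minimal sketch (illustrative names, not the project's actual code) of how the least-squares virtual view point described above can be computed: each sampled mirror point reflects the eye-to-point direction about its normal, and the point closest to all of the resulting rays solves a small 3x3 linear system.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b)     { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(Vec3 a, Vec3 b)     { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3   scale(Vec3 a, double s) { return { a.x * s, a.y * s, a.z * s }; }
static Vec3   normalize(Vec3 a)       { return scale(a, 1.0 / std::sqrt(dot(a, a))); }

// Reflect an incident direction d about the unit surface normal n.
static Vec3 reflect(Vec3 d, Vec3 n) { return sub(d, scale(n, 2.0 * dot(d, n))); }

// For rays with origin o_i and unit direction r_i, the point x minimizing
// sum_i |(I - r_i r_i^T)(x - o_i)|^2 solves A x = b with
// A = sum_i (I - r_i r_i^T) and b = sum_i (I - r_i r_i^T) o_i.
Vec3 virtualViewPoint(const std::vector<Vec3>& surfPos,
                      const std::vector<Vec3>& surfNrm,
                      Vec3 eye)
{
    double A[3][3] = {};
    double b[3]    = {};
    for (std::size_t i = 0; i < surfPos.size(); ++i) {
        Vec3 d = normalize(sub(surfPos[i], eye));   // eye -> surface sample
        Vec3 r = reflect(d, surfNrm[i]);            // reflected ray direction
        const double R[3] = { r.x, r.y, r.z };
        const double O[3] = { surfPos[i].x, surfPos[i].y, surfPos[i].z };
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k) {
                double m = (j == k ? 1.0 : 0.0) - R[j] * R[k];   // (I - r r^T)
                A[j][k] += m;
                b[j]    += m * O[k];
            }
    }
    // Solve the symmetric 3x3 system A x = b with Cramer's rule.
    auto det3 = [](const double m[3][3]) {
        return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
             - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
             + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    };
    const double D = det3(A);
    double  M[3][3];
    Vec3    x{};
    double* out[3] = { &x.x, &x.y, &x.z };
    for (int c = 0; c < 3; ++c) {
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                M[j][k] = (k == c) ? b[j] : A[j][k];
        *out[c] = det3(M) / D;
    }
    return x;   // virtual view point; orientation is handled separately
}
```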
  2. I had the same feeling. I decided to follow your suggestion and go on with depth peeling. Currently I am trying to implement a simple version of it, that is, the original algorithm (no dual peeling) and without occlusion queries. I just wonder whether I can implement it without shaders. Is that possible? I ask because I only have a rough knowledge of shaders, and learning them properly would mean taking a break to dig deeper before moving on.
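     For context, a minimal sketch of the host-side loop for that simple version (original front-to-back peeling, no dual peeling, no occlusion queries). The FBO/texture setup, the drawScene callback and the peel test performed while drawing are assumed; all names here are illustrative.

```cpp
#include <GL/glew.h>

// Peel numLayers depth layers by ping-ponging between two FBOs. Each pass
// keeps the nearest fragments that are strictly behind the depth captured in
// the previous pass (that second test is the part glDepthFunc cannot provide
// on its own; see the shader sketch further down).
void peelLayers(GLuint peelFBO[2], GLuint depthTex[2],
                int numLayers, void (*drawScene)())
{
    for (int layer = 0; layer < numLayers; ++layer) {
        const int cur  = layer % 2;
        const int prev = 1 - cur;

        glBindFramebuffer(GL_FRAMEBUFFER, peelFBO[cur]);
        glClearColor(0.f, 0.f, 0.f, 0.f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);                    // nearest fragment of this layer

        // Depth of the previously peeled layer, sampled during drawing to
        // reject everything already extracted (unused on the first pass).
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, layer == 0 ? 0 : depthTex[prev]);

        drawScene();
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    // The color attachments of the two FBOs now hold the peeled layers, which
    // are composited back to front (or front to back with "under" blending).
}
```

     The occlusion-query variant only adds an early exit: the loop stops as soon as a peel pass produces zero fragments.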
  3. I don't know shaders, but I have a basic idea of the concept. I need to implement depth peeling, so I would like to know whether I should first dive into the shader world, or whether it can be implemented without shaders, just by using glDepthFunc cleverly.
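     Regarding the "without shaders" question: glDepthFunc only configures the single built-in depth test, while each peel pass needs a second comparison against the previously peeled layer. The original depth-peeling write-up did that second test without fragment shaders by routing a shadow-compare texture result through the alpha test, but the shader version of the test is tiny, roughly the sketch below (uniform names and bindings are assumed):

```cpp
// Hypothetical peel-pass fragment shader, kept here as a C++ string constant.
// It expects the previous layer's depth bound as a texture ("prevDepth") and
// the viewport size in pixels ("viewport"); for the very first pass the
// previous-depth texture can simply be cleared to zero.
static const char* kPeelFragmentShader = R"(
    #version 120
    uniform sampler2D prevDepth;   // depth of the layer peeled in the previous pass
    uniform vec2      viewport;    // viewport size in pixels

    void main()
    {
        // Second depth test: keep only fragments strictly behind the last layer.
        float last = texture2D(prevDepth, gl_FragCoord.xy / viewport).r;
        if (gl_FragCoord.z <= last)
            discard;

        // Regular shading would go here; a plain color is enough for testing.
        gl_FragColor = gl_Color;
    }
)";
```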
  4. Hi people, I saw the High-Performance Graphics presentation "High-Performance Software Rasterization on GPUs" and I was very impressed by the work/analysis/comparison, it looks amazing: http://www.highperformancegraphics.org/previous/www_2011/media/Papers/HPG2011_Papers_Laine.pdf
     My background is CUDA; I started learning OpenGL two years ago to develop the 3D interface of EMM-Check, a field-of-view analysis program that checks whether a vehicle fulfills a specific standard or not. Essentially you load a vehicle (or separate parts of it), then you can move it as a whole or piece by piece, add mirrors/cameras, analyze the driver's point of view and the shadows in it, etc.
     We deal with some transparent elements (mainly the fields of view, but the vehicles themselves might be transparent too), so I wrote a rough algorithm to sort the elements to be rendered on the fly (at primitive level, a kind of painter's algorithm). Of course there are cases in which it easily fails, although it is good enough for most situations.
     For this reason I started googling and found many techniques, like (dual) depth peeling, the A/R/K/F-buffer, etc., but it looks like all of them suffer at high resolutions and/or with large numbers of triangles.
     Since we also deal with millions of triangles (up to roughly 10 million), I was looking for something else and ended up at software renderers: compared to hardware ones they offer free programmability, but they are slower. So I wonder whether it might be possible to implement something hybrid, that is, use the hardware renderer for the opaque elements and the software one for the transparent elements, and then combine the two results. Or maybe a simple ray-tracing algorithm in CUDA/OpenCL (no complex visual effects required, just position, color and simple lighting) might be simpler from this point of view and also give us a lot of freedom/flexibility in the future? I did not find anything on the net regarding this... is there any particular obstacle? I would like to hear every thought/tip/idea/suggestion you have about it.
     PS: I also found "Single Pass Depth Peeling via CUDA Rasterizer" by Liu, but the solution from the first paper seems considerably faster.
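     To make the failure mode concrete, here is roughly what a per-primitive painter's-style pass looks like (an illustrative sketch, not the EMM-Check code): transparent triangles are sorted back to front by view-space centroid depth each frame and blended over the opaque image with depth writes disabled.

```cpp
#include <GL/glew.h>
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

// One transparent triangle, already in world space (illustrative layout).
struct Triangle { Vec3 v[3]; };

// View-space depth of the triangle's centroid, using the z row of a 4x4
// column-major view matrix.
static float centroidViewDepth(const Triangle& t, const float view[16])
{
    Vec3 c { (t.v[0].x + t.v[1].x + t.v[2].x) / 3.f,
             (t.v[0].y + t.v[1].y + t.v[2].y) / 3.f,
             (t.v[0].z + t.v[1].z + t.v[2].z) / 3.f };
    return view[2] * c.x + view[6] * c.y + view[10] * c.z + view[14];
}

// Painter's-algorithm style pass: sort back to front and draw with blending.
// drawTriangle() stands in for whatever submission path the renderer uses.
void drawTransparent(std::vector<Triangle>& tris, const float view[16],
                     void (*drawTriangle)(const Triangle&))
{
    std::sort(tris.begin(), tris.end(),
              [&](const Triangle& a, const Triangle& b) {
                  // More negative view-space z is farther from the camera
                  // (OpenGL convention), so farther triangles come first.
                  return centroidViewDepth(a, view) < centroidViewDepth(b, view);
              });

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);          // test against opaque depth, but don't write

    for (const Triangle& t : tris)
        drawTriangle(t);

    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}
```

     Sorting by centroid cannot resolve intersecting or cyclically overlapping primitives, which is exactly the kind of case where this approach fails and which the depth-peeling / A-buffer family is meant to handle.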