
NiGoea

Member

  • Content Count: 245
  • Joined
  • Last visited
  • Community Reputation: 104 Neutral
  • Rank: Member


  1. NiGoea

    Vector operation

    You're perfectly correct! It works. Thanks, I hadn't noticed you could do that, because I was reasoning a different way, without thinking about the dot product. I'm working on a simple 2D circle collision system (funnily enough, last year I wrote a far more complex 3D ellipsoidal collision system), so if I have another question, I'll post it here. THANKS again man, bye, -NiG
  2. NiGoea

    Vector operation

    Hi guys, I need to solve this: | P + V*t | = d, where P and V are 2D vectors and 't' and 'd' are scalar values. Is it possible to isolate 't'? I found that t = (d*d - P*P) / (V*V) is wrong. How can I solve it? THANKS A LOT -NiG
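    Squaring both sides turns this into a quadratic in t: (V·V)t² + 2(P·V)t + (P·P - d²) = 0, which matches the dot-product approach mentioned in the follow-up above. A minimal C++ sketch, with a hypothetical Vec2 type and function name that are not from the original thread:

```cpp
#include <cmath>
#include <optional>

struct Vec2 { float x, y; };                     // hypothetical minimal vector type

static float dot(const Vec2& a, const Vec2& b) { return a.x * b.x + a.y * b.y; }

// Solve |P + V*t| = d for t.  Squaring both sides gives
//   (V.V) t^2 + 2 (P.V) t + (P.P - d*d) = 0,
// an ordinary quadratic; return the smallest non-negative root, if any.
std::optional<float> solveT(const Vec2& P, const Vec2& V, float d)
{
    const float a = dot(V, V);
    const float b = 2.0f * dot(P, V);
    const float c = dot(P, P) - d * d;

    const float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f || a == 0.0f)
        return std::nullopt;                     // no real solution (or V is zero)

    const float s  = std::sqrt(disc);
    const float t0 = (-b - s) / (2.0f * a);
    const float t1 = (-b + s) / (2.0f * a);
    if (t0 >= 0.0f) return t0;
    if (t1 >= 0.0f) return t1;
    return std::nullopt;
}
```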
  3. NiGoea

    Memory Exception !!!!

    I have the same problem in my project when I generate lots of entities. I don't think there is much to say:
    - be careful to have a 'delete' for each 'new'
    - be careful to have a 'delete[]' for each 'new[]'
    - if you're using smart pointers, make sure you don't use them in odd situations where they fail
    It should be easy for you to detect WHEN and WHERE you allocate memory. It has to be where you call 'new' (or some old-style malloc).
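    As a hedged illustration of the pairing rule above (the Entity type and function names are made up for the example):

```cpp
#include <memory>
#include <vector>

struct Entity { int id = 0; };                     // hypothetical entity type

void manualOwnership()
{
    Entity*  e  = new Entity;                      // 'new'   ...
    Entity*  es = new Entity[64];                  // 'new[]' ...

    delete   e;                                    // ... must be paired with 'delete'
    delete[] es;                                   // ... must be paired with 'delete[]'
}

void automaticOwnership()
{
    // Smart pointers and containers release the memory automatically
    // when they go out of scope, so no explicit delete is needed.
    std::unique_ptr<Entity> e = std::make_unique<Entity>();
    std::vector<Entity>     es(64);                // prefer containers over raw new[]
}
```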
  4. thanks for the link. Quote: Original post by MJP: Crysis doesn't use any GI. Just direct lighting, shadow maps, and plenty of SSAO. I can't believe it. The only way for them to achieve such graphics would be extensive use of lightmaps. But how can they, with areas that big? Quote: Tons of games use radiosity + light maps. Yes, light maps turn out to be a good solution, but I personally don't like them. So are they the final solution used by AAA games to obtain realistic graphics? I remember reading somewhere that Far Cry 2 uses some sort of indirect lighting.
  5. Hi all, I was thinking about what algorithms are used nowadays to go beyond the classic direct-illumination algorithms. I'm talking about games like Crysis and Stalker. What do they use to obtain such good graphics in open spaces? Shadow maps and SSAO are classic (and maybe simple) approaches, but very limited, especially in open spaces. On the other hand, direct lighting approaches are very limited too, and those games seem to have something more than the classic point- and spot-light model. I know algorithms like photon mapping, instant radiosity and PRT, but I don't know whether they are actually used or not. I'm working on an FPS. Let's talk about it ;)
  6. This is how my engine works. I have an emissive texture for materials, if the material has one. During the GBuffer phase, the emissive value is written into a channel. In the material phase, that value is read and added to the light value. Therefore, emissive materials are lit brightly regardless of the external lighting conditions, and yes, emissive is an 8-bit value. Actually, in my engine it is compressed into 3 bits. -NiG
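    A speculative sketch of what packing an emissive factor into 3 bits of an 8-bit G-buffer channel could look like; the function names and channel layout are assumptions, not the engine's actual code:

```cpp
#include <algorithm>
#include <cstdint>

// Quantize an emissive factor in [0,1] to 3 bits (0..7) and store it in the
// top bits of an 8-bit G-buffer channel; the remaining 5 bits stay free.
uint8_t packEmissive(float emissive, uint8_t otherBits5)
{
    const int q = static_cast<int>(std::clamp(emissive, 0.0f, 1.0f) * 7.0f + 0.5f);
    return static_cast<uint8_t>((q << 5) | (otherBits5 & 0x1F));
}

// Recover the emissive factor in the material/lighting pass and add it to the
// accumulated light, so emissive surfaces stay bright regardless of lighting.
float unpackEmissive(uint8_t channel)
{
    return static_cast<float>(channel >> 5) / 7.0f;
}
```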
  7. NiGoea

    Spotlight attenuation

    It's simple algebra. I started with this: angle = arccos(pow(LIGHT_SPOT_THRESOLD, 1 / exponent)), with LIGHT_SPOT_THRESOLD a small value near zero. I don't remember how, but eventually I reached this: m_spotExponent = log(LIGHT_SPOT_THRESOLD) / log(cos(halfFOV * DEGRAD)); So given halfFOV, you obtain the exponent. It works fine in my engine.
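    The second formula follows from the first by solving cos(halfFOV)^exponent = LIGHT_SPOT_THRESOLD for the exponent with logarithms. A minimal sketch, assuming halfFOV is in degrees and DEGRAD is the usual degrees-to-radians factor; the threshold value is a placeholder:

```cpp
#include <cmath>

constexpr float LIGHT_SPOT_THRESOLD = 0.01f;       // attenuation value treated as "off" (assumed)
constexpr float DEGRAD = 3.14159265f / 180.0f;     // degrees -> radians

// Given the spotlight's half field-of-view (degrees), compute the exponent so
// that pow(cos(angle), exponent) falls to LIGHT_SPOT_THRESOLD at the cone edge.
float spotExponentFromHalfFov(float halfFovDegrees)
{
    return std::log(LIGHT_SPOT_THRESOLD) / std::log(std::cos(halfFovDegrees * DEGRAD));
}
```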
  8. Deferred renderer. I first perform a z-pre-pass (it turned out to be very fast). Then the usual GBuffer pass, which performs some computation for each pixel. In this pass, however, the Z buffer is ON with func=LESSEQUAL. So I expect the GBuffer pass to be fast regardless of scene complexity, because rasterization IS fast and invisible pixels are discarded since the z-buffer is already filled. But this doesn't happen. A scene with 100,000 tris is WAY WAY slower than a scene with 5,000 tris. Vertex processing and rasterization are supposed to be fast, and with the z-buffer already filled, each pixel should be shaded exactly once. I even tried rendering a fullscreen plane at a very low depth, causing all the geometry to be discarded, but that doesn't change anything. The GBuffer pass stays slow, depending on the triangle count. It doesn't make sense to me. I really can't understand WHY! Could you help?? THANKS
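    A minimal sketch of the render-state setup being described, assuming a D3D9 device; the function, the commented draw calls, and keeping depth writes off in the second pass are assumptions, not the engine's actual code:

```cpp
#include <d3d9.h>

// Hypothetical sketch of the z-pre-pass followed by the GBuffer pass.
void renderFrame(IDirect3DDevice9* device)
{
    // Z-pre-pass: depth only, no color writes.
    device->SetRenderState(D3DRS_COLORWRITEENABLE, 0);
    device->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
    device->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
    device->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESS);
    // drawSceneGeometry(device);   // placeholder for the scene draw calls

    // GBuffer pass: color writes back on, depth test LESSEQUAL so only the
    // already-visible pixels run the (expensive) GBuffer pixel shader.
    device->SetRenderState(D3DRS_COLORWRITEENABLE,
                           D3DCOLORWRITEENABLE_RED | D3DCOLORWRITEENABLE_GREEN |
                           D3DCOLORWRITEENABLE_BLUE | D3DCOLORWRITEENABLE_ALPHA);
    device->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
    device->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESSEQUAL);
    // drawSceneGeometry(device);
}
```

    Note that early-Z rejection only skips pixel-shader work; the vertex stage still transforms every triangle, so part of the cost scales with triangle count regardless of the z-buffer contents.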
  9. NiGoea

    Deferred Rendering Problem

    Quote: Original post by DAaaMan64: So that means that the maximum brightness I can get to is the colorTex value. Unless I am supposed to rely completely on specularLight to increase brightness. If I understood correctly, you're talking about the problem that it's impossible to obtain very bright colors (very bright textures) in the composed frame. You can overcome it by changing the color format (LUV, maybe), or by using a scaling factor, which is a very simple solution (if not a perfect one) that I'm currently using. You divide by K when updating the color buffer, and multiply by K in the pixel shader to retrieve the correct light amount. Obviously, K > 1. Hope this helps.
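    A minimal sketch of the scaling trick, written as plain C++ for illustration even though in practice it lives in the shaders; the value of K is an assumption:

```cpp
// Scale factor shared between the pass that writes the light buffer and the
// pass that reads it back; K > 1 extends the representable brightness range.
constexpr float K = 4.0f;                          // assumed value, pick per engine

// When accumulating into the (0..1) color buffer, divide by K...
float encodeLight(float light)  { return light / K; }

// ...and multiply by K when reading it back, recovering values above 1.
float decodeLight(float stored) { return stored * K; }
```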
  10. NiGoea

    Compute plane in ellipsoid space

    Actually, I need the real distance. It is part of a formula like "(1 - D) / A", so perhaps I'll find a way to use the square of D. Thank you very much for your reply, it was useful! -NiG
  11. NiGoea

    A bandwidth consideration

    Quote: Original post by AgentC: I mean storing a value in the G-buffer that tells which shading model one should use for that pixel (i.e. Phong, subsurface scattering, something else), then reading that in the light pixel shader and branching off to totally different lighting code paths based on it. Yes, it's a good way to go. In fact, many AAA games use a similar solution. However, for the purposes of my engine, I won't do that. Thank you anyway! ;)
  12. NiGoea

    Software renderer: clipping

    Quote: Original post by maxest: I have a question regarding clipping after the vertex transformation stage. The problem is that all vertices that were behind the eye of the virtual camera end up with big Z values after transformation, so a triangle that spans the virtual camera's eye plane would be rasterized the wrong way. I know that one solution is to do world-space clipping against the near plane of the camera, but I guess hardware renderers (like D3D9 or OGL) don't do that and do all the clipping in clip space.
    Two years ago I finished my software-renderer game engine. I did this for each triangle:
    - compute the view-space vertices
    - 3D-clip the vertices if they are beyond the near clip plane (normally this produces new triangles)
    - project all the vertices
    - 2D-clip the projected points against the viewport
    - rasterization
    If you don't clip against the near clip plane, the points will have a Z with a negative sign. DirectX does both 3D and 2D clipping transparently.
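    A minimal sketch of the near-plane clipping step (the second bullet above), assuming a view space where the camera looks down +Z and the near plane sits at z = nearZ; the Vec3 type and function names are illustrative, not the engine's code:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };                    // hypothetical view-space vertex

// Interpolate between a and b at parameter t in [0,1].
static Vec3 lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

// Clip one view-space triangle against the near plane z = nearZ (keeping z >= nearZ).
// The result is a polygon with 0, 3, or 4 vertices; 4 vertices mean two triangles.
std::vector<Vec3> clipTriangleToNearPlane(const Vec3 tri[3], float nearZ)
{
    std::vector<Vec3> out;
    for (int i = 0; i < 3; ++i)
    {
        const Vec3& cur  = tri[i];
        const Vec3& next = tri[(i + 1) % 3];
        const bool curIn  = cur.z  >= nearZ;
        const bool nextIn = next.z >= nearZ;

        if (curIn)
            out.push_back(cur);
        if (curIn != nextIn)                       // edge crosses the plane: emit intersection
        {
            const float t = (nearZ - cur.z) / (next.z - cur.z);
            out.push_back(lerp(cur, next, t));
        }
    }
    return out;                                    // triangulate as a fan: (0,1,2), (0,2,3)
}
```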
  13. NiGoea

    Compute plane in ellipsoid space

    Thanks! That was something I was thinking of too, but now I realize I can't use this method, because later I compute the plane-point distance, so I have to perform a sqrt anyway :'( For the same reason, I think I can't skip the normalization even if I start from an already-computed plane. Am I correct? Thank you!
  14. hi! Talking about ellipsoid space, widely used in collision detection routines. I have a triangle, so three points and a plane already computed for it. Right now I convert the points into ellipsoid space and compute the new ellipsoid-space plane (for this new triangle). Question: is there a way to avoid recomputing the plane, exploiting the already-computed world-space triangle plane? --- Computing a plane from three points requires a cross product and a normalization (for the normal), plus a dot product to compute the 'D' factor. That sounds slow when done on the CPU for tons of triangles per frame. Since I don't need the normal to be normalized, is there a faster way to compute the very same plane? I mean, skipping the normalization of the normal. THANKS GUYS
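    A minimal sketch of the plane computation being discussed, with the normalization skipped; the Vec3/Plane types are illustrative. This is enough to classify points by side, but as the follow-up replies note, a true point-plane distance still needs the normal's length, so one sqrt remains:

```cpp
struct Vec3  { float x, y, z; };                       // hypothetical vector type
struct Plane { Vec3 n; float d; };                     // dot(n, P) + d = 0, n not normalized

static Vec3 sub(const Vec3& a, const Vec3& b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Plane through three points, skipping the normalization of the normal.
// A signed distance later requires dividing by length(n), i.e. one sqrt.
Plane planeFromTriangle(const Vec3& p0, const Vec3& p1, const Vec3& p2)
{
    Plane pl;
    pl.n = cross(sub(p1, p0), sub(p2, p0));            // unnormalized normal
    pl.d = -dot(pl.n, p0);                             // the 'D' factor
    return pl;
}
```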
  15. NiGoea

    A bandwidth consideration

    It is not a light pre-pass, just a "true" deferred renderer :D But I don't feel like adding another pass over the whole geometry just for a value like the luminance... it sounds very inefficient to me, even if it gives optimal results. What do you mean by "I don't recommend branching in the light shader"? Thanks!