Trenki

Members
  • Content count

    374
Community Reputation

345 Neutral

About Trenki

  • Rank
    Member
  1. IDE for Linux

    I'd say take a look at Eclipse, which also supports C++. Code completion is as good as in Visual Studio, and syntax highlighting is even better than in Visual Studio combined with Visual Assist X.
  2. Hi! I've seen the thread about accumulated screen space ambient occlusion (ASSAO), where the SSAO is computed over several frames. I also know of a paper that achieves pixel-correct shadow maps by spreading the computation over several frames. Using a G-buffer and doing the lighting as a post process can be very demanding for the GPU because it can be very memory intensive. Do you guys think it would be possible to spread the whole lighting calculation over several frames as well, just like in ASSAO? What are your thoughts?
  3. I don't see why OpenGL should be a problem, especially if you only need basic stuff, which should be supported on older cards too. Off-screen rendering can be done with FBOs and, if those are not supported, with the crappy pbuffers. If you really want a software renderer you could take a look at my homepage: www.trenki.net. It includes a vertex and pixel processing pipeline and accepts triangles, doing all the necessary clipping etc. You can hook up C++ template classes as "vertex and pixel shaders" and do your custom processing in there. You can have any number of per-vertex attributes, and you can also have multiple varying parameters that are interpolated across the triangle surface. My renderer only handles Gouraud shading though; if you want flat shading you have to submit the triangles one by one. The current implementation uses fixed-point math because I used this renderer on the 200MHz GP2X console for a demo.
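To illustrate the template-"shader" idea described above, here is a minimal sketch: the rasterizer's span loop calls a user-supplied functor per pixel, which the compiler can fully inline. The names (`FlatRedShader`, `drawSpan`) are illustrative assumptions, not the actual API from www.trenki.net.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch of a template "pixel shader" hook: the span
// loop calls PixelShader::shade() per pixel, and because the shader
// is a template parameter the compiler can inline the call.
struct FlatRedShader {
    // Takes an interpolated varying (unused here) and returns a color.
    static uint32_t shade(int /*varying*/) { return 0xFFFF0000u; } // ARGB red
};

template <typename PixelShader>
void drawSpan(std::vector<uint32_t>& framebuffer, int width,
              int y, int x0, int x1, int varying)
{
    // Fill pixels [x0, x1) of scanline y with the shader's output.
    for (int x = x0; x < x1; ++x)
        framebuffer[y * width + x] = PixelShader::shade(varying);
}
```

Because the shader is resolved at compile time, there is no per-pixel function-pointer call, which matters a lot on hardware like the GP2X.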
  4. priority queue in C++?

    There is a std::priority_queue class in <queue>, but it does not allow changing the priority of the queued elements, and iterating over the elements is not allowed either. You can always fall back to the STL heap algorithms and maintain the heap yourself for priority queue handling.
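A minimal sketch of both options: std::priority_queue for the simple case, and the heap algorithms when you need to change a priority (function names here are just for illustration).

```cpp
#include <algorithm>
#include <cassert>
#include <queue>
#include <vector>

// The simple case: std::priority_queue is a max-heap by default,
// but offers no iteration and no priority updates.
int maxViaPriorityQueue(const std::vector<int>& values) {
    std::priority_queue<int> pq(values.begin(), values.end());
    return pq.top();
}

// When priorities must change, keep a plain vector and use the
// heap algorithms directly; after mutating an element, re-heapify.
int maxAfterReprioritize(std::vector<int> heap, std::size_t i, int newPriority) {
    std::make_heap(heap.begin(), heap.end());
    heap[i] = newPriority;                      // change a priority...
    std::make_heap(heap.begin(), heap.end());   // ...then re-heapify (O(n))
    return heap.front();
}
```

For a single changed element, sifting it up or down by hand is cheaper than a full std::make_heap, but the full rebuild is the simplest correct approach.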
  5. Vertex buffer management.

    You don't need a different vertex buffer for every rendered object! I doubt you will have 100000 different trees on your map.
  6. Why do you need to delete the same object from different threads? The object should only be deleted once, and only one thread or object should be the owner responsible for deleting it. Your function does not actually help with the problem, since the operation it performs is not atomic: another thread can still read the pointer value before you set it to 0. The thread that is about to delete the object can then get CPU time and actually delete it, and when the other thread gets its time slice it is left holding a pointer to the deleted object. It won't work this way; you need proper synchronization (semaphores, mutexes, locks etc.) for this to work, which adds runtime overhead and programming overhead. Just delete the object from a single spot, once you are sure it will no longer be required/accessed by anything else.
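The single-owner advice can be sketched with std::unique_ptr (a post-2008 idiom, so an assumption relative to this thread, but it expresses exactly the same rule): one object owns the allocation, everyone else holds a non-owning view.

```cpp
#include <cassert>
#include <memory>

// Sketch of single ownership: exactly one object owns the
// allocation; everyone else holds a non-owning raw pointer and must
// not outlive the owner. No manual delete, no "set it to 0" races.
struct Resource { int value = 42; };

struct Owner {
    std::unique_ptr<Resource> res = std::make_unique<Resource>();
    // Hand out non-owning views; callers never delete through these.
    Resource* view() const { return res.get(); }
};
```

The Resource is deleted exactly once, when the Owner is destroyed; if non-owning views can outlive the owner across threads, that is precisely when you need real synchronization or shared ownership instead.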
  7. You don't actually have to do the saturating add for each pixel. When stepping along the edges it is possible to get too small (negative) or too large color values. I can't remember exactly how I did it, but I think for the color values I simply adjusted the values at the start and the end of the span and made sure that stepping in the x direction would never make the value too large (maybe by also adjusting the color increments? I can't remember, but I know Mesa does this as well). With texture coordinates this "trick" does not really work, since the texture coordinates don't have a natural range to clamp to.
  8. Hi! I took a look at my implementation, which I began coding over a year ago, and yeah, it's not too easy to follow my own code either. (Must be having a bad week :-) ). When I strip out the perspective correction stuff mine is still shorter, even though I use 40 lines for the sorting in y. When developing my code I followed the "Perspective Texture Mapping" articles by Chris Hecker.

In my implementation I consider a clipping rectangle and compute the deltas DX12, DX31 etc. When sorting I also determine which vertex is the top, middle and bottom vertex. This made it easier later when drawing the half triangles, because it made it very easy to select which edges had to be used for drawing. I have a struct/class for the gradients, since I handle not only color but a user-defined number of vertex attributes of arbitrary meaning. The dx and dy gradient for each attribute is computed with a formula taken from Mesa :) which actually does nothing more than compute a cross product from the correct vectors. I have another class representing an edge of the triangle, which I adapted from the articles. This made life much easier for me, since this way I didn't have to deal with so many free variables. In the end the drawing with these edges is very simple: I generate the spans and call a callback function with the required data to process each span. I used template programming for these callbacks so the compiler can actually inline everything, which made things a lot faster on the 200MHz GP2X.

For the vertex coordinates I use 28.4 fixed-point numbers; the interpolated vertex attributes are signed 32-bit integers with no special meaning attached to them. When interpolating 8-bit colors I simply shift the [0, 255] value to the left by 16 or more bits. This way the fractional part gets interpolated smoothly, and before writing the data to the color buffer I shift back by the proper amount. Sometimes it is necessary to clamp the color data at the edges of the spans (Mesa does it too).

I encourage you to take a look at my code (see signature for link). I would really like to know if you find it easy enough to read and understand. Even though I understood my code well enough at the time I wrote it, it seems a lot harder now :(
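The shift-left-by-16 trick described above can be sketched like this (an illustration of the idea, not the actual renderer code; the function name and clamping details are assumptions):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of fixed-point color interpolation: an 8-bit channel value
// is shifted left by 16 bits, stepped across the span as a plain
// 32-bit integer, and shifted back before the pixel write. Clamping
// guards against the slight over/undershoot mentioned above.
std::vector<uint8_t> interpolateChannel(int c0, int c1, int numPixels) {
    int32_t value = c0 << 16;                       // 16.16 fixed point
    int32_t step  = ((c1 - c0) << 16) / numPixels;  // per-pixel increment
    std::vector<uint8_t> out;
    for (int x = 0; x < numPixels; ++x) {
        int32_t c = value >> 16;                    // back to [0, 255]
        if (c < 0)   c = 0;                         // clamp against drift
        if (c > 255) c = 255;
        out.push_back(static_cast<uint8_t>(c));
        value += step;
    }
    return out;
}
```

Because the fractional part rides along in the low 16 bits, the per-pixel step is a single integer add, which is exactly what made this cheap on the GP2X.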
  9. Hey! I'm sorry, I didn't even bother reading all your code and simply skimmed over it. I implemented a software renderer myself which also does Gouraud interpolation of vertex attributes, so I know your code can be simplified a lot. In the 2D case, where you don't have to account for perspective correction, it is even simpler. I computed the gradients in the x and y directions of the vertex attributes (i.e. color), then sorted the vertices in the y direction. Then you can simply scan convert the triangle (consisting of two half triangles) into spans, use the x gradient to step the color values from one pixel to the next, and assign the appropriate colors to the pixels. I used fixed-point math, so I also had to take subpixel precision into account; when using floats this is probably even easier. EDIT: ah, now I actually spotted the comments. You seem to do what I do in my renderer, but somehow the code looks a lot more complicated. [Edited by - Trenki on December 10, 2008 4:33:41 PM]
  10. @Sc4Freak: I am also not sure why the Dinkumware library implements insert the way it does, but I think it may have to do with the fact that objects stored in the vector do not need to be default constructible.
  11. You also have to use -lmingw32 as a linker flag to tell the linker to link to the mingw32 library. The order in which you link the libraries is important. This particular library has to be linked before the SDL libraries. If you want a windows application without the console window you may also have to add -mwindows to the linker flags.
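A sketch of a link line following the advice above (the file names are placeholders, not from the original post; this is a build-command fragment, not something runnable on its own):

```shell
# Example MinGW/SDL link line. Library order matters:
# -lmingw32 must come before the SDL libraries.
# -mwindows suppresses the console window for a GUI application.
g++ -o game.exe main.o -mwindows -lmingw32 -lSDLmain -lSDL
```

If the order is wrong, the linker typically reports an unresolved `WinMain@16`, because SDLmain's entry-point wrapper cannot find the symbols it needs from libraries that were already processed.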
  12. GLSL vertex shader issue

    Vertex texture fetch is generally not very fast, but 3 fps sounds excessively slow. You should take a look at the shader info log generated on the slower card; maybe that gives you a clue as to why it is so slow. Also make sure there are no GL errors in your application.
  13. Why do you even want to use a singleton here? Is it an error if you ever instantiate another SurfaceCache? Probably not. If you really need global access to a SurfaceCache, a global variable which gets initialized on program startup should be all you need.
  14. Requesting peer review of 3D Vector class.

    Quote:Original post by bitshifter
    In C i would use...

        typedef struct {
            union {
                GLfloat _3fv[3];
                struct { GLfloat x; GLfloat y; GLfloat z; };
            };
        } GLtype3fv;

    In Cpp i would use...

        class CType3fv {
        public:
            union {
                GLfloat m_3fv[3];
                struct { GLfloat m_x; GLfloat m_y; GLfloat m_z; };
            };
        };

    Now the data type can be used in more ways than just a simple vector. And also can be used as args to functions that expect ptrs EX: glVertex3fv()

    I wouldn't change the C code to something else, since it already works fine for C++ as well. However, the anonymous union and struct are not standard C or C++, so some compilers could produce errors; most compilers do support them, though. In my vector math library I simply used a C++ conversion operator to T*, which also allows me to index the vector.
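A sketch of the conversion-operator alternative mentioned at the end (the class name is made up; it assumes the three floats are laid out contiguously, which holds in practice for a standard-layout struct):

```cpp
#include <cassert>

// Alternative to the anonymous union: named members plus a
// conversion operator to float*, so the object can be passed to
// APIs expecting a pointer (e.g. glVertex3fv) and indexed with [].
// Relies on x, y, z being contiguous, which compilers guarantee in
// practice for a standard-layout struct of three floats.
struct Vec3 {
    float x, y, z;
    operator float*() { return &x; }
    operator const float*() const { return &x; }
};
```

Built-in subscripting works through the conversion: `v[1]` converts `v` to `float*` and then indexes it, so no separate `operator[]` is needed.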
  15. OpenGL OpenGL via Win32

    I don't understand why you would want to do your model->view->NDC transformations in software when that can be done by OpenGL with a single matrix multiplication. You also don't have to map your NDC coordinates to pixels; that is what OpenGL does. With a standard orthographic projection you can pass your NDC coordinates directly to OpenGL, provided the viewport has been set up correctly. If you insist on passing vertex values with abs() > 1, you simply have to set up the orthographic projection matrix accordingly. Also, OpenGL does the z-buffering for you; why would you want to do it yourself? Doing it yourself defeats the purpose of using OpenGL; then you could just as well write your own software rasterizer.
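To make the ortho-projection point concrete, here is a sketch of what glOrtho sets up for the x axis (names and the 1D simplification are mine): a linear map from [left, right] to the [-1, 1] NDC range, so coordinates with abs() > 1 are handled by the projection instead of by hand.

```cpp
#include <cassert>
#include <cmath>

// Sketch of the x-axis part of an orthographic projection, like
// glOrtho(left, right, ...): scale and translate [left, right]
// onto the [-1, 1] NDC range. y and z work the same way.
struct Ortho {
    double sx, tx; // x scale and translate
    Ortho(double left, double right)
        : sx(2.0 / (right - left)),
          tx(-(right + left) / (right - left)) {}
    double toNdcX(double x) const { return sx * x + tx; }
};
```

With e.g. `Ortho(0, 640)` you can submit vertex x coordinates in pixel units directly; the projection maps them into NDC, and the viewport transform maps NDC back to window pixels, so neither step needs to be done in software.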