hellraiser

Member | Content count: 34 | Community reputation: 134 Neutral
  1. hellraiser

    GLSL just-started questions

    Quote: V-Man wrote: It's a flexibility offered by the API. You can have pieces of your vertex shader in different shaders, compile them, attach them. Attach fragment shaders too. Then link the entire thing to make a valid program object. It might give a speed boost when you have many shaders to compile.

    So you're saying that attaching multiple shader objects to one single shader program removes the need to render a given object multiple times? I have another question. I recently looked into an open-source project (I forget its name) that essentially wraps the GLSL functionality, in particular shader object and program creation, and noticed that it always creates shader objects in pairs, requiring both a vertex and a fragment shader source. My question is: are shaders always developed in pairs, i.e. a vertex and a fragment shader, or are there situations where a vertex or a fragment shader alone might prove enough? In the latter case, could someone provide me with an example so I understand it better? PS: If I had a better book than just the red book I wouldn't trouble you guys with this sort of question. Googling this subject doesn't dig up many useful resources either. Thanks for all your patience and comments.
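    For reference, a minimal sketch (OpenGL 2.0 entry points) of what attaching several shader objects to one program object looks like. Note this is link-time modularity, not a way to avoid extra render passes; and a program does not need both stages — with only a vertex shader attached, fixed-function fragment processing is used, and vice versa. The handle names here are illustrative only.

        // Two vertex-stage objects: a library of lighting functions and a
        // main() that calls them. Exactly one attached object per stage may
        // define main().
        GLuint lightingLib = glCreateShader(GL_VERTEX_SHADER);
        GLuint vertexMain  = glCreateShader(GL_VERTEX_SHADER);
        GLuint fragMain    = glCreateShader(GL_FRAGMENT_SHADER);

        // ... glShaderSource() + glCompileShader() for each object here ...

        GLuint program = glCreateProgram();
        glAttachShader(program, lightingLib); // both vertex-stage objects are
        glAttachShader(program, vertexMain);  // linked into ONE vertex shader
        glAttachShader(program, fragMain);
        glLinkProgram(program);               // one program, one glUseProgram()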
  2. hellraiser

    GLSL just-started questions

    First off, thanks to everybody for their replies.

    Quote: You run one shader, draw the object that uses it, then switch to a new shader (or no shader at all) and repeat.

    So basically I will have to render each object n times, n being the number of lights plus the shader effects attached to the object? For instance, n would be 3 if a scene had two lights and a given model had a parallax bump-mapping effect attached to it. What is the purpose, then, of being able to attach several shader objects to one shader program? I'm sorry if I'm being annoying, but I want to understand this properly before I go about writing generic code to support this beautiful feature! :) Thanks again.
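    To illustrate the multi-pass idea being described, a rough sketch (the program handles, drawModel() and setLightUniforms() are hypothetical): draw a base/ambient pass, then re-draw the object once per light with additive blending so each pass accumulates that light's contribution.

        glUseProgram(ambientProgram);
        drawModel(model);                  // base pass lays down depth + ambient

        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);       // add each light's contribution
        glDepthFunc(GL_EQUAL);             // re-use the depth laid down above
        for (int i = 0; i < numLights; ++i)
        {
            glUseProgram(lightProgram);    // the same shader runs once per light
            setLightUniforms(i);
            drawModel(model);
        }
        glDepthFunc(GL_LESS);
        glDisable(GL_BLEND);
        glUseProgram(0);                   // back to fixed function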
  3. Hi, I've just started learning GLSL and have yet to implement my very first shader program. I only know the concepts introduced in chapter 15 of the red book, so please bear with my ignorance. :) As far as I understand it, there can be only one shader program running at a time, which may have many shader objects attached. Now suppose you have a scene that consists of many models plus, say, a couple of non-ambient lights that are always on, and that each model has a shader effect of its own. How on earth do you run the lights' shaders as well as a shader for each of the models? The only way I can see this being done is by attaching/detaching shader objects to the running shader program as the renderer walks the scene graph, though I want to believe I am wrong, as it doesn't sound all that efficient. Also, how can a shader for a point light and another for a spot light, for instance, run at the same time? Must each of the shaders loop through all enabled lights and inspect their properties (e.g. if light_position.w == 0, the light is directional)?
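    One common alternative to attaching/detaching objects is a single shader that loops over the enabled lights and branches on their properties, exactly as described above (position.w == 0 meaning directional). A rough GLSL 1.10 sketch, stored here as a C++ string constant; it assumes a vertex shader that writes the normal and ecPos varyings, and MAX_LIGHTS is an assumption:

        const char* lightingFrag =
            "const int MAX_LIGHTS = 2;\n"
            "varying vec3 normal, ecPos;\n"
            "void main() {\n"
            "    vec3 n = normalize(normal);\n"
            "    vec4 color = gl_FrontLightModelProduct.sceneColor;\n"
            "    for (int i = 0; i < MAX_LIGHTS; ++i) {\n"
            "        vec3 L;\n"
            "        if (gl_LightSource[i].position.w == 0.0)\n"
            "            L = normalize(gl_LightSource[i].position.xyz);         // directional\n"
            "        else\n"
            "            L = normalize(gl_LightSource[i].position.xyz - ecPos); // point/spot\n"
            "        color += gl_FrontLightProduct[i].diffuse * max(dot(n, L), 0.0);\n"
            "    }\n"
            "    gl_FragColor = color;\n"
            "}\n";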
  4. hellraiser

    GL_EXT and GL_ARB

    Quote: Brother Bob wrote: EXT is usually an extension used by many vendors. ARB takes it one step further: the extension is recognized by the ARB as something that could, and often will, end up in the core in one way or another.

    That's great, thanks!
  5. hellraiser

    GL_EXT and GL_ARB

    Hello, what is the difference between extensions starting with the prefix GL_ARB (OpenGL Architecture Review Board?) and those starting with GL_EXT, in terms of how widely they are supported by hardware manufacturers? My understanding is that GL_ARB extensions are guaranteed to be supported provided the hardware has the capabilities required to implement a given ARB extension, but does the same apply to GL_EXT extensions? Many thanks.
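    Either way, the presence of an extension still has to be checked at run time against the extension string. A minimal sketch of the classic check (valid once a GL context is current); it works for EXT and ARB names alike:

        #include <cstring>
        #include <GL/gl.h>

        bool hasExtension(const char* name)
        {
            const char* all = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
            // Note: a plain substring match can false-positive when one
            // extension name is a prefix of another; fine for a sketch.
            return all != 0 && std::strstr(all, name) != 0;
        }

        // e.g. bool vbo = hasExtension("GL_ARB_vertex_buffer_object");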
  6. Quote: Phantom wrote: <snip> Once a vertex has been transformed by a vertex shader its output data is stored in this array and the 'key' is set to the index. When the graphics card next goes to pull a piece of data for processing it will use the index of the vertex it's about to deal with and first check if it's in the cache. If it is then it reuses that data, if not then it fetches the data and performs the transform. <snip>

    Say you have a square grid of x by y tiles where each tile is painted with one texture: what would you say is the most efficient way to render it? As I see it, vertices can't be shared among adjacent tiles because the texture coordinates differ at adjacent vertices, which means there will have to be unique vertex data for every single tile in the mesh. Would you say glMultiDrawElements is the most efficient way to render this grid? I think this would imply having 6 indices for the 4 vertices (2 tris) in each tile and then a consecutive list of indices for each tile (see the sketch below). BTW, brilliant explanation of how the vertex cache actually operates, Phantom. Thanks!
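    A minimal sketch of that index setup, assuming the vertex array has already been filled with 4 unique vertices per tile (tilesX, tilesY and the attribute-pointer setup are hypothetical/elided). A single glDrawElements call over one big index buffer is usually at least as good as a glMultiDrawElements batch per tile:

        std::vector<GLuint> indices;
        indices.reserve(tilesX * tilesY * 6);              // 6 indices per tile
        for (int t = 0; t < tilesX * tilesY; ++t)
        {
            GLuint base = t * 4;                           // tile's first vertex
            GLuint quad[6] = { base, base + 1, base + 2,   // triangle 1
                               base, base + 2, base + 3 }; // triangle 2
            indices.insert(indices.end(), quad, quad + 6);
        }
        // vertex/texcoord pointers assumed bound beforehand:
        glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(),
                       GL_UNSIGNED_INT, &indices[0]);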
  7. Quote: I'm slightly confused by this statement: "The only differences now are that the vertex buffer is larger so as to accommodate all the triangle vertices of the skydome ((vertices-2)*3)". The vertex buffer shouldn't need to be any bigger; a triangle list in strip format uses the same amount of vertex data as a triangle strip does, the only difference is that it uses more index data. So, for two shared triangles both methods would have 4 vertices defined; however, the triangle strip would have an index buffer of [0,1,2,3] and the triangle list would have an index buffer of [0,1,2,0,2,3]. The fact you don't mention an index buffer in any of your posts makes me doubt you are even using one; you should.

    You are absolutely right; I'm not! :) That's why I expanded the vertex buffer to store 3 vertices per triangle. I can now see what an idiot I was.

    Quote: Simply setting positional information the same isn't enough to make use of the post-T&L cache; at data look-up time, without an index, the GPU has no way of knowing that the data at position 4 is the same as the data at position 0. What the index list does is allow the GPU to say 'I know this data is the same, therefore I can use this stored result'.

    So that's how the vertex cache works... In all honesty I always thought using index lists was an unnecessary waste of bandwidth, but then again I never quite understood the benefits of using them in the first place.

    Quote: I suspect you are rendering with glDrawArrays() [...]

    Again, right on the money!

    Quote: [...] which is the slowest of the vertex array functions (well, of the ones which don't pick the data one vertex at a time, anyway); you should be using glDrawElements() or glDrawRangeElements(). These are MUCH faster due to the use of the index buffer (I don't have the results to hand right now, but I'm pretty sure in a vertex-shader-heavy scene I was seeing a ~10x improvement between glDrawArrays and glDrawElements for the data submission). In short: you need to use indices, and you don't need to generate more data.

    Thank you ever so much for the eye-opener. There's not much I can say but to slap myself on the wrist... You have no idea how helpful your post was to me, Phantom! Thanks again! [Edited by - hellraiser on September 18, 2007 8:31:27 PM]
  8. Quote: Original post by Palidine: That's an insignificant difference in time. 865 fps = 1.15 ms per frame; 780 fps = 1.28 ms per frame. That's a difference of about 0.13 ms, i.e. roughly a tenth of a millisecond. There is effectively no difference in framerate. -me

    Very true, but what's troubling me is the decrease in FPS in the first place. What happens when my scenes grow in complexity, and my graphics engine likewise? Should I now be thinking about changing algorithmic strategies and focusing more on generating tri-strip meshes rather than triangles? Is this an isolated issue related to my graphics card alone? I mean, I've got so many questions right now and no answers that it's making me doubt everything I've done so far in my modest graphics engine. Thanks for your reply. :-) PS: I've edited my original post and added some more info at the bottom.
  9. Hello all, I've just converted a class that generated a triangle-strip mesh of a skydome to generating it with triangles. I did this because I've read a couple of articles stating that rendering triangles is slightly faster than strips, since the GPU is able to take advantage of its fast vertex cache. Numerous posts here on GameDev by many gurus state just the same. However, after converting the class I tested it with the old tri-strip (a) and the new triangle (b) skydome meshes, hoping I would get an increase in FPS (if only a small one). Results:

    a) Tri-strip
    Viewport: (0, 0, 1024, 768)
    Run time: 36470 ms (~36 s)
    Total frames: 29984
    Highest frame rate: 865
    Lowest frame rate: 759
    Average frame rate: 832

    b) Triangle mesh
    Viewport: (0, 0, 1024, 768)
    Run time: 90693 ms (~90 s)
    Total frames: 67752
    Highest frame rate: 780
    Lowest frame rate: 692
    Average frame rate: 752

    The two test programs are release builds, were run at 1024x768 full-screen resolution, and render about 20000 triangles, though the skydome mesh alone consists of only 5180 triangles (5184 tri-strip elements in a; 15540 vertices in b). I let test b) run for longer because I couldn't believe the (significant) drop in FPS and was hoping for some miracle to happen... My graphics card is an ATI Mobility Radeon X700 (128MB, PCIe). What could be the reason for the drop in FPS?

    <edit> The new skydome-generating algorithm is in essence the same as before, when it generated a tri-strip mesh. The only differences now are that the vertex buffer is larger, so as to accommodate all the triangle vertices of the skydome ((vertices-2)*3), and that each vertex is stored at every 3rd position in the vertex buffer after the first 3 elements (vertexbuffer[n*3] = triangleVertex, n > 3, 1 being the lowest index). I then iterate through the vertex buffer to finalize the triangles using OpenGL's rules for rendering triangle strips {odd=(n,n+1,n+2); even=(n+1,n,n+2)} (see the sketch below). All this to say that the algorithm isn't suffering from some lack of floating-point precision: two vertices of every triangle in the mesh are shared between adjacent triangles. Therefore the GPU's vertex cache should be kicking in and I shouldn't be seeing a decrease in FPS. </edit> [Edited by - hellraiser on September 18, 2007 6:32:32 PM]
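    For what it's worth, a sketch of the indexed version of that conversion (per the replies above): keep the strip's original N vertices untouched and only generate 3*(N-2) indices from the strip ordering, using the same odd/even rule (0-based here), then render with glDrawElements so the post-T&L cache can do its job:

        std::vector<GLuint> indices;
        indices.reserve((stripVertexCount - 2) * 3);
        for (int n = 0; n < stripVertexCount - 2; ++n)
        {
            if (n % 2 == 0) {                 // even triangle: (n, n+1, n+2)
                indices.push_back(n);
                indices.push_back(n + 1);
            } else {                          // odd triangle: (n+1, n, n+2)
                indices.push_back(n + 1);
                indices.push_back(n);
            }
            indices.push_back(n + 2);
        }
        glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(),
                       GL_UNSIGNED_INT, &indices[0]);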
  10. hellraiser

    Meshing strategies

    First off, many thanks for your reply!

    Quote: Original post by TheAdmiral: Would it be possible to create the three extra quadrants by rotating the original, or do you have constraints on symmetry? Rotation within a plane is guaranteed to be direct, and so the resulting triangles will have the correct winding.

    There aren't constraints on symmetry, but the texture coordinates have to be recalculated.

    Quote: So, assuming that the first quadrant has all its triangles winding correctly, you need to flip exactly those triangles which have undergone an odd number of reflections. If you are doing things the easy way, this will mean that the two adjacent quadrants will need inversion, while the opposing quadrant will have taken care of itself (the two reflections will have 'cancelled out').

    Exactly the results I'm getting!

    Quote: If you don't already know, you can switch the winding order of a triangle by simply swapping any pair of its vertices.

    Alright, that's what I'll do (a quick sketch follows below). I started this thread believing there was some other 'better' way of doing this, but now I see how pointless that was... :-) Again, thanks very much for your reply. It has been very helpful!
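    A tiny sketch of that swap applied to a mirrored quadrant (Vec3 and Triangle are hypothetical types): one reflection reverses the winding, so swap any two vertices to restore it. Mirroring across both axes is two reflections, so no swap is needed there.

        #include <algorithm>

        struct Vec3     { float x, y, z; };
        struct Triangle { Vec3 v[3]; };

        Triangle mirrorX(const Triangle& t)
        {
            Triangle m = t;
            for (int i = 0; i < 3; ++i)
                m.v[i].x = -m.v[i].x;   // one reflection: winding now reversed
            std::swap(m.v[1], m.v[2]);  // swap a pair of vertices to fix it
            return m;
        }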
  11. hellraiser

    Intersection point between lines

    Just to say I've found out what the problem was, and it is not related to either of these functions. They're working just fine!
  12. Hi, I'm creating a circular mesh centred at the origin by generating the mesh for only one quadrant, as the other three quadrants are derived from the first (-x; -x,-z; -z). I'm now having trouble with the triangles, because those in two of the quadrants are getting culled for being back-facing. Turning off back-face culling is not an option, which leaves me no alternative but to generate the triangles in a different way. In your opinion, what would be the best way to overcome the following problem? 1) In the first quadrant I generate the triangle {(0,0,0),(10,0,5),(10,0,0)}. 2) Mirroring the triangle horizontally gives {(0,0,0),(-10,0,5),(-10,0,0)}, which is back-facing. Is there a simple mathematical way to overcome this issue so that all triangles are front-facing, or have I no choice but to write a specialized createTriangle function that does conditional checks on the three points' coordinates passed in and creates a correctly front-facing triangle?
  13. Hi, I'd like someone to have a look at two functions I created. They calculate the intersection point between a line and, respectively, a horizontal line parallel to the x axis or a vertical line parallel to the y axis. The functions seemed to work, but now I'm getting unexpected behaviour in my program and I believe one of these functions is the culprit. Any comments would be welcome. (Both represent line2 implicitly as A2*x + B2*y = C2, with A2 = y2-y1, B2 = x1-x2 and C2 = A2*x1 + B2*y1.)

        // Returns the x coordinate where line2 crosses the horizontal line at height y.
        float _xIntersect(float y, const Line2f& line2) const
        {
            float A2 = line2.y2 - line2.y1,
                  B2 = line2.x1 - line2.x2,
                  C2 = A2*line2.x1 + B2*line2.y1;
            _ASSERT(A2 != 0); // A2 == 0: line2 is itself horizontal, no unique intersection
            return -(B2*y - C2)/A2;
        }

        // Returns the y coordinate where line2 crosses the vertical line at position x.
        float _yIntersect(float x, const Line2f& line2) const
        {
            // A1=1, B1=0, C1=x, det = B1
            float A2 = line2.y2 - line2.y1,
                  B2 = line2.x1 - line2.x2,
                  C2 = A2*line2.x1 + B2*line2.y1;
            _ASSERT(-B2 != 0); // B2 == 0: line2 is itself vertical, no unique intersection
            return (C2 - A2*x)/B2;
        }

    [Edited by - hellraiser on September 14, 2007 11:18:41 PM]
  14. Quote: This is rarely, if ever, a bottleneck in practice, though. Also, depending on what kind of transformation information you need, there are operations to concatenate quaternions, invert them, and transform vectors by them without converting to a matrix first.

    I'm more interested in the bottleneck bit. If, say, you have a scene composed of a few thousand nodes, those being objects with a spatial representation, wouldn't using quats actually be significantly slower than using 3x3 matrices? From the link FippyDarkpaw supplied (thanks!) we can see that 3x3 matrices require 24 fewer operations than quats to calculate a rotation. Multiply that by 3 for rotations on all axes and it becomes a substantial difference, though it is true that only a few objects will actually need to change orientation on a regular basis - most of them are transformation-wise static - so there's not much of a bottleneck there. Where I think the performance penalty may be significant is when either a parent node or the camera changes position/rotation, as that causes every single descendant node to recalculate its model-view matrix, which requires 39 ops to convert from a quat+position-vector representation to a 4x4 matrix one (a sketch of that conversion follows below). I'm concerned that doing this for a few thousand nodes may create a bottleneck that could impact the frame rate. Is there or is there not a significant performance penalty in using quaternions?

    Quote: Anyhow, sorry for how long-winded this got. Hope it helps, though.

    Not at all, you were very helpful and have in fact helped reassure me that I'm not on the wrong path after all. :) Thanks to everybody for their posts!
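    For reference, a sketch of the conversion in question: building a column-major 4x4 model-view matrix (as OpenGL expects for glLoadMatrixf/glMultMatrixf) from a unit quaternion (w, x, y, z) plus a translation. The parameter layout is an assumption:

        void quatToMatrix(float w, float x, float y, float z,
                          float tx, float ty, float tz, float m[16])
        {
            // Rotation part (standard unit-quaternion-to-matrix formula).
            m[0] = 1 - 2*(y*y + z*z); m[4] = 2*(x*y - w*z);     m[8]  = 2*(x*z + w*y);
            m[1] = 2*(x*y + w*z);     m[5] = 1 - 2*(x*x + z*z); m[9]  = 2*(y*z - w*x);
            m[2] = 2*(x*z - w*y);     m[6] = 2*(y*z + w*x);     m[10] = 1 - 2*(x*x + y*y);
            m[3] = 0;                 m[7] = 0;                 m[11] = 0;
            // Translation in the fourth column; homogeneous row last.
            m[12] = tx; m[13] = ty; m[14] = tz; m[15] = 1;
        }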
  15. Hello, I'm currently using quaternion-based rotations in my simple graphics engine, but am now unsure whether there are any good reasons to actually use them. The reason being that, as I understand it, all quaternion transformations have to be converted to a matrix - 4x4 in my case - so that the model-view matrix representing the position and orientation of a set of spatial geometry can be loaded onto the GPU. This implies a processing overhead when calculating the matrix from the quaternion, which wouldn't exist if matrices were used instead. In addition, since I'm using a scene graph to hierarchically represent objects in the virtual world, whenever a parent object or the camera changes position/orientation, all of the parent's descendants have to reconvert their quaternion orientations to matrices _plus_ multiply by the parent's model-view, whereas if matrices were used instead of quaternions only the multiplication would take place. Please note I've not conducted any benchmarks to assess whether I'm incurring a significant performance penalty; I'd rather trust the feedback I get from the community. As I see it, there's nothing better than to ask those who've got the almighty experience and knowledge. Are there any good reasons why I should stick to quaternion-based rotations? Maybe SLERP is a good enough reason (see the sketch below)?
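    Since SLERP is the usual justification for keeping quaternions around, a minimal sketch of it (the Quat layout is an assumption; a and b are expected to be unit quaternions, u in [0,1]):

        #include <cmath>

        struct Quat { float w, x, y, z; };

        Quat slerp(Quat a, const Quat& b, float u)
        {
            float d = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;  // cos(angle)
            if (d < 0) {  // negate one input to take the shorter arc
                d = -d; a.w = -a.w; a.x = -a.x; a.y = -a.y; a.z = -a.z;
            }
            float ka = 1 - u, kb = u;          // near-parallel: plain lerp
            if (d < 0.9995f) {
                float theta = std::acos(d);
                ka = std::sin((1 - u) * theta) / std::sin(theta);
                kb = std::sin(u * theta) / std::sin(theta);
            }
            Quat r = { ka*a.w + kb*b.w, ka*a.x + kb*b.x,
                       ka*a.y + kb*b.y, ka*a.z + kb*b.z };
            return r; // renormalise if the lerp branch was taken
        }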