
Grumple

Member Since 25 Jan 2010
Offline Last Active Oct 23 2014 07:16 AM

Topics I've Started

Sharing transformed vertices in subsequent render stages?

20 October 2014 - 02:28 PM

Hello,

 

I am working on a problem where I want to render 3D objects in pseudo-2D by transforming to NDC coordinates in my vertex shader.  The models I'm drawing have numerous components rendered in separate stages, but all components of a given model are based on the same single origin point.

 

This all works fine, but the vertex shader for each stage of the model render redundantly transforms from Cartesian XYZ to NDC coordinates before performing its real work.  Instead, I'd like to perform an initial conversion stage, populating a buffer of NDC coordinates, so that all subsequent vertex shaders can just accept an NDC coordinate as input.  I'm also looking to avoid doing this on the CPU, as I may have many thousands of model instances to work with.

 

So, with an input buffer containing thousands of Cartesian positions, and an equal-sized output buffer to receive the transformed NDC coordinates, what are my best options for performing the work on the GPU?  Is this something I need to look to OpenCL for?

 

Being fairly unfamiliar with OpenCL, I was thinking of setting things up so that the first component rendered for my models 'knows' it is first, has its vertex shader do the standard transform to NDC, and somehow writes the results back to an 'NDC coord buffer'.  All subsequent vertex shaders for the various model components would then use the NDC coord buffer as input, skipping the redundant conversion effort.

 

Is this reasonable?
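
EDIT: From what I've read, transform feedback may be the pure-GL fit here: run a vertex-only pass over the positions and capture the transformed output into a buffer that later stages bind as a regular vertex attribute.  A minimal sketch of what I have in mind, assuming a program whose vertex shader writes out an 'ndcPosition' varying (all buffer/program names below are hypothetical):

    //Declare which varying to capture; must be done before linking the program
    const GLchar *varyings[] = { "ndcPosition" };
    glTransformFeedbackVaryings(transformProg, 1, varyings, GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(transformProg);

    //Destination buffer for the NDC coordinates, one vec4 per input position
    glBindBuffer(GL_ARRAY_BUFFER, ndcBuffer);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 4 * sizeof(GLfloat), NULL, GL_DYNAMIC_COPY);

    //Conversion pass: no fragments are needed, so discard rasterization entirely
    glEnable(GL_RASTERIZER_DISCARD);
    glUseProgram(transformProg);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, ndcBuffer);
    glBeginTransformFeedback(GL_POINTS);
    glBindVertexArray(positionsVAO);          //VAO exposing the Cartesian positions
    glDrawArrays(GL_POINTS, 0, vertexCount);  //one point per input position
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);

Subsequent render stages would then bind ndcBuffer as an input attribute and skip the redundant transform.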


glDrawElementsInstanced with subsets/offsets of larger data sets?

01 October 2014 - 11:45 AM

This is probably a silly question, but I've managed to get myself turned around and I'm second-guessing my understanding of instancing.

 

I want to implement a label renderer using instanced billboards.  I have a VBO of 'label positions', as 2D vertices, one per label.  My 'single character billboard' is what I want to instance, and it sits in its own VBO.  For engine/architectural reasons I have an index buffer for the billboard, even though it is not saving me much in terms of vertex counts.

 

For various reasons I still want to loop through individual labels for my render, but I planned to call glDrawElementsInstanced, specifying my 'billboard model' along with the character count for a label.  However, I can't see how to tell glDrawElementsInstanced where to start in the other attribute-array VBOs for a given label.  So, if I am storing a VBO of per-character texture coords for my font, how do I get glDrawElementsInstanced to start at the texture coord set of the first character of the current label being rendered?

 

I see that glDrawElementsInstancedBaseVertex exists, but I'm getting confused about what the base vertex value would do here.  If my raw/instanced billboard vertices sit at indices 0..3 in their VBO, but the 'unique' attributes of the current label start at element 50 in their VBO, what does a base vertex of 50 do?  I was under the impression that it would just cause GL to try to load billboard vertices from index+basevertex in that VBO, which is not what I want.

 

I guess to sum my question up: if I have an instanced rendering implementation, with various attribute divisors for different vertex attributes, how can I initiate an instanced render of the base model, but with vertex attributes starting from offsets into the associated VBOs, while abiding by the attribute divisors that have been set up?
 

EDIT: I should mention that I've bound all the related VBOs under a Vertex Array Object.  By default I wanted all labels to share VBO memory to avoid state changes, etc.  It seems like there must be a way to render just N instances of my model starting at some mid-point of my vertex attrib arrays.
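
For reference, the two routes I can see (all names below are hypothetical): on GL 4.2+, glDrawElementsInstancedBaseInstance takes a base-instance parameter that offsets only the instanced (divisor != 0) attributes, which sounds like exactly this case; on older GL, the fallback would be to re-point each instanced attribute at the label's offset before its draw call.

    //GL 4.2+: baseInstance offsets fetches from instanced (divisor != 0) arrays
    glBindVertexArray(labelVAO);
    glDrawElementsInstancedBaseInstance(
        GL_TRIANGLE_STRIP,    //billboard topology (hypothetical)
        4,                    //index count of the single-character billboard
        GL_UNSIGNED_INT,
        (GLvoid*)0,           //byte offset into the billboard index buffer
        labelCharCount,       //one instance per character of this label
        labelFirstChar);      //skip this many entries in the instanced VBOs

    //Pre-4.2 fallback: rebind the instanced attribute at the label's offset
    glBindBuffer(GL_ARRAY_BUFFER, perCharTexCoordVBO);
    glVertexAttribPointer(texCoordLoc, 4, GL_FLOAT, GL_FALSE, 0,
                          (GLvoid*)(labelFirstChar * 4 * sizeof(GLfloat)));
    glDrawElementsInstanced(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_INT, (GLvoid*)0, labelCharCount);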


GL 3.3 glDrawElements VAO issue...AMD bug or my mistake?

20 August 2014 - 08:40 AM

Hello,

 

I'm running out of ideas trying to debug an issue with a basic line render in the form of a 'world axis' model.

 

The idea is simple:  

 

I create a VAO with float line vertices (3 floats per vertex), unsigned int indices (1 per vertex), and unsigned byte colors (3 bytes per vertex).

I allocate room and pack the arrays such that the first 12 vertices/indices/colors are for uniquely colored lines representing my +/- world axes, followed by a bunch of lines forming a 2D grid across the XZ plane.
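
For context, a rough sketch of the attribute setup described above (handles and attribute locations here are placeholders, not my actual code):

    glBindVertexArray(axisVAO);

    glBindBuffer(GL_ARRAY_BUFFER, positionVBO);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (GLvoid*)0);        //3 floats per vertex
    glEnableVertexAttribArray(0);

    glBindBuffer(GL_ARRAY_BUFFER, colorVBO);
    glVertexAttribPointer(1, 3, GL_UNSIGNED_BYTE, GL_TRUE, 0, (GLvoid*)0); //3 bytes per vertex, normalized to 0..1
    glEnableVertexAttribArray(1);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBO);                       //GL_UNSIGNED_INT indices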

Once the data is loaded, I render by binding my VAO and activating a basic shader, then drawing the model in two stages: one glDrawElements call for the axis lines after glLineWidth is set to 2, and a separate glDrawElements for the grid lines with thinner lines.

Whenever I draw this way, the last 6 lines of my grid (i.e. the end of the VAO arrays) show up with random colors, although the lines themselves are correctly positioned.  If I instead make a single glDrawElements call for all lines (i.e. world axes and grid lines at once), the entire model appears as expected, with correct colors everywhere.

 

This is only an issue on some ATI cards (e.g. a Radeon Mobility 5650); it works on NVidia with no problem.

 

I can't see what I could have done wrong if the lines are positioned correctly (i.e. my VAO index counts/offsets must be OK for glDrawElements), and I don't see how I could be packing the data into the VAO incorrectly if everything appears correct via a single glDrawElements call instead of two calls separated by a glLineWidth() change.

 

Any suggestions?  glGetError(), etc. return no problems at all...

 

Here is some example render code, although I know it is just a small piece of the overall picture.  This causes the problem:

    TFloGLSL_IndexArray *tmpIndexArray = m_VAOAttribs->GetIndexArray();

    //The first AXIS_MODEL_BASE_INDEX_COUNT elements are for the base axes..draw these thicker
    glLineWidth(2.0f);
    glDrawElements(GL_LINES, AXIS_MODEL_BASE_INDEX_COUNT, GL_UNSIGNED_INT,
                   (GLvoid*)tmpIndexArray->m_VidBuffRef.m_GLBufferStartByte);

    //The remaining elements are for the grid..draw these thin
    int gridLinesElementCount = m_VAOAttribs->GetIndexCount() - AXIS_MODEL_BASE_INDEX_COUNT;
    if(gridLinesElementCount > 0)
    {
        glLineWidth(1.0f);

        //Indices are GL_UNSIGNED_INT, so step the byte offset by sizeof(GLuint)
        glDrawElements(GL_LINES, gridLinesElementCount, GL_UNSIGNED_INT,
                       (GLvoid*)(tmpIndexArray->m_VidBuffRef.m_GLBufferStartByte + (AXIS_MODEL_BASE_INDEX_COUNT * sizeof(GLuint))));
    }

This works just fine:

    glDrawElements(GL_LINES, m_VAOAttribs->GetIndexCount(), GL_UNSIGNED_INT, 
                    (GLvoid*)tmpIndexArray->m_VidBuffRef.m_GLBufferStartByte);

GLSL 330 sampler2D array access via vertex attribute?

13 August 2014 - 02:06 PM

Hello,

 

After a lot of programming and debugging I feel like a dumbass.  I set up an entire billboard shader system around instancing, and as part of that design I was passing a vertex attribute representing the texture unit to sample from for a given billboard instance.

 

After setting all of this up I was getting strange artifacts, and found that GLSL 330 only officially supports constant (compile-time) expressions as indices into a uniform sampler2D array.

 

Is there any nice way around this limitation, short of compiling against a newer version of GLSL?  Is there at least a way to check whether the local driver supports the sampler index as a vertex attribute, via an extension or something?  I tested my implementation on an NVidia card and it worked despite the spec, but ATI (as usual) seems stricter.

 

For now I have patched the problem by manually branching *shudder* on the index and accessing the equivalent sampler via a constant in my fragment shader.

 

For example:

        if(passModelTexIndex == 0)
        {
            //'texture' replaces the pre-330-core texture2D entry point
            fragColor = texture(texturesArray_Uni[0], passTexCoords);
        }
        else if(passModelTexIndex == 1)
        {
            fragColor = texture(texturesArray_Uni[1], passTexCoords);
        }
        else
        {
            fragColor = vec4(0.0, 0.0, 0.0, 1.0);
        }
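
One branch-free alternative I'm considering is a texture array: assuming all the billboard textures share dimensions and format, a single sampler2DArray can be indexed with a dynamic layer value in GLSL 330.  A rough sketch (the array texture itself is hypothetical; the other names match my shader):

    #version 330 core

    uniform sampler2DArray texturesArray_Uni;  //one layer per billboard texture

    flat in int passModelTexIndex;             //per-instance layer index
    in vec2 passTexCoords;

    out vec4 fragColor;

    void main()
    {
        //The third coordinate selects the array layer and may vary per fragment
        fragColor = texture(texturesArray_Uni, vec3(passTexCoords, float(passModelTexIndex)));
    }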

Orthographic-like billboards in perspective projection?

30 July 2014 - 01:27 PM

Hello,

 

I've got an old billboard renderer implemented in fixed-function OpenGL.  The billboards technically represent locations in 3D space, but I want them to render at the same pixel dimensions no matter how far away they are in my 3D (perspective projection) scene.

 

The way I handled this in the current (ancient) implementation is as follows:

  • Render my scene in standard 3D perspective projection, without my billboards.
  • Switch to orthographic projection, loop through the billboards client-side, and transform their 3D positions into 2D screen coords.
  • Render all billboards as 2D objects with a constant size in pixels.

This is horrible for a number of reasons, the first being that it uses the oldest of old GL functionality.  I'm switching to shaders and using OpenGL instancing as the basis for the update.

 

One of my main goals is to eliminate the client-side step of projecting the billboards into 2D screen space and rendering them via a second pass in an orthographic projection.

 

As far as I can tell, the only way to render the billboards within the perspective projection, while having them all maintain a fixed pixel size on screen, is to re-size each billboard dynamically from within the vertex shader, based on distance from the eye.

 

Is the above assumption correct?  Is there any simpler way to go about this that eliminates the client-side transforms/orthographic render stage while maintaining constant pixel dimensions for billboards in a 3D scene?

 

For what it's worth, I'm intending to render the shader-based billboards using OpenGL instancing, such that I can store a vertex array with each point representing a unique billboard, and my 'instance' data containing the base corner offsets.
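
To make the vertex-shader idea concrete, here is a rough sketch of what I have in mind, assuming the corner offsets are stored in pixels and a viewport-size uniform is available (all names are placeholders).  Scaling the offset by the clip-space w cancels the later perspective divide, so the billboard keeps a constant on-screen size:

    #version 330 core

    uniform mat4 mvp_Uni;
    uniform vec2 viewportSize_Uni;     //framebuffer size in pixels

    in vec3 billboardCenter_Attr;      //one per billboard
    in vec2 cornerOffsetPx_Attr;       //base-quad corner offset, in pixels

    void main()
    {
        vec4 clipPos = mvp_Uni * vec4(billboardCenter_Attr, 1.0);

        //Pixel offset -> NDC offset (NDC spans 2 units across the viewport),
        //scaled by w so the perspective divide leaves it unchanged
        vec2 ndcOffset = (cornerOffsetPx_Attr * 2.0) / viewportSize_Uni;
        clipPos.xy += ndcOffset * clipPos.w;

        gl_Position = clipPos;
    }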

 

Thanks!

