

V-man

Member Since 02 Mar 2002
Offline Last Active Feb 06 2013 09:01 AM

#4875595 What is the vertex limit number of glDrawArrays?

Posted by V-man on 23 October 2011 - 06:37 AM

Careful, there are two limits. The first is the obvious hard limit, as pointed out by YogurtEmperor.

The second is a soft limit denoted by GL_MAX_ELEMENTS_VERTICES. You can draw more than this limit, but an implementation is not required to perform at maximum possible speed (and some will not).

Note that this limit is not documented for glDrawArrays, but it is well documented for glDrawRangeElements, which performs an almost identical operation (with indices in addition to a range). Both functions must lock a range of vertices to draw them, and the number of vertices the driver can lock is obviously finite. It is therefore reasonable to assume that the same limit applies to both draw calls.


The only thing that I have seen is that GL_MAX_ELEMENTS_VERTICES applies to glDrawElements and glDrawRangeElements (and perhaps glMultiDrawElements).
glDrawArrays doesn't have a limit. It is possible that the driver does some yo-yo and juggling behind the scenes that causes a performance loss if you send too many vertices, but there is nothing in the GL specification about limits.
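If you do run into that soft limit, the usual workaround is simply to split the draw call. A minimal sketch, not from the post: the function name, the GL_TRIANGLES assumption, and the 4096 placeholder are all mine; in real code soft_limit would come from glGetIntegerv(GL_MAX_ELEMENTS_VERTICES, &soft_limit).

```c
/* Split one big glDrawArrays call into batches no larger than the soft limit.
   Returns the number of batches that would be issued.  The real glDrawArrays
   call is left as a comment because it needs a live GL context. */
static int draw_arrays_in_chunks(int first, int count, int soft_limit)
{
    /* keep each batch a multiple of 3 so GL_TRIANGLES are never split */
    int batch_max = soft_limit - (soft_limit % 3);
    int drawn = 0, batches = 0;

    (void)first; /* used only by the real glDrawArrays call below */

    while (drawn < count) {
        int n = count - drawn;
        if (n > batch_max)
            n = batch_max;
        /* glDrawArrays(GL_TRIANGLES, first + drawn, n); */
        drawn += n;
        ++batches;
    }
    return batches;
}
```

For example, 9999 vertices with a soft limit of 4096 would go out as three batches (4095 + 4095 + 1809).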


#4873104 Capture OpenGL window screen in Real-time ?

Posted by V-man on 16 October 2011 - 06:08 AM

You aren't being very clear. Are you using glReadPixels or glBindTexture?


#4873102 OpenGL 3.x/4.x static libraries?

Posted by V-man on 16 October 2011 - 06:00 AM

OSX will support GL 3.2, according to Apple. But that doesn't matter: you still need to link to a dll (or whatever OSX's equivalent is), and I'm assuming that by default it exposes 1.1 or 1.2.
If you need higher functions, you have to get function pointers.

I think the only different thing about OSX is that if you want GL 3.2, you can only make a forward-compatible context.


#4869348 how to remove texture aliasing .. ? Anti-aliasing techniques required..

Posted by V-man on 05 October 2011 - 05:11 AM

This call is invalid
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST_MIPMAP_LINEAR)

The mipmap filters (GL_*_MIPMAP_*) are only valid for GL_TEXTURE_MIN_FILTER; GL_TEXTURE_MAG_FILTER accepts only GL_NEAREST or GL_LINEAR, since no mipmap selection happens when magnifying.

Also, call glGetError() to catch errors like this one.
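To make the rule concrete, here is a stand-alone sketch (the helper name is made up; the enum values are the standard ones from gl.h) of which (pname, param) pairs glTexParameteri accepts for the two filters:

```c
/* GL enum values copied from gl.h so this compiles without GL headers. */
#define GL_NEAREST                0x2600
#define GL_LINEAR                 0x2601
#define GL_NEAREST_MIPMAP_NEAREST 0x2700
#define GL_LINEAR_MIPMAP_NEAREST  0x2701
#define GL_NEAREST_MIPMAP_LINEAR  0x2702
#define GL_LINEAR_MIPMAP_LINEAR   0x2703
#define GL_TEXTURE_MAG_FILTER     0x2800
#define GL_TEXTURE_MIN_FILTER     0x2801

/* Returns 1 when glTexParameteri would accept (pname, param), 0 when it
   would raise GL_INVALID_ENUM.  Mipmapped filters are only legal as MIN
   filters; the MAG filter has no mipmap level to choose. */
static int is_valid_tex_filter(int pname, int param)
{
    if (pname == GL_TEXTURE_MAG_FILTER)
        return param == GL_NEAREST || param == GL_LINEAR;
    if (pname == GL_TEXTURE_MIN_FILTER)
        return param == GL_NEAREST || param == GL_LINEAR ||
               (param >= GL_NEAREST_MIPMAP_NEAREST &&
                param <= GL_LINEAR_MIPMAP_LINEAR);
    return 0;
}
```

The call quoted above fails this check, which is exactly why glGetError() reports it.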


#4868563 Terrain multitexturing problem

Posted by V-man on 03 October 2011 - 08:33 AM

Yeah, I am sure most of my targets can't run shaders. I hate those onboard video chips. Most of them only support up to OpenGL 1.4 (maybe 1.5). That's why I have to figure out a way without shaders. Anyway, thanks for the advice.


Those would be Intel, and they most likely CAN run shaders. Use DirectX 9 and shaders and it will work great.
As for GL, Intel doesn't update or put much effort into their GL driver. Their drivers tend to be buggy (search these forums and you'll find plenty of posts).

Good luck.


#4863773 Rendering to Cubemap for Environment Mapping

Posted by V-man on 20 September 2011 - 05:23 AM

It works. No, it doesn't add other attachments. If you want to add others, you use GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2, etc., up to whatever the max is for your GPU (query GL_MAX_COLOR_ATTACHMENTS).


#4863024 writing a GLSL compiler form scratch?

Posted by V-man on 18 September 2011 - 06:33 AM

He is talking about the GL_ARB_vertex_shader and GL_ARB_fragment_shader extensions. These extensions specified a "common" shader ASM format similar to DX9 ASM shaders. The only disadvantage compared to modern GLSL is that ATI decided to stop supporting ARB shaders at shader model 2.0 (in DX terminology). Only Nvidia continued up to SM 5.0 (including tessellation shaders). So if you don't care about non-Nvidia GPUs then you can use ASM shaders.

In OpenGL, a modern cross-vendor (Nvidia/AMD/Intel) shader can be written only in GLSL. Actually, the same is true for Direct3D: for modern D3D (10/11) you can only use HLSL to write shaders. The ASM syntax is deprecated and not used anymore. The HLSL compiler is good enough to rely on it.


No, GL_ARB_vertex_shader and GL_ARB_fragment_shader is the first version of GLSL (v 1.00).

The ASM extensions are GL_ARB_vertex_program and GL_ARB_fragment_program which can be found at
http://www.opengl.org/registry/
http://www.opengl.or...tex_program.txt
http://www.opengl.or...ent_program.txt

and there are others from nVidia (GL_NV_vertex_program and friends)

They aren't true ASM shaders; the compiler still transforms them into GPU instructions. The same can be said about D3D9 ASM shaders.

There was a tool from ATI (back when they were a separate entity) called Ashley which would convert your GLSL code and show you the real GPU instructions in text form. It could also handle D3D shaders. You could select the target GPU and see the small differences between the real asm codes.


#4862848 writing a GLSL compiler form scratch?

Posted by V-man on 17 September 2011 - 11:15 AM

The GLSL shader compiler is part of the GL driver, so it is hardware specific. I suggest you look into the Mesa3D project and at some open-source Linux drivers. I don't know of any tutorials about writing Linux drivers, GL drivers, or GLSL compilers.


#4862789 Windows OpenGL and choosing from multiple video cards

Posted by V-man on 17 September 2011 - 07:23 AM

I think that is because opengl32.dll sends calls to the real driver, which is written by the vendor. So if your primary monitor is on the nvidia card, it is the nvidia GL driver that executes commands, and it certainly isn't made for your secondary card if that one is from another vendor.

If both cards are from the same vendor (AMD or nvidia), then it should work. They have some "special" coding in their drivers to handle multi-card situations.

If they are different, then I think the second card just runs the default MS software implementation (opengl32.dll). Am I correct?


#4862046 Passing TexCoords to GLSL

Posted by V-man on 15 September 2011 - 06:56 AM

From this

glBindBuffer(GL_ARRAY_BUFFER, objects[i]->buffer);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(vert), BUFFER_OFFSET(0));
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(vert), BUFFER_OFFSET(12));

I would say that you have interleaved your vertices and texcoords, which is fine.
I didn't see how you are creating your VBO, so you should check that out.

Also, I didn't see it, but you need to use glGetAttribLocation to get the locations for your "in vertex" and "in texcoord" attributes.
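For reference, the stride and offsets in the quoted code are consistent with an interleaved struct like this. A sketch only: the member names are my assumptions; the original code only shows sizeof(vert), BUFFER_OFFSET(0), and BUFFER_OFFSET(12).

```c
#include <stddef.h> /* offsetof */

/* Interleaved vertex matching the quoted glVertexAttribPointer calls:
   3 position floats at byte 0 (attribute 0) and 2 texcoord floats at
   byte 12 (attribute 1), with sizeof(vert) == 20 used as the stride. */
typedef struct vert {
    float position[3]; /* attribute 0, BUFFER_OFFSET(0)  */
    float texcoord[2]; /* attribute 1, BUFFER_OFFSET(12) */
} vert;
```

If the struct you upload into the VBO doesn't match these offsets and this stride, the texcoords will read garbage, which is why checking the VBO creation code matters.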


#4861819 Projecting a Texture Under Mouse

Posted by V-man on 14 September 2011 - 06:39 PM

Well, you can't just draw a quad over some smoothed terrain because it won't warp. So you either have to use a shadow-map method, which is pretty stupid, or, if you are using a heightmap: have a grid mesh for the mouse quad, and as the position changes, sample the terrain heightmap so that the mouse quad's vertices map exactly like the terrain's vertices, because they are using the same heightmap.


I don't think he is drawing a quad over the terrain.
I think he is saying he is projecting a texture onto the terrain and getting some clamping-mode behavior. Which clamp mode are you using: GL_CLAMP, GL_CLAMP_TO_EDGE, or GL_CLAMP_TO_BORDER?

You can try GL_CLAMP_TO_BORDER and use a border color of {0, 0, 0, 0} so it comes out black.


#4860784 Problem rendering geometry with shaders

Posted by V-man on 12 September 2011 - 12:16 PM

glVertexAttribPointer(vloc, 3, GL_FLOAT, GL_FALSE, 0, 0); // note last argument not used for vbos

FYI, actually, the last parameter is used: it is the byte offset of the attribute data relative to the start of the bound VBO.
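A quick sketch of what that byte offset means in practice, assuming a hypothetical non-interleaved VBO with all positions stored first and all texcoords after them (the helper name is made up):

```c
#include <stddef.h> /* size_t */

/* With a VBO laid out as [all positions][all texcoords], the texcoord
   attribute starts right after num_verts tightly packed vec3 positions. */
static size_t texcoord_byte_offset(size_t num_verts)
{
    return num_verts * 3 * sizeof(float);
}

/* Usage in real code (needs a GL context and GL headers):
   glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
   glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0,
                         (const void *)texcoord_byte_offset(num_verts)); */
```

Passing 0 as the last argument is therefore only correct when the attribute really does start at the beginning of the buffer.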


#4859285 Why VBO dosen't help much for sprites?

Posted by V-man on 08 September 2011 - 05:53 PM

The performance is probably not any better because of the state changes that you are performing. Perhaps you are changing blending state, perhaps you are calling glUseProgram for each quad rendered, perhaps glBindTexture. A VBO is not a magic solution that solves all problems. You have to understand a little bit of the graphics pipeline and stop blindly following tutorials.


#4859079 [?] Leaving Immediate Mode (drawing a VBO)

Posted by V-man on 08 September 2011 - 09:39 AM

Are VBOs, VAOs, and vertex arrays all the same thing?


No, the 3 are different. A vertex array is basically your vertices in RAM. GL sends them to the video card every time you make a draw call.
A VBO can potentially be stored in VRAM. The driver decides what to do with it, and there is no way to know what the driver is doing.
===Just use VBOs like everyone else is doing.

VAOs were introduced as an extension (ARB_vertex_array_object), I believe.
Then they went into core with GL 3.0 (glGenVertexArrays and glBindVertexArray), but you are not forced to use them. GL 3.1 forces you to render everything with a VAO bound.
===A VAO is a wrapper object for the glBindBuffer and gl****Pointer calls. It is supposed to increase performance, since the driver has less validation work to do every time you call glBindVertexArray.

VBO
http://www.opengl.or...x_Buffer_Object

VAO
http://www.opengl.or...ex_Array_Object

More VBO
http://www.opengl.org/wiki/VBO_-_more
http://www.opengl.or...-_just_examples

and good old vertex arrays
http://www.opengl.or...i/Vertex_Arrays

and Tutorials for the GL 3.x generation
http://www.opengl.org/wiki/Tutorials

so yes, you will be using VAOs in conjunction with VBOs, just like those guys in the Tutorials.


#4858548 [?] Leaving Immediate Mode (drawing a VBO)

Posted by V-man on 07 September 2011 - 04:55 AM

What version of OpenGL am I using? Use glGetString to find out what the driver supports.

VBOs require GL 1.5.
Also, you seem to be using GLSL 1.10, which requires GL 2.0.

How could I draw just the vertex array? glDrawArrays

Is an Array Buffer a VBO ? Yes, it is a VBO for vertices.

Is this the best method to do all this? Yes

Will this attain better speed than immediate mode? That depends on where the bottleneck is. Were you CPU limited by all the immediate-mode calls? Is your game the next generation 3D shooter like Doom 4 or Half-Life 3 or Crysis or Batman or Ghostbusters?

Why does the primitive draw flat? (I specify different Z coords)
gl_Position = vec4(position, 0.0, 1.0);

That line is the reason: position is a vec2 there, and the Z coordinate is hardcoded to 0.0, so whatever Z you specify never reaches gl_Position. Declare position as a vec3 and write gl_Position = vec4(position, 1.0); instead.




