MARS_999

OpenGL SM4.0 and OpenGL

Recommended Posts

MARS_999    1627
I see the info on the GeForce 8800 is out now, and I noticed that one can have 128 textures per pass vs. 16. So I am assuming this would relieve the need for texture atlases? What are everyone else's thoughts on SM4.0 support and OpenGL... Let the ideas flow.

Red_falcon    151
I have a feeling that OpenGL hasn't been moving forward in recent months. Since Khronos took it over, there have been no signs of life, so a forecast is a bit difficult.

And as someone on this forum said, GLSL is further along than DX9 HLSL (I can neither agree nor disagree, since I don't know HLSL well enough), because SM x.x is bound to DirectX. If that's so, then we can only hope that the new specifications come out as soon as possible.

zedzeek    528
There's a thread at www.opengl.org about the new extensions coming to OpenGL: things like geometry shaders, texture arrays, texture buffers, etc. (no need to have Vista to play with them, either :) )
Though there's not going to be much info out until the GeForce 8800 comes out (which is this month, I believe, so info should be forthcoming soon).

Enrico    316
Quote:
Original post by MARS_999
I see the info on the GeForce 8800 is out now, and I noticed that one can have 128 textures per pass vs. 16.

The G80 has 128 stream processors, which does not mean you can use 128 textures. It has 64 texture units, which provide 32 pixels per clock with 2x anisotropic filtering. I am not sure if that means you can use 32 textures now...

Quote:
So I am assuming this would relieve the need for texture atlases? What are everyone else's thoughts on SM4.0 support and OpenGL... Let the ideas flow.

NVidia's launch demos are written in OpenGL, so we can expect the extensions are already done :)

soconne    105
Quote:
Original post by Enrico

NVidia's launch demos are written in OpenGL, so we can expect the extensions are already done :)


Reasons why I love Nvidia.

Ravyne    14300
One of the beautiful things about OpenGL is that it can be extended directly by vendors and other parties, which lets them expose new functionality at their whim rather than waiting for the next DirectX release.
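For example, picking up a new extension entry point at runtime is just a string check and a function-pointer fetch. A minimal sketch (Windows/WGL; the typedef normally comes from glext.h, error handling omitted):

#include <windows.h>
#include <GL/gl.h>
#include <string.h>

// Typedef for the EXT_framebuffer_blit entry point (normally supplied by glext.h).
typedef void (APIENTRY *PFNGLBLITFRAMEBUFFEREXTPROC)(GLint, GLint, GLint, GLint,
    GLint, GLint, GLint, GLint, GLbitfield, GLenum);
PFNGLBLITFRAMEBUFFEREXTPROC glBlitFramebufferEXT = 0;

// Requires a current GL context. Note strstr can match a prefix of a longer
// extension name; good enough for a sketch.
bool HasExtension(const char* name)
{
    const char* exts = (const char*)glGetString(GL_EXTENSIONS);
    return exts && strstr(exts, name) != 0;
}

void InitBlitExtension()
{
    if (HasExtension("GL_EXT_framebuffer_blit"))
        glBlitFramebufferEXT = (PFNGLBLITFRAMEBUFFEREXTPROC)
            wglGetProcAddress("glBlitFramebufferEXT");
}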

As for the "no progress" comment someone made about GL now that Khronos has taken the reigns, I think thats to be expected. Khronos has already made public their general plans to alter the API pretty radically in GL 3.0. Their basic plan is to strip out all the unnecesary fluff from the hardware/runtime layer leaving only a very lightweight, do-it-the-fastest-way-only, aimed at modern 3D hardware core of OpenGL. They'll move a compatibility layer with old GL as more of a software library - entirely seperate from the core GL functionality. This is the same approach that the very successful OpenGL|ES has taken, and the same general approach that Microsoft has taken with Direct3D 10.

It'll take some time, but GL 3 will be worth the wait IMHO.

MARS_999    1627
Just confirmed these extensions were added in the release 95 drivers for the 8800:

GL_EXT_framebuffer_blit
GL_EXT_framebuffer_multisample
GL_NV_framebuffer_multisample_coverage
WGL_NV_gpu_affinity
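Combined, the first two should finally give a clean way to resolve a multisampled FBO into a regular one. Something like this, if I'm reading the spec right (untested sketch; msaaFBO and resolveFBO are hypothetical handles created elsewhere with EXT_framebuffer_object):

// Copy (and downsample) the multisampled FBO into a normal one.
void ResolveMultisampleFBO(GLuint msaaFBO, GLuint resolveFBO, int w, int h)
{
    glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, msaaFBO);
    glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, resolveFBO);
    glBlitFramebufferEXT(0, 0, w, h,    // source rectangle
                         0, 0, w, h,    // destination rectangle
                         GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); // back to the window
}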

griffin2000    214
The following was posted on the OGL forum:

http://www.opengl.org/discussion_boards/ubb/ultimatebb.php?ubb=get_topic;f=3;t=014831


Quote:


GL_ES
GL_EXTX_framebuffer_mixed_formats
GL_EXT_Cg_shader
GL_EXT_bindable_uniform
GL_EXT_depth_buffer_float
GL_EXT_draw_buffers2
GL_EXT_draw_instanced
GL_EXT_framebuffer_sRGB
GL_EXT_geometry_shader4
GL_EXT_gpu_program_parameters
GL_EXT_gpu_shader4
GL_EXT_packed_float
GL_EXT_shadow_funcs
GL_EXT_texture_array
GL_EXT_texture_buffer_object
GL_EXT_texture_compression_latc
GL_EXT_texture_compression_s3tc
GL_EXT_texture_integer
GL_EXT_texture_sRGB
GL_EXT_texture_shared_exponent
GL_EXT_transform_feedback
GL_EXT_ycbcr_422
GL_NVX_conditional_render
GL_NV_depth_buffer_float
GL_NV_framebuffer_multisample_ex
GL_NV_geometry_shader4
GL_NV_gpu_program4
GL_NV_gpu_shader4
GL_NV_parameter_buffer_object
GL_NV_texture_compression_latc
GL_NV_texture_compression_vtc
GL_NV_transform_feedback
GL_OES_conditional_query
WGL_EXT_framebuffer_sRGB
WGL_EXT_pixel_format_packed_float
WGL_NV_gpu_affinity


And from 'cass' (Mgr. at Nvidia):
Quote:

One of the traditional NVIDIA OpenGL Extensions docs is being readied and should be on developer.nvidia.com Real Soon Now.

We'll be following up with Cg 2.0 support for the new programmability shortly thereafter.

Spoonbender    1258
Quote:

One of the traditional NVIDIA OpenGL Extensions docs is being readied and should be on developer.nvidia.com Real Soon Now.

We'll be following up with Cg 2.0 support for the new programmability shortly thereafter.

Hmmm, Cg is available from within DX as well as OGL, right? (I never really played with Cg, so I'm not sure.)

Does this mean we'll get geometry shaders and everything in DX9 by that route?

zedzeek    528
Yes, you can use Cg with DirectX.
Though since there are going to be no more DirectX releases for WinXP, SM4.0 won't be available on WinXP through DirectX (only through OpenGL), unless MS has a change of heart and releases D3D10 for WinXP (not likely).

Fear not: Vista is coming out January 30, and SM4.0 will be available under DirectX then.

ze moo    192
Quote:
Original post by griffin2000
...
GL_NVX_conditional_render ?
...
GL_OES_conditional_query ?


Any information on those two available yet? conditional_query sounds extremely interesting...

MARS_999    1627
These, plus the geometry shader, are interesting to me:

EXT_texture_array
NV_depth_buffer_float

From the sound of it, would the texture array allow a simpler mosaic, where you use something like a 3D texture instead of an atlas? And would it allow mipmapping and higher-quality filtering?

As for the float depth buffer, I am assuming it will allow better-quality shadows, since the depth buffer isn't clamped to 0-1 anymore and should allow greater precision? Would this get rid of jagged edges on shadows?

Looks like this card is here today already. :)

Enrico    316
Quote:
Original post by MARS_999
These, plus the geometry shader, are interesting to me:

EXT_texture_array
NV_depth_buffer_float

From the sound of it, would the texture array allow a simpler mosaic, where you use something like a 3D texture instead of an atlas? And would it allow mipmapping and higher-quality filtering?

It allows you to bind an array of 2D textures to a single sampler in a shader (hmm, is this confusing? :D)
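A rough sketch of how I expect it to work, going by the EXT_texture_array spec (untested; the 256x256x32 sizes are made up):

// Allocate a 32-layer array of 256x256 RGBA8 textures.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY_EXT, tex);
glTexImage3D(GL_TEXTURE_2D_ARRAY_EXT, 0, GL_RGBA8,
             256, 256, 32,             // width, height, layer count
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Mipmapping and filtering work per layer, unlike an atlas.
glTexParameteri(GL_TEXTURE_2D_ARRAY_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// In the fragment shader (with EXT_gpu_shader4):
//   #extension GL_EXT_texture_array : require
//   uniform sampler2DArray textures;
//   vec4 c = texture2DArray(textures, vec3(uv, layer)); // the layer index is not filtered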

Quote:
As for the float depth buffer, I am assuming it will allow better-quality shadows, since the depth buffer isn't clamped to 0-1 anymore and should allow greater precision? Would this get rid of jagged edges on shadows?

This is a depth buffer with more precision, no longer clamped to [0,1].
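A sketch of what a float depth attachment might look like (untested; GL_DEPTH_COMPONENT32F_NV and glDepthRangedNV are from the NV_depth_buffer_float spec, the 1024x1024 size is made up):

// A 32-bit float depth texture attached to an FBO (e.g. for shadow maps).
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F_NV,
             1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depthTex, 0);

// The extension also adds unclamped, double-precision state setters:
glDepthRangedNV(-1.0, 2.0); // values outside [0,1] are legal now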

See the thread at opengl.org for more information about all the extensions. Mr. Cass from Nvidia has posted there, too ;)

MARS_999    1627
Quote:
Original post by Enrico
It allows you to bind an array of 2D textures to a single sampler in a shader (hmm, is this confusing? :D)
...
See the thread at opengl.org for more information about all the extensions. Mr. Cass from Nvidia has posted there, too ;)


No, the array idea isn't confusing. But what I want is to pack 32 2D textures into one of these arrays and have each slice act like a 2D texture, instead of using texture atlases (I am doing that now, and it has its limits; see the sketch below). I will check out opengl.org.
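Roughly this, per texture (untested; assumes the array was created as in your snippet, with hypothetical sizes and image data):

// Upload one 256x256 RGBA image into layer 'i' of the array texture.
void UploadLayer(GLuint tex, int i, const unsigned char* pixels)
{
    glBindTexture(GL_TEXTURE_2D_ARRAY_EXT, tex);
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY_EXT, 0,
                    0, 0, i,        // xoffset, yoffset, layer
                    256, 256, 1,    // one full slice
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}

Unlike an atlas, filtering and wrapping should never bleed between layers.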

The great thing about this for OpenGL is that you can now have DX10 graphics on XP without needing Vista! So if game developers want a large platform to keep developing on, get in line. Once the 8600/8200 comes out, more users will have this ability on XP. A really bright future for OpenGL IMO; now I can hold off on getting Vista! :) Well, until Gears of War is out, if Microsoft forces you to have Vista like they did with Halo 2...

[Edited by - MARS_999 on November 9, 2006 9:51:22 AM]

MARS_999    1627
Well, the beast is here, and here is what I have so far...

Cubemaps: 8192x8192
3D textures: 2048x2048x2048

I posted the GL info to delphi3d, so hopefully Tom updates it with a full, complete listing.

ze moo    192
Quote:
Original post by ze moo
Quote:
Original post by griffin2000
...
GL_NVX_conditional_render ?
...
GL_OES_conditional_query ?


Any information on those two available yet? conditional_query sounds extremely interesting...

/oops
Looks like the GL_NVX_conditional_render extension has been in the drivers for quite some time already (at least since 2004); there's even a demo in the NVIDIA SDK...

GL_OES_conditional_query seems to be the "approved" version of this extension (pretty cool, since you can render based on an occlusion query result without having to wait for the result on the client :)
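From the demo, the flow looks roughly like this (sketch; the NVX entry-point names are my assumption based on the extension name, the query part is plain ARB_occlusion_query):

// Run an occlusion query on a cheap proxy, then let the GPU decide
// whether the real draw happens; the result never comes back to the CPU.
GLuint query;
glGenQueriesARB(1, &query);

glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
DrawBoundingBox();                  // hypothetical cheap stand-in draw
glEndQueryARB(GL_SAMPLES_PASSED_ARB);

glBeginConditionalRenderNVX(query); // skipped if the query passed 0 samples
DrawExpensiveModel();               // hypothetical real draw
glEndConditionalRenderNVX();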

SimonForsman    7642
If anyone wants to try out the new features without spending a lot of $$$ on a new card, you can always download NVEmulate from http://developer.nvidia.com

Performance will be bad, but at least you'll be able to test things, and even implement and verify your shaders, without spending a lot of money on new hardware.

Guest Anonymous Poster
Quote:
Original post by MARS_999
3D textures: 2048x2048x2048


Can that be correct? It's over 8 gig; in what memory would such a beast be placed?

_the_phantom_    11250
Well, if you had the resources, you could texture over the PCIe bus [grin]

However, the point is that these are just the limits of what the card can address based on the hardware; apart from custom uses, you simply won't allocate resources like that.

Besides, give it a few years and 8 gig on a gfx card won't seem that far-fetched [wink] (ATI's high-end R600 is rumored to have 1 gig of GDDR4 RAM on a 512-bit bus).

wahoodra    100
Quote:
Original post by Anonymous Poster
Quote:
Original post by MARS_999
3D textures: 2048x2048x2048


Can that be correct? It's over 8 gig

Yes; however, those are per-dimension maximums, not a guarantee you can allocate the full cube. A 2048x2x2 texture is just 8K texels.
