OpenGL ATI/AMD HD SM4.0 Features



Anyone know which SM4.0 features are supported and working on the ATI/AMD HD 2xxx series cards? A glxinfo printout (or something similar) of the supported OpenGL extensions would be an excellent answer (example below). Sorry - this seems like something that should be easy to google, but I just couldn't find good enough info. One specific thing I am wondering about is whether the newest ATI/AMD cards support the transform feedback extension or something similar.

Example glxinfo output from a GeForce 8600:

OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce 8600 GTS/PCI/SSE2
OpenGL version string: 2.1.1 NVIDIA 100.14.03
OpenGL extensions:
GL_ARB_color_buffer_float, GL_ARB_depth_texture, GL_ARB_draw_buffers, GL_ARB_fragment_program,
GL_ARB_fragment_program_shadow, GL_ARB_fragment_shader, GL_ARB_half_float_pixel, GL_ARB_imaging,
GL_ARB_multisample, GL_ARB_multitexture, GL_ARB_occlusion_query, GL_ARB_pixel_buffer_object,
GL_ARB_point_parameters, GL_ARB_point_sprite, GL_ARB_shadow, GL_ARB_shader_objects,
GL_ARB_shading_language_100, GL_ARB_texture_border_clamp, GL_ARB_texture_compression,
GL_ARB_texture_cube_map, GL_ARB_texture_env_add, GL_ARB_texture_env_combine, GL_ARB_texture_env_dot3,
GL_ARB_texture_float, GL_ARB_texture_mirrored_repeat, GL_ARB_texture_non_power_of_two,
GL_ARB_texture_rectangle, GL_ARB_transpose_matrix, GL_ARB_vertex_buffer_object, GL_ARB_vertex_program,
GL_ARB_vertex_shader, GL_ARB_window_pos, GL_ATI_draw_buffers, GL_ATI_texture_float,
GL_ATI_texture_mirror_once, GL_S3_s3tc, GL_EXT_texture_env_add, GL_EXT_abgr, GL_EXT_bgra,
GL_EXT_blend_color, GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate, GL_EXT_blend_minmax,
GL_EXT_blend_subtract, GL_EXT_compiled_vertex_array, GL_EXT_Cg_shader, GL_EXT_bindable_uniform,
GL_EXT_depth_bounds_test, GL_EXT_draw_buffers2, GL_EXT_draw_instanced, GL_EXT_draw_range_elements,
GL_EXT_fog_coord, GL_EXT_framebuffer_blit, GL_EXT_framebuffer_multisample, GL_EXT_framebuffer_object,
GL_EXTX_framebuffer_mixed_formats, GL_EXT_framebuffer_sRGB, GL_EXT_geometry_shader4,
GL_EXT_gpu_program_parameters, GL_EXT_gpu_shader4, GL_EXT_multi_draw_arrays,
GL_EXT_packed_depth_stencil, GL_EXT_packed_float, GL_EXT_packed_pixels, GL_EXT_pixel_buffer_object,
GL_EXT_point_parameters, GL_EXT_rescale_normal, GL_EXT_secondary_color, GL_EXT_separate_specular_color,
GL_EXT_shadow_funcs, GL_EXT_stencil_two_side, GL_EXT_stencil_wrap, GL_EXT_texture3D,
GL_EXT_texture_array, GL_EXT_texture_buffer_object, GL_EXT_texture_compression_latc,
GL_EXT_texture_compression_rgtc, GL_EXT_texture_compression_s3tc, GL_EXT_texture_cube_map,
GL_EXT_texture_edge_clamp, GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3,
GL_EXT_texture_filter_anisotropic, GL_EXT_texture_integer, GL_EXT_texture_lod, GL_EXT_texture_lod_bias,
GL_EXT_texture_mirror_clamp, GL_EXT_texture_object, GL_EXT_texture_sRGB, GL_EXT_texture_shared_exponent,
GL_EXT_timer_query, GL_EXT_vertex_array, GL_IBM_rasterpos_clip, GL_IBM_texture_mirrored_repeat,
GL_KTX_buffer_region, GL_NV_blend_square, GL_NV_copy_depth_to_color, GL_NV_depth_buffer_float,
GL_NV_depth_clamp, GL_NV_fence, GL_NV_float_buffer, GL_NV_fog_distance, GL_NV_fragment_program,
GL_NV_fragment_program_option, GL_NV_fragment_program2, GL_NV_framebuffer_multisample_coverage,
GL_NV_geometry_shader4, GL_NV_gpu_program4, GL_NV_half_float, GL_NV_light_max_exponent,
GL_NV_multisample_filter_hint, GL_NV_occlusion_query, GL_NV_packed_depth_stencil,
GL_NV_parameter_buffer_object, GL_NV_pixel_data_range, GL_NV_point_sprite, GL_NV_primitive_restart,
GL_NV_register_combiners, GL_NV_register_combiners2, GL_NV_texgen_reflection,
GL_NV_texture_compression_vtc, GL_NV_texture_env_combine4, GL_NV_texture_expand_normal,
GL_NV_texture_rectangle, GL_NV_texture_shader, GL_NV_texture_shader2, GL_NV_texture_shader3,
GL_NV_transform_feedback, GL_NV_vertex_array_range, GL_NV_vertex_array_range2, GL_NV_vertex_program,
GL_NV_vertex_program1_1, GL_NV_vertex_program2, GL_NV_vertex_program2_option, GL_NV_vertex_program3,
GL_NVX_conditional_render, GL_SGIS_generate_mipmap, GL_SGIS_texture_lod, GL_SGIX_depth_texture,
GL_SGIX_shadow, GL_SUN_slice_accum
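
In case it helps whoever answers: here's roughly the runtime check I'd do against the GL2-style extension string. Just a sketch - the transform feedback names below are the NV one from the list above plus the EXT variant I've seen mentioned:

#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Returns 1 if `name` appears as a complete, space-delimited token in
   the GL_EXTENSIONS string. A plain strstr can false-positive when one
   extension's name is a prefix of another's. Requires a current GL
   context. */
static int has_extension(const char *name)
{
    const char *all = (const char *)glGetString(GL_EXTENSIONS);
    const char *p = all;
    size_t len = strlen(name);

    if (!all)
        return 0;
    while ((p = strstr(p, name)) != NULL) {
        int starts = (p == all) || (p[-1] == ' ');
        int ends   = (p[len] == ' ') || (p[len] == '\0');
        if (starts && ends)
            return 1;
        p += len;
    }
    return 0;
}

/* e.g.:
   printf("transform feedback: %d\n",
          has_extension("GL_NV_transform_feedback") ||
          has_extension("GL_EXT_transform_feedback")); */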

These are the extensions exposed on an AMD/ATI Radeon HD 2900 XT with Catalyst 7.10 drivers (that's the card I've got). Maybe I've got an old glext header file (quite possible - it's version 39). ATI/AMD has very bad drivers. AMD says that when OpenGL 3.0 comes, they'll release OpenGL 3 drivers for the Radeon HD 2xxx series. Looking forward to Longs Peak.

ATI Technologies Inc.
ATI Radeon HD 2900 XT
2.0.6956 Release
GL_ARB_depth_texture GL_ARB_draw_buffers GL_ARB_fragment_program GL_ARB_fragment_shader GL_ARB_multisample GL_ARB_multitexture GL_ARB_occlusion_query GL_ARB_point_parameters GL_ARB_point_sprite GL_ARB_shader_objects GL_ARB_shading_language_100 GL_ARB_shadow GL_ARB_shadow_ambient GL_ARB_texture_border_clamp GL_ARB_texture_compression GL_ARB_texture_cube_map GL_ARB_texture_env_add GL_ARB_texture_env_combine GL_ARB_texture_env_crossbar GL_ARB_texture_env_dot3 GL_ARB_texture_float GL_ARB_texture_mirrored_repeat GL_ARB_texture_non_power_of_two GL_ARB_texture_rectangle GL_ARB_transpose_matrix GL_ARB_vertex_buffer_object GL_ARB_vertex_program GL_ARB_vertex_shader GL_ARB_window_pos GL_ATI_draw_buffers GL_ATI_envmap_bumpmap GL_ATI_fragment_shader GL_ATI_meminfo GL_ATI_separate_stencil GL_ATI_texture_compression_3dc GL_ATI_texture_env_combine3 GL_ATI_texture_float GL_EXT_bgra GL_EXT_blend_color GL_EXT_blend_func_separate GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_compiled_vertex_array GL_EXT_copy_texture GL_EXT_draw_range_elements GL_EXT_fog_coord GL_EXT_framebuffer_object GL_EXT_gpu_program_parameters GL_EXT_multi_draw_arrays GL_EXT_packed_depth_stencil GL_EXT_packed_pixels GL_EXT_point_parameters GL_EXT_rescale_normal GL_EXT_secondary_color GL_EXT_separate_specular_color GL_EXT_shadow_funcs GL_EXT_stencil_wrap GL_EXT_subtexture GL_EXT_texgen_reflection GL_EXT_texture3D GL_EXT_texture_compression_s3tc GL_EXT_texture_cube_map GL_EXT_texture_edge_clamp GL_EXT_texture_env_add GL_EXT_texture_env_combine GL_EXT_texture_env_dot3 GL_EXT_texture_filter_anisotropic GL_EXT_texture_lod_bias GL_EXT_texture_mirror_clamp GL_EXT_texture_object GL_EXT_texture_rectangle GL_EXT_vertex_array GL_KTX_buffer_region GL_NV_blend_square GL_NV_texgen_reflection GL_SGIS_generate_mipmap GL_SGIS_texture_edge_clamp GL_SGIS_texture_lod GL_WIN_swap_hint WGL_EXT_swap_control

Quote:
Original post by Vilem Otte
These are the extensions exposed on an AMD/ATI Radeon HD 2900 XT with Catalyst 7.10 drivers (that's the card I've got). Maybe I've got an old glext header file (quite possible - it's version 39). ATI/AMD has very bad drivers. AMD says that when OpenGL 3.0 comes, they'll release OpenGL 3 drivers for the Radeon HD 2xxx series. Looking forward to Longs Peak.
...


Thanks for the info.

Wow! If this is correct, they have ABSOLUTELY NO OpenGL Shader Model 4.0 support in their current driver!

Quote:
Original post by TimothyFarrar
Wow! If this is correct, they have ABSOLUTELY NO OpenGL Shader Model 4.0 support in their current driver!

It's worse than that - they don't even expose features that R600 trivially has, ones that don't even require an extension. For example, you may note that VERTEX_TEXTURE_IMAGE_UNITS is still zero, even though R600 is fully capable of vertex texture fetch.
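
If you want to verify this on your own driver, the check is a one-liner - a minimal sketch, assuming a current GL 2.0 context (the GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS enum may need glext.h with older headers):

#include <stdio.h>
#include <GL/gl.h>
#include <GL/glext.h>

void report_vertex_texture_units(void)
{
    GLint units = 0;
    /* Nonzero means the driver exposes vertex texture fetch;
       the current R600 driver reports 0 despite capable hardware. */
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &units);
    printf("GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS = %d\n", units);
}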

As far as OpenGL is concerned, R600 has the same features as R580. Quite disappointing, but ATI/AMD seem determined to ditch OpenGL until the next Doom/whatever high-profile game comes along that needs it. At that time, they'll add just the features that the game needs.

I really tend to like ATI/AMD, but their OpenGL driver situation is pathetic and has been for quite some time.

Quote:
Original post by AndyTX
As far as OpenGL is concerned, R600 has the same features as R580. Quite disappointing, but ATI/AMD seem determined to ditch OpenGL until the next Doom/whatever high-profile game comes along that needs it.

GL2 is going to be legacy before too long. They're probably concentrating on the GL3 implementation. Since the new API forces them to rewrite large portions of the driver, I really hope they take the opportunity to do it right this time.

Quote:
Original post by Yann L
GL2 is going to be legacy in not too long. They're probably concentrating on the GL3 implementation. Since the new API forces them to rewrite large portions of the driver, I really hope they take the opportunity to do it right this time.

I certainly hope so too, as it's great to have some API options. Still, even after having read through the specs for GL3.0, I just can't get too excited compared to D3D10 (it seems clear that GL is now the one playing "catch-up"). Ironically, ATI pulling out a working OpenGL driver may be more interesting than the API update itself ;)

Quote:
Original post by AndyTX
I certainly hope so too, as it's great to have some API options. Still, even after having read through the specs for GL3.0, I just can't get too excited compared to D3D10 (it seems clear that GL is now the one playing "catch-up"). Ironically, ATI pulling out a working OpenGL driver may be more interesting than the API update itself ;)


As for the D3D10 vs GL2, at least with the NVidia cards, it seems to me as if all the new D3D10/SM4.0 functionality is exposed in GL2 via extensions (which are very easy to use and work perfectly in the framework of GL2, BTW). And for those wanting to use D3D10/SM4.0 features in Windows XP, GL2 is the way to go as NVidia's drivers have GL2 SM4.0 across the board (Linux and Windows XP/Vista).

Even Apple, now that the 8600M cards are in the MacBook Pros, is starting to support SM4.0 features (transform feedback and geometry shaders, I think, are in the newest update, according to discussion on Apple's mac-opengl mailing list). Now if they would just support texture buffer objects, SM4.0 would become really useful on MacOSX.

I think Apple is probably mainly waiting on the GL2 SM4.0 common standards to be formalized since they are supporting GL_EXT_transform_feedback (which hasn't been standardized yet) instead of NV_transform_feedback.

Would be quite funny if Apple ended up with better GL drivers for the R600/HD2xxx cards than those put out by ATI/AMD for Windows!

It is really too bad AMD/ATI is sandbagging like this; I'd bet that their lack of work on GL2 SM4.0 driver support is a major holdup for the standards process (of course I could be 100% wrong here too :)

As for GL3, I'm going to take a stab that GL3 is a long way off (>1 year, >2 years?) to the point where there is enough vendor support to make it practical for us GL people. Sure NVidia will probably be there right away, but Apple and ATI/AMD will probably take some time.

Quote:
Original post by TimothyFarrar
As for the D3D10 vs GL2, at least with the NVidia cards, it seems to me as if all the new D3D10/SM4.0 functionality is exposed in GL2 via extensions (which are very easy to use and work perfectly in the framework of GL2, BTW). And for those wanting to use D3D10/SM4.0 features in Windows XP, GL2 is the way to go as NVidia's drivers have GL2 SM4.0 across the board (Linux and Windows XP/Vista).

Certainly many of the features are exposed in the new extensions, and if you're on XP you really have no choice :) Still, it's a bit of a myth that GL+extensions == D3D10. For instance:

1) GL cannot render without a vertex buffer bound, which is important for scatter operations and useful for full-screen quads, etc. (see the sketch after this list).

2) GL does not support disjoint samplers/textures, which are just useful period.

3) You still can't render to 1/2-component FBOs in GL! (Try it: last I checked trying to use LUMINANCE or LUMINANCE_ALPHA as a target for FBO rendering resulted in FRAMEBUFFER_INCOMPLETE regardless of the target texture format. I'm still baffled that this doesn't work though, so I really want someone to post a snippet and prove me wrong!)

4) NVIDIA's drivers currently have some major issues with some of the new features, one example being indexed temporaries in GLSL.

5) GL doesn't have the resource/views model of D3D10 which can result in some unnecessary data copying to convert types, etc.

There are a few more, but the above have accounted for some pretty major problems when trying to do more advanced (GP)GPU implementations. So while the G80 extensions are great and open up a lot of functionality, they don't actually bring GL into parity with D3D10. However, since you have to use GL to get *any* of the new features on XP/Mac/Linux, there's still a place for it I think... as long as ATI updates their drivers soon!
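
To make point 1 concrete: D3D10 lets you issue a draw with no vertex buffer bound and synthesize positions from SV_VertexID. The nearest G80-era GL equivalent is gl_VertexID from GL_EXT_gpu_shader4, though you generally still need a dummy buffer bound to keep GL happy. A rough sketch of a full-screen triangle done that way (dummy-buffer setup and shader compilation omitted):

/* Vertex shader: positions come entirely from gl_VertexID, so the
   bound (dummy) vertex buffer is never actually read. */
static const char *fullscreen_vs =
    "#version 110\n"
    "#extension GL_EXT_gpu_shader4 : require\n"
    "void main() {\n"
    "    /* IDs 0,1,2 -> (0,0), (2,0), (0,2): one oversized triangle */\n"
    "    vec2 p = vec2(float((gl_VertexID << 1) & 2), float(gl_VertexID & 2));\n"
    "    gl_Position = vec4(p * 2.0 - 1.0, 0.0, 1.0);\n"
    "}\n";

/* ...compile/link as usual, then draw three vertices: */
/* glDrawArrays(GL_TRIANGLES, 0, 3); */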

Andy, thanks for the info!

Quote:
3) You still can't render to 1/2-component FBOs in GL! (Try it: last I checked trying to use LUMINANCE or LUMINANCE_ALPHA as a target for FBO rendering resulted in FRAMEBUFFER_INCOMPLETE regardless of the target texture format. I'm still baffled that this doesn't work though, so I really want someone to post a snippet and prove me wrong!)


BTW, just tried this, and got GL_FRAMEBUFFER_COMPLETE_EXT. So it seems to work fine with my NVidia 8600 GTS (Linux driver), attaching a single-channel FP32 texture generated as follows:

glTexImage2D(GL_TEXTURE_2D,0,GL_LUMINANCE32F_ARB,x,y,0,GL_LUMINANCE,GL_FLOAT,d);
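
Spelled out with the FBO side included, the test was roughly the following (a sketch assuming GL_EXT_framebuffer_object and GL_ARB_texture_float; the 256x256 size and nearest filtering are arbitrary choices):

GLuint tex, fbo;

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, 256, 256, 0,
             GL_LUMINANCE, GL_FLOAT, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
/* status comes back GL_FRAMEBUFFER_COMPLETE_EXT on the 8600 GTS here */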


