OpenGL for Linux and MacOSX?

Hello OpenGL gurus,

I’ve been developing with OpenGL exclusively on Windows for several years now, and I’m moving into the world of cross-platform builds.

So far, I use skaslev's gl3w ([url="https://github.com/skaslev/gl3w"]https://github.com/skaslev/gl3w[/url]) to generate gl3w.h, gl3.h and gl3w.c. On Windows I link against OpenGL32.lib from Visual Studio and it’s a wonderful world indeed… but when compiling with g++ on Linux, gl3w.c asks for glx.h and chokes on glXGetProcAddress… Oops! There is no trace of that file on opengl.org, and googling the topic is rather confusing, so I decided to ask here.
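For context, my loader setup looks roughly like this, once the platform layer has created a context and made it current (this follows the usage pattern from the gl3w README):

[code]
#include "gl3w.h"   // the header generated by the gl3w script
#include <stdio.h>

// Call once a GL context already exists and is current
// (created by wgl on Windows, glX on X11, etc.).
int init_gl(void)
{
    if (gl3wInit()) {              // non-zero means failure
        fprintf(stderr, "failed to initialize gl3w\n");
        return -1;
    }
    if (!gl3wIsSupported(4, 0)) {  // verify the context really is 4.x
        fprintf(stderr, "OpenGL 4.0 is not supported\n");
        return -1;
    }
    return 0;
}
[/code]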

1 - Why GLX? I thought pure GL was king on Linux... I'm confused! (I surely don't want to rewrite my code.)

2 - Where do I find the official OpenGL packages for developing on Linux and MacOSX?

Thx

I can't answer about Mac OS X; it's been too many years for me. Linux isn't an OS, so there's no definitive answer there, but I can tell you about Debian-derived OSes (like Ubuntu), which are GNU-based OSes that usually run on the Linux kernel: try installing the libgl1-mesa-dev package. If you're asking about Android, another common OS that uses the Linux kernel, the GLES libraries should come with your devkit.

As for why your third-party software (skaslev-gl3w) uses GLX: you'd have to ask whoever developed it, but GLX is the handy, standard way to create a GL context on an X11 server.

Thank you Bregma,

Your answer almost gave me a heart attack. For all the porting I need to do, I thought OpenGL would be a breeze.

I’m currently working with CentOS/Red Hat, and I tried yum install libgl1-mesa-dev, but the package is not found. So I’m guessing there are extra steps involved.

I simply need a pure OpenGL 4.x context, no extra libs, no fancy stuff, only the core… I’d really appreciate it if anyone could help clear this up.

Thx again

For MacOSX, read this overview of how OpenGL development works there: http://www.geeks3d.com/20121109/overview-of-opengl-support-on-os-x/

On CentOS/RHEL, you're probably after the [b]mesa-libGL-devel[/b] package. :)

http://rpm.pbone.net/index.php3/stat/4/idpl/17834939/dir/centos_5/com/mesa-libGL-devel-6.5.1-7.10.el5.i386.rpm.html

[quote name='Neosettler' timestamp='1353619415' post='5003320']
I simply need a pure OpenGL context 4.x, no extra lib, no fancy stuff, only the core… I’d really appreciate if anyone could help clear this up.
[/quote]
Problem is, there's not really such a thing as a pure OpenGL context. You're going to need [i]something[/i] to draw to.

Even if you're going full screen, you're not directly programming the GPU and you're not driving the video output chips; you need to go through the OS kernel. A typical modern OS kernel is designed to prevent you from getting at the hardware directly: at the very least, you need to ask it nicely to access the hardware on your behalf. On a GNU/Linux system, that means getting the X11 server to share nicely. On Mac OS X, it means getting the Quartz service to share nicely. Even on Microsoft Windows you need to share nicely with other applications through the embedded display service. On Android, you use EGL to share SurfaceFlinger nicely.

There are cross-platform libraries that will get you an OpenGL context; that's what they're for and why they're there. SDL, for example, works very well on all the above platforms. If you insist on doing without such a library, the source is free, so you can see how it works and reimplement it yourself. Of course, it might turn out SDL uses glx/wgl/egl underneath, because doing otherwise would mean reimplementing all those ioctl/X11/port/SYS calls. Might be worth checking, though, or even just using the library directly.
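For illustration, getting a core-profile context through SDL is only a handful of calls. A minimal sketch using the SDL 2 style of API (error handling mostly omitted):

[code]
#include <SDL.h>

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);

    // Ask for a 4.0 core profile before creating the window.
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK,
                        SDL_GL_CONTEXT_PROFILE_CORE);

    SDL_Window *win = SDL_CreateWindow("GL", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED,
                                       640, 480, SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(win);

    // ... load GL entry points (gl3w, GLEW, ...), render,
    //     and present with SDL_GL_SwapWindow(win) ...

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
[/code]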

The closest you can get to a "pure" OpenGL rendering context on X11 is by using the X11 library directly.
Some of the older OpenGL books go into this, but unfortunately it is becoming a bit of a lost art in the newer literature.

A good example is http://content.gpwiki.org/index.php/OpenGL:Tutorials:Setting_up_OpenGL_on_X11
As you can see, this uses GLX (which is what provides the rendering context).
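Condensed, the tutorial's setup boils down to something like the sketch below. Note this is the legacy glXChooseVisual path, which gives you a compatibility context; a 4.x core context additionally requires glXCreateContextAttribsARB. Compiles with g++ file.cpp -lX11 -lGL.

[code]
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);   // connect to the X server
    int attribs[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    XSetWindowAttributes swa;
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                   vi->visual, AllocNone);
    swa.event_mask = ExposureMask | KeyPressMask;
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0,
                               640, 480, 0, vi->depth, InputOutput,
                               vi->visual, CWColormap | CWEventMask, &swa);
    XMapWindow(dpy, win);

    GLXContext ctx = glXCreateContext(dpy, vi, NULL, GL_TRUE); // direct rendering
    glXMakeCurrent(dpy, win, ctx);

    // ... render here, then present with glXSwapBuffers(dpy, win) ...
    return 0;
}
[/code]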

Some more typical rendering contexts include...

glut
wxGLWidget (wxWidgets)
QtOpenGL (Qt)
Gtkglext (Gtk)
SDL

Personally, even though I am a C++ developer, I still highly recommend Glut. It is really portable and is already installed on most Linux distros. The 100% free implementation is called FreeGlut.
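A minimal FreeGlut program that asks for a core context looks something like this (glutInitContextVersion/glutInitContextProfile are FreeGlut extensions, not part of the original Glut):

[code]
#include <GL/freeglut.h>   // FreeGlut header; plain glut.h lacks the
                           // glutInitContext* calls used below

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitContextVersion(4, 0);               // request GL 4.0 ...
    glutInitContextProfile(GLUT_CORE_PROFILE);  // ... core profile
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
    glutInitWindowSize(640, 480);
    glutCreateWindow("FreeGlut core context");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
[/code]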

I use SDL and OpenGL for my game. I found porting it from Windows to Linux was a breeze: I didn't even need to edit any code, and the machine already had all the files I needed. As far as I know there is no such thing as "pure" OpenGL; you will always need some way to window it. I also find using GLEW helps, but to each his own.

If you want something portable that gives you only the OpenGL context and nothing more, then I recommend [url="http://www.glfw.org/"]glfw[/url]. Well, it gives you a little more, like an interface to I/O (e.g. mouse, keyboard). In my opinion, glfw is a stronger and cleaner library than glut.
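Opening a window plus context with glfw is just a few calls. A rough sketch against the current GLFW 2.7-style API:

[code]
#include <GL/glfw.h>   // GLFW 2.x header (pulls in GL/gl.h)

int main(void)
{
    if (!glfwInit()) return 1;

    // Request a 4.0 core profile before opening the window.
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 4);
    glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 0);
    glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    // width, height, RGBA bits, depth bits, stencil bits, windowed mode
    if (!glfwOpenWindow(640, 480, 8, 8, 8, 8, 24, 8, GLFW_WINDOW)) {
        glfwTerminate();
        return 1;
    }

    while (glfwGetWindowParam(GLFW_OPENED)) {
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers();   // swaps and polls events in GLFW 2.x
    }
    glfwTerminate();
    return 0;
}
[/code]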

[quote name='Neosettler' timestamp='1353619415' post='5003320']
Thank you Bregma,

Your answer almost gave me a heart attack. For all the porting I need to do, I thought OpenGL would be a breeze.
[/quote]

Note, though, that all these libraries only set up the GL context, give you a window to draw in, and handle some input (and possibly a few utility functions).

Those parts, of course, have to be OS-specific.

All the actual rendering is the same standard OpenGL, as long as you use the same profile, so no need for heart attacks :)

(Some drivers might have bugs, or allow some invalid behaviour that breaks on other drivers, but that is a separate story...)
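If you ever want to double-check which context and version you actually got, query it at runtime once the context is current. A small sketch:

[code]
#include <stdio.h>
// Assumes GL headers are already included and a context is current.

void print_gl_version(void)
{
    printf("GL_VERSION : %s\n", (const char *)glGetString(GL_VERSION));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));

    GLint major = 0, minor = 0;              // numeric query, GL 3.0+
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    printf("context    : %d.%d\n", major, minor);
}
[/code]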

Thank you all for your input. I found out that VMware doesn't use the host GPU at all, so it has all been in vain... I'm stuck with GL 2.1. Sigh...
