SlimDX, MDX or OGL?


I had a look around the forums, so forgive me if I've missed something about this already. I've been programming in C# for the past year or so, and I've started looking at trying my hand at building a game. However, I'm not exactly sure which way to go. I've heard a lot of people say to use C++ instead of C#, but I already know a decent amount of C# and am comfortable with it. So, should I stick with C# or try C++? And should I use OpenGL, DirectX/MDX, or SlimDX?

Take my words with a grain of salt: I'm a C++ DirectX programmer (before using DX I was on the OGL side), so I'm not into MDX/SlimDX, etc.

As for OpenGL vs. DirectX, you can find many threads about that here. In the end it's up to you.

AFAIK MDX is dead, so I wouldn't suggest using it. SlimDX could be a better option, but since you already know C#, I suggest you have a look at XNA.

I wonder why you say he should take a look at XNA. XNA isn't exactly an industry-standard way of working; it's a premade sort of engine in which a lot is not possible, because it is based on development for the 360 first and the PC second.

So I don't agree that XNA is better than SlimDX if you already know C#; I'd argue the opposite. If you have little knowledge of C#, XNA is great for getting a lot done in no time, but SlimDX gives you far more freedom because it has nothing premade, apart from letting you make all the DX calls the way you would in C++.

But that's just my opinion.

I'd suggest sticking with C# and learning XNA too.

C# is perfectly fine for game development, although it's limited to Microsoft platforms. C++ is still the main language used in the industry, but it's best to learn one thing at a time: learn XNA first, then look at C++ later if you want.

And if you're a C# person, you'll most likely be more comfortable with XNA/DirectX than with OpenGL.

I myself am a .NET developer, but I've been programming with C++ for years as well, so I can give you some advice. Your productivity will increase a lot if you use C# instead of C++. Some reasons to do so:
1) First, and most important of all, the .NET framework. In C++ you need a threading library, a sound library, a graphics library, and so on. Plus, boost sucks: it just tries to emulate things that C++ wasn't meant for. In .NET, however, everything is there for free - just name it: a garbage collector, a network library on top of sockets (plus the sockets themselves if you need them), continual improvement, ridiculously easy multithreading and synchronization... If you want to build your own graphical tools, WinForms and WPF are there waiting for you. And everything is well integrated.
2) Productivity. C++ is not designed for rapid development; C#, however, is, and garbage collection is the least of it. In C# you can remove all elements of a collection matching some predicate in just one line of code (List's RemoveAll, or IEnumerable's extension methods plus a lambda expression). You will never worry about simulating sprintf behaviour in C#: it is just a matter of adding the params keyword before the last argument. These are very simple examples (see the first sketch after this list), but in C++ you don't even have a decent exception mechanism; using exceptions there is even considered highly inefficient. For example, there is no such thing as finally, and your destructor is not always called. So if your program crashes, there is a pretty good chance you leave lots of resources unfreed. Cool, huh?
3) Events and component-driven programming. In C++ you can emulate this behaviour with boost (which sucks IMO), Qt's signals and slots, or Gtkmm's signals and slots (the last two being extremely intuitive and nice to use; boost claims to have improved on gtkmm, but in reality it turned something very nice into a complete disaster). However, the resulting code is ugly and it is not easy to get started, because all the examples online are extremely simple; if you need something complicated, you are left on your own and will lose a lot of time researching something that is a matter of course in C#. Look at System.Collections.ObjectModel.ObservableCollection and you'll see what I mean (there's a small sketch of it after this list). You can do it in C++, but you will lose a lot of time before you even start coding something as simple as that. With boost's signals you don't have full control over an event's add and remove overrides (analogous to a property's get and set, if you've never customized an event). Something essential for component-driven programming (objects in games tend to be components with events, properties and all) is data binding, which is something you have to emulate on your own in C++. Not that it is difficult, but every time you have to emulate a feature there is a chance you make a mistake somewhere, and most importantly, you lose time.
4) Visual Studio. It is free for both C++ and C#. However, code completion for C++ isn't good at all; Visual Studio's is the best there is for C++ (Eclipse CDT is very close), but it still isn't anywhere near as good as what you get for C#. Microsoft is investing a lot of money in .NET, and I'm left with the feeling that they just neglect C++. There are .NET-only features in Visual Studio that I find invaluable. For example, before you start coding you have some sort of design for your game; well, you can visually (via drag & drop) create a class diagram, and the tool generates the skeleton code for you. Every method is "implemented" with throw new NotImplementedException(), so you can't forget to implement it :D. There are very nice shortcuts: automatically adding a using statement, making a property out of a field, renaming via refactoring, etc. Code snippets are very time-saving as well: type for, then press Tab twice, and it generates the loop skeleton for you; the same goes for the other code structures. There is also a shortcut for automatically generating an interface or abstract class implementation (skeleton).
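To make point 2 concrete, here is a minimal C# sketch (the Enemy class and the Log method are made up purely for illustration): removing elements by predicate in one line via List's RemoveAll, a params-based Log method standing in for sprintf-style formatting, and finally guaranteeing cleanup even if an exception is thrown.

using System;
using System.Collections.Generic;

class Enemy
{
    public int Health;
}

static class ProductivityExample
{
    // params lets callers pass any number of arguments, much like sprintf.
    static void Log(string format, params object[] args)
    {
        Console.WriteLine(string.Format(format, args));
    }

    static void Main()
    {
        var enemies = new List<Enemy> { new Enemy { Health = 0 }, new Enemy { Health = 50 } };

        // Remove every dead enemy with a single predicate.
        enemies.RemoveAll(e => e.Health <= 0);

        Log("Enemies remaining: {0}", enemies.Count);

        // finally always runs, so the file is closed even if WriteLine throws.
        var file = System.IO.File.CreateText("log.txt");
        try
        {
            file.WriteLine("frame rendered");
        }
        finally
        {
            file.Dispose();
        }
    }
}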
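And for point 3, a similarly minimal sketch of events and change notification (Inventory is a hypothetical class): ObservableCollection raises CollectionChanged for you, so other components can react to adds and removes without any hand-rolled observer plumbing.

using System;
using System.Collections.ObjectModel;

class Inventory
{
    // The collection notifies subscribers about every add/remove on its own.
    public ObservableCollection<string> Items = new ObservableCollection<string>();
}

static class EventExample
{
    static void Main()
    {
        var inventory = new Inventory();

        // Subscribe with a lambda; no manual bookkeeping required.
        inventory.Items.CollectionChanged +=
            (sender, e) => Console.WriteLine("Inventory changed: " + e.Action);

        inventory.Items.Add("Sword");    // prints "Inventory changed: Add"
        inventory.Items.Remove("Sword"); // prints "Inventory changed: Remove"
    }
}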

In conclusion, productivity in C++ is just not good enough once you've gotten used to C# and .NET. I would use C# or Python for my projects over C++ any time. If you want to use C# and OpenGL, you can try OpenTK - it seems very SDL-like (and is therefore very nice). For DirectX, as others have said, XNA is very beginner-friendly.

Just to add, SlimDX is nowadays good to use for development. It is (at least in my experience) stable and not going to change hugely. But to be sure, one of its developers should confirm these things.

If you are a beginner, XNA is great. It depends on your needs as well: are you going to develop an engine, or are you going to make a simple scene with some effects without bothering about how it all actually works inside?

Quote:
Original post by Evil Steve
C# is perfectly fine for game development, although it's limited to Microsoft platforms.


Come on Steve, you're smarter than that. C# isn't limited to Microsoft; I've seen C# games running on Mac OS. Portability depends on which APIs you use, not on the language. You'd just have to use Mono instead of .NET and Tao instead of DirectX.

SlimDX - Windows
XNA - Windows, XBOX360, Zune
OpenGL - Windows, Mac, Linux

Not trying to say SlimDX is bad; I actually prefer it to the others. It just isn't as portable. If you don't care about that, then give it a try.

Quote:
Original post by Scet
Quote:
Original post by Evil Steve
C# is perfectly fine for game development, although it's limited to Microsoft platforms.


Come on Steve, you're smarter than that. C# isn't limited to Microsoft; I've seen C# games running on Mac OS. Portability depends on which APIs you use, not on the language. You'd just have to use Mono instead of .NET and Tao instead of DirectX.
You're right, I was having some sort of brain fart [embarrass]

I should have said, it's not available on as many platforms as C++ is.

Quote:
Original post by FeverGames
I wonder why you say he should take a look at XNA. XNA isn't exactly an industry-standard way of working; it's a premade sort of engine in which a lot is not possible, because it is based on development for the 360 first and the PC second.


Whoa whoa whoa whoa, let's hit the brakes and back up a bit here.

XNA is not an engine or anything close to it. It has a few optional components like the Game class and the Content Pipeline that could be parts of a much larger engine, but if so they would be a small part of it.

What is it that you think is "not possible" with it? On the PC side of things, the Graphics component is a wrapper around D3D9. The only things you can't do are the vendor-specific hacks, which off the top of my head include Nvidia's hardware shadow maps, ATI's R2VB, and the old methods for binding the depth buffer as a texture. Not exactly deal breakers, IMO (especially when you consider that in exchange you gain compatibility with a console).

As for your assertion that it's "based on development for the 360", I'm a little curious as to what leads you to say that. If you ask me or any of the other guys who develop for the 360 using XNA, I doubt you'll find any that wouldn't agree that the PC is a much better choice for running a managed framework.

I say this because if you start implementing things like low-level audio APIs or widely used physics engines, you are not going to be able to deploy on the 360 anymore, which is kind of a big part of the whole XNA thing. Also, it doesn't allow you to easily move to things like DX10 or DX11 in the future, if ever.

OK, so XNA might not be an engine, but it does do a lot of things for you that you might not want.

But this might be going a bit too far off topic. I am just not a big fan of XNA :) It's a great way to start with 3D and get results fast on both the Xbox and the PC. Just don't cross the lines that Microsoft gives you (audio-specific, physics-specific, scripting languages other than those written in .NET) :)

Awesome, thanks guys :) That explained a lot, Dilyan, thanks for that :) I was always told that C++ was the better language, but your explanation has a lot more backing it up than the 'just because' I got when I asked why :)
So, that leaves it at XNA/SlimDX/OpenGL. Is XNA a library (like MDX), or more like MSVS? If I decided to use XNA/SlimDX, what's compatibility like with OGL? Can I create my own functions that utilise both without too much difficulty, or should they stay more-or-less apart?

Quote:
Original post by mickliddy
Awesome, thanks guys :) That explained a lot, Dilyan, thanks for that :) I was always told that C++ was the better language, but your explanation has a lot more backing it up than the 'just because' I got when I asked why :)
So, that leaves it at XNA/SlimDX/OpenGL. Is XNA a library (like MDX), or more like MSVS? If I decided to use XNA/SlimDX, what's compatibility like with OGL? Can I create my own functions that utilise both without too much difficulty, or should they stay more-or-less apart?


XNA is a library like Managed DirectX. The only caveat is that you need to distribute the XNA runtime along with the things you make if you wish to distribute them, whereas the old MDX came pre-packaged with DirectX.

I'm not quite sure what you're asking about compatibility, though. Are you asking whether the different APIs can work together? If so, I'd say more or less no. You wouldn't want to combine them within the same project. You could use different ones for different tools, obviously, but I don't see why you would work with multiple libraries.

If you're just asking for a comparison of the compatibility of SlimDX/XNA versus OpenGL, then OpenGL is by far the most cross-platform API. SlimDX and XNA are pretty much limited to Windows machines (XNA can also be deployed to the Xbox 360 and Zune), while OpenGL enjoys deployment on Mac, Linux, and Windows alike.

I'm asking more whether, if I build using SlimDX, I can also use OGL if I decide I'd like more platforms to be supported. I know very little about SDX, but if I, say, create a graphics device with SDX and then get it to 'do stuff' (yes, I know, this is very descriptive... I did say I knew very little) - such as drawing some triangles and then rendering - can I do something like...

enum GraphicsApi { SlimDX, OpenGL, NotChosen }

GraphicsApi ugc = GetUsersGraphicsChoice(); // however the user picks an API

switch (ugc)
{
    case GraphicsApi.SlimDX:
        // SDX code to set up the device goes here.
        break;
    case GraphicsApi.OpenGL:
        // OGL code that does the equivalent goes here.
        break;
    default:
        // User hasn't chosen yet - do something else.
        break;
}

while (gameIsRunning)
{
    if (ugc == GraphicsApi.SlimDX)
    {
        // Draw what needs to be drawn here with SDX.
    }
    else
    {
        // Draw what needs to be drawn here with OGL.
    }
}

You can do that, yes. However, if you do decide you want to support more APIs, I would highly recommend you encapsulate the separate renderers into separate plugin modules or something similar and create a unified rendering interface for drawing, along the lines of the sketch below. Having two totally different sets of rendering code in one place would look confusing as hell.
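For example (just a sketch; IRenderer, SlimDxRenderer, OpenGlRenderer and GameLoop are hypothetical names, and the real setup and draw calls are left as comments), something like this keeps the game loop unaware of which API is underneath:

using System;

public interface IRenderer
{
    void Initialize(IntPtr windowHandle);
    void DrawFrame();
    void Shutdown();
}

public sealed class SlimDxRenderer : IRenderer
{
    public void Initialize(IntPtr windowHandle) { /* create the SlimDX device here */ }
    public void DrawFrame()                     { /* SlimDX draw calls here */ }
    public void Shutdown()                      { /* release SlimDX resources here */ }
}

public sealed class OpenGlRenderer : IRenderer
{
    public void Initialize(IntPtr windowHandle) { /* create the OpenGL context here */ }
    public void DrawFrame()                     { /* OpenGL draw calls here */ }
    public void Shutdown()                      { /* release OpenGL resources here */ }
}

public static class GameLoop
{
    // The loop only talks to the interface, never to SlimDX or OpenGL directly.
    public static void Run(IRenderer renderer, IntPtr windowHandle, Func<bool> isRunning)
    {
        renderer.Initialize(windowHandle);
        while (isRunning())
        {
            renderer.DrawFrame();
        }
        renderer.Shutdown();
    }
}

Picking a back end at startup is then a one-liner, e.g. IRenderer renderer = userChoseSlimDx ? (IRenderer)new SlimDxRenderer() : new OpenGlRenderer();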

Awesome, thanks everyone for the replies :)
I've decided to go with C# and use SDX, with the possibility of adding OGL at a later date. Maybe if I'm lucky I'll have a few people who like the end product xD

Quote:
Original post by FeverGames
I say this because if you start implementing things like low-level audio APIs or widely used physics engines, you are not going to be able to deploy on the 360 anymore, which is kind of a big part of the whole XNA thing. Also, it doesn't allow you to easily move to things like DX10 or DX11 in the future, if ever.

OK, so XNA might not be an engine, but it does do a lot of things for you that you might not want.

But this might be going a bit too far off topic. I am just not a big fan of XNA :) It's a great way to start with 3D and get results fast on both the Xbox and the PC. Just don't cross the lines that Microsoft gives you (audio-specific, physics-specific, scripting languages other than those written in .NET) :)


Well, we're obviously talking about the PC here (since we're comparing XNA with SlimDX, OpenGL, etc.), so I'm not sure how the restrictions of the 360 runtime environment apply. On the PC you're free to use whatever components of XNA you want; you're not at all limited in terms of physics, audio APIs, or any of the other things you mentioned.

Well, I just got SDX and was having a mess around with it when I noticed that it seems to lack an equivalent of DirectX.AudioVideoPlayback... Is there any particular way I should go about playing back AVI files (without having to pay an arm and a leg for Bink), or is it just very well hidden inside the SDK?

But can XNA make a good game with good performance?

"XNA vs SlimDX" would be a good title for a benchmark of their capabilities.

And then C++ versus the winner...

Quote:
Original post by mickliddy
Well, I just got SDX and was having a mess around with it when I noticed that it seems to lack an equivalent of DirectX.AudioVideoPlayback... Is there any particular way I should go about playing back AVI files (without having to pay an arm and a leg for Bink), or is it just very well hidden inside the SDK?


As of now, the preferred method is to use DirectShow.Net, which is a separate project that wraps DirectShow; DirectX.AudioVideoPlayback was based on DirectShow as well, IIRC. We may add some video support in the future, but for now it's not on the menu.

I think if you want to 'wrap' two different APIs together, you want to go with the lowest-level APIs, like DX and OpenGL, since XNA comes with quite a few premade classes. If you want to be able to run a game on a platform that doesn't support .NET, you have to avoid referencing XNA at all (so you cannot use its premade classes). Of course, you could also simply not use the premade classes, but then using XNA at all seems odd to me - though that's more of a feeling than a decision based on performance or anything else. Of course, it is all doable ;)
