OpenGL: What happened to Longs Peak?

For the uninformed: back in ~2007 the Khronos Group was planning a major revision of OpenGL called "Longs Peak", but killed it off in favor of what is now OpenGL 3.0. I read about it a while back and forgot about it, but I was recently reminded of it, and I'm curious why it was killed off.

There are still quite a few interesting articles and documents explaining the planned changes, such as this slide deck from GDC 2007: https://www.khronos.org/assets/uploads/developers/library/gdc_2007/OpenGL/A-peek-inside-OpenGL-Longs-Peak.pdf. What I found really interesting was that one of the stated reasons for changing the API was that OpenGL hadn't caught up with current hardware ("current" as of 2007).

Isn't that why Vulkan was created? I know the two APIs are very different, but how would Longs Peak have compared to the newer APIs? It seems like they had an idea for this wonderful new API but killed it before it ever got released. Why?


Looking through that paper, it looks like most of its proposals have since become part of the various updates.

OpenGL 3.0 deprecated many of the features the paper said should be eliminated, like fixed-function vertex and fragment processing, client-side vertex arrays, and unusual/ancient pixel formats. It also brought in a bunch of the features the paper recommended.

3.1 added uniform buffer objects and buffer textures, 3.2 brought in more of the shader functionality the paper talked about, and so on.

Pulling the paper's goals out as a list, it looks like all of them (or nearly all of them) are part of the standard by now. Many of the features have been added and adjusted multiple times since the paper came out (see the sketch after the list for a concrete example):

  • Arrays and buffers on card, not client (3.0, 3.1, 4.0, 4.3)
  • Geometry storing on card (3.2, 4.0, 4.3)
  • Eliminate fixed-function vertex processing, hardware support for everything (3.0, 3.2, 3.3, 4.3)
  • Eliminate fixed-function fragment shading, hardware support for everything (3.0, 3.1, 3.3, 4.3)
  • All buffers on the card, not client (3.0, 3.1, 4.3, 4.4)
  • Allow editing of objects on card (3.1, 3.2, 4.2, 4.4)
  • Allow instancing (3.1, 3.3, 4.0)
  • Allow reusable state objects (3.3, 4.0, 4.1, 4.5)
  • Buffers for images, sync objects, query objects, VBO/PBO objects (3.0, 3.1, 3.3, 4.0, 4.1)
  • etc.
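
As a concrete example of the instancing item, here is a minimal sketch using the entry points that went core in 3.1 (glDrawArraysInstanced) and 3.3 (glVertexAttribDivisor). The names quadVAO, instanceVBO, and prog are placeholders for objects assumed to be set up elsewhere:

/* Draw 100 instances of a quad with a single call; attribute 2 holds
   per-instance data and advances once per instance, not per vertex. */
glBindVertexArray(quadVAO);                        /* VAO built elsewhere */
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);        /* per-instance offsets */
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(2);
glVertexAttribDivisor(2, 1);                       /* core since GL 3.3 */

glUseProgram(prog);
glDrawArraysInstanced(GL_TRIANGLES, 0, 6, 100);    /* core since GL 3.1 */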


Just to clarify one thing: buffer objects are frequently referred to as though they were a new GL feature, but in fact they're not; they date all the way back to OpenGL 1.5 and the GL_ARB_vertex_buffer_object extension, well before the Longs Peak plans. What GL 3.x did that was new was make their use mandatory in core contexts.
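
For illustration, the buffer object API has looked essentially like this since 1.5; a minimal sketch, with vertexData and vertexCount standing in for application data:

/* GL 1.5 / GL_ARB_vertex_buffer_object: vertex data lives in a
   driver-managed buffer object rather than a client-side array. */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(GLfloat),
             vertexData, GL_STATIC_DRAW);
/* GL 3.x core left this API unchanged; it simply removed the
   client-side array path, making buffer objects the only option. */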

The major missing feature from Longs Peak remains the object model, or at least part of it. Much of it is there, but GL still suffers from atrocious type safety, which a well-specified object model would have fixed.


While it's good to see that many of these features have made it into OpenGL since, why did they scrap the new spec and only implement these features piecemeal later?

 

The major missing feature from Longs Peak remains the object model, or at least part of it. Much of it is there, but GL still suffers from atrocious type safety, which a well-specified object model would have fixed.

The object model would have been a major change for the better. I don't understand why they thought it was a good idea not to include it. OpenGL's current object system feels like a bunch of hacks bolted onto a 25-year-old API.
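
The type-safety problem is easy to demonstrate: every GL object name is a bare GLuint, so the compiler can't catch you mixing object types. A small sketch of the pitfall:

/* A texture name is just a GLuint, so passing it to a buffer function
   compiles cleanly and fails, at best, with a run-time GL error. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

glBindBuffer(GL_ARRAY_BUFFER, tex);  /* texture name used as a buffer! */
/* Under a Longs Peak-style object model, distinct object types would
   presumably have made this a compile-time error instead. */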


The biggest "feature" of longs peak is that it would've been a fresh API / broken backwards compatibility with existing GL code.

GL3 half-assed that by deprecating old API interfaces, but Windows drivers kept supporting them anyway (Mac actually killed them off, yay!).

And yes, Vulkan has actually achieved this goal by coming up with a new API from scratch (well, from Mantle), so it doesn't have decades of GL cruft hanging off of it.


...why did they scrap the new spec and only implement these features piecemeal later?

 

There's never been an official statement about this.

If you were there at the time, the way it played out was that the ARB were making all the right noises about Longs Peak and everybody was excited, keen and supportive. Then they announced "some unresolved issues that we want addressed before we feel comfortable releasing a specification", went into a total media blackout for some time, and eventually emerged with the OpenGL 3.0 we know today.

The closest to an answer you'll get is this post on the OpenGL forums:

What happened to Longs Peak?

In January 2008 the ARB decided to change directions. At that point it had become clear that doing Longs Peak, although a great effort, wasn't going to happen. We ran into details that we couldn't resolve cleanly in a timely manner. For example, state objects. The idea there is that all state is immutable. But when we were deciding where to put some of the sample ops state, we ran into issues. If the alpha test is immutable, is the alpha ref value also? If we do so, what does this mean to a developer? How many (100s?) of objects does a developer need to manage? Should we split sample ops state into more than one object? Those kinds of issues were taking a lot of time to decide.

Furthermore, the "opt in" method in Longs Peak to move an existing application forward has its pros and cons. The model of creating another context to write Longs Peak code in is very clean. It'll work great for anyone who doesn't have a large code base that they want to move forward incrementally. I suspect that describes most of the developers who are active in this forum. However, there is a class of developers for whom this would have been a potentially very large burden. This clearly is a controversial topic, and has its share of proponents and opponents.

While we were discussing this, the clock didn't stop ticking. The OpenGL API *has to* provide access to the latest graphics hardware features. OpenGL wasn't doing that anymore in a timely manner. OpenGL was behind in features. All graphics hardware vendors have been shipping hardware with many more features available than OpenGL was exposing. Yes, vendor specific extensions were and are available to fill the gap, but that is not the same as having a core API including those new features. An API that does not expose hardware capabilities is a dead API.

Thus, prioritization was needed, and we made several decisions.

1) We set a goal of exposing hardware functionality of the latest generations of hardware by this SIGGRAPH. Hence, the OpenGL 3.0 and GLSL 1.30 API you guys all seem to love.

2) We decided on a formal mechanism to remove functionality from the API. We fully realize that the existing API has been around for a long time, has cruft, and is inconsistent in its treatment of objects (how many object models are in the OpenGL 3.0 spec? You count). In its shortest form, removing functionality is a two-step process. First, functionality will be marked "deprecated" in the specification. A long list of functionality is already marked deprecated in the OpenGL 3.0 spec. Second, a future revision of the core spec will actually remove the deprecated functionality. After that, the ARB has options. It can decide to do a third step, and fold some of the removed functionality into a profile. Profiles are optional to implement (more below) and their functionality might still be very important to a subset of the OpenGL market. Note that we also decided that new functionality does not have to work with, and likely will not work with, deprecated functionality. That will make the spec easier to write, read and understand, and drivers easier to implement.

3) We decided to provide a way to create a forward-compatible context. That is an OpenGL 3.0 context with all deprecated features removed. Giving you, as a developer, a preview of what a next version of OpenGL might look like. Drivers can take advantage of this, and might be able to optimize certain code paths in the forward-compatible context only. This is described in the WGL_ARB_create_context extension spec.

4) We decided to have a formal way of defining profiles. During the Longs Peak design phase, we ran into disagreement over what features to remove from the API. Longs Peak removed quite a lot of features, as you might remember. Not coincidentally, most of those features are marked deprecated in OpenGL 3.0. The disagreements happened because of different market needs. For some markets a feature is essential, and removing it will cause issues, whereas for another market it is not. We discovered we couldn't do one API to serve all. A profile encapsulates functionality needed to meet the needs of a particular market. Conformant OpenGL products may implement one or more profiles. A profile is by definition a subset of the whole core specification. The core OpenGL specification will contain all functionality, including what is in a profile, in a coherently designed whole. Profiles simply enable products for certain markets to not ship functionality that is not relevant to those markets, in a well-defined way. Only the ARB may define profiles; individual vendors may not (this in contrast to extensions).

5) We will keep working on object model issues. Yes, this work has been put on the back burner to get OpenGL 3.0 done, but we have picked that work up again. One of the early results of this is that we will work on folding object model improvements into the core in a more incremental manner.

6) We decided to provide functionality, where possible, as extensions to OpenGL 2.1. Any OpenGL 3.0 feature that does not require OpenGL 3.0 hardware is also available in extension form to OpenGL 2.1. The idea here is that new functionality on older hardware enables software vendors to provide upgrades to their existing users.

7) We decided that OpenGL is not going to evolve into a general GPU compute API. In the last two years or so, compute using a GPU and a CPU has taken off; in fact, it is exploding. Khronos has recognized this and is on a fast track to define and release OpenCL, the open standard for compute programming. OpenGL and OpenCL will be able to share data, like buffer objects, in an efficient manner.

There are many good ideas in Longs Peak. They are not lost. We would be stupid to ignore them. We spent almost two years on it, and a lot of good stuff was designed. There is a desire to work on object model issues in the ARB, and we recently started doing that again. Did you know that if you change properties of a texture or renderbuffer attached to a framebuffer object, you have no guarantee that the framebuffer object will actually notice? It has to notice, otherwise your next rendering command will not work. Each vendor's implementation deals with this case a bit differently. If you throw multiple contexts into the mix, this becomes an even more interesting issue. The ARB wants to do object model improvements right the first time. We can't afford to do it wrong. At the same time, the ARB will work on exposing new hardware functionality in a timely manner.
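
For reference, point 3 above is what the WGL_ARB_create_context extension provides; requesting a forward-compatible 3.0 context looks roughly like this (a minimal sketch with error handling omitted; hdc is assumed to be a device context with a pixel format already set, and wglCreateContextAttribsARB must first be fetched via wglGetProcAddress):

/* Request an OpenGL 3.0 context with all deprecated features removed. */
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 0,
    WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
    0                                    /* attribute list terminator */
};
HGLRC ctx = wglCreateContextAttribsARB(hdc, NULL, attribs);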

 

What's clear from this is that the job of specifying Longs Peak was taking too long, design-by-committee wasn't working, and OpenGL was once again at risk of falling even further behind. So something had to be done, and what was done was a rush-release of OpenGL 3.0, with promises to fold much of the Longs Peak features/improvements into future releases, which is what eventually happened.

At the time there were other rumours, one of which was that CAD interests on the ARB killed it, because their CAD programs were using the old/crufty behaviours that Longs Peak threatened to remove, and they didn't want to upgrade them. The quoted statement that "there is a class of developers for whom this would have been a potentially very large burden" certainly suggests CAD vendors, but IMO this rumour was never 100% credible, because of course CAD vendors would have always had the option to just continue using the older API (just as even today you can still write an OpenGL 1.1 program). A second rumour I recall is that one of the GPU vendors killed it, for unspecified reasons.

On balance of probability the situation I describe above (spec taking too long and competition getting further ahead again) seems most likely to me.

Edited by mhagain


mhagain said:

At the time there were other rumours, one of which was that CAD interests on the ARB killed it, because their CAD programs were using the old/crufty behaviours that Longs Peak threatened to remove, and they didn't want to upgrade them. The quoted statement that "there is a class of developers for whom this would have been a potentially very large burden" certainly suggests CAD vendors, but IMO this rumour was never 100% credible, because of course CAD vendors would have always had the option to just continue using the older API (just as even today you can still write an OpenGL 1.1 program). A second rumour I recall is that one of the GPU vendors killed it, for unspecified reasons.


I can tell you for a fact the CAD companies didn't kill it - I was having a conversation with someone on the ARB at the time who confirmed that.

My working theory is that it was Apple and possibly Blizzard who did for it - AMD and NV were very much on board so I doubt they killed it... Intel is a maybe but feels unlikely.


I recall hearing Blizzard mentioned as a possible villain of the piece too.

Amusingly, the one company we can be absolutely certain didn't kill it is Microsoft; they had long since ceased involvement with OpenGL by then.

On 4/12/2017 at 0:46 PM, Oberon_Command said:

Can anyone explain why Blizzard (or Apple, for that matter) would be the culprit, though? Why would a game company push to kill Longs Peak?

Late response, but to answer your question: Apple does not like open standards. I'm going to guess they had plans to build their own proprietary API like Microsoft did, and with the recent surge of low-level APIs they had an opportunity to do just that.

I'm not sure what Blizzard could have possibly had against it.

