OpenGL SNES-Like 5-Layer 320x240 Graphics Library



https://sites.google.com/site/simpleopenglsneslike/

 

 

NO "NVIDIA or AMD or Intel OpenGL or DirectX License to unlock it's full potential" needed but it's 12FPS if you don't have license, if you have license it takes NO TIME to render at all.
 
A GeForce 7600 GT or higher is needed. (Consult the source code for the required extensions.)
 
If you are an enthusiast and don't have money, use this library, connect it to a TV, and it's awesome.
It runs at 12 FPS on a GeForce 7600 GT, and it is slow only on the graphics side, so it's effectively a fixed frame rate: if you use the CPU time for heavy work it won't slow down at all, staying fixed at 12 FPS forever unless you do something really wasteful...
 
License: LGPL, CC0

 

 

 

ADDED:

It is basically a 2D library with a graphical environment similar to THE CONSOLE, the Super Nintendo Entertainment System. We have these awesome, fast CPUs, so why would we want to use the crippled public OpenGL library for this? Because doing it on the GPU is not only COOL, it also gives you totally free CPU time every frame. The GL draw call is done in another process through nvglvt.dll or similar, so it is basically the same thing as doing Sleep(1000/12) in another thread.

 

So, if you try to implement this kind of environment, 32-bit, 320x240, with 5 layers, using GPU blits only, it gives you about 3 FPS no matter how great your graphics card is.
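
For scale, here is an illustration-only sketch (made-up names, not the library's actual data layout) of what a 32-bit, 320x240, five-layer environment amounts to in memory:

#include <array>
#include <cstdint>
#include <vector>

// Hypothetical layout: five 320x240 layers, one 32-bit RGBA pixel per entry,
// composited back to front.
// Total pixel data: 320 * 240 * 4 bytes * 5 layers = 1,536,000 bytes (~1.5 MB).
constexpr int kWidth  = 320;
constexpr int kHeight = 240;
constexpr int kLayers = 5;

using Layer = std::vector<std::uint32_t>;

std::array<Layer, kLayers> makeLayers()
{
    std::array<Layer, kLayers> layers;
    for (Layer& layer : layers)
        layer.assign(kWidth * kHeight, 0x00000000u); // start fully transparent
    return layers;
}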

The more objects you have, the slower it gets. That means you CAN'T make even StarCraft 1 if you only use what is given to you.

 

 

 

The point is, the CPU is FREE, and the 12 FPS is spent only on graphics, so you can do some crazy stuff like voxel 3D filling on the CPU and path tracing in the vertex shader. It is about using only the uncrippled DP4 vertex shader processing line.

 

 

ADDED2:

 

Think about the Raspberry Pi.

Edited by WalkingTimeBomb


What exactly is this? I see no documentation at all.

Friend, it means nothing to you if you don't understand what I have written.

 

Try to run it on your old, slow computer. If it gives you 2000 FPS it means nothing to you, but if it gives you 300 FPS you have a problem.

 

Also, the code is very simple; just follow along through Lesson45.cpp and vs2.txt.

Edited by WalkingTimeBomb


The problem is that we DON'T understand what you wrote. Is this something you made? Is this just something you found and thought was cool and wanted to share?

What's the purpose? Is it supposed to be a SNES emulator running on the GPU? Etc?


OK friends, either you guys know about the licensing problems or you don't know what you are talking about.

 

We OpenGL programmers without a proper job have always tried to make some 3D or 2D game with existing hardware.

 

 

I've had access to properly licensed NVIDIA and AMD cards, and even shitty code that loads a 1-megabyte 3D mesh ran at 2000 FPS, and it did it all on the CPU.

 

I asked why it is so fast here and so slow at home.

 

 

My previous boss said: yeah, you need to PAY NVIDIA and AMD to get full speed out of the OpenGL or DirectX or assembly SDK.

 

What I'm saying here is, there is no need to pay if you are just interested in playing along... I mean, did you guys even make a finished game with existing hardware without licensing? I did: https://code.google.com/p/digdigrpg/

 

It runs fast, but it should run at 500 FPS or more if I had licensed it correctly.

 

 

Play with my source code and you guys will get it. Change int wdt=320 to 160 and int hgt=240 to 120 and you will see the FPS go up to 300.

 

It runs at 12 FPS not because I load images every frame, but because the NVIDIA driver cripples and blocks it.

 

 

The comments above got it right: it uses the vertex shader. Why did I do it that way? The vertex shader is the only thing that is not crippled on unlicensed hardware.

The comments above got it wrong: loading the PNG every frame does NOT slow it down. Updating the VBO every frame does not slow it down either.

 

Well, you guys just wanted me to explain it all, but really, if someone doesn't get it, doesn't like it, and doesn't understand WHAT I WROTE IN THE FIRST PLACE, then whether you like it or not, it's NOT FOR YOU.

Edited by WalkingTimeBomb


OK, here is the WHOLE explanation, since you guys want to know what it is all about.

 

The MFU and other functionality in the vertex shader are all slow and crippled even if you have the correct license.

 

 

Look at my vs2.txt code:

in  vec4 in_Pixel;
in  vec4 in_Layer1;
in  vec4 in_Layer2;
in  vec4 in_Layer3;
in  vec4 in_Layer4;
in  vec4 in_Layer5;
in  vec4 in_Layer6;
in  vec4 in_Layer7;
varying vec3 color;
varying float test;

void main()
{
	gl_Position = gl_ModelViewProjectionMatrix * in_Pixel; // The Vertex Array Object's first Vertex Buffer Object; it simulates a pixel. Remember, this is a vertex shader. Don't transform it.

	vec4 curLayer = in_Layer2; // VAO's Second VBO. Color Buffer. Thus, First Layer of Bitmap.
	vec4 colR = vec4(in_Layer1.r, 0.0, 0.0, 0.0); // Read RGB pixel from current Bitmap.
	vec4 colG = vec4(in_Layer1.g, 0.0, 0.0, 0.0);
	vec4 colB = vec4(in_Layer1.b, 0.0, 0.0, 0.0);

	vec4 col2R = vec4(curLayer.r, 0.0, 0.0, 0.0);
	vec4 col2G = vec4(curLayer.g, 0.0, 0.0, 0.0);
	vec4 col2B = vec4(curLayer.b, 0.0, 0.0, 0.0);
	vec4 col2W = vec4(curLayer.a, 0.0, 0.0, 0.0);
	vec4 oneMinusLayer2Alpha2 = vec4(1.0, curLayer.a, 0.0, 0.0); // For blending

	vec4 oneMinusLayer2Alpha1 = vec4(1.0, -1.0, 0.0, 0.0); // For blending
	float colAf = dot(oneMinusLayer2Alpha1, oneMinusLayer2Alpha2);
        // dot(oneMinusLayer2Alpha1, oneMinusLayer2Alpha2)
        //   = 1.0*1.0 + (-1.0)*curLayer.a + 0.0*0.0 + 0.0*0.0 == 1.0 - curLayer.a
	float colRf = dot(colR, vec4(colAf)); //RGB * Alpha
	float colGf = dot(colG, vec4(colAf));
	float colBf = dot(colB, vec4(colAf));
	float col2Rf = dot(col2R, col2W); // RGB * 1.0 not used here but in next layer it will be used as current layer alpha
	float col2Gf = dot(col2G, col2W);
	float col2Bf = dot(col2B, col2W);
	float colFinalRf = dot(vec4(colRf, 1.0, 0.0, 0.0), vec4(1.0, col2Rf, 0.0, 0.0));
	float colFinalGf = dot(vec4(colGf, 1.0, 0.0, 0.0), vec4(1.0, col2Gf, 0.0, 0.0));
	float colFinalBf = dot(vec4(colBf, 1.0, 0.0, 0.0), vec4(1.0, col2Bf, 0.0, 0.0));

	color = vec3(colFinalRf,colFinalGf,colFinalBf);
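	// Net effect of the dot products above:
	//   color = in_Layer1.rgb * (1.0 - curLayer.a) + curLayer.rgb * curLayer.a
	// which is the same result as mix(in_Layer1.rgb, curLayer.rgb, curLayer.a):
	// ordinary "over" blending of the current layer, built entirely from DP4-style dot products.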
}

It is all about using the SO ABUNDANT DP4 unit line in the GPU processor, which can prefetch on average about 1 GB from GPU memory.

Edited by WalkingTimeBomb


I looked at your vertex shader code as well, and I don't think you understand what it is that you're trying to achieve.

 

For one thing, the DP4 (vector) style design ended with the 7x00 series of GeForce cards more than 8 years ago. The G80 and everything after it, released from 2006 onwards, have all been scalar designs, which means that all you get from trying to exploit vector instructions is (potentially) a little bit of pipeline improvement.

 

Also, if all you're trying to do is draw things to a screen, then you're still much worse off doing it on the CPU, blitting the result to the screen using the GPU, and THEN trying to do the blending in a very poor way using a shader.


First of all, let's clear up a misconception: you don't pay nVidia / AMD / Intel or anyone else for using the OpenGL or DirectX APIs. They are free, and if someone has told you that you have to pay to get better performance then you have been misinformed (lied to).

 

You can write code, compile it, run it, give away those programs, etc., all without paying anyone, and you will enjoy access to the same performance as everyone else, including the big companies.

 

The reason your program runs slowly is the way that it is written: not because of nVidia / AMD, just because of your code.

 

fastCall22 has given you some of the reasons why it is slow and they should be easy enough to understand:

  • Loading the image every frame - ask yourself this question: which is faster, loading something, copying it, deleting it, and repeating that every frame, or loading it once, using it until the program ends, and only then releasing it? The answer is obvious: it is always faster to do less work, so loading and releasing it only once is always going to be faster.
  • Software blitting - this is very slow and unnecessary. You should look into drawing textured quads (2 triangles): load your images (once only), turn them into textures, and apply those textures to your quads, rendering them with OpenGL instead of blitting to the layers in software. (See the sketch after this list.)
  • Updating your buffers every frame - your data isn't changing, so why are you updating them? Create them, fill them with data, and then use them every frame without updating them any more. This goes back to the first problem of doing more work than you need or want to do each frame. It comes with a second problem, though: you've told OpenGL that you won't update them very often by using the STATIC_DRAW flag... but then you update them all of the time. That means OpenGL has to do a lot of extra work, which means that _you_ are slowing it down.
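
To make that concrete, here is a minimal sketch of the pattern being suggested (not code from the library; it assumes an OpenGL context is already current, that the bound shader uses attribute 0 for position and attribute 1 for texture coordinates, and a hypothetical loadRGBA() helper for reading an image into 8-bit RGBA pixels):

#include <GL/glew.h> // assumption: any OpenGL function loader will do
#include <vector>

std::vector<unsigned char> loadRGBA(const char* path, int* w, int* h); // hypothetical helper

GLuint texture = 0, vao = 0, vbo = 0;

void initOnce()
{
    // 1. Load the image ONCE and upload it to a texture; never load it again.
    int w = 0, h = 0;
    std::vector<unsigned char> pixels = loadRGBA("layer1.png", &w, &h);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    // 2. Create the quad geometry ONCE. GL_STATIC_DRAW is honest here because
    //    the buffer is never written to again.
    const float quad[] = {
        // x     y     u    v
        -1.f, -1.f,  0.f, 0.f,
         1.f, -1.f,  1.f, 0.f,
        -1.f,  1.f,  0.f, 1.f,
         1.f,  1.f,  1.f, 1.f,
    };
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));
    glEnableVertexAttribArray(1);
}

void drawFrame()
{
    // 3. Per frame: no image loading and no glBufferData, just bind and draw.
    glClear(GL_COLOR_BUFFER_BIT);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}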

Everyone has to start learning somewhere, and I started with the NeHe tutorials a long time ago as well, but part of learning involves listening to what other people are trying to tell you.

 

Andy

 

 

Buddy, just call NVIDIA customer service and ASK FOR IT before you tell lies to people.

 

 

The scalar design was ADDED alongside the DP4 unit, it did not REPLACE it.

 

I don't know which industry you are from, but I'm from this industry; I've been here since 1997.

 

 

 

You say software blitting is slow, so then why is PyGame's hardware-assisted blitting slow on a GeForce Titan? Everybody knows about PyGame, so I talk about PyGame, but what I'm actually asking is: can you do 1440x900, 5-layer, fullscreen blitting every frame with your current technology?
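
For reference, a sketch of the conventional answer to that question (not the library's code; it assumes the five layer textures and the quad VAO from the earlier sketch already exist and a suitable shader program is bound): compositing five full-screen layers every frame is just five blended draws.

void drawLayers(const GLuint layerTex[5], GLuint quadVao)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_BLEND);                                 // standard "over" compositing
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // result = src*a + dst*(1-a)
    glBindVertexArray(quadVao);
    for (int i = 0; i < 5; ++i)                         // back to front: layer 0 first
    {
        glBindTexture(GL_TEXTURE_2D, layerTex[i]);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }
    glDisable(GL_BLEND);
}

Even at 1440x900 this is only about 6.5 million blended pixels per frame, which is well within the fill rate of anything from the GeForce 7600 GT onward.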

Edited by WalkingTimeBomb


To clarify finally: it is 320x240, people. It is just the window of Lesson45.exe resized.

 

At 320x240, software versus hardware blitting doesn't matter.

 

The images loaded every frame are 8x16 and 8x8 pixels in resolution.

 

 

I told you to read the code, and you didn't even read and understand my code, yet you talk like a professional. Don't ruin it; it is going to be a big technology in the future for the Raspberry Pi and other cheap computers.


(most of this is just speculation)

 

You can pay NVidia/AMD to get premium support etc, which might mean some people will help you create the best code for their graphics cards. I can imagine someone saying "you pretty much have to pay for a premium membership to get optimal performance", meaning you need their expertise to be able to utilize their cards to their fullest potential. There's no real technical advantage though, more access to knowledge.

 

You might also get access to some better tools, though I think most of them are free. It may also be that you can only use those "NVidia - the way it's meant to be played" etc. logos if you pay them or enter some form of partnership.

 

If your game is very successful, then NVidia and/or AMD may take an extra look at your shaders etc., whether you pay them or not, to make sure the game runs as fast as possible on their cards. This is to make their own cards more competitive, though.


@Krypton - my apologies, I initially thought there was actually hope here that this was a simple misunderstanding and maybe he could learn something :(

 

@WalkingTimeBomb - I'm a bit concerned that you're telling people there are some secret, paid-for drivers or code that make "something" (I cannot figure out what) run much faster once you've paid for it. There isn't. You buy the hardware and you have full access to do with it what you like using the various APIs available. Both AMD and nVidia give away a lot of code samples, for free, that prove you can do anything with your own code that they do with theirs.

 

Also, don't rely on stating your own years of experience to try to get leverage in discussions on the forums; a lot of us are full-time professional game developers.

 

I still don't know what it is that you think is so amazing about that code, but it's worth noting that the Raspberry Pi also has a dedicated GPU which people have access to and for which there are open source drivers.

 

@All

I won't post again in here; sorry for feeding the troll :/


You can pay NVidia/AMD to get premium support etc, which might mean some people will help you create the best code for their graphics cards. I can imagine someone saying "you pretty much have to pay for a premium membership to get optimal performance", meaning you need their expertise to be able to utilize their cards to their fullest potential. There's no real technical advantage though, more access to knowledge.

 

nVidia's "TWIMTBP" program just gets you visits from one of their engineers, help with optimising things and if you've found a bug with their drivers then it might get prioritised (I think) but a lot of the optimisation is stuff you can do yourself and all of the tools I've ever encountered have been available for free. Although sometimes they have access to slightly newer version that aren't yet public - nothing revolutionary.


Agreed.

 

P.S. You're not supposed to call glBufferData every frame when using GL_STATIC_DRAW.
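
A minimal sketch of the distinction (not the thread's actual code; it assumes a current OpenGL context): give the buffer the usage hint that matches what you really do with it, and if the contents genuinely change every frame, refresh them with glBufferSubData instead of re-specifying a "static" buffer.

#include <GL/glew.h> // assumption: any OpenGL loader works here
#include <cstddef>

// Geometry that never changes: upload it once with GL_STATIC_DRAW and leave it alone.
void uploadStaticOnce(GLuint vbo, const void* verts, std::size_t bytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, verts, GL_STATIC_DRAW);
}

// Data that genuinely changes every frame: allocate storage once with a streaming hint...
void allocateStreaming(GLuint vbo, std::size_t bytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, nullptr, GL_STREAM_DRAW);
}

// ...then, each frame, replace only the contents instead of reallocating the buffer.
void refreshEachFrame(GLuint vbo, const void* frameData, std::size_t bytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, frameData);
}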

Edited by Promit


