Showing results for tags 'OpenGL' in content posted in Graphics and GPU Programming.



Found 1000 results

  1. I have rewritten this function a few times now, and I am having trouble getting every other row's triangles to face the reverse direction. The image enclosed demonstrates the faulty behavior, in which only one row faces the reversed direction. I am also confused about how many indices I should have for the g_vertex_buffer_data_land variable; could someone show me a breakdown like: 18 vertices times 2 sets times 8 columns times 8 depth? In another post fleabay mentioned I was not setting a VAO in core profile, however it seems that I am. Here is the code:

     float* getVertices(void)
     {
         // using defines
         int incol = _colus;
         int depth = _depth;
         int i = 0;
         float scaleit = .5;
         float tempdepth = 0;
         int startindexat = 0;
         int counter = 0;
         int secondcounter = 0;

         //for (; (tempdepth + 1) <= (depth);)  // don't forget to change this back!
         for (int q = 0; q < 3; q++)
         {
             // odd rows
             for (int col = 0; (col + 1) <= (incol); col++)
             {
                 GLfloat matrix1[3][3] = { { (col + 1), 0, (tempdepth) },
                                           { (col),     0, (tempdepth) },
                                           { (col),     0, (tempdepth + 1) } };

                 // vertex 1
                 g_vertex_buffer_data_land[startindexat + 0 + counter] = matrix1[0][0] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 1 + counter] = matrix1[0][1] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 2 + counter] = matrix1[0][2] * scaleit;
                 // vertex 2
                 g_vertex_buffer_data_land[startindexat + 3 + counter] = matrix1[1][0] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 4 + counter] = matrix1[1][1] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 5 + counter] = matrix1[1][2] * scaleit;
                 // vertex 3
                 g_vertex_buffer_data_land[startindexat + 6 + counter] = matrix1[2][0] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 7 + counter] = matrix1[2][1] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 8 + counter] = matrix1[2][2] * scaleit;

                 int matrix2[3][3] = { { (col + 1), 0, (tempdepth + 1) },
                                       { (col + 1), 0, (tempdepth) },
                                       { (col),     0, (tempdepth + 1) } };

                 g_vertex_buffer_data_land[startindexat + 9 + counter]  = matrix2[0][0] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 10 + counter] = matrix2[0][1] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 11 + counter] = matrix2[0][2] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 12 + counter] = matrix2[1][0] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 13 + counter] = matrix2[1][1] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 14 + counter] = matrix2[1][2] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 15 + counter] = matrix2[2][0] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 16 + counter] = matrix2[2][1] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 17 + counter] = matrix2[2][2] * scaleit;

                 counter = counter + 18;
             } // end col

             startindexat = 17 + counter + 1;

             for (int col2 = 0; (col2 + 1) <= (incol); col2++)
             {
                 // first triangle : even rows
                 GLfloat matrix3[3][3] = { { (col2 + 1), 0, (tempdepth + 2) },
                                           { (col2 + 1), 0, (tempdepth + 1) },
                                           { (col2),     0, (tempdepth + 1) } };

                 // vertex 1
                 g_vertex_buffer_data_land[startindexat + secondcounter]     = matrix3[0][0] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 1 + secondcounter] = matrix3[0][1] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 2 + secondcounter] = matrix3[0][2] * scaleit;
                 // vertex 2
                 g_vertex_buffer_data_land[startindexat + 3 + secondcounter] = matrix3[1][0] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 4 + secondcounter] = matrix3[1][1] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 5 + secondcounter] = matrix3[1][2] * scaleit;
                 // vertex 3
                 g_vertex_buffer_data_land[startindexat + 6 + secondcounter] = matrix3[2][0] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 7 + secondcounter] = matrix3[2][1] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 8 + secondcounter] = matrix3[2][2] * scaleit;

                 // even (2)
                 int matrix4[3][3] = { { (col2 + 1), 0, (tempdepth + 2) },
                                       { (col2),     0, (tempdepth + 1) },
                                       { (col2),     0, (tempdepth + 2) } };

                 g_vertex_buffer_data_land[startindexat + 9 + secondcounter]  = matrix4[0][0] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 10 + secondcounter] = matrix4[0][1] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 11 + secondcounter] = matrix4[0][2] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 12 + secondcounter] = matrix4[1][0] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 13 + secondcounter] = matrix4[1][1] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 14 + secondcounter] = matrix4[1][2] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 15 + secondcounter] = matrix4[2][0] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 16 + secondcounter] = matrix4[2][1] * scaleit;
                 g_vertex_buffer_data_land[startindexat + 17 + secondcounter] = matrix4[2][2] * scaleit;

                 // one column of 4 triangles (three vertices per triangle)
                 secondcounter = secondcounter + 18;
             }

             startindexat = 17 + secondcounter + 1;
             tempdepth = tempdepth - 1;
         }

         return gvertices;
     }

     I am hoping someone has the experience to help me solve this problem. What could I check, and do I need to show more repo code? Thank you, Josheir
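     For reference, here is a hedged sketch (not the code above; rows, cols, verts, and scale are illustrative names) of one way to build such a grid so that the diagonal flips on every other row, together with the requested size breakdown: the buffer needs cols x rows x 2 triangles x 3 vertices x 3 floats, e.g. 8 x 8 x 2 x 3 x 3 = 1152 floats.

     // Fills 'verts' (preallocated with cols * rows * 2 * 3 * 3 floats) with a
     // flat triangle grid whose shared diagonal alternates on odd rows.
     void buildGrid(float* verts, int cols, int rows, float scale)
     {
         int idx = 0;
         for (int row = 0; row < rows; ++row)
         {
             for (int col = 0; col < cols; ++col)
             {
                 float x0 = (float)col, x1 = (float)(col + 1);
                 float z0 = (float)row, z1 = (float)(row + 1);
                 // quad corners: 0 = (x0,z0), 1 = (x1,z0), 2 = (x1,z1), 3 = (x0,z1)
                 float quad[4][3] = { { x0, 0, z0 }, { x1, 0, z0 },
                                      { x1, 0, z1 }, { x0, 0, z1 } };
                 // even rows split along one diagonal, odd rows along the other
                 static const int evenTris[6] = { 0, 1, 2,  0, 2, 3 };
                 static const int oddTris[6]  = { 1, 2, 3,  1, 3, 0 };
                 const int* tri = (row % 2 == 0) ? evenTris : oddTris;
                 for (int k = 0; k < 6; ++k)
                     for (int c = 0; c < 3; ++c)
                         verts[idx++] = quad[tri[k]][c] * scale;
             }
         }
     }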
  2. This is a follow-up to a previous post. MrHallows had asked me to post the project, so I am starting a fresh new thread to get the help I need. I have put the class in the main .cpp to simplify things for your debugging purposes. My error is:

     C1189 #error: OpenGL header already included, remove this include, glad already provides it

     I tried adding #define GLFW_INCLUDE_NONE, and tried adding it as a preprocessor definition too. I also tried to change the #ifdef - #endif, but I just couldn't get it working. The code repository URL is: https://github.com/Joshei/GolfProjectRepo/tree/combine_sources/GOLFPROJ The branch is: combine_sources The commit ID is: a4eaf31 The files involved are: shader_class.cpp, glad.h, glew.h. glad1.cpp was also in my project; I removed it to try to solve this problem. Here is the description of the problem at hand: except for glColor3f and glRasterPos2i(10,10), the code works without glew.h. When glew is added, there is only the error shown above. I could really use some exact help, like: "remove the include for gl.h on lines 50, 65, and 80. Then delete the code at line 80 that states..." I hope that this is not too much to ask for; I really want to win at OpenGL. If I can't get help, I could use a much larger file to display the test values, or maybe it's possible to write to an open file and view the written data as it is output. Thanks in advance, Josheir
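     A common cause of this exact glad error is include order: glad.h must be the first GL-related header in every translation unit, and it cannot coexist with glew.h (both declare the whole GL API). A minimal sketch of the usual ordering, assuming GLFW is the window library:

     #include <glad/glad.h>      // must come before any header that pulls in GL
     #define GLFW_INCLUDE_NONE   // stop GLFW from including its own GL header
     #include <GLFW/glfw3.h>
     // do NOT also include <GL/glew.h> or <GL/gl.h> in the same translation
     // unit; glad already declares every GL function and enum, including the
     // legacy glColor3f / glRasterPos2i if a compatibility profile is generated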
  3. I have a texture I'm reading from like this:

     vec4 diffuseFrag = texture2D(DiffuseMap, fDiffuseCoord);

     Sometimes, for special effects, I'd actually like to do something like this:

     vec4 uvFrag = texture2D(UVMap, fUVCoord);
     vec4 diffuseFrag = texture2D(DiffuseMap, uvFrag.rg);

     Basically, I'm using a texture's red and green color channels to store the frag coordinates I want to read from DiffuseMap. My problem is, both the UV map and the diffuse map are spritesheets with multiple images in them. This means I actually want uvFrag.rg's (0-1) texcoord to be mapped into the *subportion* of the texture that all four vertices of my fDiffuseCoord are referring to. Something like:

     vec2 uvFrag = texture2D(UVMap, fUVCoord).rg;
     vec2 upperLeftOfSubrect = ...;
     vec2 bottomRightOfSubrect = ...;
     vec2 subrectSize = (bottomRightOfSubrect - upperLeftOfSubrect);
     uvFrag = upperLeftOfSubrect + (uvFrag * subrectSize);
     vec4 diffuseFrag = texture2D(DiffuseMap, uvFrag);

     Where my mind goes blank is: how can I get upperLeftOfSubrect / bottomRightOfSubrect without bloating my vertex struct further with additional attributes? It mentally trips me up that I'll have to copy upperLeftOfSubrect / bottomRightOfSubrect into all four of my vertices... and it triply annoys me because I'm already passing them in as fDiffuseCoord (just spread between different vertices). Is there a simple solution to this that I'm missing?
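     One common trick, assuming the spritesheet is a uniform grid, is to recover the cell origin in the fragment shader from fDiffuseCoord itself, so no extra vertex attributes are needed. A hedged sketch (atlasGrid is a hypothetical uniform giving the number of cells per axis):

     uniform vec2 atlasGrid;   // e.g. vec2(8.0, 8.0) for an 8x8 spritesheet

     void main() {
         // which atlas cell does this fragment's diffuse coordinate fall in?
         vec2 cellOrigin = floor(fDiffuseCoord * atlasGrid) / atlasGrid;
         vec2 cellSize   = vec2(1.0) / atlasGrid;
         // remap the indirect (0-1) UV into that cell
         vec2 uv = texture2D(UVMap, fUVCoord).rg;
         gl_FragColor = texture2D(DiffuseMap, cellOrigin + uv * cellSize);
     }

     For a non-uniform atlas, the usual fallback is indeed to pass the subrect per vertex (or per draw as a uniform, if sprites sharing a subrect are batched together).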
  4. I am trying to use OpenGL to draw some simple text quickly. However, I cannot get the top two commands below to be defined:

     glColor3f(rgb.r, rgb.g, rgb.b);
     glRasterPos2f(x, y);
     glutBitmapString(font, string);

     I tried this:

     #include <C:/Program Files (x86)/Microsoft SDKs/Windows/v7.1A/Include/gl/GL.h>

     May I have some help, please? The last command, which is from GLUT, is fine. Apparently the problem is not with GLUT or FreeGLUT, but with the Visual Studio headers. I am using Visual Studio 2017, C++. Thank you, Josheir
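     On Windows, <GL/gl.h> depends on macros (WINGDIAPI, APIENTRY) that <Windows.h> defines, which is a frequent cause of gl.h "not defining" its functions. A minimal sketch of the usual include order, rather than an absolute path:

     #include <Windows.h>   // defines WINGDIAPI/APIENTRY, which GL.h expects
     #include <GL/gl.h>     // legacy entry points: glColor3f, glRasterPos2f, ...
     #include <GL/glut.h>   // or freeglut.h; provides glutBitmapString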
  5. I'm having trouble with GLEW and some conflicting defines that I can't resolve. Does anyone know of a way to get the following statements working without an include of GLEW (including GLEW does resolve the red squigglies too)?

     glColor3f(0, 1, 0.);
     glRasterPos2i(10, 10);

     I really want to use a quick GLUT command for now. The command uses the statements above. Thank you, Josheir
  6. Hi all, I'm targeting OpenGL 2.1 (so no VAOs for me 😢) and I have multiple shaders rendering quite different things each frame. For the sake of argument, say I have just two shaders: one draws the background (so it uses vertex attributes for texture coordinates, 2D positions, etc.), while the other draws some meshes (so it uses vertex attributes for colors and 3D positions, and no textures). The data provided to each shader changes every frame.

     Now, I've realized I can choose between two different strategies when it comes to binding vertex attribute locations. With the first strategy I reuse vertex attribute indices: the first shader binds to, say, attribute locations 0, 1, 2, 3 and the second shader binds to, say, 0, 1, 2. With this approach I have to constantly call glVertexAttribPointer for these indices, since each frame one shader requires one set of VBOs to feed 0, 1, 2, 3 and the other shader requires another set of VBOs to feed 0, 1, 2.

     With the second strategy I instead use "dedicated" vertex attribute indices: the first shader binds to 0, 1, 2, 3 and the second shader binds to 4, 5, 6. In other words, each vertex attribute index has its own dedicated VBO feeding data to it. The advantage of this approach is that I need to call glVertexAttribPointer only once per index, say at program initialization, but at the same time it limits my ability to "grow" shaders in the future (my GPU only supports 16 vertex attributes, so it will be hard to add new shaders or to add new attributes to existing shaders).

     Now, in my implementation I can't see any performance benefit of one versus the other - but, truth be told, I'm developing on an ancient Dell laptop with an Intel mobile card 😩 - nonetheless I would like this code to run as fast as possible on modern GPUs. Is there any performance benefit to choosing one of the two strategies over the other? And in case there's no performance benefit, is one strategy preferable over the other for reasons that I can't think of at the moment? Thanks so much in advance for any tips!
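     If it helps frame the question, here is a hedged sketch of the first strategy (all names are illustrative): the attribute locations are fixed before linking so both programs agree on the low indices, and the shared indices are re-pointed at each program's VBOs every frame.

     // at init: force both programs onto the same low indices (before linking)
     glBindAttribLocation(bgProgram, 0, "aPosition");
     glBindAttribLocation(bgProgram, 1, "aTexCoord");
     glLinkProgram(bgProgram);

     glBindAttribLocation(meshProgram, 0, "aPosition");
     glBindAttribLocation(meshProgram, 1, "aColor");
     glLinkProgram(meshProgram);

     // per frame, per draw: rebind pointers for whichever program is active
     glUseProgram(meshProgram);
     glBindBuffer(GL_ARRAY_BUFFER, meshPositions);
     glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
     glBindBuffer(GL_ARRAY_BUFFER, meshColors);
     glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);

     The glVertexAttribPointer calls are generally cheap next to the draw calls themselves, which is why the reuse strategy is usually considered the more flexible default.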
  7. Hello, I was just wondering whether it is a thing to use a component-based architecture to handle models in OpenGL, much like in an entity component system. This structure has especially helped me in cases where I have models that need different resources. By that I mean: some models need a texture and texture coordinate data to go with it, others just need some color data (vec3). Some have a normals buffer, whereas others get their normals calculated on the fly (geometry shader). Some have an index buffer (rendered with glDrawElements), whereas others don't and get rendered using glDrawArrays, etc. Instead of branching off into complicated hierarchies to create models that have only certain resources, or coming up with strange ways to resolve problems around checking which resources a given model has, I just attach components to each model, such as a vertex buffer, texture coordinate buffer, or index buffer. So, I was just wondering whether I was using the other style of model handling wrong, whether this style of programming is a viable option, and whether there are flaws that I am unable to foresee?
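     For concreteness, a hedged C++ sketch of what such component-style model storage might look like (all type and field names are illustrative; GL types come from your loader header):

     #include <optional>

     struct TexCoordComponent    { GLuint vbo = 0; GLuint texture = 0; };
     struct ColorComponent       { GLuint vbo = 0; };  // per-vertex vec3 color
     struct IndexBufferComponent { GLuint ebo = 0; GLsizei count = 0; };

     struct Model {
         GLuint positionVbo = 0;                          // every model has positions
         GLsizei vertexCount = 0;
         std::optional<TexCoordComponent>    texCoords;   // textured models only
         std::optional<ColorComponent>       colors;      // colored models only
         std::optional<IndexBufferComponent> indices;     // indexed models only
     };

     void draw(const Model& m) {
         if (m.texCoords)
             glBindTexture(GL_TEXTURE_2D, m.texCoords->texture);
         // ... bind VAO / attribute pointers for whichever components exist ...
         if (m.indices)
             glDrawElements(GL_TRIANGLES, m.indices->count, GL_UNSIGNED_INT, nullptr);
         else
             glDrawArrays(GL_TRIANGLES, 0, m.vertexCount);
     }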
  8. So, there is one thing that I don't quite understand (probably because I didn't dive that deep into PBR lighting in the first place). Currently I have implemented a very basic PBR renderer (with a microfacet BRDF shading model) in my engine. The lighting system I have is pretty basic (one directional/sun light, deferred point lights, and one ambient light). I don't have a GI solution yet (only a very basic world-space ambient occlusion technique). Here is how it looks:

     Now, what I would like to do is give the shadows a slightly blueish tint (to simulate the blueish light from the sky). Unreal seems to do this too, which gives the scene a much more natural look:

     Now, my renderer does render in HDR, and I use exposure/tonemapping to bring this down to LDR. The first image used a directional light with an RGB value of (40,40,40) and an indirect light of (15,15,15). Here is the same picture but with an ambient light of (15,15,15) * ((109,162,255) / (255,255,255)), which should give us this blueish tint. The problem is it looks like this:

     The shadows do get the desired color (more or less); the issue is that all lit pixels also get affected, giving the whole scene a blue tint. Reducing the ambient light intensity results in way too dark shadows. Increasing the intensity makes the shadows look alright, but then the whole scene gets affected way too much. In the shader I basically have:

     color = directionalLightColor * max(dot(normal, sunNormal), 0.0) + ambientLight;

     The result is that the blue component of the color will always be higher than the other two. I could of course fix it by faking it (only adding the ambient light if the pixel is in shadow), but I want to stay as close to PBR as possible and avoid adding hacks like that. My question is: how is this effect done properly (with PBR / proper physically based lighting)?
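     For what it's worth, the usual physically based framing treats the sun and the sky as two separate light sources: the shadow map attenuates only the direct sun term, while the blueish sky/ambient term is attenuated by occlusion (AO or sky visibility), not by the sun's shadow. A hedged GLSL sketch (shadowFactor and ao are assumed to come from your shadow map and AO pass):

     vec3 direct  = directionalLightColor * max(dot(normal, sunNormal), 0.0) * shadowFactor;
     vec3 ambient = skyColor * ao;   // blueish sky term, independent of the sun's shadow
     vec3 color   = albedo * (direct + ambient);
     // in shadow: shadowFactor == 0, so only the blueish ambient remains;
     // in light: the much brighter sun term dominates and the tint is negligible

     This is not a hack so much as modeling the sky as its own light: the blue cast then appears in shadows naturally because that is the only light reaching them.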
  9. Hi, I was studying how to make a bloom/glow effect in OpenGL, following the tutorials from learnopengl.com and ThinMatrix (YouTube), but I am still confused about how to generate the bright-colored texture to be used for the blur. Do I need to put lights in the area where I want the glow to happen, so it will be brighter than the other objects in the scene - meaning I need to draw the scene with the lights first? Or can the brightness be extracted, based on how the color of the model was rendered/textured, through a formula or something? I have a scene that looks like this, and I want the crystal to glow. Can somebody enlighten me on what the correct approach is? Really appreciated!
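     In the learnopengl.com approach, the "bright" texture is extracted from the already-lit HDR scene with a luminance threshold: you render the lit scene first, then run a bright-pass filter over it, so no extra lights are needed just for the glow (the crystal passes the threshold if it is rendered with an emissive color brighter than 1.0). A minimal sketch of that bright pass:

     uniform sampler2D sceneTex;   // the lit HDR scene
     varying vec2 uv;

     void main() {
         vec3 c = texture2D(sceneTex, uv).rgb;
         float luma = dot(c, vec3(0.2126, 0.7152, 0.0722));  // perceptual brightness
         // keep only pixels brighter than the threshold; blur this texture,
         // then add the blurred result back onto the scene
         gl_FragColor = vec4(luma > 1.0 ? c : vec3(0.0), 1.0);
     }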
  10. I tried to port a DirectX program to an OpenGL version. My planets were inverted horizontally (opposite rotation around the Y axis). The Z coordinate was inverted too. It took me some months to figure out. I finally recognized that there are two different coordinate systems - left-handed and right-handed. DirectX uses a left-handed coordinate system, but OpenGL uses a right-handed coordinate system by default. I googled it and learned a lot. I would prefer a left-handed coordinate system for easier programming. I also tried to google a projection matrix formula for left-handed coordinates but did not find any sources. Does anyone know a matrix formula for a left-handed coordinate system? Does anyone know any good guides for DirectX to OpenGL conversion for porting? Tim
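     A hedged sketch of one such matrix: a left-handed perspective projection that still maps depth to OpenGL's [-1, 1] clip range. It is column-major, so pass it to glUniformMatrix4fv with transpose = GL_FALSE. (Also remember that mirroring handedness flips triangle winding, so glFrontFace/culling may need adjusting.)

     #include <math.h>

     void perspectiveLH(float m[16], float fovY, float aspect, float zn, float zf)
     {
         float yScale = 1.0f / tanf(fovY * 0.5f);
         float xScale = yScale / aspect;
         for (int i = 0; i < 16; ++i) m[i] = 0.0f;
         m[0]  = xScale;
         m[5]  = yScale;
         m[10] = (zf + zn) / (zf - zn);       // sign flipped vs. the right-handed matrix
         m[11] = 1.0f;                        // w' = +z (right-handed GL uses w' = -z)
         m[14] = -2.0f * zf * zn / (zf - zn);
     }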
  11. Hello. I'm trying to implement normal mapping. I've been following this: http://ogldev.atspace.co.uk/www/tutorial26/tutorial26.html The problem is that my tangent vectors appear rather obviously wrong - but only one of them, never both. Here's my code for calculating the tangents:

     this.makeTriangle = function(a, b, c) {
         var edge1 = VectorSub(b.pos, a.pos);
         var edge2 = VectorSub(c.pos, a.pos);
         var deltaU1 = b.texCoords[0] - a.texCoords[0];
         var deltaV1 = b.texCoords[1] - a.texCoords[1];
         var deltaU2 = c.texCoords[0] - a.texCoords[0];
         var deltaV2 = c.texCoords[1] - a.texCoords[1];
         var f = 1.0 / (deltaU1 * deltaV2 - deltaU2 * deltaV1);

         var vvec = VectorNormal([
             f * (deltaV2 * edge1[0] - deltaV1 * edge2[0]),
             f * (deltaV2 * edge1[1] - deltaV1 * edge2[1]),
             f * (deltaV2 * edge1[2] - deltaV1 * edge2[2]),
             0.0
         ]);
         var uvec = VectorNormal([
             f * (-deltaU2 * edge1[0] - deltaU1 * edge2[0]),
             f * (-deltaU2 * edge1[1] - deltaU1 * edge2[1]),
             f * (-deltaU2 * edge1[2] - deltaU1 * edge2[2]),
             0.0
         ]);

         if (VectorDot(VectorCross(a.normal, uvec), vvec) < 0.0) {
             uvec = VectorScale(uvec, -1.0);
         };

         /*
         console.log("Normal: ");
         console.log(a.normal);
         console.log("UVec: ");
         console.log(uvec);
         console.log("VVec: ");
         console.log(vvec);
         */

         this.emitVertex(a, uvec, vvec);
         this.emitVertex(b, uvec, vvec);
         this.emitVertex(c, uvec, vvec);
     };

     My vertex shader:

     precision mediump float;

     uniform mat4 matProj;
     uniform mat4 matView;
     uniform mat4 matModel;

     in vec4 attrVertex;
     in vec2 attrTexCoords;
     in vec3 attrNormal;
     in vec3 attrUVec;
     in vec3 attrVVec;

     out vec2 fTexCoords;
     out vec4 fNormalCamera;
     out vec4 fWorldPos;
     out vec4 fWorldNormal;
     out vec4 fWorldUVec;
     out vec4 fWorldVVec;

     void main() {
         fTexCoords = attrTexCoords;
         fNormalCamera = matView * matModel * vec4(attrNormal, 0.0);
         vec3 uvec = attrUVec;
         vec3 vvec = attrVVec;
         fWorldPos = matModel * attrVertex;
         fWorldNormal = matModel * vec4(attrNormal, 0.0);
         fWorldUVec = matModel * vec4(uvec, 0.0);
         fWorldVVec = matModel * vec4(vvec, 0.0);
         gl_Position = matProj * matView * matModel * attrVertex;
     }

     And finally the fragment shader:

     precision mediump float;

     uniform sampler2D texImage;
     uniform sampler2D texNormal;
     uniform float sunFactor;
     uniform mat4 matView;

     in vec2 fTexCoords;
     in vec4 fNormalCamera;
     in vec4 fWorldPos;
     in vec4 fWorldNormal;
     in vec4 fWorldUVec;
     in vec4 fWorldVVec;

     out vec4 outColor;

     vec4 calcPointLight(in vec4 normal, in vec4 source, in vec4 color, in float intensity) {
         vec4 lightVec = source - fWorldPos;
         float sqdist = dot(lightVec, lightVec);
         vec4 lightDir = normalize(lightVec);
         return color * dot(normal, lightDir) * (1.0 / sqdist) * intensity;
     }

     vec4 calcLights(vec4 pNormal) {
         vec4 result = vec4(0.0, 0.0, 0.0, 0.0);
         ${CALC_LIGHTS}
         return result;
     }

     void main() {
         vec4 surfNormal = vec4(cross(vec3(fWorldUVec), vec3(fWorldVVec)), 0.0);

         vec2 bumpCoords = fTexCoords;
         vec4 bumpNormal = texture(texNormal, bumpCoords);
         bumpNormal = (2.0 * bumpNormal - vec4(1.0, 1.0, 1.0, 0.0)) * vec4(1.0, 1.0, 1.0, 1.0);
         bumpNormal.w = 0.0;

         mat4 bumpMat = mat4(fWorldUVec, fWorldVVec, fWorldNormal, vec4(0.0, 0.0, 0.0, 1.0));
         vec4 realNormal = normalize(bumpMat * bumpNormal);
         vec4 realCameraNormal = matView * realNormal;

         float intensitySun = clamp(dot(normalize(realCameraNormal.xyz), normalize(vec3(0.0, 0.0, 1.0))), 0.0, 1.0) * sunFactor;
         float intensity = clamp(intensitySun + 0.2, 0.0, 1.0);

         outColor = texture(texImage, fTexCoords) * (vec4(intensity, intensity, intensity, 1.0) + calcLights(realNormal));
         //outColor = texture(texNormal, fTexCoords);
         //outColor = 0.5 * (fWorldUVec + vec4(1.0, 1.0, 1.0, 0.0));
         //outColor = vec4(fTexCoords, 1.0, 1.0);
         outColor.w = 1.0;
     }

     Here is the result of rendering an object, showing its normal render, the uvec, vvec, and texture coordinates (each commented out in the fragment shader code): Normal map itself: The uvec, as far as I can tell, should not be all over the place like it is; either this, or some other mistake, causes the normal vectors to be all wrong, so you can see on the normal render that, for example, there is a random dent on the left side which should not be there. As far as I can tell, my code follows the math from that tutorial. I use right-handed coordinates. So what could be wrong?
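     For comparison, the tangent/bitangent formulas most tutorials (including ogldev's) derive are, with E1, E2 the triangle edges and (dU1, dV1), (dU2, dV2) the UV deltas:

         T = f * (  dV2 * E1 - dV1 * E2 )
         B = f * ( -dU2 * E1 + dU1 * E2 )

     Note the plus sign on the dU1 term of the second formula. I can't promise it is the only issue, but a hedged sketch of the uvec computation with that sign (everything else unchanged):

     var uvec = VectorNormal([
         f * (-deltaU2 * edge1[0] + deltaU1 * edge2[0]),
         f * (-deltaU2 * edge1[1] + deltaU1 * edge2[1]),
         f * (-deltaU2 * edge1[2] + deltaU1 * edge2[2]),
         0.0
     ]);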
  12. I got a book called "High Dynamic Range Imaging" (published by Elsevier) and read some pages. I learned that HDR requires deep color depth (10/12 bits per channel). I heard that many vendors and independent developers have implemented HDR features in their games and applications because the UHD standard requires deep color depth (HDR) to eliminate noticeable color banding. I have seen color banding on SDR displays and want to get rid of it. I googled HDR for OpenGL but read that it requires Quadro or FireGL cards. How do they get HDR working on consumer video cards? That's why I want an HDR implementation for my OpenGL programs. Tim
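     In games, "HDR rendering" usually means rendering into a floating-point framebuffer and tonemapping down to the display's range, which works on any consumer GPU (the Quadro/FireGL restriction historically applied to 10-bit display output, not to HDR rendering). A minimal sketch, where hdrTex and hdrFbo are hypothetical handles you have already generated:

     // render target with far more precision than 8 bits per channel
     glBindTexture(GL_TEXTURE_2D, hdrTex);
     glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                  GL_RGBA, GL_FLOAT, NULL);
     glBindFramebuffer(GL_FRAMEBUFFER, hdrFbo);
     glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                            GL_TEXTURE_2D, hdrTex, 0);
     // 1) draw the scene into hdrFbo with lighting values allowed to exceed 1.0
     // 2) draw a fullscreen quad to the default framebuffer, sampling hdrTex and
     //    applying exposure/tonemapping, e.g. color = vec3(1.0) - exp(-hdr * exposure)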
  13. Hello! I would like to introduce Diligent Engine, a project that I've been working on recently. Diligent Engine is a light-weight, cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGL ES. Diligent Engine exposes a common front end for all supported platforms and provides interoperability with the underlying native API. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin, or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub.

     Features:
       • True cross-platform: exact same client code for all supported platforms and rendering backends; no #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ...; no #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ...; exact same HLSL shaders run on all platforms and all backends.
       • Modular design: components are clearly separated logically and physically and can be used as needed. Only take what you need for your project (do not want to keep samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule). No 15000-lines-of-code files.
       • Clear object-based interface; no global states.
       • Key graphics features: automatic shader resource binding designed to leverage the next-generation rendering APIs; multithreaded command buffer generation (50,000 draw calls at 300 fps with the D3D12 backend); descriptor, memory and resource state management.
       • Modern C++ features to make code fast and reliable.

     The following platforms and low-level APIs are currently supported:
       • Windows Desktop: Direct3D11, Direct3D12, OpenGL
       • Universal Windows: Direct3D11, Direct3D12
       • Linux: OpenGL
       • Android: OpenGL ES
       • MacOS: OpenGL
       • iOS: OpenGL ES

     API Basics

     Initialization

     The engine can perform initialization of the API or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:

     #include "RenderDeviceFactoryD3D12.h"
     using namespace Diligent;

     // ...
     GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
     // Load the dll and import the GetEngineFactoryD3D12() function
     LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
     auto *pFactoryD3D11 = GetEngineFactoryD3D12();

     EngineD3D12Attribs EngD3D12Attribs;
     EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
     EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
     EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
     EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
     EngD3D12Attribs.NumCommandsToFlushCmdList = 64;

     RefCntAutoPtr<IRenderDevice> pRenderDevice;
     RefCntAutoPtr<IDeviceContext> pImmediateContext;
     SwapChainDesc SwapChainDesc;
     RefCntAutoPtr<ISwapChain> pSwapChain;
     pFactoryD3D11->CreateDeviceAndContextsD3D12(EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0);
     pFactoryD3D11->CreateSwapChainD3D12(pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain);

     Creating Resources

     Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, you need to populate the BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:

     BufferDesc BuffDesc;
     BuffDesc.Name = "Uniform buffer";
     BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
     BuffDesc.Usage = USAGE_DYNAMIC;
     BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
     BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
     m_pDevice->CreateBuffer(BuffDesc, BufferData(), &m_pConstantBuffer);

     Similarly, to create a texture, populate the TextureDesc structure and call IRenderDevice::CreateTexture(), as in the following example:

     TextureDesc TexDesc;
     TexDesc.Name = "My texture 2D";
     TexDesc.Type = TEXTURE_TYPE_2D;
     TexDesc.Width = 1024;
     TexDesc.Height = 1024;
     TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
     TexDesc.Usage = USAGE_DEFAULT;
     TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
     TexDesc.Name = "Sample 2D Texture";
     m_pRenderDevice->CreateTexture(TexDesc, TextureData(), &m_pTestTex);

     Initializing Pipeline State

     Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline: one big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).

     Creating Shaders

     To create a shader, populate the ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:
       • SHADER_SOURCE_LANGUAGE_DEFAULT - the shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGL ES modes.
       • SHADER_SOURCE_LANGUAGE_HLSL - the shader source is in HLSL. For OpenGL and OpenGL ES modes, the source code will be converted to GLSL. See the shader converter for details.
       • SHADER_SOURCE_LANGUAGE_GLSL - the shader source is in GLSL. There is currently no GLSL to HLSL converter.

     To allow grouping of resources based on the expected frequency of change, Diligent Engine introduces a classification of shader variables:
       • Static variables (SHADER_VARIABLE_TYPE_STATIC) are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera-attribute or global light-attribute constant buffers.
       • Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change at per-material frequency. Examples may include diffuse textures, normal maps, etc.
       • Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

     This post describes the resource binding model in Diligent Engine. The following is an example of shader initialization:

     ShaderCreationAttribs Attrs;
     Attrs.Desc.Name = "MyPixelShader";
     Attrs.FilePath = "MyShaderFile.fx";
     Attrs.SearchDirectories = "shaders;shaders\\inc;";
     Attrs.EntryPoint = "MyPixelShader";
     Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
     Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;

     BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
     Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

     ShaderVariableDesc ShaderVars[] =
     {
         { "g_StaticTexture",  SHADER_VARIABLE_TYPE_STATIC },
         { "g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE },
         { "g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC }
     };
     Attrs.Desc.VariableDesc = ShaderVars;
     Attrs.Desc.NumVariables = _countof(ShaderVars);
     Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

     StaticSamplerDesc StaticSampler;
     StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
     StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
     StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
     StaticSampler.TextureName = "g_MutableTexture";
     Attrs.Desc.NumStaticSamplers = 1;
     Attrs.Desc.StaticSamplers = &StaticSampler;

     ShaderMacroHelper Macros;
     Macros.AddShaderMacro("USE_SHADOWS", 1);
     Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
     Macros.Finalize();
     Attrs.Macros = Macros;

     RefCntAutoPtr<IShader> pShader;
     m_pDevice->CreateShader(Attrs, &pShader);

     Creating the Pipeline State Object

     To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines the pipeline specifics, such as whether the pipeline is a compute pipeline, the number and format of render targets, and the depth-stencil format:

     // This is a graphics pipeline
     PSODesc.IsComputePipeline = false;
     PSODesc.GraphicsPipeline.NumRenderTargets = 1;
     PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
     PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;

     The structure also defines depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, the rasterizer state can be defined as in the code snippet below:

     // Init rasterizer state
     RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
     RasterizerDesc.FillMode = FILL_MODE_SOLID;
     RasterizerDesc.CullMode = CULL_MODE_NONE;
     RasterizerDesc.FrontCounterClockwise = True;
     RasterizerDesc.ScissorEnable = True;
     //RSDesc.MultisampleEnable = false; // do not allow msaa (fonts would be degraded)
     RasterizerDesc.AntialiasedLineEnable = False;

     When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:

     m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

     Binding Shader Resources

     Shader resource binding in Diligent Engine is based on grouping variables into three groups (static, mutable and dynamic). Static variables are expected to be set only once; they may not be changed once a resource is bound, and are intended to hold global constants such as camera-attribute or global light-attribute constant buffers. They are bound directly to the shader object:

     PixelShader->GetShaderVariable( "g_tex2DShadowMap" )->Set( pShadowMapSRV );

     Mutable and dynamic variables are bound via a new object called a Shader Resource Binding (SRB), which is created by the pipeline state:

     m_pPSO->CreateShaderResourceBinding(&m_pSRB);

     Dynamic and mutable resources are then bound through the SRB object:

     m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
     m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

     The difference between mutable and dynamic resources is that mutable ones can only be set once per instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as this may affect performance: static variables are generally the most efficient, followed by mutable; dynamic variables are the most expensive from a performance point of view. This post explains shader resource binding in more detail.

     Setting the Pipeline State and Invoking Draw Commands

     Before any draw command can be invoked, all required vertex and index buffers, as well as the pipeline state, should be bound to the device context:

     // Clear render target
     const float zero[4] = {0, 0, 0, 0};
     m_pContext->ClearRenderTarget(nullptr, zero);

     // Set vertex and index buffers
     IBuffer *buffer[] = {m_pVertexBuffer};
     Uint32 offsets[] = {0};
     Uint32 strides[] = {sizeof(MyVertex)};
     m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
     m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
     m_pContext->SetPipelineState(m_pPSO);

     Also, all shader resources must be committed to the device context:

     m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

     When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() to execute a compute command. Note that for a draw command a graphics pipeline must be bound, and for a dispatch command a compute pipeline must be bound. Draw() takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced, indirect, etc.). For example:

     DrawAttribs attrs;
     attrs.IsIndexed = true;
     attrs.IndexType = VT_UINT16;
     attrs.NumIndices = 36;
     attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
     pContext->Draw(attrs);

     Tutorials and Samples

     The GitHub repository contains a number of tutorials and sample applications that demonstrate API usage:
       • Tutorial 01 - Hello Triangle: shows how to render a simple triangle using the Diligent Engine API.
       • Tutorial 02 - Cube: demonstrates how to render an actual 3D object, a cube; shows how to load shaders from files and create and use vertex, index and uniform buffers.
       • Tutorial 03 - Texturing: demonstrates how to apply a texture to a 3D object; shows how to load a texture from a file, create a shader resource binding object, and sample a texture in the shader.
       • Tutorial 04 - Instancing: demonstrates how to use instancing to render multiple copies of one object using a unique transformation matrix for every copy.
       • Tutorial 05 - Texture Array: demonstrates how to combine instancing with texture arrays to use a unique texture for every instance.
       • Tutorial 06 - Multithreading: shows how to generate command lists in parallel from multiple threads.
       • Tutorial 07 - Geometry Shader: shows how to use a geometry shader to render a smooth wireframe.
       • Tutorial 08 - Tessellation: shows how to use hardware tessellation to implement a simple adaptive terrain rendering algorithm.
       • Tutorial 09 - Quads: shows how to render multiple 2D quads, frequently switching textures and blend modes.

     The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface. The Atmospheric Scattering sample is a more advanced example; it demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc. The repository also includes an Asteroids performance benchmark based on the demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

     Integration with Unity

     Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. The GhostCubePlugin sample shows how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.
  14. Hi all, I know that most inverted-texture issues are caused by an inverted coordinate system, but this case may be different. So here are the facts.

     1. I am rendering a 2D sprite using a quad and a texture in orthographic projection. I was just following the code from learnopengl.com here: https://learnopengl.com/In-Practice/2D-Game/Rendering-Sprites Have a look at initRenderData():

     void SpriteRenderer::initRenderData()
     {
         // Configure VAO/VBO
         GLuint VBO;
         GLfloat vertices[] = {
             // Pos      // Tex
             0.0f, 1.0f, 0.0f, 1.0f,
             1.0f, 0.0f, 1.0f, 0.0f,
             0.0f, 0.0f, 0.0f, 0.0f,

             0.0f, 1.0f, 0.0f, 1.0f,
             1.0f, 1.0f, 1.0f, 1.0f,
             1.0f, 0.0f, 1.0f, 0.0f
         };
         ...
     }

     Please take note that this is rendered in orthographic projection, which means the coordinates of this quad are in screen space (top left is 0,0 and bottom right is width,height), and please observe the texture coordinates as well: instead of 0,0 at the bottom left with width,height at the top right, the coordinates are inverted. So, if we are using this orthographic projection, does that mean texture coordinates are inverted as well? Is this assumption correct?

     2. When rendering this quad directly, everything is OK and rendered with the proper orientation:

     void Sprite2D_Draw(int x, int y, int width, int height, GLuint texture)
     {
         // Prepare transformations
         glm::mat4 model = glm::mat4(1.0f); // initialize to identity

         // First translate (transformations are: scale happens first,
         // then rotation, and then finally translation; reversed order)
         model = glm::translate(model, glm::vec3(x, y, 0.0f));
         // Resize to current scale
         model = glm::scale(model, glm::vec3(width, height, 1.0f));

         spriteShader.Use();
         int projmtx = spriteShader.GetUniformLocation("projection");
         spriteShader.SetUniformMatrix(projmtx, projection);
         int modelmtx = spriteShader.GetUniformLocation("model");
         spriteShader.SetUniformMatrix(modelmtx, model);

         int textureID = spriteShader.GetUniformLocation("texture");
         glUniform1i(textureID, 0);
         glActiveTexture(GL_TEXTURE0);
         glBindTexture(GL_TEXTURE_2D, texture);

         glBindVertexArray(_quadVAO);
         glBindBuffer(GL_ARRAY_BUFFER, _VBO);
         glEnableVertexAttribArray(0);
         glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (GLvoid*)0);
         glDrawArrays(GL_TRIANGLES, 0, 6);
         glBindBuffer(GL_ARRAY_BUFFER, 0);
     }

     Draw call:

     glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
     glClear(GL_COLOR_BUFFER_BIT);
     // display to main screen
     Sprite2D_Draw(0, 0, 500, 376, _spriteTexture);

     BUT when rendering that image into a frame buffer, the result is inverted! Here is the code:

     #if 1
     frameBuffer.Bind();
     glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
     glClear(GL_COLOR_BUFFER_BIT);

     // ... display backbuffer contents here
     // draw sprite at backbuffer 0,0
     Sprite2D_Draw(0, 0, 1000, 750, _spriteTexture);

     // Restore frame buffer
     frameBuffer.Unbind(SCR_WIDTH, SCR_HEIGHT);
     #endif

     glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
     glClear(GL_COLOR_BUFFER_BIT);
     // display to main screen
     Sprite2D_Draw(0, 0, 500, 376, frameBuffer.GetColorTexture());

     Why? For reference, here is the FBO code:

     int FBO::Initialize(int width, int height, bool createDepthStencil)
     {
         int error = 0;
         _width = width;
         _height = height;

         // 1. create frame buffer
         glGenFramebuffers(1, &_frameBuffer);
         glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);

         // 2. create a blank texture which will contain the RGB output of our shader.
         // data is set to NULL
         glGenTextures(1, &_texColorBuffer);
         glBindTexture(GL_TEXTURE_2D, _texColorBuffer);
         glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _width, _height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
         error = glGetError();

         glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
         glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

         // 3. attach our texture to the frame buffer; note that our custom frame
         // buffer is already active from glBindFramebuffer
         glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _texColorBuffer, 0);
         error = glGetError();

         // 4. create the depth and stencil buffer too (this is optional)
         if (createDepthStencil)
         {
             GLuint rboDepthStencil;
             glGenRenderbuffers(1, &rboDepthStencil);
             glBindRenderbuffer(GL_RENDERBUFFER, rboDepthStencil);
             glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, _width, _height);
         }
         error = glGetError();

         // Set the list of draw buffers. this is not needed?
         //GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
         //glDrawBuffers(1, DrawBuffers); // "1" is the size of DrawBuffers
         error = glGetError();

         if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
         {
             return -1;
         }

         // Restore frame buffer
         glBindFramebuffer(GL_FRAMEBUFFER, 0);
         return glGetError();
     }

     void FBO::Bind()
     {
         // Render to our frame buffer
         glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
         glViewport(0, 0, _width, _height); // use the entire texture; this means
         // the dimensions set here are our total display area
     }

     void FBO::Unbind(int width, int height)
     {
         glBindFramebuffer(GL_FRAMEBUFFER, 0);
         glViewport(0, 0, width, height);
     }

     int FBO::GetColorTexture()
     {
         return _texColorBuffer;
     }

     This is how it was initialized:

     FBO frameBuffer;
     frameBuffer.Initialize(1024, 1080);

     spriteShader.Init();
     spriteShader.LoadVertexShaderFromFile("./shaders/basic_vshader.txt");
     spriteShader.LoadFragmentShaderFromFile("./shaders/basic_fshader.txt");
     spriteShader.Build();

     hayleyTex = LoadTexture("./textures/smiley.jpg");
     Sprite2D_InitData();

     Does anyone know what is going on here?
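     For what it's worth, the usual explanation is that OpenGL texture coordinates have their origin at the bottom-left: a texture rendered through a top-left orthographic projection therefore reads back flipped when drawn with the same quad. One hedged fix is a second quad with V flipped, used only when drawing FBO color textures; a sketch based on the vertex layout above:

     GLfloat fboQuadVertices[] = {
         // Pos        // Tex (V flipped relative to the normal sprite quad)
         0.0f, 1.0f,   0.0f, 0.0f,
         1.0f, 0.0f,   1.0f, 1.0f,
         0.0f, 0.0f,   0.0f, 1.0f,

         0.0f, 1.0f,   0.0f, 0.0f,
         1.0f, 1.0f,   1.0f, 0.0f,
         1.0f, 0.0f,   1.0f, 1.0f
     };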
  15. Hello there, I have tried following various OpenGL tutorials and I'm now at a point where I can render multiple 2D sprites with textures. For that I have a sprite class. Header:

     #ifndef SPRITE_H
     #define SPRITE_H

     #include <GL/glew.h>
     #include "Shader.h"
     #include <glm/glm.hpp>
     #include <glm/gtc/matrix_transform.hpp>
     #include <glm/gtc/type_ptr.hpp>
     #include "Texture.h"
     #include <stb_image.h>
     #include "Camera.h"

     class Sprite {
     public:
         Sprite(GLfloat x, GLfloat y, GLfloat width, GLfloat height, Shader& shader, Texture *texture);
         ~Sprite();

         void draw(Camera &camera);
         void setPosition(float x, float y, float z);
         void move(float x, float y, float z);
         void setTexture(Texture *texture);
         Texture getTexture();

         float x, y, width, height;

     private:
         void init(GLfloat x, GLfloat y, GLfloat width, GLfloat height, Shader& shader, Texture *texture);

         GLuint VBO = 0, VAO = 0, EBO = 0;
         GLint transformShaderLocation, viewShaderLocation, projectionShaderLocation;
         Shader* shader;
         glm::mat4 transform, projection, view;
         Texture *texture;
     };

     #endif

     Code:

     #include "Sprite.h"

     Sprite::Sprite(GLfloat x, GLfloat y, GLfloat width, GLfloat height, Shader& shader, Texture *texture) {
         init(x, y, width, height, shader, texture);
     }

     void Sprite::init(GLfloat x, GLfloat y, GLfloat width, GLfloat height, Shader& shader, Texture *texture) {
         this->shader = &shader;
         this->x = x;
         this->y = y;
         this->width = width;
         this->height = height;

         GLfloat vertices[] = {
              width / 2,  height / 2, 0.0f, /* Top Right    */ 1.0f, 1.0f,
              width / 2, -height / 2, 0.0f, /* Bottom Right */ 1.0f, 0.0f,
             -width / 2, -height / 2, 0.0f, /* Bottom Left  */ 0.0f, 0.0f,
             -width / 2,  height / 2, 0.0f, /* Top Left     */ 0.0f, 1.0f
         };
         GLuint indices[] = {
             0, 1, 3, // 1
             1, 2, 3  // 2
         };

         glGenVertexArrays(1, &VAO);
         glGenBuffers(1, &VBO);
         glGenBuffers(1, &EBO);

         glBindVertexArray(VAO);
         glBindBuffer(GL_ARRAY_BUFFER, VBO);
         glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
         glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
         glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

         // Position
         glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)0);
         glEnableVertexAttribArray(0);
         // TexCoord
         glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
         glEnableVertexAttribArray(1);

         glBindBuffer(GL_ARRAY_BUFFER, 0);
         glBindVertexArray(0);

         transformShaderLocation = glGetUniformLocation(shader.program, "transform");
         viewShaderLocation = glGetUniformLocation(shader.program, "view");
         projectionShaderLocation = glGetUniformLocation(shader.program, "projection");

         transform = glm::translate(transform, glm::vec3(x, y, 0));
         this->texture = texture;
     }

     Sprite::~Sprite() {
         // DELETE BUFFERS
         glDeleteBuffers(1, &VBO);
         glDeleteBuffers(1, &EBO);
         glDeleteBuffers(1, &VAO);
         delete texture;
     }

     void Sprite::draw(Camera &camera) {
         shader->Use();
         glBindTexture(GL_TEXTURE_2D, texture->texture);

         view = camera.getView();
         projection = camera.getProjection();

         // Pass to shaders
         glUniformMatrix4fv(transformShaderLocation, 1, GL_FALSE, glm::value_ptr(transform));
         glUniformMatrix4fv(viewShaderLocation, 1, GL_FALSE, glm::value_ptr(view));
         // Note: currently we set the projection matrix each frame, but since the
         // projection matrix rarely changes it's often best practice to set it
         // outside the main loop only once.
         glUniformMatrix4fv(projectionShaderLocation, 1, GL_FALSE, glm::value_ptr(projection));

         glBindVertexArray(VAO);
         glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
         glBindVertexArray(0);
     }

     void Sprite::setPosition(float x, float y, float z) {
         // Z?
         transform = glm::translate(transform, glm::vec3(x - this->x, y - this->y, z));
         this->x = x;
         this->y = y;
     }

     void Sprite::move(float x, float y, float z) {
         transform = glm::translate(transform, glm::vec3(x, y, z));
         this->x += x;
         this->y += y;
     }

     void Sprite::setTexture(Texture *texture) {
         delete this->texture;
         this->texture = texture;
     }

     Texture Sprite::getTexture() {
         return *texture;
     }

     When I want to draw something, I create an instance of the sprite class with its own Texture and use sprite->draw() in the draw loop for each sprite. This works perfectly fine. To improve performance, I now want to create a spritebatch. As far as I understand, it puts all the sprites together so it can send them all at once to the GPU. I had no clue how to get started, so I just created a spritebatch class that puts all the vertices and indices into one object every time draw() is called, and only actually draws when flush() is called. Here's the header file:

     #ifndef SPRITEBATCH_H
     #define SPRITEBATCH_H

     #include <glm/glm.hpp>
     #include <glm/gtc/type_ptr.hpp>
     #include "Texture.h"
     #include <GL/glew.h>
     #include "Camera.h"
     #include "Shader.h"
     #include <vector>
     #include <iostream>

     class SpriteBatch {
     public:
         SpriteBatch(Shader& shader, Camera &camera);
         ~SpriteBatch();

         void draw(Texture *texture, GLfloat x, GLfloat y, GLfloat width, GLfloat height);
         void flush();

     private:
         GLfloat vertices[800];
         GLuint indices[800];
         int index{ 0 };
         int indicesIndex{ 0 };
         GLuint VBO = 0, VAO = 0, EBO = 0;
         GLint transformShaderLocation, viewShaderLocation, projectionShaderLocation;
         Shader *shader;
         Camera *camera;
         std::vector<Texture*>* textures;
         glm::mat4 transform, projection, view;
     };

     #endif

     And the class. I added some comments here:

     #include "SpriteBatch.h"

     SpriteBatch::SpriteBatch(Shader& shader, Camera &camera) {
         this->shader = &shader;
         this->camera = &camera;
         textures = new std::vector<Texture*>();
     }

     SpriteBatch::~SpriteBatch() {
         glDeleteBuffers(1, &VBO);
         glDeleteBuffers(1, &EBO);
         glDeleteBuffers(1, &VAO);
         //delete texture;
     }

     void SpriteBatch::draw(Texture *texture, GLfloat x, GLfloat y, GLfloat width, GLfloat height) {
         textures->push_back(texture);

         vertices[index]      =  width / 2;
         vertices[index + 1]  =  height / 2;
         vertices[index + 2]  =  0.0f;
         vertices[index + 3]  =  1.0f;
         vertices[index + 4]  =  1.0f;

         vertices[index + 5]  =  width / 2;
         vertices[index + 6]  = -height / 2;
         vertices[index + 7]  =  0.0f;
         vertices[index + 8]  =  1.0f;
         vertices[index + 9]  =  0.0f;

         vertices[index + 10] = -width / 2;
         vertices[index + 11] = -height / 2;
         vertices[index + 12] =  0.0f;
         vertices[index + 13] =  0.0f;
         vertices[index + 14] =  0.0f;

         vertices[index + 15] = -width / 2;
         vertices[index + 16] =  height / 2;
         vertices[index + 17] =  0.0f;
         vertices[index + 18] =  0.0f;
         vertices[index + 19] =  1.0f;

         index += 20;

         indices[indicesIndex]     = 0;
         indices[indicesIndex + 1] = 1;
         indices[indicesIndex + 2] = 3;
         indices[indicesIndex + 3] = 1;
         indices[indicesIndex + 4] = 2;
         indices[indicesIndex + 5] = 3;

         indicesIndex += 6;
     }

     void SpriteBatch::flush() {
         if (index == 0) return; // ensures that there are sprites added

         // Debug information. This works perfectly
         int spritesInBatch = index / 20;
         std::cout << spritesInBatch << " I : " << index << std::endl;
         int drawn = 0;

         // Create buffers
         glGenVertexArrays(1, &VAO);
         glGenBuffers(1, &VBO);
         glGenBuffers(1, &EBO);

         glBindVertexArray(VAO);

         // Bind vertices
         glBindBuffer(GL_ARRAY_BUFFER, VBO);
         glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

         // Bind indices
         glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
         glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

         // Position
         glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)0);
         glEnableVertexAttribArray(0);
         // TexCoord
         glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
         glEnableVertexAttribArray(1);

         // VAO
         glBindBuffer(GL_ARRAY_BUFFER, 0);
         glBindVertexArray(0);

         // Shader locations
         transformShaderLocation = glGetUniformLocation(shader->program, "transform");
         viewShaderLocation = glGetUniformLocation(shader->program, "view");
         projectionShaderLocation = glGetUniformLocation(shader->program, "projection");

         // Draw
         // So this sets the texture for each sprite and draws it afterwards with
         // the right texture. At least that's how it should work.
         for (int i = 0; i < spritesInBatch; i++) {
             Texture *tex = textures->at(i);

             shader->Use();
             glBindTexture(GL_TEXTURE_2D, tex->texture); //?

             view = camera->getView();
             projection = camera->getProjection();

             // Pass them to the shaders
             glUniformMatrix4fv(transformShaderLocation, 1, GL_FALSE, glm::value_ptr(transform));
             glUniformMatrix4fv(viewShaderLocation, 1, GL_FALSE, glm::value_ptr(view));
             glUniformMatrix4fv(projectionShaderLocation, 1, GL_FALSE, glm::value_ptr(projection));

             // Draw VAO
             glBindVertexArray(VAO);
             glDrawElements(GL_TRIANGLES, indicesIndex, GL_UNSIGNED_INT, 0);
             glBindVertexArray(0);
         }

         // Sets index 0 to welcome new sprites
         index = 0;
     }

     It also puts the textures into a list. The code to draw two sprites is this:

     spriteBatch->draw(&_sprite1->getTexture(), _sprite1->x, _sprite1->y, _sprite1->width, _sprite1->height);
     spriteBatch->draw(&_sprite1->getTexture(), _sprite1->x + 10, _sprite1->y + 10, _sprite1->width * 2, _sprite1->height);
     spriteBatch->flush();

     but I only get one small black rectangle in the bottom left corner. It works perfectly when I draw the sprites without the spritebatch:

     _sprite1->draw(*camera);
     _sprite2->draw(*camera);

     I think I messed up in the flush() method, but I have no clue how to implement this. I'd be grateful if someone could help me with it. Thank you!
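     Two likely culprits, offered as a hedged sketch rather than a definitive fix: every quad writes the same index values (0, 1, 2, 3) instead of offsetting by 4 per sprite, and the per-sprite x/y is never baked into the vertices (one shared transform is used for all sprites). Something along these lines in draw(), computed before index is advanced:

     // bake the sprite position into the vertices (quad centered at x, y)
     vertices[index]     = x + width / 2;
     vertices[index + 1] = y + height / 2;
     // ... likewise for the other three corners ...

     // offset this quad's indices by 4 for each sprite already in the batch
     GLuint base = (GLuint)(index / 20) * 4;   // 5 floats per vertex, 4 vertices per sprite
     indices[indicesIndex]     = base + 0;
     indices[indicesIndex + 1] = base + 1;
     indices[indicesIndex + 2] = base + 3;
     indices[indicesIndex + 3] = base + 1;
     indices[indicesIndex + 4] = base + 2;
     indices[indicesIndex + 5] = base + 3;

     Then, in flush(), each sprite should draw only its own six indices instead of the whole index buffer once per texture, e.g. glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, (GLvoid*)(i * 6 * sizeof(GLuint))).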
  16. Hi all, I am trying to implement a textbox with a scroll bar, where you can display as much text as you want and just use the scroll bar to see the rest of the content, the same way a listbox works, etc. I implemented this in 2D SDL by displaying the messages in an extra framebuffer/texture and just BitBlt-ing a portion of it to the main screen depending on the scroll offset. I am now porting my 2D SDL code to straight OpenGL ES 2.0 by creating an extra framebuffer (FBO) and rendering to a texture. My question is how to select only a portion of that texture to be rendered in OpenGL ES 2.0 (in other words, how can BitBlt be implemented in OpenGL ES 2.0)? I was thinking of using scissors, but I'm not sure if this is the right solution. Also, I am using OpenGL ES 2.0 (mobile), so not all libraries from desktop OpenGL are available. In summary: 1. How to do BitBlt in OpenGL ES 2.0 for textures rendered in orthographic projection (2D)?
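     One approach that works in plain GLES2, offered as a hedged sketch (srcX/srcY/srcW/srcH and texW/texH are illustrative names, in pixels): draw a quad of the destination size and remap its texture coordinates to the source sub-rectangle.

     // source sub-rectangle -> normalized texture coordinates
     GLfloat u0 = srcX / (GLfloat)texW;
     GLfloat v0 = srcY / (GLfloat)texH;
     GLfloat u1 = (srcX + srcW) / (GLfloat)texW;
     GLfloat v1 = (srcY + srcH) / (GLfloat)texH;

     GLfloat texCoords[] = {
         u0, v0,
         u1, v0,
         u0, v1,
         u1, v1
     };
     // upload as the quad's texcoord attribute and draw with GL_TRIANGLE_STRIP;
     // the quad's positions define the destination rectangle, like BitBlt's dest.
     // Scrolling is then just animating srcY.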
  17. This is a beginner question in a way, but it does involve computer graphics. I am having difficulty setting up GLAD for OpenGL C++. I can find no documentation for the process of filling in the web-service generator; I don't know what needs to be entered where. The source code I am looking at is https://learnopengl.com/code_viewer_gh.php?code=src/1.getting_started/6.3.coordinate_systems_multiple/coordinate_systems_multiple.cpp. The glad generator is found here: https://glad.dav1d.de/. For example, is "API / gl" the OpenGL OS? I am trying to understand how everyone did this without any instructions! Could I have complete help with all eleven UI fields, please? Thank you so much; I'm having a rewarding time with OpenGL, Josheir
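     For the learnopengl.com samples, the generator settings usually reported to work are roughly the following (offered from memory, so treat this as a starting point rather than gospel):

       • Language: C/C++
       • Specification: OpenGL
       • API / gl: Version 3.3 (or higher) - this is the GL version, not the operating system
       • Profile: Core
       • Extensions: can be left empty
       • Options: "Generate a loader" checked

     Then add the generated glad.c to the project and glad's include directory to the include path.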
  18. I am currently trying to draw an isometric map in batches; these batches are meshes assembled from map data. For each wall and floor I create a quad and add it to the chunk's mesh. Because I use alpha blending for a lot of objects, I generate these meshes in the order they should be drawn, and I draw the combined chunk meshes back to front and bottom to top across multiple height levels. This all works great, and I can draw a huge part of my map while maintaining my target of 60 FPS. The meshes and objects are aligned to the normal axes, and I just rotate an orthographic camera to get its isometric projection matrix.

Now comes the tricky part: the dynamic objects have transparency as well; they are also just quads with a transparent texture. If I draw a mesh later in the sequence but it sits behind a transparent object in the world, that mesh won't be visible through the transparent parts of the object in front of it. I guess this is because the object behind does not exist in the framebuffer at the moment the front object is drawn, so nothing is rendered behind its transparent pixels. If there is an obvious, not-too-expensive solution for this issue, I am saved. I need to draw moving transparent meshes in between the walls and objects belonging to the chunk mesh, and I don't know OpenGL well enough to know whether there is a trick for this. I can think of a few unwanted options:

- Adding the dynamic objects to the chunks in the right draw order. This does not seem like a proper solution, since it means rearranging the whole mesh every time something moves.
- Dumping the whole chunk idea, drawing each object individually, and dealing with the loss of frames in other areas.
- Making the dynamic objects fully 3D instead of a quad with a texture. Then I could just draw them before the chunk and depth sorting should handle it. However, I could not use any transparency on these objects, which is a severe limitation, and I wanted to avoid going "full 3D". Besides that, I might want to add 2D particle effects at a much later stage, so I'd be a much happier man if I could sort the drawing out instead.
- Not combining the transparent objects into the chunk's mesh, and drawing all of them later, together with the dynamic meshes and properly sorted.

The last option seems best for now, but it feels hacky and error-prone. This is a whole other question, but if it is a viable solution: are there good, proven ways to add and remove meshes/vertex data/indices from a mesh while keeping the vertex data and indices properly sorted? I also need to add meshes in the proper draw order, as I explained earlier. I guess I need to keep this data somewhere when I create my chunk meshes and look it up later. Anyway, a proper solution (magic trick) to get the draw order and transparency correct in the first place would be awesome.

I am using libGDX, by the way, and here is some code I use for drawing:

Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
Gdx.gl.glEnable(GL20.GL_BLEND);
// I tried a lot of different combinations of blend functions, but as always this comes closest.
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);

// Very basic shader, just taking in position and UV coordinates.
// If the shader can do anything for me in regard to my problem, I'd really like to know.
shader.begin();
shader.setUniformMatrix("u_worldView", cam.combined);

// If I draw the player before the chunk, the transparent pixels of the player do not show the chunk.
player.getMesh().render(shader, GL20.GL_TRIANGLES);

// Drawing the chunk mesh, combined from the floors, walls and objects of each tile within the chunk.
chunk.getCombinedMesh().render(shader, GL20.GL_TRIANGLES);

// If I draw the player after the chunk, the player is not drawn where it is behind a transparent mesh like a window or a tree.
player.getMesh().render(shader, GL20.GL_TRIANGLES);

shader.end();

So is this draw-order problem really that complicated or expensive to solve in OpenGL, or have I been looking in the wrong places for a solution?
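A sketch of the commonly used fix, close to the fourth option above: render all opaque geometry first with depth writes enabled, then render every transparent quad (chunk quads and dynamic objects together) back to front with the depth test on but depth writes off, so nearer transparent pixels blend over everything already drawn. This is written in plain C++/OpenGL rather than libGDX, and Quad, viewDepth, and drawQuad are hypothetical placeholders, not real API:

#include <algorithm>
#include <vector>
#include <glad/glad.h>

struct Quad {
    float viewDepth;   // distance along the camera's view direction
    // ... vertex data, texture handle, etc.
};

void drawQuad(const Quad& q);  // hypothetical: issues the draw for one quad

void drawTransparentPass(std::vector<Quad>& quads) {
    // Farthest first, so closer transparent pixels blend over farther ones.
    std::sort(quads.begin(), quads.end(),
              [](const Quad& a, const Quad& b) { return a.viewDepth > b.viewDepth; });

    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);   // still test against opaque depth, but don't write
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    for (const Quad& q : quads)
        drawQuad(q);
    glDepthMask(GL_TRUE);    // restore depth writes for the next frame
}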
  19. Hi all. Recently I've been working (as a hobbyist) on volumetric cloud rendering (something we've discussed in the other topic). I have reached a visually satisfying result, but performance is quite slow: about 10-15 fps on a GTX 960 at 900p, and I am trying to improve it. At the moment the only optimization is the early exit on low transmittance (high alphaness). I wanted to implement an early exit on high transmittance as well (for example, when the cloud coverage is low, performance drops a lot due to the absence of an early exit). This requires keeping the last frame in memory and checking whether the transmittance was already high at the corresponding point in the previous frame (by finding the correct UV coordinates). I'm having trouble finding those UV coordinates. Currently I'm doing it this way:

vec3 computeClipSpaceCoord() {
    vec2 ray_nds = 2.0 * gl_FragCoord.xy / iResolution.xy - 1.0;
    return vec3(ray_nds, -1.0);
}

vec2 computeScreenPos(vec2 ndc) {
    return ndc * 0.5 + 0.5;
}

// For picking the previous frame's color:
vec4 ray_clip = vec4(computeClipSpaceCoord(), 1.0);
vec4 camToWorldPos = invViewProj * ray_clip;
camToWorldPos /= camToWorldPos.w;
vec4 pPrime = oldFrameVP * camToWorldPos;
pPrime /= pPrime.w;
vec2 prevFrameScreenPos = computeScreenPos(pPrime.xy);

I then use prevFrameScreenPos to sample the last-frame texture. Now, this (obviously) doesn't work. My feeling is that the issue is that I set the z coordinate in computeClipSpaceCoord() to -1.0, ignoring the depth of the fragment. But how can I determine the depth of a fragment in volume rendering, when everything is done by raymarching in the fragment shader? It seems to be the key to temporal reprojection, and I wasn't able to find anything about it. Do you have any resource/advice on implementing this? Thank you all for your help.
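A hedged sketch of one way to attack this, under the assumption that the raymarcher can record the distance t along the view ray at which its sample was accumulated (the first dense sample, or a transmittance-weighted average). The math is written here in C++/GLM but transfers line for line to GLSL; camPos, rayDir, and oldViewProj are stand-ins for the shader's inputs:

#include <glm/glm.hpp>

glm::vec2 previousFrameUV(glm::vec3 camPos, glm::vec3 rayDir, float t,
                          const glm::mat4& oldViewProj)
{
    // Reconstruct the marched world position instead of a fixed z = -1.0 plane.
    glm::vec3 worldPos = camPos + rayDir * t;
    glm::vec4 clip = oldViewProj * glm::vec4(worldPos, 1.0f);
    glm::vec2 ndc  = glm::vec2(clip) / clip.w;   // perspective divide
    return ndc * 0.5f + 0.5f;                    // NDC -> [0,1] texture coords
}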
  20. Reading this post, a doubt stays in my mind: Doom 2016 just upgraded from OpenGL 4.5 to 4.6; is there any difference in performance? For example, if I create an OpenGL 3.3 context window (I'm following the learnopengl tutorials), render just a rotating triangle, and get 1000 fps, then change the context to 4.0 (without using shaders or any 3.4-4.0 features), will I get fewer fps (even just one less, i.e. 999)? Does the same apply to a 4.5 versus 4.6 window/game? Edit: I'm asking about CPU/GPU usage as well as fps.
  21. I know this is a noob question, but between OpenGL 2.0 and OpenGL ES 2.0, which one performs better on desktop and/or mobile devices? I have read somewhere that OpenGL performance depends on the code, but since we can compare performance across OpenGL versions in some games, I don't know. Which of the two uses less CPU and GPU, i.e. which performs better? Thanks
  22. I have a number of line strips that I need to draw and I was hoping I could group them all together into a single draw call in OpenGL. Is there a way to do this? With triangle strips, I can add degenerate triangles between each strip and have them all drawn together in one draw call. Is there a trick to do the same thing with line strips?
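There is such a trick: primitive restart, core since OpenGL 3.1. With indexed drawing, a reserved index value ends the current strip and starts a new one, so many line strips fit in a single glDrawElements call. A minimal sketch, assuming a VAO and vertex buffer are already set up (vao and ebo are placeholder handles):

#include <glad/glad.h>
#include <cstdint>
#include <vector>

const std::uint32_t RESTART = 0xFFFFFFFFu;  // reserved value, never a real vertex index

void drawManyStrips(GLuint vao, GLuint ebo) {
    // Two strips (0-1-2 and 3-4-5-6) separated by the restart marker.
    std::vector<std::uint32_t> indices = { 0, 1, 2, RESTART, 3, 4, 5, 6 };

    glBindVertexArray(vao);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                 indices.size() * sizeof(std::uint32_t),
                 indices.data(), GL_STATIC_DRAW);

    glEnable(GL_PRIMITIVE_RESTART);
    glPrimitiveRestartIndex(RESTART);
    // One draw call; a new line strip begins after every RESTART index.
    glDrawElements(GL_LINE_STRIP, static_cast<GLsizei>(indices.size()),
                   GL_UNSIGNED_INT, nullptr);
    glDisable(GL_PRIMITIVE_RESTART);
}

glMultiDrawArrays with GL_LINE_STRIP is an alternative that batches the strips without an index buffer, though it behaves more like several draws submitted at once than one merged draw.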
  23. I'm trying to implement SSAO. I've read a few tutorials, and they use view-space calculations: you need a texture with the view-space position of each fragment, and so on. But what are the numbers that define view space; are they in the range 0..1? When I multiply a vertex by the view matrix, do I use the matrix's 4th row/component in the calculation too? Additionally, I'd like to ask about the vertex's 4th component: I recall that 1.0 was for positions and 0.0 was for vectors, right? So if I want to compute a view-space normal, I don't use the 4th matrix row/component at all? So I decided to reinvent the wheel (......): forget about view space, perspective divides, and random kernel vectors, and just sample all pixels around each pixel and compare their depths to find an occlusion value. I came up with that ugly result (see attachment); maybe I could improve it. If not, I would still rather avoid the view-space approach, since I have no intuition for that kind of thing, and use world space instead. But maybe there's a way of doing SSAO without rotating kernel samples; I would like to avoid uploading tangent information to the GPU buffer and calculating a rotation matrix for a world-space solution.
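On the w-component question specifically, a small C++/GLM sketch (mirroring GLSL). View space is not a 0..1 range; it is an ordinary coordinate system in world units with the camera at the origin. w = 1.0 makes the matrix's translation apply (points), w = 0.0 suppresses it (directions), and normals are usually transformed with the inverse-transpose to survive non-uniform scaling:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp>  // glm::inverseTranspose

glm::vec3 toViewSpacePoint(const glm::mat4& view, glm::vec3 p) {
    // w = 1: the translation column participates.
    return glm::vec3(view * glm::vec4(p, 1.0f));
}

glm::vec3 toViewSpaceDirection(const glm::mat4& view, glm::vec3 d) {
    // w = 0: translation is ignored; only rotation/scale act on the vector.
    return glm::vec3(view * glm::vec4(d, 0.0f));
}

glm::vec3 toViewSpaceNormal(const glm::mat4& view, glm::vec3 n) {
    // Inverse-transpose keeps normals perpendicular under non-uniform scale;
    // for a pure rotation+translation view matrix it reduces to the w = 0 case.
    return glm::normalize(glm::inverseTranspose(glm::mat3(view)) * n);
}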
  24. Hello. If I call glBufferStorage only once, everything works just fine. I want to recreate a buffer by calling glBufferStorage a second time (and more) when its size is no longer enough, but this second call generates a GL_INVALID_OPERATION error. After that, glMapBufferRange returns nullptr, and that's it. Has anyone had a similar problem before? This is how I create/recreate the buffer:

const auto vertex_buffer_size = CYCLES * sizeof(Vertex) * VERTICES_PER_QUAD * m_total_text_length;
GLint current_vertices_size;
glGetBufferParameteriv(GL_ARRAY_BUFFER, GL_BUFFER_SIZE, &current_vertices_size);
if (vertex_buffer_size > current_vertices_size)
{
    if (m_syncs[m_buffer_id] != 0)
    {
        glClientWaitSync(m_syncs[m_buffer_id], GL_SYNC_FLUSH_COMMANDS_BIT, -1);
        glDeleteSync(m_syncs[m_buffer_id]);
    }
    glUnmapBuffer(GL_ARRAY_BUFFER);
    GLuint error = glGetError();
    glBufferStorage(GL_ARRAY_BUFFER, vertex_buffer_size, 0,
                    GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
    GLuint error2 = glGetError();
    m_vertices = static_cast<Vertex*>(glMapBufferRange(GL_ARRAY_BUFFER, 0, vertex_buffer_size,
                    GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT));
    m_buffer_id = 0;
    for (auto& sync : m_syncs)
    {
        glDeleteSync(sync);
        sync = 0;
    }
}
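A hedged note on why this matches documented behavior rather than a driver bug: glBufferStorage creates immutable storage, and a second glBufferStorage call on the same buffer object raises GL_INVALID_OPERATION. Growing the buffer therefore means replacing the buffer object itself. A sketch, with m_vbo standing in for the buffer handle used above:

// Immutable storage can only be specified once per buffer object,
// so recreate the object rather than re-calling glBufferStorage on it.
glUnmapBuffer(GL_ARRAY_BUFFER);
glDeleteBuffers(1, &m_vbo);

glGenBuffers(1, &m_vbo);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBufferStorage(GL_ARRAY_BUFFER, vertex_buffer_size, nullptr,
                GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
m_vertices = static_cast<Vertex*>(glMapBufferRange(
    GL_ARRAY_BUFFER, 0, vertex_buffer_size,
    GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT));

// Any VAO attribute pointers that referenced the old buffer must be
// re-specified, since the new object has fresh storage.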
  25. I wasted a day on this function and got nothing. The only example I found on GitHub is https://github.com/multiprecision/sph_opengl. It does work, but it lacks an example of updating a uniform buffer. When I have a shader like this:

#version 460

uniform UniformBufferObject {
    mat4 model;
    mat4 view;
    mat4 proj;
} ubo;

layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoord;
layout (location = 0) out vec2 TexCoord;

out gl_PerVertex {
    vec4 gl_Position;
};

void main() {
    gl_Position = ubo.proj * ubo.view * ubo.model * vec4(aPos, 1.0f);
    TexCoord = vec2(aTexCoord.x, aTexCoord.y);
}

I cannot pass the model, view, and proj matrices to the shader correctly from C++. The old #version 330 shader from LearnOpenGL works as a non-binary shader (glShaderSource from text), but I really want to try glShaderBinary:

#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoord;
out vec2 TexCoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main() {
    gl_Position = projection * view * model * vec4(aPos, 1.0f);
    TexCoord = vec2(aTexCoord.x, aTexCoord.y);
}

This shader format does not compile with glslangValidator (https://github.com/KhronosGroup/glslang), so I have to use the 460 version, and I don't know how to pass the matrices to it correctly. I tried a uniform buffer with map/unmap, but it doesn't work:

glBindBuffer(GL_UNIFORM_BUFFER, UBO);
GLvoid* p = glMapBuffer(GL_UNIFORM_BUFFER, GL_WRITE_ONLY);
memcpy(p, &uboVS, sizeof(uboVS));
glUnmapBuffer(GL_UNIFORM_BUFFER);

And glGetUniformLocation always returns -1, no matter what name I use:

glGetUniformLocation(xxx, "ubo")
glGetUniformLocation(xxx, "UniformBufferObject")
glGetUniformLocation(xxx, "model")

All fail. If I change

gl_Position = ubo.proj * ubo.view * ubo.model * vec4(aPos, 1.0f);

to

gl_Position = vec4(aPos, 1.0f);

then the shader works, which means the matrices are not being passed to the shader correctly; I think they are all zero. So does anybody know how to update a uniform buffer when using glShaderBinary with glslangValidator on OpenGL? I am not sure the 460 shader is even correct; it merely passes glslangValidator's compilation.
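A hedged diagnosis, not a confirmed answer: a uniform block is not a plain uniform, so glGetUniformLocation is the wrong query for it even in text-based GLSL, and with SPIR-V modules name-based lookups are unreliable anyway because names may be stripped. The usual route is to give the block an explicit binding in the shader, e.g. layout(std140, binding = 0) uniform UniformBufferObject { ... };, and attach the buffer to that binding point from C++. A sketch, with the uboVS struct standing in for whatever holds the matrices:

#include <glad/glad.h>
#include <glm/glm.hpp>

struct { glm::mat4 model, view, proj; } uboVS;  // std140-compatible: mat4 members pack tightly

GLuint createAndBindUBO() {
    GLuint ubo = 0;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(uboVS), nullptr, GL_DYNAMIC_DRAW);
    // Attach to binding point 0, matching layout(std140, binding = 0) in the shader.
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
    return ubo;
}

void updateUBO(GLuint ubo) {
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(uboVS), &uboVS);
}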