Showing results for tags 'OpenGL' in content posted in Graphics and GPU Programming.

Found 2191 results

  1. I know this is a noob question, but between OpenGL 2.0 and OpenGL ES 2.0, which one gets better performance on desktop and/or mobile devices? I have read somewhere that OpenGL performance depends on the code, but we can also compare the performance of different OpenGL versions across games, so I don't know. Which of the two uses less CPU and GPU, i.e. gets better performance? Thanks
  2. I have a number of line strips that I need to draw and I was hoping I could group them all together into a single draw call in OpenGL. Is there a way to do this? With triangle strips, I can add degenerate triangles between each strip and have them all drawn together in one draw call. Is there a trick to do the same thing with line strips?
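
One direction that looks promising (an untested sketch on my part, assuming a core 3.1+ context): primitive restart lets a single indexed draw call contain several line strips, with a reserved index acting as the cut between strips, much like the degenerate triangles do for triangle strips. Here ibo stands in for an existing element buffer object:

    // Batching several line strips into one indexed draw call using
    // primitive restart (core since OpenGL 3.1). Untested sketch.
    const GLuint RESTART = 0xFFFFFFFF;

    glEnable(GL_PRIMITIVE_RESTART);
    glPrimitiveRestartIndex(RESTART);

    // Indices for two strips: 0-1-2, then a restart, then 3-4-5-6.
    const GLuint indices[] = { 0, 1, 2, RESTART, 3, 4, 5, 6 };
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    // One draw call renders both strips; RESTART ends the current strip.
    glDrawElements(GL_LINE_STRIP, 8, GL_UNSIGNED_INT, nullptr);
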
  3. I'm trying to implement SSAO. I've read a few tutorials, and they all use view-space calculations: you need a texture with the view-space position of each fragment, and so on. But what are the numbers that define view space; are they in the range 0..1? When I multiply a vertex by the view matrix, do I use the matrix's 4th row in the calculation too? Additionally, I'd like to ask about the meaning of a vertex's 4th component: as I recall, w = 1.0 is for positions and w = 0.0 is for directions, right? So if I want to calculate a view-space normal, I don't use the matrix's 4th row at all? So I decided to reinvent the wheel (......): forget view space, perspective division and kernels of random vectors; just sample all pixels around each pixel and compare their depths to find an occlusion value. I came up with the ugly result in the attachment; maybe I can improve it. If not, I'd rather avoid the view-space approach, since I have no intuition for that kind of thing, and use world space instead. But is there perhaps a way of doing SSAO without rotating the kernel samples? I'd like to avoid sending tangent information to a GPU buffer and calculating a rotation matrix for a world-space solution.
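
For what it's worth, this is what I understand the w component to mean in the view-space transform (a minimal glm sketch of my own understanding, not from any tutorial; viewMatrix, worldPos and worldNormal stand in for whatever the renderer provides). Note that view-space positions are in the same units as world space, just expressed relative to the camera; they are not 0..1:

    #include <glm/glm.hpp>

    glm::vec3 toViewSpacePosition(const glm::mat4& viewMatrix, const glm::vec3& worldPos)
    {
        // w = 1.0 marks a position: the matrix's 4th (translation) column applies.
        return glm::vec3(viewMatrix * glm::vec4(worldPos, 1.0f));
    }

    glm::vec3 toViewSpaceNormal(const glm::mat4& viewMatrix, const glm::vec3& worldNormal)
    {
        // w = 0.0 marks a direction: translation drops out, so only the
        // upper-left 3x3 (rotation/scale) part of the matrix is needed.
        return glm::normalize(glm::mat3(viewMatrix) * worldNormal);
    }
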
  4. Hello. For some reason, if I call glBufferStorage only once, everything works just fine. I want to recreate a buffer by calling glBufferStorage a second time (and more) if its size is not enough, but this second call generates a GL_INVALID_OPERATION error. After that, glMapBufferRange returns nullptr, and that's it. Has anyone had a similar problem before? This is how I create/recreate the buffer:

    const auto vertex_buffer_size = CYCLES * sizeof(Vertex) * VERTICES_PER_QUAD * m_total_text_length;
    GLint current_vertices_size;
    glGetBufferParameteriv(GL_ARRAY_BUFFER, GL_BUFFER_SIZE, &current_vertices_size);
    if (vertex_buffer_size > current_vertices_size)
    {
        if (m_syncs[m_buffer_id] != 0)
        {
            glClientWaitSync(m_syncs[m_buffer_id], GL_SYNC_FLUSH_COMMANDS_BIT, -1);
            glDeleteSync(m_syncs[m_buffer_id]);
        }
        glUnmapBuffer(GL_ARRAY_BUFFER);
        GLuint error = glGetError();
        glBufferStorage(GL_ARRAY_BUFFER, vertex_buffer_size, 0,
                        GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
        GLuint error2 = glGetError();
        m_vertices = static_cast<Vertex*>(glMapBufferRange(GL_ARRAY_BUFFER, 0, vertex_buffer_size,
                        GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT));
        m_buffer_id = 0;
        for (auto& sync : m_syncs)
        {
            glDeleteSync(sync);
            sync = 0;
        }
    }
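
One thing I have not ruled out: the documentation says a buffer's data store becomes immutable once glBufferStorage has been called on it, so a second glBufferStorage on the same buffer object is specified to raise GL_INVALID_OPERATION. A sketch of the workaround I'm considering: throw the old buffer object away and create a fresh one (m_vbo is a placeholder name, since my snippet above only binds GL_ARRAY_BUFFER):

    // glBufferStorage makes the data store immutable, so growing the
    // buffer means replacing the buffer object itself. Untested sketch.
    glUnmapBuffer(GL_ARRAY_BUFFER);
    glDeleteBuffers(1, &m_vbo);           // destroy the immutable store

    glGenBuffers(1, &m_vbo);              // brand-new buffer object
    glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
    glBufferStorage(GL_ARRAY_BUFFER, vertex_buffer_size, nullptr,
                    GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
    m_vertices = static_cast<Vertex*>(glMapBufferRange(GL_ARRAY_BUFFER, 0, vertex_buffer_size,
                    GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT));
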
  5. I wasted a day on this function and got nothing. The only example I found on GitHub is this one: https://github.com/multiprecision/sph_opengl. It does work, but it lacks an example of updating a uniform buffer. I have a shader like this:

    #version 460
    uniform UniformBufferObject
    {
        mat4 model;
        mat4 view;
        mat4 proj;
    } ubo;
    layout (location = 0) in vec3 aPos;
    layout (location = 1) in vec2 aTexCoord;
    layout (location = 0) out vec2 TexCoord;
    out gl_PerVertex
    {
        vec4 gl_Position;
    };
    void main()
    {
        gl_Position = ubo.proj * ubo.view * ubo.model * vec4(aPos, 1.0f);
        TexCoord = vec2(aTexCoord.x, aTexCoord.y);
    }

and I cannot pass the model, view and proj matrices to it correctly from C++. The old #version 330 shader from LearnOpenGL works when loaded as text with glShaderSource, but I really want to try glShaderBinary:

    #version 330 core
    layout (location = 0) in vec3 aPos;
    layout (location = 1) in vec2 aTexCoord;
    out vec2 TexCoord;
    uniform mat4 model;
    uniform mat4 view;
    uniform mat4 projection;
    void main()
    {
        gl_Position = projection * view * model * vec4(aPos, 1.0f);
        TexCoord = vec2(aTexCoord.x, aTexCoord.y);
    }

This shader format does not compile with glslangValidator (https://github.com/KhronosGroup/glslang), so I have to use the 460 version, and I don't know how to pass the matrices to it correctly. I tried a uniform buffer with map/unmap, and it doesn't work:

    glBindBuffer(GL_UNIFORM_BUFFER, UBO);
    GLvoid* p = glMapBuffer(GL_UNIFORM_BUFFER, GL_WRITE_ONLY);
    memcpy(p, &uboVS, sizeof(uboVS));
    glUnmapBuffer(GL_UNIFORM_BUFFER);

and glGetUniformLocation always returns -1, no matter what name I use:

    glGetUniformLocation(xxx, "ubo")
    glGetUniformLocation(xxx, "UniformBufferObject")
    glGetUniformLocation(xxx, "model")

All fail. If I change gl_Position = ubo.proj * ubo.view * ubo.model * vec4(aPos, 1.0f); to gl_Position = vec4(aPos, 1.0f); then the shader works, which means the matrices are not being passed to the shader correctly and, I think, are all zero. So does anybody know how to update a uniform buffer when using glShaderBinary with glslangValidator output on OpenGL? I am not sure this 460 shader is even correct; it merely passes glslangValidator's compilation.
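
One thing I plan to try next (a sketch only; I'm not yet sure it is the whole story): SPIR-V modules don't reliably carry uniform names, so glGetUniformLocation returning -1 is expected there. The uniform block apparently needs an explicit binding in the shader, and the buffer is then attached to that binding point with glBindBufferBase instead of being located by name:

    // In the GLSL (before compiling to SPIR-V), give the block an
    // explicit std140 layout and binding:
    //
    //     layout (std140, binding = 0) uniform UniformBufferObject
    //     {
    //         mat4 model;
    //         mat4 view;
    //         mat4 proj;
    //     } ubo;
    //
    // On the C++ side, attach the buffer to binding point 0 once:
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, UBO);

    // Updating the contents stays the same as in my snippet above:
    glBindBuffer(GL_UNIFORM_BUFFER, UBO);
    void* p = glMapBuffer(GL_UNIFORM_BUFFER, GL_WRITE_ONLY);
    memcpy(p, &uboVS, sizeof(uboVS));
    glUnmapBuffer(GL_UNIFORM_BUFFER);
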
  6. I'm making a simple 2D game — a copy of 'Battle City' — using the OpenGL core profile (to train my skills with it), and now I've come across a question: how should I handle object coordinates and sizes? What unit of measure should I use for them? As I understand it, that information goes into the model matrix. But how can I place my objects at the exact positions I want them to be in? And how do I scale them properly? For instance, I want to draw a game field: a collection of little squares. The screen resolution may change, so fixed coordinates and sizes seem inappropriate (or are they?). Should I instead express the numbers relative to the width and height of the screen? I hope I expressed myself clearly. It's quite a basic problem; everyone who has made a game has faced it. Still, I can't work out what coordinates and sizes, in what coordinate system, to use when placing and scaling game objects.
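
The direction I'm currently leaning toward (a minimal glm sketch, not settled yet): pick a fixed virtual playfield size in world units and map it to whatever the window is via an orthographic projection, so object positions never depend on the resolution:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Virtual playfield size in tiles; 26x26 is my own placeholder
    // choice, not taken from the original Battle City.
    const float FIELD_W = 26.0f;
    const float FIELD_H = 26.0f;

    // Maps virtual units to clip space; recreate on window resize
    // (optionally with letterboxing to preserve the aspect ratio).
    glm::mat4 projection = glm::ortho(0.0f, FIELD_W, 0.0f, FIELD_H);

    // A 1x1 tile at grid cell (5, 3); positions are in tiles, so
    // nothing here depends on the screen resolution.
    glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(5.0f, 3.0f, 0.0f));
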
  7. The main reason I can't write even the simplest game is the problem in the title. In my vision, I should separate the information related to the game (a player's score, nickname, etc.) from the information related to rendering (mesh info, drawing method), which is done, for example, by creating separate classes (there are other ways of doing it). Because of this, I always try to work that way, with no result, as you can see. Eventually a design problem occurs, namely: how am I to link those things to each other, if I have to separate them at all? If you have experience making completed games (operational, finished, not abandoned in the middle of development), tell me: how did you design the rendering of game objects? I assume an object-oriented programming language is used (C++ in my case). Well, it's not exactly an OpenGL problem described above; I'm just using its functionality and thinking in terms of vertex buffers, shaders and all that GL-specific stuff.
  8. Hello! I would like to introduce Diligent Engine, a project that I've been working on recently. Diligent Engine is a light-weight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGL ES. Diligent Engine exposes a common front end for all supported platforms and provides interoperability with the underlying native API. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin, or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub.

Features:
  • True cross-platform:
    • Exactly the same client code for all supported platforms and rendering backends: no #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ..., and no #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ...
    • Exactly the same HLSL shaders run on all platforms and all backends.
  • Modular design:
    • Components are clearly separated logically and physically and can be used as needed. Only take what you need for your project (don't want to keep samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule).
    • No 15,000-line source files.
  • Clear object-based interface.
  • No global state.
  • Key graphics features:
    • Automatic shader resource binding designed to leverage the next-generation rendering APIs.
    • Multithreaded command buffer generation: 50,000 draw calls at 300 fps with the D3D12 backend.
    • Descriptor, memory and resource state management.
  • Modern C++ features to make the code fast and reliable.

The following platforms and low-level APIs are currently supported:
  • Windows Desktop: Direct3D11, Direct3D12, OpenGL
  • Universal Windows: Direct3D11, Direct3D12
  • Linux: OpenGL
  • Android: OpenGL ES
  • MacOS: OpenGL
  • iOS: OpenGL ES

API Basics

Initialization

The engine can perform initialization of the API or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:

    #include "RenderDeviceFactoryD3D12.h"
    using namespace Diligent;

    // ...
    GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
    // Load the dll and import the GetEngineFactoryD3D12() function
    LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
    auto *pFactoryD3D12 = GetEngineFactoryD3D12();

    EngineD3D12Attribs EngD3D12Attribs;
    EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
    EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
    EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
    EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
    EngD3D12Attribs.NumCommandsToFlushCmdList = 64;

    RefCntAutoPtr<IRenderDevice> pRenderDevice;
    RefCntAutoPtr<IDeviceContext> pImmediateContext;
    SwapChainDesc SwapChainDesc;
    RefCntAutoPtr<ISwapChain> pSwapChain;
    pFactoryD3D12->CreateDeviceAndContextsD3D12(EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0);
    pFactoryD3D12->CreateSwapChainD3D12(pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain);

Creating Resources

Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, you need to populate a BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:

    BufferDesc BuffDesc;
    BuffDesc.Name = "Uniform buffer";
    BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
    BuffDesc.Usage = USAGE_DYNAMIC;
    BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
    BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
    m_pDevice->CreateBuffer(BuffDesc, BufferData(), &m_pConstantBuffer);

Similarly, to create a texture, populate a TextureDesc structure and call IRenderDevice::CreateTexture(), as in the following example:

    TextureDesc TexDesc;
    TexDesc.Name = "My texture 2D";
    TexDesc.Type = TEXTURE_TYPE_2D;
    TexDesc.Width = 1024;
    TexDesc.Height = 1024;
    TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
    TexDesc.Usage = USAGE_DEFAULT;
    TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
    m_pRenderDevice->CreateTexture(TexDesc, TextureData(), &m_pTestTex);

Initializing Pipeline State

Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline: one big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).

Creating Shaders

To create a shader, populate a ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:
  • SHADER_SOURCE_LANGUAGE_DEFAULT - the shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, GLSL for OpenGL and OpenGL ES modes.
  • SHADER_SOURCE_LANGUAGE_HLSL - the shader source is in HLSL. For OpenGL and OpenGL ES modes, the source code will be converted to GLSL. See the shader converter for details.
  • SHADER_SOURCE_LANGUAGE_GLSL - the shader source is in GLSL. There is currently no GLSL-to-HLSL converter.

To allow grouping of resources based on the expected frequency of change, Diligent Engine introduces a classification of shader variables:
  • Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera-attribute or global light-attribute constant buffers.
  • Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change at per-material frequency. Examples include diffuse textures, normal maps, etc.
  • Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

This post describes the resource binding model in Diligent Engine. The following is an example of shader initialization:

    ShaderCreationAttribs Attrs;
    Attrs.Desc.Name = "MyPixelShader";
    Attrs.FilePath = "MyShaderFile.fx";
    Attrs.SearchDirectories = "shaders;shaders\\inc;";
    Attrs.EntryPoint = "MyPixelShader";
    Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
    Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;
    BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
    Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

    ShaderVariableDesc ShaderVars[] =
    {
        {"g_StaticTexture",  SHADER_VARIABLE_TYPE_STATIC},
        {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
        {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
    };
    Attrs.Desc.VariableDesc = ShaderVars;
    Attrs.Desc.NumVariables = _countof(ShaderVars);
    Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

    StaticSamplerDesc StaticSampler;
    StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
    StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
    StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
    StaticSampler.TextureName = "g_MutableTexture";
    Attrs.Desc.NumStaticSamplers = 1;
    Attrs.Desc.StaticSamplers = &StaticSampler;

    ShaderMacroHelper Macros;
    Macros.AddShaderMacro("USE_SHADOWS", 1);
    Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
    Macros.Finalize();
    Attrs.Macros = Macros;

    RefCntAutoPtr<IShader> pShader;
    m_pDevice->CreateShader(Attrs, &pShader);

Creating the Pipeline State Object

To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines the pipeline specifics, such as whether the pipeline is a compute pipeline, the number and formats of render targets, and the depth-stencil format:

    // This is a graphics pipeline
    PSODesc.IsComputePipeline = false;
    PSODesc.GraphicsPipeline.NumRenderTargets = 1;
    PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
    PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;

The structure also defines depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, the rasterizer state can be defined as in the code snippet below:

    // Init rasterizer state
    RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
    RasterizerDesc.FillMode = FILL_MODE_SOLID;
    RasterizerDesc.CullMode = CULL_MODE_NONE;
    RasterizerDesc.FrontCounterClockwise = True;
    RasterizerDesc.ScissorEnable = True;
    //RasterizerDesc.MultisampleEnable = false; // do not allow MSAA (fonts would be degraded)
    RasterizerDesc.AntialiasedLineEnable = False;

When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:

    m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

Binding Shader Resources

Shader resource binding in Diligent Engine is based on grouping variables into the three groups described above (static, mutable and dynamic). Static variables are bound directly to the shader object:

    PixelShader->GetShaderVariable("g_tex2DShadowMap")->Set(pShadowMapSRV);

Mutable and dynamic variables are bound via a new object called a Shader Resource Binding (SRB), which is created by the pipeline state:

    m_pPSO->CreateShaderResourceBinding(&m_pSRB);

Dynamic and mutable resources are then bound through the SRB object:

    m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
    m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

The difference between mutable and dynamic resources is that mutable ones can only be set once for every instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as it may affect performance: static variables are generally the most efficient, followed by mutable; dynamic variables are the most expensive from a performance point of view. This post explains shader resource binding in more detail.

Setting the Pipeline State and Invoking a Draw Command

Before any draw command can be invoked, all required vertex and index buffers, as well as the pipeline state, should be bound to the device context:

    // Clear render target
    const float zero[4] = {0, 0, 0, 0};
    m_pContext->ClearRenderTarget(nullptr, zero);

    // Set vertex and index buffers
    IBuffer *buffer[] = {m_pVertexBuffer};
    Uint32 offsets[] = {0};
    Uint32 strides[] = {sizeof(MyVertex)};
    m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
    m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
    m_pContext->SetPipelineState(m_pPSO);

All shader resources must also be committed to the device context:

    m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, and IDeviceContext::DispatchCompute() a compute command. Note that for a draw command a graphics pipeline must be bound, and for a dispatch command a compute pipeline. Draw() takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced, indirect, etc.). For example:

    DrawAttribs attrs;
    attrs.IsIndexed = true;
    attrs.IndexType = VT_UINT16;
    attrs.NumIndices = 36;
    attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
    pContext->Draw(attrs);

Tutorials and Samples

The GitHub repository contains a number of tutorials and sample applications that demonstrate the API usage:
  • Tutorial 01 - Hello Triangle: how to render a simple triangle using the Diligent Engine API.
  • Tutorial 02 - Cube: how to render an actual 3D object, a cube; loading shaders from files; creating and using vertex, index and uniform buffers.
  • Tutorial 03 - Texturing: how to apply a texture to a 3D object; loading a texture from file, creating a shader resource binding object, and sampling a texture in the shader.
  • Tutorial 04 - Instancing: how to use instancing to render multiple copies of one object with a unique transformation matrix for every copy.
  • Tutorial 05 - Texture Array: how to combine instancing with texture arrays to use a unique texture for every instance.
  • Tutorial 06 - Multithreading: how to generate command lists in parallel from multiple threads.
  • Tutorial 07 - Geometry Shader: how to use a geometry shader to render a smooth wireframe.
  • Tutorial 08 - Tessellation: how to use hardware tessellation to implement a simple adaptive terrain rendering algorithm.
  • Tutorial 09 - Quads: how to render multiple 2D quads, frequently switching textures and blend modes.
  • The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface.
  • The Atmospheric Scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc.

The repository also includes the Asteroids performance benchmark based on the demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

Integration with Unity

Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. The GhostCubePlugin sample shows how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.
  9. For two days I have been struggling with drawing an image on a plane at an exact point. I have a UV point; however, because of how shaders work, or at least my understanding of it, the fragment code only affects the current pixel. So I can easily draw gradients, change the color of the pixel at that UV point, and do all kinds of effects. However, the only way I could think of to draw the image was to calculate its four corners from the center point, but after I found those points using four "if" branches, I have no idea how to use them to draw the image. The shader is just a basic shadeless vertex and fragment shader. Unlit.
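
To illustrate the direction I've been poking at (an untested sketch of my own; vUV, decalCenter, decalSize and decalTex are placeholder names): rather than finding corners, each fragment can remap its own UV into the image's local 0..1 space and sample the image if it falls inside:

    // Hypothetical fragment shader, kept as a C++ string for context.
    const char* decalFragmentSrc = R"(
    #version 330 core
    in vec2 vUV;                    // the plane's interpolated UV
    uniform sampler2D decalTex;     // the image to place
    uniform vec2 decalCenter;       // UV point where the image goes
    uniform vec2 decalSize;         // image size in UV units
    out vec4 fragColor;

    void main()
    {
        // Shift/scale so the image occupies local coordinates 0..1.
        vec2 local = (vUV - decalCenter) / decalSize + 0.5;

        // Inside the rectangle? Sample the image; otherwise base color.
        if (all(greaterThanEqual(local, vec2(0.0))) && all(lessThanEqual(local, vec2(1.0))))
            fragColor = texture(decalTex, local);
        else
            fragColor = vec4(1.0);  // placeholder base color
    }
    )";
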
  10. I'm making a 2D sprite-based game in SFML and I wanted to add an outline around each of the entities in my game world. This is relatively easy to do per-entity, but this approach has numerous problems. First, it's a lot more calls to the graphics card which could add up if there's a lot of stuff on the screen at once. Second, I don't want to have to put a line of code into each entity class to apply the effect. Finally, if the entity's texture extends all the way to its bounding box, the outline gets cut off unless I create a separate, slightly larger buffer to draw to first. Because of this I want it to be a global effect applied to the whole scene. I've done it in 3D with Unreal using the depth buffer, but in 2D there's no real depth axis--for something to be "in front" you just draw it later. So I'm wondering if there's a good way to achieve this effect, either by enabling the depth buffer in OpenGL or finding some other way to fudge it.
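
For the record, the post-process direction I have in mind (an untested sketch; sceneTex is a placeholder for the scene rendered into an off-screen texture with a transparent background): draw everything once into a render target, then run one full-screen pass that colors any transparent pixel that touches an opaque neighbor:

    // Hypothetical outline pass, written down as a sketch only.
    const char* outlineFragSrc = R"(
    #version 330 core
    in vec2 vUV;
    uniform sampler2D sceneTex;
    uniform vec2 texelSize;          // 1.0 / resolution
    uniform vec4 outlineColor;
    out vec4 fragColor;

    void main()
    {
        vec4 center = texture(sceneTex, vUV);
        if (center.a > 0.0) { fragColor = center; return; }

        // Transparent pixel: if any 4-neighbor is opaque, draw the outline.
        float neighbors =
            texture(sceneTex, vUV + vec2( texelSize.x, 0.0)).a +
            texture(sceneTex, vUV + vec2(-texelSize.x, 0.0)).a +
            texture(sceneTex, vUV + vec2(0.0,  texelSize.y)).a +
            texture(sceneTex, vUV + vec2(0.0, -texelSize.y)).a;
        fragColor = neighbors > 0.0 ? outlineColor : vec4(0.0);
    }
    )";
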
  11. So, I developed an engine a while back following ThinMatrix's tutorials, and it worked perfectly. However, upon trying to create my own simple, lightweight game engine from scratch, I hit a snag: I created an engine that only wants to render my specified background color, and nothing else. I first tried to render just one cube, and when that failed I figured I probably just had incorrect coordinates set, so I went and generated a hundred random cubes... Nothing. Not even a framerate drop. So I figure they aren't being passed through the shaders; however, the shaders seem to be functioning, as I'm getting no errors (to my knowledge; I can't be sure). The engine itself is going to be open source and free anyway, so I don't mind posting the source here. Coded in Java, using OpenGL (from LWJGL), in Eclipse (Neon) project format. Warning: when first running the engine, it will spit out an error saying it couldn't find a config file; it will then generate a new folder in your %appdata% directory labeled 'Fusion Engine' with a Core.cfg file. This file can be opened in any old text editor, so if you aren't comfortable with that, just change it in the source at "src/utility/ConfigManager.java" before running. Just ask if you need more info; please, I've been trying to fix this for a month now. Fusion Engine V2.zip
  12. I am trying to write a program to rotate an octagon cube. I have the front and back faces completed. I can't seem to figure out the vertices for the right and top faces. Can someone please help me? Thanks for your time! Here are the front vertices: (-0.5, -1.0, 1.0) (0.5, -1.0, 1.0) (1.0, -0.5, 1.0) (1.0, 0.5, 1.0) (0.5, 1.0, 1.0) (-0.5, 1.0, 1.0) (-1.0, 0.5, 1.0) (-1.0, -0.5, 1.0)
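
To show what I mean so far (a sketch built only from the vertices listed above): the back face is the same outline mirrored to z = -1.0, and, assuming the shape is a straight prism, each side face is the quad joining a front edge to its mirrored back edge:

    #include <array>

    struct Vec3 { float x, y, z; };

    // The eight front-face vertices from the post (z = +1.0).
    std::array<Vec3, 8> frontFace = {{
        {-0.5f, -1.0f, 1.0f}, { 0.5f, -1.0f, 1.0f}, { 1.0f, -0.5f, 1.0f},
        { 1.0f,  0.5f, 1.0f}, { 0.5f,  1.0f, 1.0f}, {-0.5f,  1.0f, 1.0f},
        {-1.0f,  0.5f, 1.0f}, {-1.0f, -0.5f, 1.0f}
    }};

    // The back face: same outline, z mirrored to -1.0.
    std::array<Vec3, 8> makeBackFace()
    {
        std::array<Vec3, 8> back = frontFace;
        for (Vec3& v : back) v.z = -v.z;
        return back;
    }

    // If it is a prism, the right face would then be the quad between the
    // two x = 1.0 edges: (1,-0.5,1), (1,0.5,1), (1,0.5,-1), (1,-0.5,-1),
    // and the top face the quad between the two y = 1.0 edges.
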
  13. Hi there. For the past while I've been working on a deferred renderer using OpenGL, and I've implemented planar reflections utilizing the stencil buffer. To prevent drawing objects behind the reflection plane, I use the brilliant Oblique View Frustum Depth Projection and Clipping technique, which has performed very well for that issue. However, various shaders I use require linearizing the depth buffer, which has proven to be quite a burden for these oblique frustums. Fortunately, I've come across an article which thankfully provides a solution, although I haven't been very successful incorporating it into my own project. I admit my understanding of complex matrix math is lacking, and much of the detail in the article isn't very comprehensible to me. I've done a fair bit of searching online, and there don't seem to be any working examples out there using this technique. I wrote a basic depth-buffer visualization shader to test my implementation, and so far the results are quite bizarre (they seem like the inverse of what I should expect), so I'm fairly certain I've either overlooked something and/or the math is incorrect. I'm wondering whether anybody experienced with matrix math could take a look at my code and see if they notice anything off? Any help or possible insight would be greatly appreciated. I'll include the relevant code here and a brief clip showcasing the issue (sorry about the tiny size). Thanks!
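
For reference while comparing results, this is the standard linearization I use for an ordinary symmetric (non-oblique) projection; the oblique case is exactly what the article generalizes. A small sketch, with near/far being the near and far plane distances:

    // Standard depth linearization for a symmetric perspective projection;
    // depthSample is the 0..1 value read from the depth buffer.
    float linearizeDepth(float depthSample, float near, float far)
    {
        float ndcZ = depthSample * 2.0f - 1.0f;   // back to NDC -1..1
        return (2.0f * near * far) / (far + near - ndcZ * (far - near));
    }
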
  14. Imagine you are Valve, id or DICE, and your team is going to create a new engine to run your company's main titles for the next decade. You want an engine that is innovative and flexible, that can knock socks off next year and still impress gamers five years down the road. Would someone in this position use helper libraries like GLUT, GLFW or GLM, or would they create their own libraries for the project and do the Win32 API work manually?
  15. I want to know what exactly happens behind this code when we use glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT), i.e. what processes it involves. In more detail, I want to know what the "|" does here, and whether glClear clears both buffers in one operation or one by one.
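
A small sketch of the mechanics being asked about, as far as I can tell: the arguments are single-bit flags, and "|" is plain bitwise OR, so the two constants combine into one bitmask that glClear inspects bit by bit:

    // Assuming the usual GL headers/loader are included. Each clear flag
    // is a distinct bit in a GLbitfield (values from the GL headers):
    //   GL_DEPTH_BUFFER_BIT = 0x00000100, GL_COLOR_BUFFER_BIT = 0x00004000
    GLbitfield mask = GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT; // bitwise OR -> 0x4100

    // glClear can test each bit of the mask independently:
    bool clearColor = (mask & GL_COLOR_BUFFER_BIT) != 0; // true
    bool clearDepth = (mask & GL_DEPTH_BUFFER_BIT) != 0; // true

    // One glClear(mask) call clears every buffer whose bit is set; the
    // spec treats it as a single operation over the selected buffers,
    // not a sequence of per-buffer clears.
    glClear(mask);
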
  16. Hello, I have a question about the camera in OpenGL: why is the camera's direction vector reversed? What is the reason behind it?
      (attached image: d8a60p6w.bmp)
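
To make the question concrete, here is a sketch of the usual gluLookAt-style construction (eye/center/up are the standard inputs); the forward vector ends up negated because OpenGL's eye space is right-handed with the camera looking down -z:

    #include <glm/glm.hpp>

    void lookAtBasis(const glm::vec3& eye, const glm::vec3& center, const glm::vec3& up)
    {
        glm::vec3 forward = glm::normalize(center - eye); // where the camera points
        glm::vec3 side    = glm::normalize(glm::cross(forward, up));
        glm::vec3 newUp   = glm::cross(side, forward);

        // The view matrix's third basis row stores -forward: with a
        // right-handed eye space (x right, y up), z = cross(x, y) points
        // back toward the viewer, so "into the screen" is -z and the
        // stored direction vector looks reversed.
        glm::vec3 thirdRow = -forward;
        (void)newUp; (void)thirdRow; // illustration only
    }
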
  17. I'm trying to find a global illumination solution for my project, and I need it to be fast. So I tried to get direct illumination into the light map by sampling a shadow map. The light map's resolution is pretty low, which causes aliasing problems (in my case it's 128 x 128 for the ground). Is there any way to improve this? Thank you all for reading.
  18. Hey :) For a while now we have been using classic shadow mapping with PCF, and we have decided to upgrade to a faster and more efficient method: variance shadow mapping. However, I am having problems implementing variance shadow mapping with a deferred renderer. Below is an image of the result I am getting; as you can see, the shadows are not cast correctly. The code below is all the code you need.

GBuffer vertex shader:

    #version 330 core
    layout(location = 0) in vec3 position;
    layout(location = 1) in vec3 texcoord;
    layout(location = 2) in vec3 normal;
    layout(location = 3) in vec3 tangent;

    out vec3 _texcoord;
    out vec3 _normal;
    out vec3 _tangent;
    out vec3 _frag_pos;

    uniform mat4 mod;
    uniform mat4 view;
    uniform mat4 proj;
    uniform mat4 lightSpaceMatrix;

    void main()
    {
        vec4 world_space = mod * vec4(position, 1.0);
        _frag_pos = world_space.xyz;
        _texcoord = texcoord;
        _normal = (mod * vec4(normal, 0.0)).xyz;
        _tangent = (mod * vec4(tangent, 0.0)).xyz;
        gl_Position = proj * view * world_space;
    }

Light fragment shader (directional light calculation):

    vec3 calc_directional_light(vec3 Diffuse, vec3 Specular, vec3 Metalness, float ao)
    {
        vec3 Ambient = vec3(0.3, 0.3, 0.3);
        vec3 light_colour = lightColour * lightIntensity;
        vec3 lighting = Ambient * Diffuse * ao;
        vec3 viewDir = normalize(camera_pos - FragPos);
        vec3 lightDir = normalize(lightPos - FragPos);
        vec3 diffuse = max(dot(Normal, lightDir), 0.0) * Diffuse * light_colour;

        vec4 vShadowCoords = lightSpaceMatrix * vec4(FragPos, 1.0);
        if (vShadowCoords.w > 1)
        {
            // divide the shadow coordinate by the homogeneous coordinate
            vec3 uv = vShadowCoords.xyz / vShadowCoords.w;
            // get the depth value
            float depth = uv.z;
            // read the moments from the shadow map texture
            vec4 moments = texture(gShadowmap, uv.xy);
            // calculate variance from the moments
            float E_x2 = moments.y;
            float Ex_2 = moments.x * moments.x;
            float var = E_x2 - Ex_2;
            // bias the variance
            var = max(var, 0.00002);
            // subtract the fragment depth from the first moment and divide
            // the variance by the squared difference to get the maximum
            // probability of the fragment being in shadow
            float mD = depth - moments.x;
            float mD_2 = mD * mD;
            float p_max = var / (var + mD_2);
            // darken the diffuse component if the current depth is less
            // than or equal to the first moment and the returned value is
            // less than the calculated maximum probability
            diffuse *= max(p_max, (depth <= moments.x) ? 1.0 : 0.2);
        }

        vec3 halfwayDir = normalize(lightDir + viewDir);
        float spec = pow(max(dot(Normal, halfwayDir), 0.0), 32.0);
        vec3 specular = (Specular * light_colour) * spec;
        vec3 metalness = Metalness * Diffuse * ao;
        lighting += (diffuse + specular + metalness);
        return lighting;
    }

Rendering to the shadow map:

    inline virtual void Render()
    {
        glDisable(GL_BLEND);      // disable blending for opaque materials
        glEnable(GL_DEPTH_TEST);  // enable depth test so quads don't render on top of each other when they shouldn't
        glDisable(GL_CULL_FACE);  // disable cull face so the shadow map does not have the far-plane bugs

        glm::mat4 model; // model matrix for all the meshes in the shadow map
        light_projection = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, 1.0f, 25.0f); // project onto the scene from the position of the light (sun)
        light_view = glm::lookAt(glm::vec3(2.0f, 4.0f, 3.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f)); // position the camera at the light's position
        light_space_matrix = light_projection * light_view * glm::inverse(Content::_map->GetCamera()->GetViewMatrix()); // calculate the lightSpaceMatrix

        glUseProgram(_shader_programs[0]); // bind the first-pass shader
        glUniformMatrix4fv(_u_lsm, 1, GL_FALSE, glm::value_ptr(light_space_matrix)); // set the lightSpaceMatrix uniform
        glViewport(0, 0, _shadowmap_resolution, _shadowmap_resolution); // set the viewport size to the resolution of the shadow map
        _fbos[0]->Bind();
        glClear(GL_DEPTH_BUFFER_BIT); // clear any depth info

        // loop through all the meshes within the scene
        for (unsigned int i = 0; i < Content::_map->GetActors().size(); i++)
        {
            model = Content::_map->GetActors()[i]->GetModelMatrix() * Content::_map->GetCamera()->GetViewMatrix(); // get the model matrix in view space for all the meshes
            glUniformMatrix4fv(_u_mod, 1, GL_FALSE, glm::value_ptr(model)); // set the view-space model matrix uniform
            Content::_map->GetActors()[i]->Render(); // render all the meshes into the shadow map
        }
        _fbos[0]->Unbind(); // unbind the shadow map fbo

        glViewport(0, 0, 1920, 1080); // reset the viewport
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the gbuffer before rendering to it
        glEnable(GL_CULL_FACE); // enable cull face
    }

For blurring the shadows I use a simple Gaussian blur shader. Any ideas as to why this might be happening? Help is much appreciated. Is it possible to do variance shadows with deferred rendering? If so, a quick overview of how to do it would be great.
  19. I need to change from a right-handed coordinate system to a left-handed one, but I don't know where to start. The goal is that, looking from behind, X points right, Y up and Z forward. I found that the glFrustum calculation needs to be changed, but they say that negates the Z axis, and I need to swap the X axis... Anyway, I believe my gluPerspectiveA function contains the glFrustum code:

    template <class T>
    void glLookAt(Matrix44<T>& matrix, t3dpoint<T> eyePosition3D,
                  t3dpoint<T> center3D, t3dpoint<T> upVector3D)
    {
        t3dpoint<T> forward, side, up;
        forward = Normalize(vectorAB(eyePosition3D, center3D));
        side = Normalize(forward * upVector3D);
        up = side * forward;

        matrix.LoadIdentity();
        matrix.m[0] = side.x;     matrix.m[1] = side.y;     matrix.m[2]  = side.z;
        matrix.m[4] = up.x;       matrix.m[5] = up.y;       matrix.m[6]  = up.z;
        matrix.m[8] = -forward.x; matrix.m[9] = -forward.y; matrix.m[10] = -forward.z;

        Matrix44<T> translation;
        translation.Translate(-eyePosition3D.x, -eyePosition3D.y, -eyePosition3D.z);
        matrix = translation * matrix;
    }

    template <class T>
    void gluPerspectiveA(Matrix44<T>& matrix, T fovy, T aspect, T zmin, T zmax)
    {
        T xmin, xmax, ymin, ymax;
        ymax = zmin * tan(fovy * M_PI / 360.0);
        ymin = -ymax;
        xmin = ymin * aspect;
        xmax = ymax * aspect;

        matrix.m[0]  = (2.0 * zmin) / (xmax - xmin);
        matrix.m[1]  = 0.0;
        matrix.m[2]  = (xmax + xmin) / (xmax - xmin);
        matrix.m[3]  = 0.0;
        matrix.m[4]  = 0.0;
        matrix.m[5]  = (2.0 * zmin) / (ymax - ymin);
        matrix.m[6]  = (ymax + ymin) / (ymax - ymin);
        matrix.m[7]  = 0.0;
        matrix.m[8]  = 0.0;
        matrix.m[9]  = 0.0;
        matrix.m[10] = -(zmax + zmin) / (zmax - zmin);
        matrix.m[11] = (-2.0 * zmax * zmin) / (zmax - zmin);
        matrix.m[12] = 0.0;
        matrix.m[13] = 0.0;
        matrix.m[14] = -1.0;
        matrix.m[15] = 0.0;
    }
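
One approach I've seen mentioned (a sketch only, not yet verified against my matrix layout): instead of rewriting the frustum math, append a mirror matrix that scales one axis by -1, which flips the handedness in a single place. Matrix44 here is the post's own type; only the idea of the composition matters:

    // Flipping handedness with a mirror: scaling one axis by -1 turns a
    // right-handed basis into a left-handed one. Here Z is negated so
    // +Z ends up pointing forward.
    template <class T>
    void makeLeftHanded(Matrix44<T>& projection)
    {
        Matrix44<T> flipZ;
        flipZ.LoadIdentity();
        flipZ.m[10] = T(-1);              // scale(1, 1, -1)
        projection = projection * flipZ;  // order depends on the row/column convention

        // Caveat: a mirror flips triangle winding, so front-face culling
        // (glFrontFace) likely needs to be flipped as well.
    }
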
  20. Hello. I've got a problem with swapping shaders within one render pass. I am rendering a HUD (a 2D user interface over the previously rendered 3D scene). The structure is not complex, tree-like: there are multiple movable windows containing a textured background and, optionally, some textboxes, buttons, radio buttons and static texts (each widget type having its own dedicated shader). The GUI windows may overlap each other (the one used last should be on top of the previous ones). So the rendering works like this: iterate over the list of windows (which are sorted back to front) and render each one. Rendering a single window: use the button shader, set uniforms, render all buttons; then use the textbox shader, set uniforms, render all textboxes; use the static-text shader, set uniforms, render all of those, etc. As you can see, I am rebinding the same shaders multiple times during one render. But I kind of have to? I cannot find another solution. If I render all buttons first and then, for example, all textboxes, a textbox from a window behind can overwrite a previously drawn button from a window in front. The depth test is obviously disabled for HUD rendering (what I draw last is in front). Actually, maybe the solution is to use the depth buffer in some tricky way (I bumped into the idea while writing this post)? Thanks for all suggestions.
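
A sketch of that depth-buffer idea from the end of the post, in case it helps pin it down (untested; Window is a stand-in for my real widget tree): give every window its own depth layer and enable the depth test for the HUD, so batching per shader across all windows can no longer draw the wrong widget on top:

    #include <cstddef>

    // Map window index to a depth layer in [0,1): the backmost window
    // gets the largest depth, the frontmost the smallest.
    float windowDepth(std::size_t index, std::size_t windowCount)
    {
        return 1.0f - float(index + 1) / float(windowCount + 1);
    }

    // With glEnable(GL_DEPTH_TEST) (GL_LEQUAL) and each widget writing
    // its window's depth into gl_Position.z, draw order stops deciding
    // overlap, so each shader can be bound once for ALL windows:
    //   1. bind button shader,  draw every window's buttons
    //   2. bind textbox shader, draw every window's textboxes
    //   3. bind text shader,    draw every window's static texts
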
  21. So, I'm still on my quest to understand the intricacies of HDR and implement it in my engine. Currently I'm at the step of implementing tone mapping. I stumbled upon these blog posts:

    http://filmicworlds.com/blog/filmic-tonemapping-operators/
    http://frictionalgames.blogspot.com/2012/09/tech-feature-hdr-lightning.html

and tried to implement some of the tone-mapping methods mentioned there in my post-processing shader. The issue is that none of them produces the same results as shown in the blog posts, which definitely has to do with the initial range in which the values are stored in the HDR buffer. For simplicity's sake I store values between 0 and 1 in the HDR buffer (ambient light is 0.3, directional light is 0.7). This is the tone-mapping code:

    vec3 Uncharted2Tonemap(vec3 x)
    {
        float A = 0.15;
        float B = 0.50;
        float C = 0.10;
        float D = 0.20;
        float E = 0.02;
        float F = 0.30;
        return ((x*(A*x+C*B)+D*E)/(x*(A*x+B)+D*F))-E/F;
    }

Screenshots attached: one without the Uncharted tone mapping and one with it, which makes the image a lot darker. The shader code looks like this:

    void main()
    {
        vec3 color = texture2D(texture_diffuse, vTexcoord).rgb;
        color = Uncharted2Tonemap(color);
        // gamma correction (use only if not done in tonemapping code)
        color = gammaCorrection(color);
        outputF = vec4(color, 1.0f);
    }

Now, my understanding is that tone mapping should bring the range down from HDR to 0-1. But the output of the tone-mapping function heavily depends on the initial range of the values in the HDR buffer. (You can't set the sun intensity to 10 the first time and 1000 the second time and expect the same result if you feed that into the tonemapper.) So I suppose this also depends on the exposure, which I have to implement? To check this I plotted the tone-mapping curve (plot attached): you can see that the curve only goes up to a value of around 0.21 (while being fed a value of 1) and then basically flattens out, which would explain why the image got darker. My question is: in what range should the values in the HDR buffer be before they get tone-mapped? Do I have to bring them down to a range of 0-1 by multiplying by the exposure? For example, if I increase the light values by a factor of 10 (directional light 7 and ambient light 3), would I then need to divide the HDR values by 10 to get a 0-1 range that can be fed into the tone-mapping curve? Is that correct?
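
For what it's worth, the filmicworlds post applies the operator to an exposure-multiplied color and then normalizes by the tone-mapped white point, which is the part my snippet above is missing; without that white-scale step, an input of 1.0 tops out at about 0.22, which matches the flattening in the plot. A scalar sketch of the full chain (exposureBias = 2.0 and W = 11.2 are the values used in that post):

    #include <cmath>

    // Scalar version of the operator above, for checking the curve on paper.
    float uncharted2(float x)
    {
        const float A = 0.15f, B = 0.50f, C = 0.10f, D = 0.20f, E = 0.02f, F = 0.30f;
        return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
    }

    // Full chain as in the filmicworlds post: exposure first, then the
    // operator, then normalization by the tone-mapped white point.
    float tonemapFull(float hdrValue)
    {
        const float exposureBias = 2.0f;  // value used in the blog post
        const float W = 11.2f;            // linear white point from the post

        float curr = uncharted2(exposureBias * hdrValue);
        float whiteScale = 1.0f / uncharted2(W); // ~1/0.72, lifts the curve back up
        return curr * whiteScale;                // an input of W now maps to 1.0
    }
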
  22. Hello, everyone! I hope my problem isn't too 'beginnerish'. I'm doing research on motion synthesis, trying to implement the DeepMimic paper by Xue Bin Peng. In this paper, I first need to retarget character A's motion to another character B to create the reference motion clips for character B, since we don't have character B's reference motion. The most important thing is that, in the paper, the author copies character A's joint rotations with respect to each joint's local coordinate system (not the parent's) to character B. In my personal understanding, a joint's rotation with respect to its local coordinate system is something like what is shown in the attached photo: for the elbow joint, I need to get the elbow's rotation in the elbow's local coordinate system (I'd be very grateful if you'd share your ideas if I have misunderstood this 🙂). I have searched many materials on the internet about how to extract local joint information from FBX; the most relevant thing I found is the pivot rotation (and the geometric transformation / object offset transformation). I'm a beginner in computer graphics, and I'm confused about whether the pivot rotation (or geometric transformation / object offset transformation) is exactly the joint's local rotation I'm seeking. I hope someone who has any ideas can help me; I'd be very grateful for any pointers in the right direction. Thanks in advance!
  23. Hey. My laptop recently decided to die, so I've been transferring my project to my work laptop just to get it up to date and commit it. I was banging my head against the wall all day, as my textures were not displaying in my program. I was getting no errors and no indication of why it was occurring, so I have just been trying to figure it out. I know the image loading is working OK, as I'm using the image data elsewhere. I was pretty confident the code was fine too, as I've never had an issue with displaying textures before, so I thought it might be the drivers on this laptop (my old one was just using the built-in Intel HD graphics, while this laptop has an NVIDIA graphics card), but all seems to be up to date. Below are my basic shaders.

Vertex shader:

    #version 330 core
    layout(location = 0) in vec3 position;
    layout(location = 1) in vec3 color;
    layout(location = 2) in vec3 normal;
    layout(location = 3) in vec2 texCoord;

    uniform mat4 Projection;
    uniform mat4 Model;

    out vec3 Color;
    out vec3 Normal;
    out vec2 TexCoord;

    void main()
    {
        gl_Position = Projection * Model * vec4(position, 1.0);
        Color = color;
        Normal = normal;
        TexCoord = vec2(texCoord.x, texCoord.y);
    }

Fragment shader:

    #version 330 core
    in vec3 Color;
    in vec3 Normal;
    in vec2 TexCoord;

    uniform sampler2D textureData;

    void main()
    {
        vec4 textureColor = texture(textureData, TexCoord);
        vec4 finalColor = textureColor * vec4(Color, 1.0f);
        gl_FragColor = finalColor;
    }

Calling code:

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glUniform1i(glGetUniformLocation(shaderID, "textureData"), textureID);

Now this is the part I don't understand. I worked through my program until I got to the 'calling code' above; it just displays a black texture, my original issue. Out of desperation, I tried changing the name in glGetUniformLocation from "textureData" to "textureData_invalid" to see if my error checks would throw up something, but in actual fact it is now displaying the texture as expected. Can anyone fathom a guess as to why this is occurring? I'm assuming the random text is just picking up the correct location by C++ witchcraft, but why is the original one not getting picked up correctly and/or not working as expected? I realize more code is probably needed to see how it all hangs together, but it seems to come down to this as the issue.
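
One detail worth double-checking in the calling code above (a sketch of the distinction, to the best of my understanding): glUniform1i for a sampler takes the texture unit index (0 for GL_TEXTURE0), not the texture object's ID, so passing textureID only works by accident when the two numbers happen to coincide:

    // Sampler uniforms are set to a texture UNIT, not a texture object ID.
    glActiveTexture(GL_TEXTURE0);                 // select unit 0
    glBindTexture(GL_TEXTURE_2D, textureID);      // attach the texture to unit 0
    glUniform1i(glGetUniformLocation(shaderID, "textureData"), 0); // "sample from unit 0"

    // When the uniform lookup fails (e.g. "textureData_invalid" returns
    // -1), glUniform1i silently does nothing and the sampler keeps its
    // default value of 0, which is the correct unit here. That would
    // explain why the "invalid" name appears to fix the problem.
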
  24. Hello. So far I've got a decently looking 3D scene. I also managed to render a TrueType font, on my way to implementing a GUI (windows, buttons and textboxes). There are several issues I am facing; I would love to hear your feedback.

1) I render text using an atlas, with a VBO containing the x/y/u/v of every glyph in the atlas (calculated from the x/y/z/width/height/xoffset/yoffset/xadvance data in the binary .fnt format file, screenshot 1). I generated Comic Sans MS at size 32 and Times New Roman at size 12 (screenshots 2 and 3). The first issue is that the font looks horrible when rescaling. I guess it is because I am using fixed -1 to 1 screen-space coords. This is where an ortho matrix should be used, right?

2) Rendering the GUI. The situation is similar to the above. I guess the widgets should NOT scale when the window is resized, am I right? What I am looking for is a way of saying "this should always be in the middle, 200x200 in size, no matter the display window size" and "this should stick to the bottom-left corner". Is an ortho matrix the cure for all such problems?

3) The game is 3D, but I have to go 2D to render static GUI elements over the scene, and I want to do it properly! At the moment I am using 3x3 matrices for 2D transformations and vec3 for all kinds of coordinates. In the shaders, though, it technically still is 3D: I have to set all four x/y/z/w components of gl_Position, while it would be much more convenient to just do the math in 2D space. Can I achieve that somehow?

4) Text again. I am kind of confused about the reason for the artifacts in the Times New Roman display (screenshot 1). I render from left to right, letter after letter. You can clearly see that letters on the right (the ones rendered after the ones on their left) cover the previous ones. I was toying around with blending options, but no luck. I do not support kerning at the moment, but that's definitely not the cause of the error. The display of the small font looks dirty and aliased too. I am also kind of confused about how to interpret the integer glyph data and how it should be scaled/adapted to the screen view. Do I just store the data at a constant size and, again, use an ortho matrix?

Thanks in advance for all your ideas and suggestions!
https://i.imgur.com/4rd1VC3.png
https://i.imgur.com/uHrSXfe.png
https://i.imgur.com/xRTffPn.png
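
Regarding points 1) and 2), the direction I'm leaning toward (a glm sketch of my own, untested): a pixel-space ortho matrix rebuilt on resize, with widget positions computed from the window size, so widgets keep their pixel size but anchor where I want them:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Pixel-space projection: (0,0) is the top-left corner, y grows down.
    // Rebuilt whenever the window is resized.
    glm::mat4 uiProjection(int windowW, int windowH)
    {
        return glm::ortho(0.0f, float(windowW), float(windowH), 0.0f);
    }

    // A 200x200 widget centered on screen, independent of resolution:
    glm::vec2 centeredPos(int windowW, int windowH)
    {
        return { (windowW - 200) / 2.0f, (windowH - 200) / 2.0f };
    }

    // A widget glued to the bottom-left corner:
    glm::vec2 bottomLeftPos(int windowH, int widgetH)
    {
        return { 0.0f, float(windowH - widgetH) };
    }
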
  25. Hello, guys. I have some questions: what exactly do glLinkProgram and glBindAttribLocation do? I searched, but there wasn't any good resource.
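
A sketch of the usual call order, to make the question concrete (as far as I understand it: glBindAttribLocation assigns an attribute index to a named vertex input and must happen before linking; glLinkProgram then joins the compiled shaders into one executable program). vs and fs here stand for already-compiled shader objects:

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);

    // Tie the shader input named "position" to attribute index 0; this
    // only takes effect at link time, so it must precede glLinkProgram.
    glBindAttribLocation(program, 0, "position");

    // Resolve the interfaces between stages, assign any remaining
    // locations, and produce the executable used by glUseProgram.
    glLinkProgram(program);

    // Index 0 now matches glVertexAttribPointer(0, ...) on the app side.
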