Showing results for tags 'OpenGL' in content posted in Graphics and GPU Programming.



Found 1000 results

  1. Hello everybody! I decided to write a graphics engine, a killer of Unity and Unreal. If anyone is interested and has free time, join in. The high-level renderer is based on low-level OpenGL 4.5 and DirectX 11. Ideally there will be PBR, TAA, SSR, SSAO, some variation of an indirect-light algorithm, and support for multiple viewports and multiple cameras. The key feature is a COM-based design (binary compatibility is needed). Physics, ray tracing, AI and VR will not be included. I took the basic architecture from the DGLE engine. The editor is built on Qt (https://github.com/fra-zz-mer/RenderMasterEditor), and a buildable editor already exists. The main point of the engine is maximum transparency of the architecture and high-quality rendering. There will be no new language for shaders; everything is handled through defines.
  2. Hello! I would like to introduce Diligent Engine, a project that I've recently been working on. Diligent Engine is a lightweight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front-end for all supported platforms and provides interoperability with the underlying native API. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub.

Features:
  • True cross-platform
    • Exact same client code for all supported platforms and rendering backends
    • No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ...
    • No #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ...
    • Exact same HLSL shaders run on all platforms and all backends
  • Modular design
    • Components are clearly separated logically and physically and can be used as needed
    • Only take what you need for your project (don't want to keep samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule)
  • No 15,000-line source files
  • Clear object-based interface
  • No global states

Key graphics features:
  • Automatic shader resource binding designed to leverage the next-generation rendering APIs
  • Multithreaded command buffer generation (50,000 draw calls at 300 fps with the D3D12 backend)
  • Descriptor, memory and resource state management
  • Modern C++ features to make code fast and reliable

The following platforms and low-level APIs are currently supported:
  • Windows Desktop: Direct3D11, Direct3D12, OpenGL
  • Universal Windows: Direct3D11, Direct3D12
  • Linux: OpenGL
  • Android: OpenGLES
  • MacOS: OpenGL
  • iOS: OpenGLES

API Basics

Initialization

The engine can perform initialization of the API or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:

#include "RenderDeviceFactoryD3D12.h"
using namespace Diligent;

// ...
GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
// Load the dll and import the GetEngineFactoryD3D12() function
LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
auto *pFactoryD3D12 = GetEngineFactoryD3D12();

EngineD3D12Attribs EngD3D12Attribs;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
EngD3D12Attribs.NumCommandsToFlushCmdList = 64;

RefCntAutoPtr<IRenderDevice> pRenderDevice;
RefCntAutoPtr<IDeviceContext> pImmediateContext;
SwapChainDesc SwapChainDesc;
RefCntAutoPtr<ISwapChain> pSwapChain;
pFactoryD3D12->CreateDeviceAndContextsD3D12(EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0);
pFactoryD3D12->CreateSwapChainD3D12(pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain);

Creating Resources

Device resources are created by the render device.
The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, you need to populate a BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:

BufferDesc BuffDesc;
BuffDesc.Name = "Uniform buffer";
BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
BuffDesc.Usage = USAGE_DYNAMIC;
BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
m_pDevice->CreateBuffer(BuffDesc, BufferData(), &m_pConstantBuffer);

Similarly, to create a texture, populate a TextureDesc structure and call IRenderDevice::CreateTexture() as in the following example:

TextureDesc TexDesc;
TexDesc.Name = "Sample 2D Texture";
TexDesc.Type = TEXTURE_TYPE_2D;
TexDesc.Width = 1024;
TexDesc.Height = 1024;
TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
TexDesc.Usage = USAGE_DEFAULT;
TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
m_pRenderDevice->CreateTexture(TexDesc, TextureData(), &m_pTestTex);

Initializing Pipeline State

Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).

Creating Shaders

To create a shader, populate a ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:
  • SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes.
  • SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See the shader converter for details.
  • SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL-to-HLSL converter.

To allow grouping of resources based on the expected frequency of change, Diligent Engine introduces a classification of shader variables:
  • Static variables (SHADER_VARIABLE_TYPE_STATIC) are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera-attribute or global-light-attribute constant buffers.
  • Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change at a per-material frequency. Examples include diffuse textures, normal maps, etc.
  • Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

This post describes the resource binding model in Diligent Engine.
The following is an example of shader initialization:

ShaderCreationAttribs Attrs;
Attrs.Desc.Name = "MyPixelShader";
Attrs.FilePath = "MyShaderFile.fx";
Attrs.SearchDirectories = "shaders;shaders\\inc;";
Attrs.EntryPoint = "MyPixelShader";
Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;
BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

ShaderVariableDesc ShaderVars[] =
{
    {"g_StaticTexture",  SHADER_VARIABLE_TYPE_STATIC},
    {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
    {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
};
Attrs.Desc.VariableDesc = ShaderVars;
Attrs.Desc.NumVariables = _countof(ShaderVars);
Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

StaticSamplerDesc StaticSampler;
StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
StaticSampler.TextureName = "g_MutableTexture";
Attrs.Desc.NumStaticSamplers = 1;
Attrs.Desc.StaticSamplers = &StaticSampler;

ShaderMacroHelper Macros;
Macros.AddShaderMacro("USE_SHADOWS", 1);
Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
Macros.Finalize();
Attrs.Macros = Macros;

RefCntAutoPtr<IShader> pShader;
m_pDevice->CreateShader(Attrs, &pShader);

Creating the Pipeline State Object

To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines pipeline specifics such as whether the pipeline is a compute pipeline, the number and format of render targets, and the depth-stencil format:

// This is a graphics pipeline
PSODesc.IsComputePipeline = false;
PSODesc.GraphicsPipeline.NumRenderTargets = 1;
PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;

The structure also defines depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, the rasterizer state can be defined as in the code snippet below:

// Init rasterizer state
RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
RasterizerDesc.FillMode = FILL_MODE_SOLID;
RasterizerDesc.CullMode = CULL_MODE_NONE;
RasterizerDesc.FrontCounterClockwise = True;
RasterizerDesc.ScissorEnable = True;
//RasterizerDesc.MultisampleEnable = false; // do not allow MSAA (fonts would be degraded)
RasterizerDesc.AntialiasedLineEnable = False;

When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:

m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

Binding Shader Resources

Shader resource binding in Diligent Engine is based on grouping variables into three groups (static, mutable and dynamic). Static variables are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera-attribute or global-light-attribute constant buffers.
They are bound directly to the shader object:

PixelShader->GetShaderVariable("g_tex2DShadowMap")->Set(pShadowMapSRV);

Mutable and dynamic variables are bound via a new object called a Shader Resource Binding (SRB), which is created by the pipeline state:

m_pPSO->CreateShaderResourceBinding(&m_pSRB);

Dynamic and mutable resources are then bound through the SRB object:

m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

The difference between mutable and dynamic resources is that mutable ones can only be set once per instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as it can affect performance: static variables are generally the most efficient, followed by mutable ones, while dynamic variables are the most expensive. This post explains shader resource binding in more detail.

Setting the Pipeline State and Invoking a Draw Command

Before any draw command can be invoked, all required vertex and index buffers as well as the pipeline state should be bound to the device context:

// Clear render target
const float zero[4] = {0, 0, 0, 0};
m_pContext->ClearRenderTarget(nullptr, zero);

// Set vertex and index buffers
IBuffer *buffer[] = {m_pVertexBuffer};
Uint32 offsets[] = {0};
Uint32 strides[] = {sizeof(MyVertex)};
m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
m_pContext->SetPipelineState(m_pPSO);

Also, all shader resources must be committed to the device context:

m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() to execute a compute command. Note that a graphics pipeline must be bound for a draw command, and a compute pipeline for a dispatch command. Draw() takes a DrawAttribs structure as an argument. The structure members define all the attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced, indirect, etc.). For example:

DrawAttribs attrs;
attrs.IsIndexed = true;
attrs.IndexType = VT_UINT16;
attrs.NumIndices = 36;
attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
pContext->Draw(attrs);

Tutorials and Samples

The GitHub repository contains a number of tutorials and sample applications that demonstrate API usage.
  • Tutorial 01 - Hello Triangle: shows how to render a simple triangle using the Diligent Engine API.
  • Tutorial 02 - Cube: demonstrates how to render an actual 3D object, a cube; shows how to load shaders from files and create and use vertex, index and uniform buffers.
  • Tutorial 03 - Texturing: demonstrates how to apply a texture to a 3D object; shows how to load a texture from a file, create a shader resource binding object and sample a texture in the shader.
  • Tutorial 04 - Instancing: demonstrates how to use instancing to render multiple copies of one object with a unique transformation matrix for every copy.
  • Tutorial 05 - Texture Array: demonstrates how to combine instancing with texture arrays to use a unique texture for every instance.
  • Tutorial 06 - Multithreading: shows how to generate command lists in parallel from multiple threads.
  • Tutorial 07 - Geometry Shader: shows how to use a geometry shader to render a smooth wireframe.
  • Tutorial 08 - Tessellation: shows how to use hardware tessellation to implement a simple adaptive terrain rendering algorithm.
  • Tutorial 09 - Quads: shows how to render multiple 2D quads, frequently switching textures and blend modes.
  • The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface.
  • The Atmospheric Scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc.

The repository also includes an Asteroids performance benchmark based on this demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

Integration with Unity

Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. GhostCubePlugin shows an example of how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.
  3. Hi, I've recently been trying to implement screen space reflections in my engine; however, the result is extremely buggy. I'm using this tutorial: http://imanolfotia.com/blog/update/2017/03/11/ScreenSpaceReflections.html The reflections look decent when I am close to the ground (first image); however, when I get further away from the ground (the surface that is reflecting things), the reflections become blocky and strange (second image). I have a feeling it has something to do with the fact that the further the rays travel in view space, the more scattered they get, so the reflected image is less detailed, hence the blockiness. However, I'm really not sure about this, and if it is the case, I don't know how to fix it. It would be great if anyone had suggestions on how to debug or sort this out. Thanks. Here is the code for the ray casting:

vec4 ray_cast(inout vec3 direction, inout vec3 hit_coord, out float depth_difference, out bool success)
{
    vec3 original_coord = hit_coord;
    direction *= 0.2;
    vec4 projected_coord;
    float sampled_depth;

    for (int i = 0; i < 20; ++i)
    {
        hit_coord += direction;
        projected_coord = projection_matrix * vec4(hit_coord, 1.0);
        projected_coord.xy /= projected_coord.w;
        projected_coord.xy = projected_coord.xy * 0.5 + 0.5;

        // view_positions stores the view space coordinates of the objects
        sampled_depth = textureLod(view_positions, projected_coord.xy, 2).z;
        if (sampled_depth > 1000.0)
            continue;

        depth_difference = hit_coord.z - sampled_depth;
        if ((direction.z - depth_difference) < 1.2)
        {
            if (depth_difference <= 0)
            {
                // binary search for a more detailed sample
                vec4 result = vec4(binary_search(direction, hit_coord, depth_difference), 1.0);
                success = true;
                return result;
            }
        }
    }

    return vec4(projected_coord.xy, sampled_depth, 0.0);
}

Here is the code just before this gets called:

float ddepth;
bool success;
vec3 jitt = mix(vec3(0.0), vec3(hash33(view_position)), 0.5);
vec3 ray_dir = reflect(normalize(view_position), normalize(view_normal));
ray_dir = ray_dir * max(0.2, -view_position.z);

/* ray cast */
vec4 coords = ray_cast(ray_dir, view_position, ddepth, success);
  4. Hi everyone, I have an issue where my geometry's color flickers from red to black. I narrowed the issue down to my uniform buffers, but for the life of me I cannot figure out why it's happening. I have two uniform buffers: one for my camera matrices and the other for my material colors. This is my first buffer, the camera buffer:

struct SystemBuffer
{
    BF::Math::Matrix4 modelMatrix;      // 0   + 4 bytes * 4 floats * 4 vector4 = 64 bytes
    BF::Math::Matrix4 viewMatrix;       // 64  + 4 bytes * 4 floats * 4 vector4 = 128 bytes
    BF::Math::Matrix4 projectionMatrix; // 128 + 4 bytes * 4 floats * 4 vector4 = 192 bytes
    BF::Math::Vector4f cameraPosition;  // 192 + 4 bytes * 4 floats = 208 bytes
};

This is 208 bytes and is aligned perfectly for OpenGL std140, as far as I know. This is my second buffer, the material buffer:

struct ColorBuffer
{
    Color ambientColor;     // 0  + 4 bytes * 4 floats = 16 bytes
    Color diffuseColor;     // 16 + 4 bytes * 4 floats = 32 bytes
    Color specularColor;    // 32 + 4 bytes * 4 floats = 48 bytes
    float shininess = 0.0f; // 48 + 4 bytes = 52 bytes
};

This is 52 bytes and is also aligned for OpenGL std140, as far as I know. If I remove the shininess variable from my color buffer, the flickering is reduced a lot, but it still does not go away completely. When I add that variable back, it goes back to flickering. I tried adding padding (three more floats under the shininess variable) to make my ColorBuffer a multiple of 16 bytes, but that did not help at all. This is how my uniform buffer class looks:

namespace BF { namespace Platform { namespace API { namespace OpenGL {

GLConstantBuffer::GLConstantBuffer() :
    buffer(0), bindingIndex(0)
{
}

GLConstantBuffer::~GLConstantBuffer()
{
    GLCall(glDeleteBuffers(1, &buffer));
}

void GLConstantBuffer::Create(unsigned int size, unsigned int bindingIndex)
{
    this->bindingIndex = bindingIndex;
    GLCall(glGenBuffers(1, &buffer));
    GLCall(glBindBufferBase(GL_UNIFORM_BUFFER, bindingIndex, buffer));
    GLCall(glBufferData(GL_UNIFORM_BUFFER, size, nullptr, GL_STATIC_DRAW));
    GLCall(glBindBuffer(GL_UNIFORM_BUFFER, 0));
}

void GLConstantBuffer::Update(const void* data, unsigned int size)
{
    GLCall(glBindBufferBase(GL_UNIFORM_BUFFER, bindingIndex, buffer));
    GLCall(glBufferSubData(GL_UNIFORM_BUFFER, 0, size, data));
    GLCall(glBindBuffer(GL_UNIFORM_BUFFER, 0));
}

} } } }

This is how I use the buffers:

void Camera::Initialize()
{
    constantBuffer.Create(sizeof(SystemBuffer), 0);
}

void Camera::Update()
{
    constantBuffer.Update(&systemBuffer, sizeof(SystemBuffer));
}

//------

void ForwardRenderer::Initialize()
{
    materialConstantBuffer.Create(sizeof(MeshMaterial::ColorBuffer), 2);
}

void ForwardRenderer::Render()
{
    // clear depth + color buffers
    for (size_t i = 0; i < meshes.size(); i++)
    {
        // transform mesh
        constantBuffer.Update(&systemBuffer, sizeof(SystemBuffer));
        materialConstantBuffer.Update(&meshes[i]->material->colorBuffer, sizeof(MeshMaterial::ColorBuffer));
        // draw
    }
}

And this is what my shaders look like:

vertexShader = R"(
    #version 450 core
    layout(location = 0) in vec3 inPosition;

    layout (std140, binding = 0) uniform camera_data
    {
        mat4 buffer_modelMatrix;
        mat4 buffer_viewMatrix;
        mat4 buffer_projectionMatrix;
        vec4 cameraPos;
    };

    void main()
    {
        vec4 worldSpace = buffer_modelMatrix * vec4(inPosition.xyz, 1.0f);
        gl_Position = buffer_projectionMatrix * buffer_viewMatrix * worldSpace;
    }
)";

pixelShader = R"(
    #version 450 core

    struct Material
    {
        vec4 ambientColor;
        vec4 diffuseColor;
        vec4 specularColor;
        float shininess;
    };

    layout (std140, binding = 2) uniform MaterialUniform
    {
        Material material;
    };

    out vec4 color;

    void main()
    {
        color = material.ambientColor * material.diffuseColor * material.specularColor;
    }
)";

All these planes flicker randomly to black and then back to red. Any help would be greatly appreciated.
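For what it's worth, under std140 a struct inside a uniform block is padded to a multiple of 16 bytes, so the GLSL Material occupies 64 bytes while the C++ ColorBuffer is 52. Keeping the two sizes identical is the usual first step; a sketch, assuming the rest of the pipeline stays as posted:

// Hedged sketch: pad the CPU-side struct to the 64 bytes the std140 Material
// struct occupies on the GPU (structs in std140 round up to 16-byte multiples),
// so glBufferSubData never uploads a short read.
struct ColorBuffer
{
    Color ambientColor;                          // bytes  0-15
    Color diffuseColor;                          // bytes 16-31
    Color specularColor;                         // bytes 32-47
    float shininess = 0.0f;                      // bytes 48-51
    float pad0 = 0.0f, pad1 = 0.0f, pad2 = 0.0f; // bytes 52-63
};
static_assert(sizeof(ColorBuffer) == 64, "must match the std140 block size");

If the padded struct alone doesn't cure it, the per-draw glBufferSubData into a GL_STATIC_DRAW buffer is also suspect: orphaning first (glBufferData with nullptr and GL_DYNAMIC_DRAW) is a cheap way to rule out overwriting a buffer the GPU is still reading.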
  5. Hello everyone. I'm following the lessons on learnopengl.com and just finished the chapter on Deferred Shading. I confess that I am a lighting enthusiast in games, and unfortunately I did not find anything explaining how to use as many lights as I want at runtime; I only found examples with a fixed number of light sources:

for (int i = 0; i < NR_LIGHTS; ++i)
{
    vec3 lightDir = normalize(lights[i].Position - FragPos);
    vec3 diffuse = max(dot(Normal, lightDir), 0.0) * Albedo * lights[i].Color;
    lighting += diffuse;
}

Searching Google, I found some mentions of accumulating information in a framebuffer, but I did not find code or any further explanation. Could someone explain to me how I could do this? Pseudocode with the OpenGL commands would be fine. Thank you all.
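A minimal sketch of the usual "accumulate one pass per light" approach, assuming the G-buffer is already filled; lightingShader, lights and drawFullscreenQuad are illustrative placeholders, not learnopengl names:

// Hedged sketch: run one lighting pass per light into the same target with
// additive blending, so the light count is only bounded by performance.
glBindFramebuffer(GL_FRAMEBUFFER, lightingFBO);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE); // accumulate: dst = dst + src
glDepthMask(GL_FALSE);       // lighting passes do not write depth
for (const Light &light : lights)
{
    lightingShader.setVec3("light.Position", light.position);
    lightingShader.setVec3("light.Color", light.color);
    drawFullscreenQuad();    // or a light-volume mesh for point lights
}
glDisable(GL_BLEND);
glDepthMask(GL_TRUE);

With many lights, rendering proportionate light volumes (e.g. spheres for point lights) instead of full-screen quads keeps the fill-rate cost bounded.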
  6. Hello! I have two GPUs in my computer: one integrated in my CPU and another on my graphics card. I am trying to use the OpenGL/OpenCL interop capabilities, but I am stuck at the creation of the OpenCL context: I don't know how to identify which platform/device of the two is used by OpenGL. In the code below, which function should I use in the test "DEVICE MATCHING OPENGL ONE" to check whether the device is the one used by OpenGL, or what should I do to check that the platform_id is the right one?

sf::ContextSettings settings;
settings.depthBits = 24;
settings.stencilBits = 8;
settings.antialiasingLevel = 2;
sf::Window window(sf::VideoMode(2048, 1024), "GAME", sf::Style::Fullscreen, settings);
glewInit();

cl_platform_id platform_ids[16] = { NULL };
cl_device_id device_id = NULL;
cl_uint ret_num_devices;
cl_uint ret_num_platforms;
cl_platform_id platform_id = 0;
cl_int ret = clGetPlatformIDs(_countof(platform_ids), platform_ids, &ret_num_platforms);
size_t n = 0;

cl_context_properties props[] =
{
    CL_GL_CONTEXT_KHR, (cl_context_properties) wglGetCurrentContext(),
    CL_WGL_HDC_KHR, (cl_context_properties) wglGetCurrentDC(),
    CL_CONTEXT_PLATFORM, (cl_context_properties) platform_id,
    0
};

for (size_t i = 0; i < ret_num_platforms; ++i)
{
    platform_id = platform_ids[i];
    cl_device_id curDevices_id[16];
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_GPU, _countof(curDevices_id), curDevices_id, &ret_num_devices);
    for (cl_uint nDevices = 0; nDevices < ret_num_devices; ++nDevices)
    {
        cl_device_id curDevice_id = curDevices_id[nDevices];
        clGetGLContextInfoKHR_fn clGetGLContextInfo = reinterpret_cast<clGetGLContextInfoKHR_fn>(
            clGetExtensionFunctionAddressForPlatform(platform_id, "clGetGLContextInfoKHR"));
        if (clGetGLContextInfo)
        {
            cl_device_id clGLDevice = 0;
            props[5] = reinterpret_cast<cl_context_properties>(platform_id);
            clGetGLContextInfo(props, CL_CURRENT_DEVICE_FOR_GL_CONTEXT_KHR, sizeof(clGLDevice), &clGLDevice, &n);
            if (DEVICE MATCHING OPENGL ONE)
            {
                device_id = clGLDevice;
            }
        }
    }
    if (device_id)
    {
        break;
    }
}

cl_context context = clCreateContext(props, 1, &device_id, NULL, NULL, &ret);

Thanks for your future help!
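For what it's worth, CL_CURRENT_DEVICE_FOR_GL_CONTEXT_KHR already answers the question: it returns the CL device that owns the current GL context, so the placeholder test can reduce to a comparison against the device being enumerated:

// Hedged sketch: the query above yields the CL device backing the current GL
// context, so matching it against the enumerated device identifies the pair.
if (clGLDevice == curDevice_id)
{
    device_id = clGLDevice; // this platform/device is the one OpenGL renders on
}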
  7. Hi, I was studying how to make a bloom/glow effect in OpenGL, following the tutorials from learnopengl.com and ThinMatrix (YouTube), but I am still confused about how to generate the bright-colored texture to be used for the blur. Do I need to put lights in the areas where I want the glow to happen, so they are brighter than other objects in the scene? Does that mean I need to draw the scene with the lights first? Or can the brightness be extracted from how the model's color was rendered/textured, through a formula or something? I have a scene where I want the crystal to glow. Can somebody enlighten me on what the correct approach is? Really appreciated!
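For what it's worth, in the learnopengl-style bloom chain the bright texture is usually extracted from the already-lit scene color with a luminance threshold; no extra lights are required. A minimal GLSL sketch, with sceneColor and threshold as illustrative names:

#version 330 core
// Hedged bright-pass sketch: keep only pixels above a luminance threshold.
uniform sampler2D sceneColor;
uniform float threshold; // e.g. 1.0 with an HDR render target
in vec2 texCoord;
out vec4 brightColor;

void main()
{
    vec3 c = texture(sceneColor, texCoord).rgb;
    float luma = dot(c, vec3(0.2126, 0.7152, 0.0722)); // perceptual luminance
    brightColor = (luma > threshold) ? vec4(c, 1.0) : vec4(0.0, 0.0, 0.0, 1.0);
}

For a specific object like the crystal, another common trick is to render just that object (or an emissive mask) into the bright buffer directly, then blur and add as usual.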
  8. Hi, I'm trying to produce volumetric light in OpenGL following the implementation details in "GPU Pro 5: Volumetric Light Effects in Killzone". I am confused about the number of passes needed to create the effect. So I have the shadow pass, which renders the scene from the light's POV; then the G-buffer pass, which renders the whole scene to textures; and finally a third pass, which ray-marches every pixel and computes the accumulated scattering factor according to its distance from the light in the scene (binding the shadow map from the first pass). Then what? Do I blend these buffers on a full-screen quad in a final pass? Or should I do the ray marching on the result of combining the shadow map and the G-buffer? Thanks in advance.
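One common arrangement, sketched below under assumed names: write the ray-march result into its own (often half-resolution) scattering buffer, then additively composite it over the lit scene in one final full-screen pass:

#version 330 core
// Hedged compositing sketch: add the accumulated in-scattering on top of the
// shaded scene. sceneColor and scatteringBuffer are illustrative names.
uniform sampler2D sceneColor;       // lit scene from the G-buffer/shading pass
uniform sampler2D scatteringBuffer; // ray-marched volumetric accumulation
in vec2 texCoord;
out vec4 fragColor;

void main()
{
    vec3 scene = texture(sceneColor, texCoord).rgb;
    vec3 scattering = texture(scatteringBuffer, texCoord).rgb;
    fragColor = vec4(scene + scattering, 1.0); // additive: fog adds light
}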
  9. Hello. I can only use vec4 in GLSL as the out color. How can I use other formats like int, uint, ivec4, etc.?
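The catch is that the fragment output type has to match the format class of the color attachment; with a normalized GL_RGBA8-style target, only floating-point outputs are valid. A minimal sketch, assuming an unsigned-integer render target (note that blending must be disabled for integer attachments):

C++ side (the attachment must use an integer internal format):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32UI, width, height, 0, GL_RGBA_INTEGER, GL_UNSIGNED_INT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

GLSL side (declare a matching integer output; read it back later with a usampler2D):

#version 330 core
out uvec4 color;
void main()
{
    color = uvec4(1u, 2u, 3u, 4u);
}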
  10. Hello. I'm trying to implement OpenCL/OpenGL interop via clCreateFromGLTexture (texture sharing):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

With such a texture I expected that write_imagei and write_imageui would work, but they don't; only write_imagef works. The behaviour is the same on the Intel and NVIDIA GPUs in my laptop. Why is that, and why is there no such information in any documentation or anywhere on the internet? This pitfall cost me several hours, and probably the same for many other developers.
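For what it's worth, the write function is selected by the image channel data type, and GL_RGBA8 is a normalized format: it maps to CL_UNORM_INT8, for which only write_imagef applies. A sketch of a format that should accept write_imageui instead, assuming the driver supports it for sharing:

// Hedged sketch: an unnormalized unsigned-integer texture maps to
// CL_UNSIGNED_INT8, which is what write_imageui expects.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, width, height, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, nullptr);
cl_int err = CL_SUCCESS;
cl_mem image = clCreateFromGLTexture(context, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, 0, texture, &err);
// write_imageui should now be legal in the kernel; write_imagef no longer is.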
  11. Hello! When I implemented SSR I encountered the problem of artifacts. Screenshots here. Code:

#version 330 core

uniform sampler2D normalMap; // in world space
uniform sampler2D colorMap;
uniform sampler2D reflectionStrengthMap;
uniform sampler2D positionMap; // in world space
uniform mat4 projection, view;
uniform vec3 cameraPosition;

in vec2 texCoord;
layout (location = 0) out vec4 fragColor;

void main()
{
    mat4 vp = projection * view;
    vec3 position = texture(positionMap, texCoord).xyz;
    vec3 normal = texture(normalMap, texCoord).xyz;
    vec4 coords;
    vec3 viewDir = normalize(position - cameraPosition);
    vec3 reflected = reflect(viewDir, normal);
    float L = 0.5;
    vec3 newPos;

    for (int i = 0; i < 10; i++)
    {
        newPos = position + reflected * L;
        coords = vp * vec4(newPos, 1.0);
        coords.xy = 0.5 + 0.5 * coords.xy / coords.w;
        newPos = texture(positionMap, coords.xy).xyz;
        L = length(position - newPos);
    }

    float fresnel = 0.0 + 2.8 * pow(1 + dot(viewDir, normal), 4);
    L = clamp(L * 0.1, 0, 1);
    float error = (1 - L);
    vec3 color = texture(colorMap, coords.xy).xyz;
    fragColor = mix(texture(colorMap, texCoord), vec4(color, 1.0), texture(reflectionStrengthMap, texCoord).r);
}

I will be grateful for help!
  12. Hello! During the implementation of SSLR, I ran into a problem: only objects that are far from the reflecting surface are reflected. For example, as seen in the screenshot, this is a lamp and angel wings. I give the code and screenshots below.

#version 330 core

uniform sampler2D normalMap; // in view space
uniform sampler2D depthMap;  // in view space
uniform sampler2D colorMap;
uniform sampler2D reflectionStrengthMap;
uniform mat4 projection;
uniform mat4 inv_projection;

in vec2 texCoord;
layout (location = 0) out vec4 fragColor;

vec3 calcViewPosition(in vec2 texCoord)
{
    // Combine UV & depth into XY & Z (NDC)
    vec3 rawPosition = vec3(texCoord, texture(depthMap, texCoord).r);

    // Convert from (0, 1) range to (-1, 1)
    vec4 ScreenSpacePosition = vec4(rawPosition * 2 - 1, 1);

    // Undo perspective transformation to bring into view space
    vec4 ViewPosition = inv_projection * ScreenSpacePosition;
    ViewPosition.y *= -1;

    // Perform perspective divide and return
    return ViewPosition.xyz / ViewPosition.w;
}

vec2 rayCast(vec3 dir, inout vec3 hitCoord, out float dDepth)
{
    dir *= 0.25f;

    for (int i = 0; i < 20; i++)
    {
        hitCoord += dir;
        vec4 projectedCoord = projection * vec4(hitCoord, 1.0);
        projectedCoord.xy /= projectedCoord.w;
        projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;
        float depth = calcViewPosition(projectedCoord.xy).z;
        dDepth = hitCoord.z - depth;
        if (dDepth < 0.0)
            return projectedCoord.xy;
    }

    return vec2(-1.0);
}

void main()
{
    vec3 normal = texture(normalMap, texCoord).xyz * 2.0 - 1.0;
    vec3 viewPos = calcViewPosition(texCoord);

    // Reflection vector
    vec3 reflected = normalize(reflect(normalize(viewPos), normalize(normal)));

    // Ray cast
    vec3 hitPos = viewPos;
    float dDepth;
    float minRayStep = 0.1f;
    vec2 coords = rayCast(reflected * minRayStep, hitPos, dDepth);

    if (coords != vec2(-1.0))
        fragColor = mix(texture(colorMap, texCoord), texture(colorMap, coords), texture(reflectionStrengthMap, texCoord).r);
    else
        fragColor = texture(colorMap, texCoord);
}

Screenshots (colorMap, normalMap, depthMap) attached. I will be grateful for help.
  13. So, I'm currently in the process of implementing a reversed floating-point depth buffer (to increase depth precision) in OpenGL. I have everything working except the modifications to the projection matrices necessary for this to work. (Matrices are my weakness.) What I have working are a perspective projection matrix and an orthographic projection matrix with the x/y range spanning from -1 to 1 and the z range from 0 to 1, as in DirectX (it's basically the exact same code as in the glm library). I use them in combination with:

glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

Here are the two matrices:

void Camera::setProjectionPerspektiveY_reverseDepth_ZO(float fovY, float width, float height, float znear, float zfar)
{
    float rad = MathE::toRadians(fovY);
    float h = glm::cos(0.5f * rad) / glm::sin(0.5f * rad);
    float w = h * height / width;

    glm::mat4 p;
    p[0][0] = w;
    p[1][1] = h;
    p[2][2] = zfar / (znear - zfar);
    p[2][3] = -1;
    p[3][2] = -(zfar * znear) / (zfar - znear);
    this->projectionMatrix = p;
}

void Camera::setProjectionOrtho_reversed_ZO(float left, float bottom, float right, float top, float znear, float zfar)
{
    glm::mat4 p;
    p[0][0] = 2.0f / (right - left);
    p[1][1] = 2.0f / (top - bottom);
    p[2][2] = -1.0f / (zfar - znear);
    p[3][0] = -(right + left) / (right - left);
    p[3][1] = -(top + bottom) / (top - bottom);
    p[3][2] = -znear / (zfar - znear);
    this->projectionMatrix = p;
}

My question is: does anybody know how to properly reverse the depth in both of these matrices (from 0-1 to 1-0)?
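With the 0-to-1 convention already in place, the change is just a remap so the near plane lands on 1 and the far plane on 0. A sketch derived from the matrices above (treat it as a starting point, not verified in your engine):

// Reversed-depth perspective (near -> 1, far -> 0), same conventions as above:
p[2][2] = znear / (zfar - znear);
p[2][3] = -1;
p[3][2] = (zfar * znear) / (zfar - znear);

// Reversed-depth orthographic:
p[2][2] = 1.0f / (zfar - znear);
p[3][2] = zfar / (zfar - znear);

Remember to also clear the depth buffer to 0.0 and flip the depth test to GL_GREATER (or GL_GEQUAL); otherwise everything fails the default GL_LESS test.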
  14. I set up my OpenGL context using GLEW with no alpha bitplanes and it works fine, at least on my GPU. I notice that other people are using 8 bits:

const int pixelAttribs[] =
{
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
    WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
    WGL_ACCELERATION_ARB, WGL_FULL_ACCELERATION_ARB,
    WGL_COLOR_BITS_ARB, 32,
    WGL_ALPHA_BITS_ARB, 8,
    WGL_DEPTH_BITS_ARB, 24,
    WGL_SAMPLE_BUFFERS_ARB, GL_TRUE,
    WGL_SAMPLES_ARB, 4,
    0
};

(Example from here: https://mariuszbartosik.com/opengl-4-x-initialization-in-windows-without-a-framework/)

I wonder what the correct way to do this is?
  15. When importing sprites in Unity, we get a Pixels Per Unit option. The smaller the value, the larger the sprite looks on screen. This is great for the very small (50x32 px) sprites I download. My question is: how can I accomplish this with OpenGL? Should I make the sprite larger in the frag shader? I don't want to scale the game object; I'd like the image to be rendered at the size I need without changing the scale, similar to Unity. No code needed, just some suggestions to put me in the right direction. Thanks!
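One hedged suggestion rather than a recipe: keep a Unity-style pixels-per-unit value alongside each sprite and bake it into the quad's size (or the model matrix) on the CPU; the fragment shader never needs to know, and the object's own logical scale stays untouched. Sketch with illustrative names (spriteWidthPx, spriteHeightPx, transform are assumptions):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// spriteWidthPx/spriteHeightPx come from the image; pixelsPerUnit is the import setting.
float pixelsPerUnit = 16.0f;
float worldW = spriteWidthPx / pixelsPerUnit;  // e.g. 50 px -> 3.125 world units
float worldH = spriteHeightPx / pixelsPerUnit; // e.g. 32 px -> 2.0 world units

// Size a unit quad to that extent; the game object's transform stays unchanged.
glm::mat4 model = transform * glm::scale(glm::mat4(1.0f), glm::vec3(worldW, worldH, 1.0f));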
  16. Hello there! I want to understand this better. OpenGL 4.6 added support for running SPIR-V compiled shaders. I've messed around with it and gotten it to work; it's a little more complicated in that you have to use UBOs for most things now. What I mainly want to know is what the benefits are. I understand SPIR-V is bytecode, that it's used by Vulkan, and that because it's bytecode there isn't any worry about wild inconsistencies across GPU vendors. When OpenGL consumes SPIR-V, does it also benefit from this? Also, does this help with the micro-stutter caused by shaders being compiled on first use with traditional GLSL and shader caching? I have an application that uses a really old version of GLSL, like GLSL 120, and I'd been thinking about updating it to properly support GLSL 460 compiled to SPIR-V. The application also supports DX9, so I was curious how updating to these new techniques with OpenGL 4.6 would stack up against the old GLSL or DX9 paths. I'm not expecting some magic performance benefit; I'm just legitimately curious whether it'd be worth trying.
  17. Hello! I tried to implement Morgan McGuire's method, but my attempts failed. He described his method here: Screen Space Ray Tracing. Below are my code and screenshots. SSLR fragment shader:

#version 330 core

uniform sampler2D normalMap; // in view space
uniform sampler2D depthMap;  // in view space
uniform sampler2D colorMap;
uniform sampler2D reflectionStrengthMap;
uniform mat4 projection;
uniform mat4 inv_projection;

in vec2 texCoord;
layout (location = 0) out vec4 fragColor;

vec3 calcViewPosition(in vec2 texCoord)
{
    // Combine UV & depth into XY & Z (NDC)
    vec3 rawPosition = vec3(texCoord, texture(depthMap, texCoord).r);

    // Convert from (0, 1) range to (-1, 1)
    vec4 ScreenSpacePosition = vec4(rawPosition * 2 - 1, 1);

    // Undo perspective transformation to bring into view space
    vec4 ViewPosition = inv_projection * ScreenSpacePosition;
    ViewPosition.y *= -1;

    // Perform perspective divide and return
    return ViewPosition.xyz / ViewPosition.w;
}

// By Morgan McGuire and Michael Mara at Williams College 2014
// Released as open source under the BSD 2-Clause License
// http://opensource.org/licenses/BSD-2-Clause
#define point2 vec2
#define point3 vec3

float distanceSquared(vec2 a, vec2 b) { a -= b; return dot(a, a); }

// Returns true if the ray hit something
bool traceScreenSpaceRay(
    // Camera-space ray origin, which must be within the view volume
    point3 csOrig,
    // Unit-length camera-space ray direction
    vec3 csDir,
    // A projection matrix that maps to pixel coordinates
    // (not [-1, +1] normalized device coordinates)
    mat4x4 proj,
    // The camera-space Z buffer (all negative values)
    sampler2D csZBuffer,
    // Dimensions of csZBuffer
    vec2 csZBufferSize,
    // Camera-space thickness to ascribe to each pixel in the depth buffer
    float zThickness,
    // (Negative number)
    float nearPlaneZ,
    // Step in horizontal or vertical pixels between samples. This is a float
    // because integer math is slow on GPUs, but should be set to an integer >= 1
    float stride,
    // Number between 0 and 1 for how far to bump the ray in stride units
    // to conceal banding artifacts
    float jitter,
    // Maximum number of iterations. Higher gives better images but may be slow
    const float maxSteps,
    // Maximum camera-space distance to trace before returning a miss
    float maxDistance,
    // Pixel coordinates of the first intersection with the scene
    out point2 hitPixel,
    // Camera-space location of the ray hit
    out point3 hitPoint)
{
    // Clip to the near plane
    float rayLength = ((csOrig.z + csDir.z * maxDistance) > nearPlaneZ) ?
        (nearPlaneZ - csOrig.z) / csDir.z : maxDistance;
    point3 csEndPoint = csOrig + csDir * rayLength;

    // Project into homogeneous clip space
    vec4 H0 = proj * vec4(csOrig, 1.0);
    vec4 H1 = proj * vec4(csEndPoint, 1.0);
    float k0 = 1.0 / H0.w, k1 = 1.0 / H1.w;

    // The interpolated homogeneous version of the camera-space points
    point3 Q0 = csOrig * k0, Q1 = csEndPoint * k1;

    // Screen-space endpoints
    point2 P0 = H0.xy * k0, P1 = H1.xy * k1;

    // If the line is degenerate, make it cover at least one pixel
    // to avoid handling zero-pixel extent as a special case later
    P1 += vec2((distanceSquared(P0, P1) < 0.0001) ? 0.01 : 0.0);
    vec2 delta = P1 - P0;

    // Permute so that the primary iteration is in x to collapse
    // all quadrant-specific DDA cases later
    bool permute = false;
    if (abs(delta.x) < abs(delta.y))
    {
        // This is a more-vertical line
        permute = true;
        delta = delta.yx;
        P0 = P0.yx;
        P1 = P1.yx;
    }

    float stepDir = sign(delta.x);
    float invdx = stepDir / delta.x;

    // Track the derivatives of Q and k
    vec3 dQ = (Q1 - Q0) * invdx;
    float dk = (k1 - k0) * invdx;
    vec2 dP = vec2(stepDir, delta.y * invdx);

    // Scale derivatives by the desired pixel stride and then
    // offset the starting values by the jitter fraction
    dP *= stride; dQ *= stride; dk *= stride;
    P0 += dP * jitter; Q0 += dQ * jitter; k0 += dk * jitter;

    // Slide P from P0 to P1, (now-homogeneous) Q from Q0 to Q1, k from k0 to k1
    point3 Q = Q0;

    // Adjust end condition for iteration direction
    float end = P1.x * stepDir;

    float k = k0, stepCount = 0.0, prevZMaxEstimate = csOrig.z;
    float rayZMin = prevZMaxEstimate, rayZMax = prevZMaxEstimate;
    float sceneZMax = rayZMax + 100;

    for (point2 P = P0;
         ((P.x * stepDir) <= end) && (stepCount < maxSteps) &&
         ((rayZMax < sceneZMax - zThickness) || (rayZMin > sceneZMax)) &&
         (sceneZMax != 0);
         P += dP, Q.z += dQ.z, k += dk, ++stepCount)
    {
        rayZMin = prevZMaxEstimate;
        rayZMax = (dQ.z * 0.5 + Q.z) / (dk * 0.5 + k);
        prevZMaxEstimate = rayZMax;
        if (rayZMin > rayZMax)
        {
            float t = rayZMin;
            rayZMin = rayZMax;
            rayZMax = t;
        }

        hitPixel = permute ? P.yx : P;
        // You may need hitPixel.y = csZBufferSize.y - hitPixel.y; here if your
        // vertical axis is different than ours in screen space
        sceneZMax = texelFetch(csZBuffer, ivec2(hitPixel), 0).r;
    }

    // Advance Q based on the number of steps
    Q.xy += dQ.xy * stepCount;
    hitPoint = Q * (1.0 / k);
    return (rayZMax >= sceneZMax - zThickness) && (rayZMin < sceneZMax);
}

void main()
{
    vec3 normal = texture(normalMap, texCoord).xyz * 2.0 - 1.0;
    vec3 viewPos = calcViewPosition(texCoord);

    // Reflection vector
    vec3 reflected = normalize(reflect(normalize(viewPos), normalize(normal)));

    vec2 hitPixel;
    vec3 hitPoint;

    bool tssr = traceScreenSpaceRay(
        viewPos,
        reflected,
        projection,
        depthMap,
        vec2(1366, 768),
        0.0,  // zThickness
        -1.0, // nearPlaneZ
        1.0,  // stride
        0.0,  // jitter
        32,   // maxSteps
        32,   // maxDistance
        hitPixel,
        hitPoint
    );

    //fragColor = texture(colorMap, hitPixel);

    if (tssr)
        fragColor = mix(texture(colorMap, texCoord), texture(colorMap, hitPixel), texture(reflectionStrengthMap, texCoord).r);
    else
        fragColor = texture(colorMap, texCoord);
}

Screenshot attached. I create the projection matrix like this:

glm::perspective(glm::radians(90.0f), (float) WIN_W / (float) WIN_H, 1.0f, 32.0f)

There is also a screenshot of what happens if I display the image with fragColor = texture(colorMap, hitPixel), plus the colorMap, normalMap and depthMap. What am I doing wrong? Perhaps I misunderstand the meaning of csOrig, csDir and zThickness, so I would be glad if you could help me understand what these variables are.
  18. I ported the triangle-based line rendering from the site below to OpenGL (desktop) and/or ES (in C++): https://hypertolosana.wordpress.com/2015/03/10/efficient-webgl-stroking/ The original source (in JavaScript) is here: https://hypertolosana.github.io/efficient-webgl-stroking/playground.js Basically, it builds a line from a set of points and renders it as triangle-based (GL_TRIANGLES) geometry. Now my question is: how do I compute the texture coordinates for it? I'm especially confused about the join part. Do you have any suggestions?
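One common scheme, sketched below, is to accumulate the polyline's arc length for the U coordinate and use the extrusion side for V; points, uvs and the two-vertices-per-point layout are assumptions about your mesh, not the article's code:

#include <glm/glm.hpp>
#include <vector>

// Hedged sketch: U = accumulated length along the centerline (normalized by the
// total length, or by a world-space tile width for a repeating texture);
// V = which side of the centerline the extruded vertex sits on.
float totalLength = 0.0f;
for (size_t i = 1; i < points.size(); ++i)
    totalLength += glm::length(points[i] - points[i - 1]);

float runningLength = 0.0f;
for (size_t i = 0; i < points.size(); ++i)
{
    if (i > 0)
        runningLength += glm::length(points[i] - points[i - 1]);
    float u = runningLength / totalLength; // or runningLength / texWorldWidth
    uvs.push_back(glm::vec2(u, 0.0f));     // left-extruded vertex
    uvs.push_back(glm::vec2(u, 1.0f));     // right-extruded vertex
}

For the join triangles, one option is to reuse the junction's U on every fan vertex, with V interpolated across the fan, so the texture pivots around the corner instead of smearing.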
  19. I eventually want to become an expert game programmer, and I'm considering a specialised course to help me get there. I've been programming for almost a year in several languages, like JavaScript, C#, Lua, Swift, etc., mostly for game development. I have barely touched C and C++. One of the topics in that course is OpenGL. I've always seen OpenGL (and Vulkan, Metal, etc.) as an arcane API that takes a great deal of patience and dedication to work with. After checking out learnopengl.com and finding out how many lines of code it takes to get a window running, I feel like I'm right about that; it's kind of overwhelming. So before I take this course next year, I want to try to learn the fundamentals slowly so that I can get a general idea of what I'm getting myself into. What are some good resources for learning OpenGL, aside from learnopengl.com? Which library should I be using (SDL, SFML, LWJGL, etc.)? And if possible, how can I ease the learning process?
  20. Me again. So, I spent the last couple of days trying to stabilise my cascaded shadow maps. To do this, the shadow map has to: 1) have a fixed size (so that it doesn't scale/change with the camera rotation), by using a spherical bounding box, and 2) round its position to the nearest texel for camera movement. Now, I got number 1 working (it may not be the smallest possible sphere, but it works as a start), but number 2 is still giving me headaches. The important bit is clamping the coordinates to the nearest texel: I applied the sun's view-projection matrix to a position at 0/0/0, then tried to snap it to the nearest texel by bringing the clip-space position into the 0-1 range, multiplying it by the shadow map resolution, rounding that, and then calculating the difference between the rounded coordinate and the original one. But no matter what I do, the shadow map is still completely unstable, so I presume I'm missing something in this rounding calculation. Note that I had to reverse the Z coordinates and flip min/max. I'm not entirely sure why, but it seemed to fix shadow mapping for me (it worked perfectly fine with the standard shadow mapping code). Does anyone have an idea what could be missing in the rounding part of the code?
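For comparison, here is the snapping variant often attributed to the ShaderX/MJP writeups: round the world origin in shadow clip space at the resolution of the map, then fold the error back into the matrix. A sketch, with shadowProj, shadowView and shadowMapResolution as assumed inputs:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hedged sketch of texel snapping for a stable cascade.
glm::mat4 shadowVP = shadowProj * shadowView;        // sun's view-projection
glm::vec4 origin = shadowVP * glm::vec4(0, 0, 0, 1); // world origin in clip space
origin *= shadowMapResolution / 2.0f;                // clip units -> texel units
glm::vec4 rounded = glm::round(origin);
glm::vec4 offset = (rounded - origin) * (2.0f / shadowMapResolution);
offset.z = 0.0f; // snap only in X/Y
offset.w = 0.0f;
shadowVP = glm::translate(glm::mat4(1.0f), glm::vec3(offset)) * shadowVP;

The two prerequisites are the ones already listed in the post: the orthographic extents must stay a fixed size, and the snap has to be recomputed every frame before culling and rendering with shadowVP.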
  21. So, recently I was in the process of improving shadow mapping and tried to fix (or at least reduce) shadow acne. One of the frequently recommended solutions is to use front-face culling, which completely removes shadow acne on lit surfaces. It works, but it comes with another artifact. It makes sense why it's happening: if we put the camera inside the white block, we can see the cause. The first thing I tried was experimenting with shadow bias. While this removes the pixel crawl in the shadowed area, it introduces shadows on top of the edge of the lit surface. I also tried to come up with alternative solutions, for example rendering the shadow map with both front- and back-face culling and using the average of the two distances as the comparison point against the camera depth. This still didn't remove all shadow acne (not to mention that it adds another render pass, which hurts performance). I wasn't really able to find any resources on how to combat this issue. Are there any common techniques for fixing this, or at least reducing it to a minimum?
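Not a silver bullet, but the two techniques most often paired against acne without front-face culling are slope-scaled bias and normal-offset shadows. A GLSL sketch, where N, L, worldPos, lightViewProj and shadowMap are assumed inputs and both constants are per-scene tuning knobs:

// Hedged sketch: slope-scaled depth bias plus a normal-offset sample position.
float ndotl = max(dot(N, L), 0.0);
float bias = max(0.002 * (1.0 - ndotl), 0.0005); // more bias at grazing angles

// Normal offset: nudge the tested position along the surface normal before
// projecting into light space; this fights acne with less peter-panning than
// a large constant depth bias.
vec4 lightClip = lightViewProj * vec4(worldPos + N * 0.05 * (1.0 - ndotl), 1.0);
vec3 proj = lightClip.xyz / lightClip.w * 0.5 + 0.5;
float stored = texture(shadowMap, proj.xy).r;
float lit = (proj.z - bias) <= stored ? 1.0 : 0.0; // 1 = lit, 0 = in shadow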
  22. I'm making a 2D river and shifting the texture coordinates of the water based on a low-resolution "flow map". Each pixel in the flow map covers 24x24 pixels on screen. Everything is mostly working fine, but I'm getting a weird distortion where pixels interpolate between the flow map cells. I'm animating the tex coords based on time, passing the time in via a uniform that goes from 0.0 to 1.0. What's weird is that the distortion is non-existent at a time of 0.0, and gets worse and worse until 1.0. At time = 0.0, no distortions; at time = ~0.5, some distortions; at time = ~1.0, lots of distortion. [YouTube video demonstrating the glitch] (Notice it snap back around the 5-second mark of the video; that's when uWaterCycle reaches 0.0 again.) I tried to simplify the shader as much as possible to narrow in on what's going on, but something is eluding me. I understand the texture won't line up perfectly, since the texture only tiles at the seams and here it will be sampled arbitrarily. Regardless, I don't understand why the lines are all streaky and seemingly moving sideways.

uniform float uWaterCycle; // Goes from 0.0 to 1.0, based on time.

void main()
{
    //...snip irrelevancies...

    // The "flow map", where each pixel represents the water flow of a 24x24 pixel area on-screen.
    vec4 areaWaterCell = texture2D(Area_WaterCellTexture, fArea_WaterCellCoord);

    // Water direction.
    vec2 waterDirection = areaWaterCell.rg;

    // Convert from (0 to 1) to (-1 to 1).
    waterDirection = ((waterDirection * 2.0) - 1.0);

    // Multiply to increase water flow speed.
    vec2 waterDirectionX10 = (waterDirection * 10.0);

    // Get the primary water texture.
    vec4 waterDiffuseFrag = texture2D(Water_DiffuseTexture, fWater_DiffuseCoord + (waterDirectionX10 * -uWaterCycle)) * Water_Coloration;

    //...snip irrelevancies...

    // Set this color as the output.
    ColorBufferOutput = waterDiffuseFrag;
}
  23. So I have a program that renders a few models and some tessellated terrain, and lets the user navigate through it using WASD (R and F for vertical) keys and a mouselook camera. I also render two triangles covering the screen that I am attempting to use as the basis for some post-processing raymarching. I also have a depth framebuffer texture, rendered from the player's camera perspective, to get a sense of how far each pixel's ray must be cast. For reference, here's the part of my fragment shader that deals with the raymarching. First, some helper functions:

float LinearizeDepth(in vec2 uv)
{
    float zNear = 0.1;  // TODO: Replace by the zNear of your perspective projection
    float zFar = 250.0; // TODO: Replace by the zFar of your perspective projection
    float depth = texture2D(texture0, uv).x;
    float z_n = 2.0 * depth - 1.0;
    float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
    return z_e;
}

float mapSphere(vec3 eyePos)
{
    return length(eyePos) - 1.0;
}

float doMap(vec3 eyePos)
{
    float total = 999999.0;
    for (int x = 0; x < 5; x++)
    {
        for (int y = 0; y < 5; y++)
        {
            eyePos += vec3(x * 10, y * 10, 0);
            total = min(total, mapSphere(eyePos));
            eyePos -= vec3(x * 10, y * 10, 0);
        }
    }
    return total;
}

And here is the part of main that deals with the two triangles applied to the screen:

if (skyDrawingSky == 1)
{
    // vert is the 2D coords, from -1 to 1, of the two triangles rendered to the screen
    vec2 uv = (vert.xy + 1.0) * 0.5;

    // Get the depth of this pixel's view from a depth buffer rendered to texture0
    float depth = LinearizeDepth(uv);

    // I thought adding a 3rd dimension and then normalizing would be equivalent to
    // a perspective matrix, at which point I just multiply those 'perspective
    // coordinates' by the inverse of the actual camera view matrix that I use for
    // the polygonal graphics
    vec4 r = normalize(vec4(-vert.x, -vert.y, 1.0, 0.0));
    r = r * camAngleMain;

    vec3 rayDirection = r.xyz;
    vec3 rayOrigin = -camPositionMain;
    float distance = 0;
    float total = 0;

    for (int i = 0; i < 64; i++)
    {
        vec3 pos = rayOrigin + rayDirection * distance;
        float value = doMap(pos);
        distance += value;
        if (distance >= depth)
            break;
        else
            total += clamp(1 - value, 0, 1);
    }

    color = vec4(1, 1, 1, total);
    return;
}

Edit: Changing the raymarching camera GLSL code to the following doesn't really change anything, but it's easier to read:

vec4 r = normalize(vec4(vert.x, vert.y, 1.0, 0.0));
r = r * camProjectionMain;
r = r * camAngleMain;
vec3 rayDirection = r.xyz;
vec3 rayOrigin = camPositionMain;

My main problem is that, for some reason, vertically panning the camera results in vertical distortion of objects as they approach the top and bottom of the view/screen. If you look at the attached image, the white orbs are part of the raymarching shader; everything else is tessellated/polygonal graphics. In the first two images, the orbs can clearly be seen above or below the sand-colored heightmap. In the last image, the position of the orbs is a bit off, such that they now clip through the terrain. I can't figure out why this is. My camera seems to *almost* work, but not quite. My aspect ratio at the moment is a perfect square, and I multiply my raymarching screen coordinates (2D, from -1 to 1) by the inverse of my camera view matrix, so I don't really know why this would happen. Is there anything wrong with my implementation here?
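One thing worth trying: instead of hand-rolling the perspective from normalize(vec4(x, y, 1, 0)), unproject the screen coordinates through the inverse of the exact matrices used for the polygonal pass, which bakes in fov and aspect and tends to remove exactly this kind of edge stretching. A sketch, assuming an invViewProjection uniform equal to inverse(camProjection * camView):

// Hedged sketch: unproject NDC through the inverse view-projection so the
// raymarched rays line up with the rasterized camera, fov and aspect included.
vec4 nearPt = invViewProjection * vec4(vert.xy, -1.0, 1.0); // near-plane point in NDC
vec4 farPt  = invViewProjection * vec4(vert.xy,  1.0, 1.0); // far-plane point in NDC
nearPt /= nearPt.w;
farPt  /= farPt.w;
vec3 rayOrigin = nearPt.xyz;
vec3 rayDirection = normalize(farPt.xyz - nearPt.xyz); // matches the raster camera exactly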
  24. Hello, I am currently drawing an FFT ocean into a texture, including Dx, Dy and Dz. As I need the real height at a point for another algorithm, I am passing the points through a vertex shader as follows:

#version 330 core

layout (location = 0) in vec3 position;
layout (location = 1) in vec2 texCoords;

// Displacement and normal map
uniform sampler2D displacementMap;
uniform mat4 MVPMatrix;
uniform int N;
uniform vec2 gridLowerLeftCorner;

out float displacedHeight;

void main()
{
    // Displace the original position by the amount in the texture
    vec3 displacedVertex = texture(displacementMap, texCoords).xyz + position;

    // Scale vertex to the 0 -> 1 range
    vec2 waterCellIndices = vec2((displacedVertex.x - gridLowerLeftCorner.x) / N,
                                 (displacedVertex.z - gridLowerLeftCorner.y) / N);

    // Scale it to -1 -> 1
    waterCellIndices = (waterCellIndices * 2.0) - 1.0;

    displacedHeight = displacedVertex.y;
    gl_Position = vec4(waterCellIndices, 0, 1);
}

This works correctly (it writes the correct height at a given point). The issue is that some points, due to the Dx and Dz displacement, end up outside clip space. These points should instead wrap around, as the ocean is a collection of tiles. As you can see in the attached file, the edges would fit together perfectly inside the white square if they wrapped around (this is the clip space as seen in RenderDoc). Is there any way I could wrap this texture around (in reality, wrap the clip-space positions) so everything stays inside the viewport correctly? I tried to wrap around in the vertex shader by checking the boundaries and wrapping, but it doesn't work when a triangle has at least one vertex inside the viewport and others outside. Many thanks, André