Search the Community

Showing results for tags 'OpenGL'.



Found 1000 results

  1. Hi everyone, I have an issue where my geometry's color flickers from red to black. I narrowed it down to my uniform buffers, but for the life of me I cannot figure out why it's happening. I have two uniform buffers: one for my camera matrices and one for my material colors. The first, the camera buffer, looks like this:

```cpp
struct SystemBuffer
{
    BF::Math::Matrix4 modelMatrix;      // offset 0:   4 bytes * 4 floats * 4 vectors -> ends at 64 bytes
    BF::Math::Matrix4 viewMatrix;       // offset 64:  ends at 128 bytes
    BF::Math::Matrix4 projectionMatrix; // offset 128: ends at 192 bytes
    BF::Math::Vector4f cameraPosition;  // offset 192: 4 bytes * 4 floats -> ends at 208 bytes
};
```

This is 208 bytes and, as far as I know, is aligned correctly for OpenGL std140. The second, the material buffer, looks like this:

```cpp
struct ColorBuffer
{
    Color ambientColor;     // offset 0:  4 bytes * 4 floats -> ends at 16 bytes
    Color diffuseColor;     // offset 16: ends at 32 bytes
    Color specularColor;    // offset 32: ends at 48 bytes
    float shininess = 0.0f; // offset 48: ends at 52 bytes
};
```

This is 52 bytes and, as far as I know, is also aligned correctly for std140. My issue: if I remove the shininess variable from ColorBuffer, the flickering is reduced a lot, but it still does not go away entirely. When I add the variable back, the flickering returns in full. I tried adding padding (three more floats under the shininess variable to make ColorBuffer a multiple of 16 bytes), but that did not help at all.

This is my uniform buffer class:

```cpp
namespace BF { namespace Platform { namespace API { namespace OpenGL {

    GLConstantBuffer::GLConstantBuffer()
        : buffer(0), bindingIndex(0)
    {
    }

    GLConstantBuffer::~GLConstantBuffer()
    {
        GLCall(glDeleteBuffers(1, &buffer));
    }

    void GLConstantBuffer::Create(unsigned int size, unsigned int bindingIndex)
    {
        this->bindingIndex = bindingIndex;

        GLCall(glGenBuffers(1, &buffer));
        GLCall(glBindBufferBase(GL_UNIFORM_BUFFER, bindingIndex, buffer));
        GLCall(glBufferData(GL_UNIFORM_BUFFER, size, nullptr, GL_STATIC_DRAW));
        GLCall(glBindBuffer(GL_UNIFORM_BUFFER, 0));
    }

    void GLConstantBuffer::Update(const void* data, unsigned int size)
    {
        GLCall(glBindBufferBase(GL_UNIFORM_BUFFER, bindingIndex, buffer));
        GLCall(glBufferSubData(GL_UNIFORM_BUFFER, 0, size, data));
        GLCall(glBindBuffer(GL_UNIFORM_BUFFER, 0));
    }
} } } }
```

This is how I use the buffers:

```cpp
void Camera::Initialize()
{
    constantBuffer.Create(sizeof(SystemBuffer), 0);
}

void Camera::Update()
{
    constantBuffer.Update(&systemBuffer, sizeof(SystemBuffer));
}

//------

void ForwardRenderer::Initialize()
{
    materialConstantBuffer.Create(sizeof(MeshMaterial::ColorBuffer), 2);
}

void ForwardRenderer::Render()
{
    // clear depth + color buffers
    for (size_t i = 0; i < meshes.size(); i++)
    {
        // transform mesh
        constantBuffer.Update(&systemBuffer, sizeof(SystemBuffer));
        materialConstantBuffer.Update(&meshes[i]->material->colorBuffer, sizeof(MeshMaterial::ColorBuffer));
        // draw
    }
}
```

And these are my shaders:

```cpp
vertexShader = R"(
    #version 450 core

    layout(location = 0) in vec3 inPosition;

    layout(std140, binding = 0) uniform camera_data
    {
        mat4 buffer_modelMatrix;
        mat4 buffer_viewMatrix;
        mat4 buffer_projectionMatrix;
        vec4 cameraPos;
    };

    void main()
    {
        vec4 worldSpace = buffer_modelMatrix * vec4(inPosition.xyz, 1.0f);
        gl_Position = buffer_projectionMatrix * buffer_viewMatrix * worldSpace;
    }
)";

pixelShader = R"(
    #version 450 core

    struct Material
    {
        vec4 ambientColor;
        vec4 diffuseColor;
        vec4 specularColor;
        float shininess;
    };

    layout(std140, binding = 2) uniform MaterialUniform
    {
        Material material;
    };

    out vec4 color;

    void main()
    {
        color = material.ambientColor * material.diffuseColor * material.specularColor;
    }
)";
```

All these planes flicker randomly to black, then back to red. Any help would be greatly appreciated.
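Not a diagnosis of the exact bug above, but one way to rule out a host/GLSL layout mismatch: under std140, a struct nested inside a uniform block is rounded up to a multiple of 16 bytes, so the C++ struct fed to glBufferSubData has to match that padded size byte for byte. A minimal compile-time sketch, with illustrative names (`Vec4`, `ColorBufferStd140` are not from the original code):

```cpp
#include <cstddef> // offsetof

// Stand-in for a 16-byte color/vector type (illustrative).
struct Vec4 { float x, y, z, w; };

// Mirror of the ColorBuffer above, padded out to what GLSL sees under
// layout(std140): the trailing float is followed by 12 bytes of padding
// because the enclosing Material struct is rounded up to 16-byte multiples.
struct ColorBufferStd140 {
    Vec4  ambientColor;   // bytes  0..15
    Vec4  diffuseColor;   // bytes 16..31
    Vec4  specularColor;  // bytes 32..47
    float shininess;      // bytes 48..51
    float pad[3];         // bytes 52..63 (padding only; never read by the shader)
};

// Compile-time checks: if any of these fire, the bytes uploaded with
// glBufferSubData would land at the wrong GLSL offsets.
static_assert(offsetof(ColorBufferStd140, diffuseColor)  == 16, "std140 offset mismatch");
static_assert(offsetof(ColorBufferStd140, specularColor) == 32, "std140 offset mismatch");
static_assert(offsetof(ColorBufferStd140, shininess)     == 48, "std140 offset mismatch");
static_assert(sizeof(ColorBufferStd140) == 64, "block size must be a 16-byte multiple");
```

With the padded struct, the `sizeof` passed to `Create()` and `Update()` covers the full 64-byte block, so the upload size and the block size the driver expects can never disagree.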
  2. Hello! I would like to introduce Diligent Engine, a project that I've been working on recently. Diligent Engine is a light-weight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front-end for all supported platforms and provides interoperability with the underlying native API. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin, or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub.

Features:

  • True cross-platform
    • Exact same client code for all supported platforms and rendering backends
    • No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ...
    • No #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ...
    • Exact same HLSL shaders run on all platforms and all backends
  • Modular design
    • Components are clearly separated logically and physically and can be used as needed
    • Only take what you need for your project (don't want samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule)
    • No 15,000-line source files
  • Clear object-based interface
  • No global states
  • Key graphics features:
    • Automatic shader resource binding designed to leverage the next-generation rendering APIs
    • Multithreaded command buffer generation
    • 50,000 draw calls at 300 fps with the D3D12 backend
    • Descriptor, memory and resource state management
  • Modern C++ features to make code fast and reliable

The following platforms and low-level APIs are currently supported:

  • Windows Desktop: Direct3D11, Direct3D12, OpenGL
  • Universal Windows: Direct3D11, Direct3D12
  • Linux: OpenGL
  • Android: OpenGLES
  • MacOS: OpenGL
  • iOS: OpenGLES

API Basics

Initialization

The engine can perform initialization of the API itself or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:

```cpp
#include "RenderDeviceFactoryD3D12.h"
using namespace Diligent;

// ...

GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
// Load the dll and import the GetEngineFactoryD3D12() function
LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
auto *pFactoryD3D12 = GetEngineFactoryD3D12();

EngineD3D12Attribs EngD3D12Attribs;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
EngD3D12Attribs.NumCommandsToFlushCmdList = 64;

RefCntAutoPtr<IRenderDevice> pRenderDevice;
RefCntAutoPtr<IDeviceContext> pImmediateContext;
SwapChainDesc SwapChainDesc;
RefCntAutoPtr<ISwapChain> pSwapChain;
pFactoryD3D12->CreateDeviceAndContextsD3D12(EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0);
pFactoryD3D12->CreateSwapChainD3D12(pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain);
```

Creating Resources

Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, populate the BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:

```cpp
BufferDesc BuffDesc;
BuffDesc.Name = "Uniform buffer";
BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
BuffDesc.Usage = USAGE_DYNAMIC;
BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
m_pDevice->CreateBuffer(BuffDesc, BufferData(), &m_pConstantBuffer);
```

Similarly, to create a texture, populate the TextureDesc structure and call IRenderDevice::CreateTexture(), as in the following example:

```cpp
TextureDesc TexDesc;
TexDesc.Name = "Sample 2D Texture";
TexDesc.Type = TEXTURE_TYPE_2D;
TexDesc.Width = 1024;
TexDesc.Height = 1024;
TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
TexDesc.Usage = USAGE_DEFAULT;
TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
m_pRenderDevice->CreateTexture(TexDesc, TextureData(), &m_pTestTex);
```

Initializing Pipeline State

Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline: one big Pipeline State Object (PSO) encompasses all required state (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).

Creating Shaders

To create a shader, populate the ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:

  • SHADER_SOURCE_LANGUAGE_DEFAULT: the shader source format matches the underlying graphics API, i.e. HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes.
  • SHADER_SOURCE_LANGUAGE_HLSL: the shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See the shader converter for details.
  • SHADER_SOURCE_LANGUAGE_GLSL: the shader source is in GLSL. There is currently no GLSL-to-HLSL converter.

To allow grouping of resources based on how frequently they are expected to change, Diligent Engine introduces a classification of shader variables:

  • Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera-attribute or global light-attribute constant buffers.
  • Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change at per-material frequency. Examples include diffuse textures, normal maps, etc.
  • Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

This post describes the resource binding model in Diligent Engine. The following is an example of shader initialization:

```cpp
ShaderCreationAttribs Attrs;
Attrs.Desc.Name = "MyPixelShader";
Attrs.FilePath = "MyShaderFile.fx";
Attrs.SearchDirectories = "shaders;shaders\\inc;";
Attrs.EntryPoint = "MyPixelShader";
Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;

BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

ShaderVariableDesc ShaderVars[] =
{
    {"g_StaticTexture",  SHADER_VARIABLE_TYPE_STATIC},
    {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
    {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
};
Attrs.Desc.VariableDesc = ShaderVars;
Attrs.Desc.NumVariables = _countof(ShaderVars);
Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

StaticSamplerDesc StaticSampler;
StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
StaticSampler.TextureName = "g_MutableTexture";
Attrs.Desc.NumStaticSamplers = 1;
Attrs.Desc.StaticSamplers = &StaticSampler;

ShaderMacroHelper Macros;
Macros.AddShaderMacro("USE_SHADOWS", 1);
Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
Macros.Finalize();
Attrs.Macros = Macros;

RefCntAutoPtr<IShader> pShader;
m_pDevice->CreateShader(Attrs, &pShader);
```

Creating the Pipeline State Object

To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines pipeline specifics such as whether the pipeline is a compute pipeline, the number and format of render targets, and the depth-stencil format:

```cpp
// This is a graphics pipeline
PSODesc.IsComputePipeline = false;
PSODesc.GraphicsPipeline.NumRenderTargets = 1;
PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;
```

The structure also defines depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, rasterizer state can be defined as in the code snippet below:

```cpp
// Init rasterizer state
RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
RasterizerDesc.FillMode = FILL_MODE_SOLID;
RasterizerDesc.CullMode = CULL_MODE_NONE;
RasterizerDesc.FrontCounterClockwise = True;
RasterizerDesc.ScissorEnable = True;
//RasterizerDesc.MultisampleEnable = false; // do not allow msaa (fonts would be degraded)
RasterizerDesc.AntialiasedLineEnable = False;
```

When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:

```cpp
m_pDev->CreatePipelineState(PSODesc, &m_pPSO);
```

Binding Shader Resources

Shader resource binding in Diligent Engine is based on grouping variables into the three groups described above (static, mutable and dynamic). Static variables are bound directly to the shader object:

```cpp
PixelShader->GetShaderVariable("g_tex2DShadowMap")->Set(pShadowMapSRV);
```

Mutable and dynamic variables are bound via a new object called a Shader Resource Binding (SRB), which is created by the pipeline state:

```cpp
m_pPSO->CreateShaderResourceBinding(&m_pSRB);
```

Dynamic and mutable resources are then bound through the SRB object:

```cpp
m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);
```

The difference between mutable and dynamic resources is that mutable ones can only be set once per instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as it may affect performance: static variables are generally the most efficient, followed by mutable; dynamic variables are the most expensive. This post explains shader resource binding in more detail.

Setting the Pipeline State and Invoking Draw Commands

Before any draw command can be invoked, all required vertex and index buffers, as well as the pipeline state, must be bound to the device context:

```cpp
// Clear render target
const float zero[4] = {0, 0, 0, 0};
m_pContext->ClearRenderTarget(nullptr, zero);

// Set vertex and index buffers
IBuffer *buffer[] = {m_pVertexBuffer};
Uint32 offsets[] = {0};
Uint32 strides[] = {sizeof(MyVertex)};
m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
m_pContext->SetPipelineState(m_pPSO);
```

All shader resources must also be committed to the device context:

```cpp
m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);
```

When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() to execute a compute command. Note that a graphics pipeline must be bound for a draw command, and a compute pipeline for a dispatch command. Draw() takes a DrawAttribs structure as an argument. Its members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced, indirect, etc.). For example:

```cpp
DrawAttribs attrs;
attrs.IsIndexed = true;
attrs.IndexType = VT_UINT16;
attrs.NumIndices = 36;
attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
pContext->Draw(attrs);
```

Tutorials and Samples

The GitHub repository contains a number of tutorials and sample applications that demonstrate API usage:

  • Tutorial 01 - Hello Triangle: renders a simple triangle using the Diligent Engine API.
  • Tutorial 02 - Cube: renders an actual 3D object, a cube, and shows how to load shaders from files and create and use vertex, index and uniform buffers.
  • Tutorial 03 - Texturing: applies a texture to a 3D object, showing how to load a texture from file, create a shader resource binding object, and sample a texture in the shader.
  • Tutorial 04 - Instancing: uses instancing to render multiple copies of one object with a unique transformation matrix for every copy.
  • Tutorial 05 - Texture Array: combines instancing with texture arrays to use a unique texture for every instance.
  • Tutorial 06 - Multithreading: generates command lists in parallel from multiple threads.
  • Tutorial 07 - Geometry Shader: uses a geometry shader to render a smooth wireframe.
  • Tutorial 08 - Tessellation: uses hardware tessellation to implement a simple adaptive terrain rendering algorithm.
  • Tutorial 09 - Quads: renders multiple 2D quads, frequently switching textures and blend modes.

The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface. The Atmospheric Scattering sample is a more advanced example that demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc.

The repository also includes an Asteroids performance benchmark based on this demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

Integration with Unity

Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. Once the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. The GhostCubePlugin sample shows how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.
  3. I have been having difficulty getting many lights working with deferred shading in OpenGL. Some users here have helped me, but I'm still unsuccessful. (I posted on Stack Overflow, but I don't like how that site works; I prefer it here.) I'm trying to add lights to my scene, but unfortunately to no avail. Following the learnopengl deferred shading tutorial, several lights are shown, but all inside the final screen-quad shader, and I wanted to render my lights independently. At the end of the lesson the author shows how to do it, which is adding spheres as "light sources", as shown below:

But unfortunately my result was this (embarrassing, HAHAHA):

How could I accumulate all the lights in my main scene? Why is it not working? That is, each light was rendered on the sphere along with the framebuffer. A snippet of my code:

```cpp
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
shaderLightingPass.use();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, gPosition);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, gNormal);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, gAlbedoSpec);

// send light relevant uniforms
//for (unsigned int i = 0; i < lightPositions.size(); i++)
//{
    shaderLightingPass.setVec3("light.Position", lightPositions[0]);
    shaderLightingPass.setVec3("light.Color", lightColors[0]);

    // update attenuation parameters and calculate radius
    const float constant = 1.0; // note that we don't send this to the shader, we assume it is always 1.0 (in our case)
    const float linear = 0.7;
    const float quadratic = 1.8;
    shaderLightingPass.setFloat("light.Linear", linear);
    shaderLightingPass.setFloat("light.Quadratic", quadratic);
//}
shaderLightingPass.setVec3("viewPos", camera.Position);

// finally render quad
renderQuad();

glBindFramebuffer(GL_READ_FRAMEBUFFER, gBuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, SCR_WIDTH, SCR_HEIGHT, 0, 0, SCR_WIDTH, SCR_HEIGHT, GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// where I add lights
AddLightSphere(shaderLightBox);
```

The AddLightSphere method:

```cpp
void AddLightSphere(Shader shaderLightBox)
{
    glm::mat4 model;
    glm::mat4 projection = glm::perspective(glm::radians(camera.Zoom), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 100.0f);
    glm::mat4 view = camera.GetViewMatrix();

    for (unsigned int i = 0; i < lightPositions.size(); i++)
    {
        shaderLightBox.use();
        shaderLightBox.setInt("gPosition", 0);
        shaderLightBox.setInt("gNormal", 1);
        shaderLightBox.setInt("gAlbedoSpec", 2);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, gPosition);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, gNormal);
        glActiveTexture(GL_TEXTURE2);
        glBindTexture(GL_TEXTURE_2D, gAlbedoSpec);
        shaderLightBox.setMat4("projection", projection);
        shaderLightBox.setMat4("view", view);

        model = glm::mat4();
        model = glm::translate(model, lightPositions[i]);
        model = glm::scale(model, glm::vec3(0.5f));
        shaderLightBox.setMat4("model", model);
        shaderLightBox.setVec3("light.Position", lightPositions[i]);
        shaderLightBox.setVec3("light.Color", lightColors[i]);

        // update attenuation parameters and calculate radius
        const float constant = 1.0;
        const float linear = 0.7;
        const float quadratic = 0.8;
        shaderLightBox.setFloat("light.Linear", linear);
        shaderLightBox.setFloat("light.Quadratic", quadratic);
        shaderLightBox.setVec3("viewPos", camera.Position);

        renderSphere();
        glUseProgram(0);
    }
}
```

The shaderLightBox shaders (lightSphere would be a better name), created with Shader shaderLightBox("8.1.deferred_light_box.vs", "8.1.deferred_light_box.fs"):

```glsl
// Vertex
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoords;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;

out vec2 TexCoords;

void main()
{
    TexCoords = aTexCoords;
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}

//////////////// FRAGMENT //////////////
out vec4 FragColor;
in vec2 TexCoords;

uniform sampler2D gPosition;
uniform sampler2D gNormal;
uniform sampler2D gAlbedoSpec;

struct Light {
    vec3 Position;
    vec3 Color;
    float Linear;
    float Quadratic;
};
uniform Light light;
uniform vec3 viewPos;

void main()
{
    // retrieve data from gbuffer
    vec3 FragPos = texture(gPosition, TexCoords).rgb;
    vec3 Normal = texture(gNormal, TexCoords).rgb;
    vec3 Diffuse = texture(gAlbedoSpec, TexCoords).rgb;
    float Specular = texture(gAlbedoSpec, TexCoords).a;

    // then calculate lighting as usual
    vec3 lighting = Diffuse * 0.5; // hard-coded ambient component
    vec3 viewDir = normalize(viewPos - FragPos);

    // diffuse
    vec3 lightDir = normalize(light.Position - FragPos);
    vec3 diffuse = max(dot(Normal, lightDir), 0.0) * Diffuse * light.Color;

    // specular
    vec3 halfwayDir = normalize(lightDir + viewDir);
    float spec = pow(max(dot(Normal, halfwayDir), 0.0), 16.0);
    vec3 specular = light.Color * spec * Specular;

    // attenuation
    float distance = length(light.Position - FragPos);
    float attenuation = 1.0 / (1.0 + light.Linear * distance + light.Quadratic * distance * distance);
    diffuse *= attenuation;
    specular *= attenuation;
    lighting += diffuse + specular;

    FragColor = vec4(lighting, 1.0);
}
```

Has anyone experienced this and could guide me through this situation? I cannot make the lights accumulate into a final result with all the lights. For more details, I put the code here. Please, I've been trying for a long time; have patience. Thank you.
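On the accumulation question specifically: a common approach (not shown in the snippet above) is to draw one pass per light into the same render target with additive blending enabled, i.e. glEnable(GL_BLEND), glBlendFunc(GL_ONE, GL_ONE) and glBlendEquation(GL_FUNC_ADD), so the framebuffer sums the per-light results exactly like a single-shader loop over all lights would. A CPU-side sketch of the arithmetic the blend unit performs (the function names and numbers are illustrative only):

```cpp
#include <cstddef>
#include <vector>

// Per-light contribution using the tutorial's attenuation model:
// attenuation = 1 / (1 + linear*d + quadratic*d^2).
float lightContribution(float intensity, float d, float linear, float quadratic)
{
    float attenuation = 1.0f / (1.0f + linear * d + quadratic * d * d);
    return intensity * attenuation;
}

// What additive blending (GL_ONE, GL_ONE) computes per pixel when one
// sphere or quad is drawn per light: each pass adds its contribution
// to the value already stored in the framebuffer.
float accumulateLights(const std::vector<float>& intensities,
                       const std::vector<float>& distances,
                       float linear, float quadratic)
{
    float framebuffer = 0.0f; // stands in for one channel of the render target
    for (std::size_t i = 0; i < intensities.size(); ++i)
        framebuffer += lightContribution(intensities[i], distances[i], linear, quadratic);
    return framebuffer;
}
```

Two lights of intensity 2 and 3 sitting on the shaded point (distance 0, so attenuation is 1) accumulate to 5, the same value a single NR_LIGHTS loop would produce. On the GPU side, also disable depth writes (glDepthMask(GL_FALSE)) during the additive passes so the light spheres don't occlude each other.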
  4. Hello again! Recently I was trying to apply 6 different textures to a cube, and I noticed that some textures would not apply correctly, but if I swap the texture image for another one it works just fine. I can't really understand what's going on. I will also attach the image files. So, does this have anything to do with my code, or is it just the image's fault?

This is a high-quality 2048x2048 texture, brick1.jpg, which does the following:

And this is another texture, 512x512 container.jpg, which gets applied correctly with the exact same texture coordinates as the previous one:

Vertex shader:

```glsl
#version 330 core

layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aTexCoord;

uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;

out vec2 TexCoord;

void main()
{
    gl_Position = proj * view * model * vec4(aPos, 1.0);
    TexCoord = aTexCoord;
}
```

Fragment shader:

```glsl
#version 330 core

out vec4 Color;
in vec2 TexCoord;

uniform sampler2D diffuse;

void main()
{
    Color = texture(diffuse, TexCoord);
}
```

Texture loader:

```cpp
Texture::Texture(std::string path, bool trans, int unit)
{
    //Reverse the pixels.
    stbi_set_flip_vertically_on_load(1);

    //Try to load the image.
    unsigned char *data = stbi_load(path.c_str(), &m_width, &m_height, &m_channels, 0);

    //Image loaded successfully.
    if (data)
    {
        //Generate the texture and bind it.
        GLCall(glGenTextures(1, &m_id));
        GLCall(glActiveTexture(GL_TEXTURE0 + unit));
        GLCall(glBindTexture(GL_TEXTURE_2D, m_id));

        //Not transparent texture.
        if (!trans)
        {
            GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_width, m_height, 0, GL_RGB, GL_UNSIGNED_BYTE, data));
        }
        //Transparent texture.
        else
        {
            GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data));
        }

        //Generate mipmaps.
        GLCall(glGenerateMipmap(GL_TEXTURE_2D));

        //Texture filters.
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT));
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT));
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
    }
    //Loading failed.
    else throw EngineError("There was an error loading image: " + path);

    //Free the image data.
    stbi_image_free(data);
}

Texture::~Texture()
{
}

void Texture::Bind(int unit)
{
    GLCall(glActiveTexture(GL_TEXTURE0 + unit));
    GLCall(glBindTexture(GL_TEXTURE_2D, m_id));
}
```

Rendering code:

```cpp
Renderer::Renderer()
{
    float vertices[] = {
        // positions          // normals           // texture coords
        -0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 0.0f,
         0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 0.0f,
         0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 1.0f,
         0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 1.0f,
        -0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 1.0f,
        -0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 0.0f,

        -0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  0.0f, 0.0f,
         0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  1.0f, 0.0f,
         0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  1.0f, 1.0f,
         0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  1.0f, 1.0f,
        -0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  0.0f, 1.0f,
        -0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  0.0f, 0.0f,

        -0.5f,  0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 0.0f,
        -0.5f,  0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 1.0f,
        -0.5f, -0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
        -0.5f, -0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
        -0.5f, -0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 0.0f,
        -0.5f,  0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 0.0f,

         0.5f,  0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 0.0f,
         0.5f,  0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 1.0f,
         0.5f, -0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
         0.5f, -0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
         0.5f, -0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 0.0f,
         0.5f,  0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 0.0f,

        -0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 1.0f,
         0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 1.0f,
         0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 0.0f,
         0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 0.0f,
        -0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 0.0f,
        -0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 1.0f,

        -0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 1.0f,
         0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 1.0f,
         0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 0.0f,
         0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 0.0f,
        -0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 0.0f,
        -0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 1.0f
    };

    //Create the Vertex Array.
    m_vao = new Vao();

    //Create the Vertex Buffer.
    m_vbo = new Vbo(vertices, sizeof(vertices));

    //Create the attributes.
    m_attributes = new VertexAttributes();
    m_attributes->Push(3);
    m_attributes->Push(3);
    m_attributes->Push(2);
    m_attributes->Commit(m_vbo);
}

Renderer::~Renderer()
{
    delete m_vao;
    delete m_vbo;
    delete m_attributes;
}

void Renderer::DrawArrays(Cube *cube)
{
    //Render the cube.
    cube->Render();

    unsigned int tex = 0;
    for (unsigned int i = 0; i < 36; i += 6)
    {
        if (tex < cube->m_textures.size())
            cube->m_textures[tex]->Bind();

        GLCall(glDrawArrays(GL_TRIANGLES, i, 6));
        tex++;
    }
}
```
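One thing worth checking, offered as a guess rather than a confirmed diagnosis: the loader above trusts the caller-supplied trans flag instead of the channel count stbi_load actually reported, so a 4-channel image uploaded as GL_RGB (or a 3-channel one uploaded as GL_RGBA) is read with the wrong stride and comes out scrambled. A small sketch that derives the format from the reported channels instead (pickGLFormat is a hypothetical helper; the hex constants are the standard GL_RGB/GL_RGBA enum values, spelled out so the sketch is self-contained):

```cpp
// Standard OpenGL format enum values (from the GL headers).
constexpr unsigned GLFMT_RGB  = 0x1907; // GL_RGB
constexpr unsigned GLFMT_RGBA = 0x1908; // GL_RGBA

// Map the channel count stbi_load reported to an upload format;
// returns 0 for channel counts this loader does not handle.
constexpr unsigned pickGLFormat(int channels)
{
    return channels == 3 ? GLFMT_RGB
         : channels == 4 ? GLFMT_RGBA
         : 0;
}
```

In the constructor this would replace the trans branch: both format arguments of glTexImage2D get driven by pickGLFormat(m_channels), so the upload can never disagree with what the file actually contains.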
  5. Hello everyone. I'm following the lessons at learnopengl.com and have just finished the chapter on "Deferred Shading". I confess that I am a lighting enthusiast when it comes to games. Unfortunately, I did not find anything explaining how to use as many lights as I want at runtime; I only found examples with a fixed number of light sources:

```glsl
for (int i = 0; i < NR_LIGHTS; ++i)
{
    vec3 lightDir = normalize(lights[i].Position - FragPos);
    vec3 diffuse = max(dot(Normal, lightDir), 0.0) * Albedo * lights[i].Color;
    lighting += diffuse;
}
```

Searching Google, I found some mentions of accumulating information in a framebuffer, but no code or further explanation. Could someone explain to me how I could do this? Pseudocode with the OpenGL commands would be fine. Thank you all in advance.
  6. Hello! My texture problems just don't want to stop coming... After a lot of discussions here with you guys I've learned a lot about textures and digital images, and I fixed my bugs. But now I'm writing an animation system and this happened. As you can see, the first animation (bomb) is fine, but the second and the third (arrows changing direction) are rendered weirdly (they get a GL_REPEAT-like effect). To be sure, I rendered the problematic textures on their own (without my animation system or anything else I created in my project, just plain OpenGL rendering code) and this is the result (all these textures are exactly 115x93 in resolution). I will attach all the images I'm using. giphy-27 and giphy-28 render just fine; all the others don't. They give me an effect like GL_REPEAT, which I do use in my code. Is that why I'm getting this result? But my texture coordinates are inside the range of 0 and 1, so why? My Texture Code:

#include "Texture.h"
#include "STB_IMAGE/stb_image.h"
#include "GLCall.h"
#include "EngineError.h"
#include "Logger.h"

Texture::Texture(std::string path, int unit)
{
    //Try to load the image.
    unsigned char *data = stbi_load(path.c_str(), &m_width, &m_height, &m_channels, 0);

    //Image loaded successfully.
    if (data)
    {
        //Generate the texture and bind it.
        GLCall(glGenTextures(1, &m_id));
        GLCall(glActiveTexture(GL_TEXTURE0 + unit));
        GLCall(glBindTexture(GL_TEXTURE_2D, m_id));

        //Not a transparent texture.
        if (m_channels == 3)
        {
            GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_width, m_height, 0, GL_RGB, GL_UNSIGNED_BYTE, data));
        }
        //Transparent texture.
        else if (m_channels == 4)
        {
            GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data));
        }
        //This image is not supported.
        else
        {
            std::string err = "The Image: " + path;
            err += " is using " + std::to_string(m_channels);
            err += " channels, which is not supported.";
            throw VampEngine::EngineError(err);
        }

        //Texture Filters.
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT));
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT));
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST));
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));

        //Generate mipmaps.
        GLCall(glGenerateMipmap(GL_TEXTURE_2D));
    }
    //Loading failed.
    else
        throw VampEngine::EngineError("There was an error loading image (maybe the image format is not supported): " + path);

    //Unbind the texture.
    GLCall(glBindTexture(GL_TEXTURE_2D, 0));

    //Free the image data.
    stbi_image_free(data);
}

Texture::~Texture()
{
    GLCall(glDeleteTextures(1, &m_id));
}

void Texture::Bind(int unit)
{
    GLCall(glActiveTexture(GL_TEXTURE0 + unit));
    GLCall(glBindTexture(GL_TEXTURE_2D, m_id));
}

My Render Code:

#include "Renderer.h"
#include "glcall.h"
#include "shader.h"

Renderer::Renderer()
{
    //Vertices.
    float vertices[] = {
        //Positions   //Texture Coordinates.
        0.0f, 0.0f,   0.0f, 0.0f, //Left Bottom.
        0.0f, 1.0f,   0.0f, 1.0f, //Left Top.
        1.0f, 1.0f,   1.0f, 1.0f, //Right Top.
        1.0f, 0.0f,   1.0f, 0.0f  //Right Bottom.
    };

    //Indices.
    unsigned int indices[] = {
        0, 1, 2, //Upper Left Triangle.
        0, 3, 2  //Lower Right Triangle.
    };

    //Create and bind a Vertex Array.
    GLCall(glGenVertexArrays(1, &VAO));
    GLCall(glBindVertexArray(VAO));

    //Create and bind a Vertex Buffer.
    GLCall(glGenBuffers(1, &VBO));
    GLCall(glBindBuffer(GL_ARRAY_BUFFER, VBO));

    //Create and bind an Index Buffer.
    GLCall(glGenBuffers(1, &EBO));
    GLCall(glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO));

    //Transfer the data to the VBO and EBO.
    GLCall(glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW));
    GLCall(glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW));

    //Enable and create the attribute for both Positions and Texture Coordinates.
    GLCall(glEnableVertexAttribArray(0));
    GLCall(glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 4, (void *)0));

    //Create the shader program.
    m_shader = new Shader("Shaders/sprite_vertex.glsl", "Shaders/sprite_fragment.glsl");
}

Renderer::~Renderer()
{
    //Clean Up.
    GLCall(glDeleteVertexArrays(1, &VAO));
    GLCall(glDeleteBuffers(1, &VBO));
    GLCall(glDeleteBuffers(1, &EBO));
    delete m_shader;
}

void Renderer::RenderElements(glm::mat4 model)
{
    //Create the projection matrix.
    glm::mat4 proj = glm::ortho(0.0f, 600.0f, 600.0f, 0.0f, -1.0f, 1.0f);

    //Set the texture unit to be used.
    m_shader->SetUniform1i("diffuse", 0);

    //Set the transformation matrices.
    m_shader->SetUniformMat4f("model", model);
    m_shader->SetUniformMat4f("proj", proj);

    //Draw Call.
    GLCall(glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, NULL));
}

Vertex Shader:

#version 330 core
layout(location = 0) in vec4 aData;

uniform mat4 model;
uniform mat4 proj;

out vec2 TexCoord;

void main()
{
    gl_Position = proj * model * vec4(aData.xy, 0.0, 1.0);
    TexCoord = aData.zw;
}

Fragment Shader:

#version 330 core
out vec4 Color;
in vec2 TexCoord;

uniform sampler2D diffuse;

void main()
{
    Color = texture(diffuse, TexCoord);
}
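Not part of the original post, but a detail worth checking with odd-sized images like these: a 3-channel row of a 115-pixel-wide image is 115 * 3 = 345 bytes, which is not a multiple of OpenGL's default 4-byte row alignment, and that commonly produces exactly this kind of skewed, wrapped-looking upload. A hypothetical fix (the `width`/`height`/`data` names are placeholders) requires only one extra call before the upload:

```cpp
// Hypothetical sketch: relax the unpack alignment so glTexImage2D reads
// each source row back-to-back instead of at 4-byte-rounded offsets.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
```

This only matters for formats whose row size is not already a multiple of 4 (RGBA rows always are), which would also explain why some of the attached images render fine and others do not.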
  7. Hello! For those who don't know me, I have started quite a few threads about textures in OpenGL. I was encountering bugs like textures not appearing correctly (even though my code and shaders were fine), or getting an access violation when uploading a texture to the GPU. At first I thought these might be AMD bugs, because when someone else ran my code he got a correct result. Then someone told me: "Some driver implementations are more forgiving than others, so it might be that your driver does not forgive as easily. This might be the reason others can see the output you were expecting." I did not believe him and moved on. Then Mr. @Hodgman gave me the light. He explained some things to me about images and what channels are (I had no clue), and with some research of my own I learned how digital images work in theory and what channels are. By also reading this article about image formats, I learned some more. The question now is: if, for example, I want to upload a PNG to the GPU, am I 100% sure that I can use 4 channels? Or, even though the image is a PNG, might it not contain all 4 channels (RGBA)? If so, I need to retrieve that information somehow, so the code below can tell the driver how to read the data based on the channels. I'm asking this just to know how to properly write the code below (in capitals are the variables which I want you to tell me how to specify):

stbi_set_flip_vertically_on_load(1);

//Try to load the image.
unsigned char *data = stbi_load(path.c_str(), &m_width, &m_height, &m_channels, HOW_MANY_CHANNELS_TO_USE);

//Image loaded successfully.
if (data)
{
    //Generate the texture and bind it.
    GLCall(glGenTextures(1, &m_id));
    GLCall(glActiveTexture(GL_TEXTURE0 + unit));
    GLCall(glBindTexture(GL_TEXTURE_2D, m_id));

    GLCall(glTexImage2D(GL_TEXTURE_2D, 0, WHAT_FORMAT_FOR_THE_TEXTURE, m_width, m_height, 0, WHAT_FORMAT_FOR_THE_DATA, GL_UNSIGNED_BYTE, data));
}

So back to my question.
If I'm loading a PNG and tell stbi_load to use 4 channels, and then pass WHAT_FORMAT_FOR_THE_DATA = GL_RGBA to glTexImage2D, can I be sure that the driver will read the data properly without an access violation? I want to write code that, no matter the image file, will always read the data correctly and upload it to the GPU. Practically 100% of the tutorials and guides about OpenGL out there (even one I purchased from Udemy) do not explain this, and that is why I was experiencing all these bugs and got stuck for months! Here is some documentation about stbi_load that might help:

// Limitations:
//    - no 12-bit-per-channel JPEG
//    - no JPEGs with arithmetic coding
//    - GIF always returns *comp=4
//
// Basic usage (see HDR discussion below for HDR usage):
//    int x,y,n;
//    unsigned char *data = stbi_load(filename, &x, &y, &n, 0);
//    // ... process data if not NULL ...
//    // ... x = width, y = height, n = # 8-bit components per pixel ...
//    // ... replace '0' with '1'..'4' to force that many components per pixel
//    // ... but 'n' will always be the number that it would have been if you said 0
//    stbi_image_free(data)
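As a sketch (not an authoritative answer to the post), one common way to resolve the placeholders is to load with the last stbi_load argument set to 0 and map whatever channel count it reports to the matching GL format. The helper name `formatForChannels` is made up; the numeric values mirror the real constants in the GL headers.

```cpp
#include <stdexcept>

// Numeric values copied from the OpenGL headers so this sketch is
// self-contained; in real code, include the GL headers instead.
enum GLFormat : unsigned int {
    FMT_RED  = 0x1903, // GL_RED
    FMT_RG   = 0x8227, // GL_RG
    FMT_RGB  = 0x1907, // GL_RGB
    FMT_RGBA = 0x1908  // GL_RGBA
};

// Hypothetical helper: map the channel count reported by stbi_load
// (when its last argument is 0) to the matching upload format, usable
// as both the internal format and data format for 8-bit images.
inline GLFormat formatForChannels(int channels)
{
    switch (channels) {
        case 1: return FMT_RED;
        case 2: return FMT_RG;
        case 3: return FMT_RGB;
        case 4: return FMT_RGBA;
        default: throw std::invalid_argument("unsupported channel count");
    }
}
```

Alternatively, per the stb_image comment quoted above, passing 4 as the last argument forces every image to be expanded to 4 components in memory, in which case GL_RGBA is always a safe data format regardless of what the file natively contains.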
  8. Hello! I was trying to load some textures and I was getting this access violation: atioglxx.dll access violation. stb_image, which I'm using to load the PNG file into memory, was not reporting any errors. I found this on the internet explaining that it is a bug in AMD's driver. I fixed the problem by changing the image file I was using. The image that was causing the issue was generated by an online converter from GIF to PNGs. Does anyone know more about it? Thank you.
  9. Hello everybody! I decided to write a graphics engine, the killer of Unity and Unreal. If anyone is interested and has free time, join in. The high-level renderer is based on low-level OpenGL 4.5 and DirectX 11. Ideally there will be PBR, TAA, SSR, SSAO, some variation of an indirect-light algorithm, and support for multiple viewports and multiple cameras. The key feature is that it is COM based (binary compatibility is needed). Physics, ray tracing, AI and VR will not be included. I took the basic architecture from the DGLE engine. The editor will be built on Qt (https://github.com/fra-zz-mer/RenderMasterEditor). There is already a buildable editor. The main point of the engine is maximum transparency of the architecture and high-quality rendering. There will be no new language for shaders; everything will be handled with defines.
  10. Improved Bloom, tonemapping, contrast reduction, gamma correction:
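The post above is an image caption, but the pipeline it names is easy to sketch. A minimal, hypothetical version of the tonemap-plus-gamma step (the Reinhard operator and the 2.2 gamma are common defaults, not details taken from the post, and the function names are made up):

```cpp
#include <cmath>

// Reinhard tonemapping: compress an HDR value in [0, inf) into [0, 1).
inline float tonemapReinhard(float hdr)
{
    return hdr / (1.0f + hdr);
}

// Gamma correction: convert a linear value to display space (gamma 2.2 assumed).
inline float gammaCorrect(float linear, float gamma = 2.2f)
{
    return std::pow(linear, 1.0f / gamma);
}

// Combined: what a post-processing fragment shader would do per channel.
inline float hdrToDisplay(float hdr)
{
    return gammaCorrect(tonemapReinhard(hdr));
}
```

Contrast reduction and bloom would sit alongside this as additional post-processing steps on the same HDR buffer before the final gamma correction.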
  11. Hi, I've recently been trying to implement screen space reflections in my engine; however, it is extremely buggy. I'm using this tutorial: http://imanolfotia.com/blog/update/2017/03/11/ScreenSpaceReflections.html The reflections look decent when I am close to the ground (first image); however, when I get further away from the ground (the surface that is reflecting things), the reflections become blocky and strange (second image). I have a feeling it has something to do with the fact that the further the rays travel in view space, the more scattered they get, and therefore the reflected image is less detailed, hence the blockiness. However, I'm really not sure about this, and if it is the case, I don't know how to fix it. It would be great if anyone had suggestions on how to debug or sort this out. Thanks. Here is the code for the ray casting:

vec4 ray_cast(inout vec3 direction, inout vec3 hit_coord, out float depth_difference, out bool success)
{
    vec3 original_coord = hit_coord;
    direction *= 0.2;

    vec4 projected_coord;
    float sampled_depth;

    for (int i = 0; i < 20; ++i)
    {
        hit_coord += direction;

        projected_coord = projection_matrix * vec4(hit_coord, 1.0);
        projected_coord.xy /= projected_coord.w;
        projected_coord.xy = projected_coord.xy * 0.5 + 0.5;

        // view_positions stores the view space coordinates of the objects
        sampled_depth = textureLod(view_positions, projected_coord.xy, 2).z;
        if (sampled_depth > 1000.0)
            continue;

        depth_difference = hit_coord.z - sampled_depth;

        if ((direction.z - depth_difference) < 1.2)
        {
            if (depth_difference <= 0)
            {
                vec4 result;
                // binary search for a more detailed sample
                result = vec4(binary_search(direction, hit_coord, depth_difference), 1.0);
                success = true;
                return result;
            }
        }
    }

    return vec4(projected_coord.xy, sampled_depth, 0.0);
}

Here is the code just before this gets called:

float ddepth;
bool hit = false;
vec3 jitt = mix(vec3(0.0), vec3(hash33(view_position)), 0.5);
vec3 ray_dir = reflect(normalize(view_position), normalize(view_normal));
ray_dir = ray_dir * max(0.2, -view_position.z);

/* ray cast */
vec4 coords = ray_cast(ray_dir, view_position, ddepth, hit);
  12. Zemlaynin

    The Great Tribes Devblog #32

    Hello dears! It's been a month and a half since my last diary, and a huge amount of work has been done in that time. This was my task sheet, not counting the work I do on in-game mechanics: The tasks were not completed in the order they appear in the list, and it leaves out the small tasks that had to be solved along the way. Many of the tasks did not require my participation; for example, Alex was slowly reworking the buildings: We also worked on choosing the color scheme for the terrain: The option we have settled on so far is shown a little below. The first task I had was implementing shadows cast by objects on the map, and the first attempts at a shadow-map implementation gave this result: After a short struggle I managed to get this result: Next, the task was to fix the water: pick good textures, coefficients and variables for a better look, and add glare on the water: At the same time, another modeler joined our small team and made us a new unit: The model came with a specular map, but my engine did not support that kind of material, so I had to spend time implementing specular map support. In parallel with this task, the lighting finally had to be finished. As they say, one thing leads to another: I had to add support for shadows affecting the specular term: And to make an adjustable light source to verify everything: As you can see, there is now a panel where you can control the position of the light source. But that was not all: I had to add an additional light source simulating reflected light, tied to the camera position, to get a more realistic specular response on the shadowed side. As you can see, the armor gleams on the shadow side: Wow, how much free time was spent on the animation of this character and on importing the animation exactly right. But now everything works fine! Soon I will record a video of the gameplay.
Meanwhile, Alexei rolled out a new model of the mine: To take a screenshot this close to the mine, I had to untie the camera, which made it possible to enjoy the views: While working on city construction, a mechanism for expanding the administrative zone of the city was implemented; in the screenshot it is outlined in white: I hope you read our previous diary on the visualization system for urban areas. As you may have noticed in the last screenshot, the shadows are better than before. There was an error in my shadow calculation: shadows lagged behind the smallest objects, giving the feeling that they were hanging in the air; now the shadows fall much more naturally. The map generator was slightly modified and the hills were tweaked to make them smoother. Glaciers now appear on land close to the poles: A lot of work has been done to optimize the rendering: shaders were rewritten in places to eliminate weaknesses, and the mechanism for storing and rendering visible tiles was optimized, which gave a significant improvement and a stable FPS on weak computers. Camera movement and rotation were made smooth, completely eliminating the previously visible jerks. This is not a complete list of all the solved problems; I just do not remember everything. Plans for the near future:
- The interface: we have a very large problem with it and really need the help of specialists in this matter.
- Implementation of army clashes.
- Implementation of urban growth; I have not completed this mechanism.
- Implementation of the first beginnings of AI: army maneuvering, decision-making and reaction to clashes with enemy armies.
- Implementation of a mechanism for storing the state of the AI's relationships with its enemies: diplomacy.
- AI for cities.
Thank you for your attention! Join our group on FB: https://www.facebook.com/groups/thegreattribes/
  13. I'm trying to write a game in OpenGL using C++. Of the third-party libraries for creating windows (widgets?) inside OpenGL, I was able to add ImGui to my project, create a window and attach some functions to it. But I could not find information on how to change the style of this window. Specifically, I need to create the starting window of the game (start the game, settings, exit, etc.) and in-game windows (inventory, character menu, minimap, chat, etc.). I have heard about Qt, but given its size, my program would weigh 3-4 times more than I would like. Besides, I do not need super high-quality graphics or a large set of visualization capabilities. I would like to understand what my program consists of and have a grasp of the basic concepts behind how this is implemented. Could you advise: is there a similar open-source C++ library with the ability to create and style in-game windows (or maybe ImGui has this function after all)?
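ImGui does expose styling, which may help frame the question. ImGui::GetStyle and PushStyleColor/PopStyleColor are real ImGui calls; the specific colors, rounding and window flags below are made-up examples, and the snippet assumes an initialized ImGui context, so it is a sketch rather than runnable code:

```cpp
// Global restyle, done once after ImGui::CreateContext():
ImGuiStyle &style = ImGui::GetStyle();
style.WindowRounding = 8.0f; // rounded window corners
style.Colors[ImGuiCol_WindowBg] = ImVec4(0.05f, 0.05f, 0.10f, 0.90f);

// Per-window override, e.g. for a hypothetical in-game inventory panel:
ImGui::PushStyleColor(ImGuiCol_TitleBg, ImVec4(0.3f, 0.1f, 0.1f, 1.0f));
ImGui::Begin("Inventory", nullptr, ImGuiWindowFlags_NoCollapse);
// ... widgets ...
ImGui::End();
ImGui::PopStyleColor();
```

Custom fonts can also be loaded through ImGui's font atlas, which together with the style struct covers most of the "in-game window skinning" this post asks about.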
  14. Hello! I have two GPUs on my computer, one in my CPU and another one on my graphics card. I am trying to use the OpenGL/OpenCL interop capabilities, but I am stuck at the creation of the OpenCL context: I don't know how to identify which platform/device of the two is used by OpenGL. In the code below, which function should I use in the test "DEVICE MATCHING OPENGL ONE" to check whether the device is the one used by OpenGL, or what should I do to check whether the platform_id is the right one?

sf::ContextSettings settings;
settings.depthBits = 24;
settings.stencilBits = 8;
settings.antialiasingLevel = 2;

sf::Window window(sf::VideoMode(2048, 1024), "GAME", sf::Style::Fullscreen, settings);
glewInit();

cl_platform_id platform_ids[16] = { NULL };
cl_device_id device_id = NULL;
cl_uint ret_num_devices;
cl_uint ret_num_platforms;
cl_platform_id platform_id = 0;

cl_int ret = clGetPlatformIDs(_countof(platform_ids), platform_ids, &ret_num_platforms);
size_t n = 0;

cl_context_properties props[] = {
    CL_GL_CONTEXT_KHR, (cl_context_properties) wglGetCurrentContext(),
    CL_WGL_HDC_KHR, (cl_context_properties) wglGetCurrentDC(),
    CL_CONTEXT_PLATFORM, (cl_context_properties) platform_id,
    0
};

for (size_t i = 0; i < ret_num_platforms; ++i)
{
    platform_id = platform_ids[i];

    cl_device_id curDevices_id[16];
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_GPU, _countof(curDevices_id), curDevices_id, &ret_num_devices);

    for (cl_uint nDevices = 0; nDevices < ret_num_devices; ++nDevices)
    {
        cl_device_id curDevice_id = curDevices_id[nDevices];

        clGetGLContextInfoKHR_fn clGetGLContextInfo = reinterpret_cast<clGetGLContextInfoKHR_fn>(
            clGetExtensionFunctionAddressForPlatform(platform_id, "clGetGLContextInfoKHR"));

        if (clGetGLContextInfo)
        {
            cl_device_id clGLDevice = 0;
            props[5] = reinterpret_cast<cl_context_properties>(platform_id);
            clGetGLContextInfo(props, CL_CURRENT_DEVICE_FOR_GL_CONTEXT_KHR, sizeof(clGLDevice), &clGLDevice, &n);

            if (DEVICE MATCHING OPENGL ONE)
            {
                device_id = clGLDevice;
            }
        }
    }

    if (device_id)
    {
        break;
    }
}

cl_context context = clCreateContext(props, 1, &device_id, NULL, NULL, &ret);

Thanks for your future help!
  15. phil67rpg

    glutIdleFunc animation

    Is there any way to slow down the animation speed when using the glutIdleFunc function?

void collision()
{
    screen += 0.1667f;

    if (screen >= 1.0f)
    {
        screen = 1.0f;
        glutIdleFunc(NULL);
    }

    glutPostRedisplay();
}

I have solved my previous problem.
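One common alternative, sketched here as an assumption rather than a fix for the exact code above, is to drive the animation with glutTimerFunc (a real GLUT call) so the update runs at a fixed rate instead of as fast as the idle loop allows. This needs a GLUT window and main loop to actually run:

```cpp
// Hypothetical sketch: advance the animation every 16 ms (~60 updates/sec)
// instead of once per idle callback.
void onTimer(int value)
{
    screen += 0.1667f;                 // same step as in the idle version
    if (screen >= 1.0f)
        screen = 1.0f;                 // stop at the last frame
    else
        glutTimerFunc(16, onTimer, 0); // re-arm the timer only while animating

    glutPostRedisplay();
}

// In main(), instead of registering an idle function:
//     glutTimerFunc(16, onTimer, 0);
```

Because the timer fires at a chosen interval, slowing the animation down is just a matter of increasing the millisecond delay or decreasing the per-tick step.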
  16. Hi, I was studying how to make a bloom/glow effect in OpenGL, following the tutorials from learnopengl.com and ThinMatrix (YouTube), but I am still confused about how to generate the bright-color texture to be used for the blur. Do I need to put lights in the areas where I want the glow to happen, so they will be brighter than other objects in the scene? Does that mean I need to draw the scene with the lights first? Or can the brightness be extracted, based on how the color of the model was rendered/textured, through a formula or something? I have a scene that looks like this, and I want the crystal to glow. Can somebody enlighten me on what the correct approach is? Really appreciated!
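For reference, the learnopengl.com bloom chapter extracts brightness with a luminance formula rather than requiring extra lights. A hypothetical CPU-side version of that per-pixel test looks like this (the function names are made up, and the 1.0 threshold assumes an HDR render target):

```cpp
// Perceptual luminance of a linear RGB color (Rec. 709 weights), the same
// dot product the learnopengl.com bloom bright-pass shader uses.
inline float luminance(float r, float g, float b)
{
    return 0.2126f * r + 0.7152f * g + 0.0722f * b;
}

// Bright-pass filter: keep only pixels above the threshold; everything
// else becomes black in the texture that is then blurred for the glow.
inline bool passesBrightFilter(float r, float g, float b, float threshold = 1.0f)
{
    return luminance(r, g, b) > threshold;
}
```

So one way to make the crystal glow without adding scene lights is to render it with an emissive color brighter than 1.0 into an HDR buffer; the bright-pass then picks it up automatically and the blurred result is added back over the scene.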
  17. Hi, I'm trying to produce volumetric light in OpenGL following the implementation details in "GPU Pro 5: Volumetric Light Effects in Killzone". I am confused about the number of passes needed to create the effect. So I have the shadow pass, which renders the scene from the light's POV; then the G-buffer pass, which renders the whole scene to textures; and finally a third pass, which performs the ray marching for every pixel and computes the amount of accumulated scattering according to its distance from the light in the scene (binding the shadow map from the first pass). Then what? Blend these buffers on a full-screen quad in a final pass? Or should I do the ray marching on the result of blending the shadow map and the G-buffer? Thanks in advance.
  18. dimi309

    Frogger GameDev Challenge 2018

    This is just a brief note on my participation in the challenge. The game developed for it is a 3D remake of the original Frogger concept and has been made available as open source under the BSD (3-clause) license. It is a small casual game in which the player controls a frog. The frog has to get to the other side of a road, avoiding passing cars, and then cross a pond in which wooden planks float. The cars can crush the frog, and if it fails to use the planks when crossing the pond, it drowns. As a side note, I have always wondered why this is so, ever since the '80s. It is an amphibian, after all... The game works on Windows, MacOS and Linux, and I used my own small3d framework to develop it, in C++. small3d is also provided as an open source, BSD-licensed project. This is not a masterpiece, but I think it's OK for something developed over the course of a couple of weeks. I only noticed the gamedev.net challenge when an announcement was made about the extension of the submission deadline.
  19. phil67rpg

    1942 plane game

    Well, I have a peculiar problem. When I set my global variable screen = 0.0f and put my drawcollision_one() function in my display function, it only draws a static collision sprite. However, when I set screen = 0.0001f and put drawcollision_one() in my display function, it draws an animated collision sprite. Also, when I use screen = -0.0f and put drawcollision_one() in my collision detection function, it draws a static collision sprite. However, when I use screen = 0.0001f, it does not draw anything when drawcollision_one() is in my collision detection function. What I want is to draw the animated collision sprite when the collision detection function is triggered. Let me know if you need further explanation. I am using freeglut 3.0.0 and SOIL in my program.
  20. I'm totally new to game dev, and I want to say "it's not difficult", but sometimes I get stuck in tiny holes with nothing to dig my way out. Basically, I've been following a Java OpenGL (using JOGL) 2D series on YouTube for a while, as an attempt at my first game in Java. But clearly it's not going well. I followed the series up to episode 18, but in episode 19 the tutor implemented a KeyListener in order to move the player around. I did the same thing he did, but when I hold down the up/down/right/left key, the player moves for a bit and then stops after about 2 seconds. For it to move again, I have to release the key and hold it down again. I personally think this is a problem with JOGL (the library I'm using), but I would like a solution to the problem, since I have already gone through the trouble of making an entire game engine. Anyway, here's the link to the video: Java OpenGL 2D Game Tutorial - Episode 19 - Player Input. The code I used for player input is exactly the same as the tutor's! Thanks...
  21. Hello. I can only use vec4 as the out color in GLSL. How can I use other formats, like int, uint, ivec4, ...?
  22. Zemlaynin

    The Great Tribes DevBlog #31

    Hello dears! In this short, extraordinary diary we decided to tell you about the new city expansion system that was added to the game. As you may remember, the original layout of urban areas had a pronounced square structure: To begin with, this was quite enough, but since this graphic element was quite conspicuous and raised natural questions from some users, it was decided to give it a more meaningful form, especially since it was already in our immediate plans. To do this, it was necessary to develop a set of urban areas and their parts that would give the growing city a visually more natural and pleasant look, along with the logic of their interrelations. The end result is a set of models that, in theory, should cover all possible expansion options for flat terrain: Now the starting version of the city looks like this: As you can see, thanks to additional extensions of the residential areas, which are not actually districts themselves and serve only as decoration, the city gets a more natural and visually pleasing silhouette. For those who are interested in the logic behind the use of district models in city expansion, a number of technical screenshots with explanations are attached under the spoiler: The initial version of the residential area: To smooth its square appearance, additional elements are added to it. These elements, as noted above, are not independent areas and serve only as graphic decoration: The city can expand in any direction. For example, suppose the next urban area appears to the right of an existing one. The current right extension will disappear, and the following type of area will appear in its place: If the area appears diagonally to the bottom left, it will have a square shape, and its additional extensions the following form: If the area is built on top of the original, the city takes the following form: Next, one is built diagonally to the lower right of the starting area.
The second built area is replaced by another modification, and an additional area at the bottom becomes U-shaped: Although not all possible development options are presented here, following this logic theoretically allows you to build cities of any possible shape. Summing up, here is a screenshot of a large city built in the game in this way: As an added bonus, a screenshot of the Outpost: Thank you all for your attention, and see you soon!
  23. Hello. I'm trying to implement OpenCL/OpenGL interop via clCreateFromGLTexture (texture sharing):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

With such a texture I expected that write_imagei and write_imageui would work, but they don't; only write_imagef works. This behaviour is the same for the Intel and NVIDIA GPUs in my laptop. Why is that, and why is there no such information in any documentation or anywhere on the internet? This pitfall cost me several hours, and probably the same for many other developers.
  24. shockbreak

    Unity Excavator Slingshot

    Hi, please take a look at my new little spare-time Unity3D WebGL project. Link: Excavator Slingshot. I would be happy to get some feedback. Greets!