Search the Community

Showing results for tags 'OpenGL'.



Found 17422 results

  1. Marching cubes

    I have had difficulties recently with the Marching Cubes algorithm, mainly because the principal source of information on the subject was rather vague and incomplete to me. I need a lot of precision to understand something complicated. Anyhow, after a lot of struggle, I have been able to write in Java a less hardcoded program than the given source, because who doesn't like the cuteness of Java compared to mean-looking C++? Oh, and by hardcoding, I mean something like this:

        cubeindex = 0;
        if (grid.val[0] < isolevel) cubeindex |= 1;
        if (grid.val[1] < isolevel) cubeindex |= 2;
        if (grid.val[2] < isolevel) cubeindex |= 4;
        if (grid.val[3] < isolevel) cubeindex |= 8;
        if (grid.val[4] < isolevel) cubeindex |= 16;
        if (grid.val[5] < isolevel) cubeindex |= 32;
        if (grid.val[6] < isolevel) cubeindex |= 64;
        if (grid.val[7] < isolevel) cubeindex |= 128;

    By no means am I saying that my code is better or more performant. It's actually ugly. However, I absolutely loathe hardcoding. Here's the result with a scalar field generated using the coherent noise library joise:
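    For reference, the same cube index can be computed without the hardcoded chain. A minimal C++ sketch, assuming the same grid.val array and isolevel as in the snippet above:

        int cubeindex = 0;
        for (int i = 0; i < 8; ++i) {
            // Set bit i when corner i lies below the isosurface threshold.
            if (grid.val[i] < isolevel)
                cubeindex |= (1 << i);
        }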
  2. Recently I've been experimenting with more organic low-poly terrains. The default way of creating indices for a 3D geometry is the following (credits): A simple way to introduce variation that makes the geometry slightly more complicated, and thus more organic, is to vertically swap the indices of each adjacent quad. In other words, each quad adjacent to a centered quad is its vertical mirror. Finally, by not sharing the vertices, and hence creating two independent triangles per quad, you get flat shading. This is the result with a coherent noise generator (joise); a sketch of the index pattern follows below.
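     One possible reading of that index pattern in C++ (a sketch only; the exact mirroring scheme in the post is ambiguous, and the grid width w, height h, and indices vector are hypothetical):

        std::vector<int> indices;
        for (int z = 0; z < h - 1; ++z) {
            for (int x = 0; x < w - 1; ++x) {
                int tl = z * w + x, tr = tl + 1;   // top-left, top-right
                int bl = tl + w,    br = bl + 1;   // bottom-left, bottom-right
                if ((x + z) % 2 == 0) {
                    // Diagonal from top-left to bottom-right.
                    indices.insert(indices.end(), { tl, bl, br, tl, br, tr });
                } else {
                    // Mirrored quad: diagonal from bottom-left to top-right.
                    indices.insert(indices.end(), { bl, br, tr, bl, tr, tl });
                }
            }
        }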
  3. Hi all, first-time poster here, although I've been reading posts here for quite a while. This place has been invaluable for learning graphics programming; thanks for a great resource! Right now, I'm working on a graphics abstraction layer for .NET which supports D3D11, Vulkan, and OpenGL at the moment. I have implemented most of my planned features already, and things are working well. Some remaining features that I am planning are compute shaders and some flavor of read-write shader resources. At the moment, my shaders can just get simple read-only access to a uniform (or constant) buffer, a texture, or a sampler. Unfortunately, I'm having a tough time grasping the distinctions between all of the different kinds of read-write resources that are available. In D3D alone, there seem to be five or six different kinds of resources with similar but different characteristics. On top of that, I get the impression that some of them are more or less "obsoleted" by the newer kinds and don't have much of a place in modern code. There seem to be a few pivots:
     • The data source/destination (buffer or texture)
     • Read-write or read-only
     • Structured or unstructured (?)
     • Ordered vs. unordered (?)
     These are just my observations based on a lot of MSDN and OpenGL doc reading. For my library, I'm not interested in exposing every possibility to the user; I'm just trying to find a good "middle ground" that can be represented cleanly across APIs and is good enough for common scenarios. Can anyone give a sort of "overview" of the different options, and perhaps compare/contrast the concepts between Direct3D, OpenGL, and Vulkan? I'd also be very interested in hearing how other folks have abstracted these concepts in their libraries.
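     For context on the OpenGL side of the question: the closest GL analogue to a D3D read-write structured buffer is a shader storage buffer object. A minimal sketch of creating and binding one (the Particle type and count are illustrative, not from the poster's library):

        // Hypothetical element type; any std430-compatible struct works.
        struct Particle { float pos[4]; float vel[4]; };

        // Allocate a buffer usable as a read-write "structured buffer" (SSBO).
        GLuint ssbo;
        glGenBuffers(1, &ssbo);
        glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
        glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(Particle) * count, nullptr, GL_DYNAMIC_COPY);
        // Attach it to the binding point declared in GLSL as:
        //   layout(std430, binding = 0) buffer Particles { Particle particles[]; };
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);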
  4. I recently started getting into graphics programming (second try; the first was many years ago) and I'm working on a 3D rendering engine which I hope to be able to make a 3D game with sooner or later. I have plenty of C++ experience, but not a lot when it comes to graphics, and while it's definitely going much better this time, I'm having trouble figuring out how assets are usually handled by engines. I'm not having trouble with handling the GPU resources, but more with how the resources should be defined and used in the system (materials, models, etc.). This is my plan now; I've implemented most of it except for the XML parts and factories, and those are the ones I'm not sure of at all. I have these classes:
     For GPU resources:
     • Geometry: holds and manages everything needed to render a geometry: VAO, VBO, EBO.
     • Texture: holds and manages a texture which is loaded into the GPU.
     • Shader: holds and manages a shader which is loaded into the GPU.
     For assets relying on GPU resources:
     • Material: holds a shader resource, multiple texture resources, as well as uniform settings.
     • Mesh: holds a geometry and a material.
     • Model: holds multiple meshes, possibly in a tree structure to more easily support skinning later on?
     For handling GPU resources:
     • ResourceCache<T>: T can be any resource loaded into the GPU. It owns these resources and only hands out handles to them on request. (Currently string identifiers are used when requesting handles, but all resources are stored in a vector and each handle only contains the resource's index in that vector.)
     • Resource<T>: the handles given out by ResourceCache. The handles are reference counted, and to get the underlying resource you simply dereference it like a pointer (*handle). (See the sketch after this post.)
     My plan is to define everything in XML documents to abstract away files:
     • Resources.xml for ref-counted GPU resources (geometry, shaders, textures). Resources are assigned names/ids and resource files, and possibly some attributes (what vertex attributes does this geometry have? what vertex attributes does this shader expect? what uniforms does this shader use? and so on). They are reference counted using ResourceCache<T>.
     • Assets.xml for assets using the GPU resources (materials, meshes, models). Assets are not reference counted, but they hold handles to ref-counted resources, referencing the resources defined in Resources.xml by name/id.
     The XMLs are loaded into some structure in memory which is then used for loading the resources/assets via factory classes:
     • Factory classes for resources: For example, a texture factory could contain the texture definitions from the XML describing the game's textures, as well as a cache containing all loaded textures. This means it has mappings from each name/id to a file, and when asked to load a texture by name/id, it can look up its path and use a "BinaryLoader" to either load the file and create the resource directly, or asynchronously load the file's data into a queue which is later read to create the resources synchronously in the GL context. These factories only return handles.
     • Factory classes for assets: Much like for resources, these classes contain the definitions for the assets they can load. For example, from its definition the MaterialFactory knows which shader, textures, and possibly uniforms a certain material has, and with the help of TextureFactory and ShaderFactory it can retrieve handles to the resources it needs (shader + textures), set itself up from the XML data (uniform values), and return a created instance of the requested material. These factories return actual instances, not handles (but the instances contain handles).
     Is this a good or commonly used approach? Is this going to bite me in the ass later on? Are there other, more preferable approaches? Is this outside the scope of a 3D renderer and should it be on the engine side? I'd love to receive any kind of advice or suggestions! Thanks!
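     A minimal sketch of the kind of ref-counted handle scheme described above, assuming the vector-plus-index storage the poster mentions (all names are hypothetical, and move/assignment handling is omitted for brevity):

        #include <cstddef>
        #include <string>
        #include <unordered_map>
        #include <vector>

        template <typename T> class Resource;

        // Owns the actual GPU resources; hands out reference-counted handles.
        template <typename T>
        class ResourceCache {
        public:
            // Look up an already-registered resource by name and return a handle to it.
            Resource<T> request(const std::string& name) {
                std::size_t index = m_lookup.at(name);
                return Resource<T>(this, index);
            }
            void add(const std::string& name, T resource) {
                m_lookup[name] = m_resources.size();
                m_resources.push_back(std::move(resource));
                m_refCounts.push_back(0);
            }
        private:
            friend class Resource<T>;
            std::unordered_map<std::string, std::size_t> m_lookup; // name -> index
            std::vector<T> m_resources;                            // owned GPU resources
            std::vector<int> m_refCounts;                          // one count per slot
        };

        // Handle type: dereferences like a pointer, adjusts the count on copy/destroy.
        template <typename T>
        class Resource {
        public:
            Resource(ResourceCache<T>* cache, std::size_t index)
                : m_cache(cache), m_index(index) { ++m_cache->m_refCounts[m_index]; }
            Resource(const Resource& other)
                : m_cache(other.m_cache), m_index(other.m_index) { ++m_cache->m_refCounts[m_index]; }
            ~Resource() { --m_cache->m_refCounts[m_index]; }
            T& operator*() const { return m_cache->m_resources[m_index]; }
        private:
            ResourceCache<T>* m_cache;
            std::size_t m_index;
        };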
  5. My first 3D game

    I'm learning how to create games using OpenGL with C/C++, so here is my first game. The video description also has a Dropbox link to the game. Maybe I will make it better in the future. Thanks.
  6. So I've recently started learning some GLSL and now I'm toying with a POM shader. I'm trying to optimize it and notice that it starts having issues at high texture sizes, especially with self-shadowing. Now I know POM is expensive either way, but would pulling the heightmap out of the normal map's alpha channel and into its own 8-bit texture make all those dozens of texture fetches cheaper? Or is everything in the cache aligned to 32 bits anyway? I haven't implemented texture compression yet; I think that would help? But regardless, should there be a performance boost from decoupling the heightmap? I could also keep it at a lower resolution than the normal map if that would improve performance. Any help is much appreciated; please keep in mind I'm somewhat of a newbie. Thanks!
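     For reference, splitting the height field into its own single-channel 8-bit texture might look like this (a sketch; width, height, and heightData are stand-ins for data the post does not show):

        // Allocate a separate 8-bit single-channel heightmap texture (GL_R8).
        GLuint heightTex;
        glGenTextures(1, &heightTex);
        glBindTexture(GL_TEXTURE_2D, heightTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
                     GL_RED, GL_UNSIGNED_BYTE, heightData);
        glGenerateMipmap(GL_TEXTURE_2D);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);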
  7. Hi, I'm trying to learn OpenGL through a website and have proceeded until this page of it. The output is a simple triangle. The problem is the complexity. I have read that page several times and tried to analyse the code, but I haven't understood it properly and completely yet. This is the code:

        #include <glad/glad.h>
        #include <GLFW/glfw3.h>
        #include <C:\Users\Abbasi\Desktop\std_lib_facilities_4.h>
        using namespace std;

        void framebuffer_size_callback(GLFWwindow* window, int width, int height);
        void processInput(GLFWwindow* window);

        // settings
        const unsigned int SCR_WIDTH = 800;
        const unsigned int SCR_HEIGHT = 600;

        const char* vertexShaderSource =
            "#version 330 core\n"
            "layout (location = 0) in vec3 aPos;\n"
            "void main()\n"
            "{\n"
            "   gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);\n"
            "}\0";
        const char* fragmentShaderSource =
            "#version 330 core\n"
            "out vec4 FragColor;\n"
            "void main()\n"
            "{\n"
            "   FragColor = vec4(1.0f, 0.5f, 0.2f, 1.0f);\n"
            "}\n\0";

        int main()
        {
            // glfw: initialize and configure
            glfwInit();
            glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
            glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

            // glfw window creation
            GLFWwindow* window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "My First Triangle", nullptr, nullptr);
            if (window == nullptr)
            {
                cout << "Failed to create GLFW window" << endl;
                glfwTerminate();
                return -1;
            }
            glfwMakeContextCurrent(window);
            glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);

            // glad: load all OpenGL function pointers
            if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
            {
                cout << "Failed to initialize GLAD" << endl;
                return -1;
            }

            // build and compile our shader program
            // vertex shader
            int vertexShader = glCreateShader(GL_VERTEX_SHADER);
            glShaderSource(vertexShader, 1, &vertexShaderSource, nullptr);
            glCompileShader(vertexShader);
            // check for shader compile errors
            int success;
            char infoLog[512];
            glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &success);
            if (!success)
            {
                glGetShaderInfoLog(vertexShader, 512, nullptr, infoLog);
                cout << "ERROR::SHADER::VERTEX::COMPILATION_FAILED\n" << infoLog << endl;
            }
            // fragment shader
            int fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
            glShaderSource(fragmentShader, 1, &fragmentShaderSource, nullptr);
            glCompileShader(fragmentShader);
            // check for shader compile errors
            glGetShaderiv(fragmentShader, GL_COMPILE_STATUS, &success);
            if (!success)
            {
                glGetShaderInfoLog(fragmentShader, 512, nullptr, infoLog);
                cout << "ERROR::SHADER::FRAGMENT::COMPILATION_FAILED\n" << infoLog << endl;
            }
            // link shaders
            int shaderProgram = glCreateProgram();
            glAttachShader(shaderProgram, vertexShader);
            glAttachShader(shaderProgram, fragmentShader);
            glLinkProgram(shaderProgram);
            // check for linking errors
            glGetProgramiv(shaderProgram, GL_LINK_STATUS, &success);
            if (!success)
            {
                glGetProgramInfoLog(shaderProgram, 512, nullptr, infoLog);
                cout << "ERROR::SHADER::PROGRAM::LINKING_FAILED\n" << infoLog << endl;
            }
            glDeleteShader(vertexShader);
            glDeleteShader(fragmentShader);

            // set up vertex data (and buffer(s)) and configure vertex attributes
            float vertices[] = {
                -0.5f, -0.5f, 0.0f, // left
                 0.5f, -0.5f, 0.0f, // right
                 0.0f,  0.5f, 0.0f  // top
            };

            unsigned int VBO, VAO;
            glGenVertexArrays(1, &VAO);
            glGenBuffers(1, &VBO);
            // bind the Vertex Array Object first, then bind and set vertex buffer(s),
            // and then configure vertex attributes(s).
            glBindVertexArray(VAO);

            glBindBuffer(GL_ARRAY_BUFFER, VBO);
            glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
            glEnableVertexAttribArray(0);

            // note that this is allowed, the call to glVertexAttribPointer registered VBO
            // as the vertex attribute's bound vertex buffer object so afterwards we can safely unbind
            glBindBuffer(GL_ARRAY_BUFFER, 0);

            // You can unbind the VAO afterwards so other VAO calls won't accidentally
            // modify this VAO, but this rarely happens. Modifying other
            // VAOs requires a call to glBindVertexArray anyways so we generally don't unbind
            // VAOs (nor VBOs) when it's not directly necessary.
            glBindVertexArray(0);

            // uncomment this call to draw in wireframe polygons.
            //glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);

            // render loop
            while (!glfwWindowShouldClose(window))
            {
                // input
                processInput(window);

                // render
                glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
                glClear(GL_COLOR_BUFFER_BIT);

                // draw our first triangle
                glUseProgram(shaderProgram);
                glBindVertexArray(VAO); // seeing as we only have a single VAO there's no need to
                                        // bind it every time, but we'll do so to keep things a bit more organized
                glDrawArrays(GL_TRIANGLES, 0, 3);
                // glBindVertexArray(0); // no need to unbind it every time

                // glfw: swap buffers and poll IO events (keys pressed/released, mouse moved etc.)
                glfwSwapBuffers(window);
                glfwPollEvents();
            }

            // optional: de-allocate all resources once they've outlived their purpose:
            glDeleteVertexArrays(1, &VAO);
            glDeleteBuffers(1, &VBO);

            // glfw: terminate, clearing all previously allocated GLFW resources.
            glfwTerminate();
            return 0;
        }

        // process all input: query GLFW whether relevant keys are pressed/released
        // this frame and react accordingly
        void processInput(GLFWwindow* window)
        {
            if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
                glfwSetWindowShouldClose(window, true);
        }

        // glfw: whenever the window size changed (by OS or user resize) this callback function executes
        void framebuffer_size_callback(GLFWwindow* window, int width, int height)
        {
            // make sure the viewport matches the new window dimensions; note that width and
            // height will be significantly larger than specified on retina displays.
            glViewport(0, 0, width, height);
        }

     As you see, it's about 200 lines of complicated code just for a simple triangle. I don't know which parts are strictly necessary for that output, or what the correct order of instructions for such a program generally is. That starting point is too complex for an OpenGL beginner like me, and I don't know how to resolve this. What are your ideas? How can I correctly figure out both the code and the program as a whole? I wish I had a reference that taught OpenGL through a step-by-step method.
  8. Hello! I would like to introduce Diligent Engine, a project that I've been working on recently. Diligent Engine is a light-weight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front-end for all supported platforms and provides interoperability with the underlying native API. It also supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin, or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub. The engine contains a shader source code converter that allows shaders authored in HLSL to be translated to GLSL. The engine currently supports Direct3D11, Direct3D12, and OpenGL/GLES on Win32, Universal Windows, and Android platforms.
     API Basics
     Initialization
     The engine can perform initialization of the API or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:

        #include "RenderDeviceFactoryD3D12.h"
        using namespace Diligent;

        // ...
        GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
        // Load the DLL and import the GetEngineFactoryD3D12() function
        LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
        auto *pFactoryD3D12 = GetEngineFactoryD3D12();

        EngineD3D12Attribs EngD3D12Attribs;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
        EngD3D12Attribs.NumCommandsToFlushCmdList = 64;

        RefCntAutoPtr<IRenderDevice> pRenderDevice;
        RefCntAutoPtr<IDeviceContext> pImmediateContext;
        SwapChainDesc SwapChainDesc;
        RefCntAutoPtr<ISwapChain> pSwapChain;
        pFactoryD3D12->CreateDeviceAndContextsD3D12(EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0);
        pFactoryD3D12->CreateSwapChainD3D12(pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain);

     Creating Resources
     Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, populate the BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:

        BufferDesc BuffDesc;
        BuffDesc.Name = "Uniform buffer";
        BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
        BuffDesc.Usage = USAGE_DYNAMIC;
        BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
        BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
        m_pDevice->CreateBuffer(BuffDesc, BufferData(), &m_pConstantBuffer);

     Similarly, to create a texture, populate the TextureDesc structure and call IRenderDevice::CreateTexture(), as in the following example:

        TextureDesc TexDesc;
        TexDesc.Name = "Sample 2D Texture";
        TexDesc.Type = TEXTURE_TYPE_2D;
        TexDesc.Width = 1024;
        TexDesc.Height = 1024;
        TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
        TexDesc.Usage = USAGE_DEFAULT;
        TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
        m_pRenderDevice->CreateTexture(TexDesc, TextureData(), &m_pTestTex);

     Initializing Pipeline State
     Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline: one big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).
     Creating Shaders
     To create a shader, populate the ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:
     • SHADER_SOURCE_LANGUAGE_DEFAULT - the shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes.
     • SHADER_SOURCE_LANGUAGE_HLSL - the shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See the shader converter for details.
     • SHADER_SOURCE_LANGUAGE_GLSL - the shader source is in GLSL. There is currently no GLSL-to-HLSL converter.
     To allow grouping of resources based on their expected frequency of change, Diligent Engine introduces a classification of shader variables:
     • Static variables (SHADER_VARIABLE_TYPE_STATIC) are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera-attribute or global light-attribute constant buffers.
     • Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change at per-material frequency. Examples include diffuse textures, normal maps, etc.
     • Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.
     This post describes the resource binding model in Diligent Engine. The following is an example of shader initialization:

        ShaderCreationAttribs Attrs;
        Attrs.Desc.Name = "MyPixelShader";
        Attrs.FilePath = "MyShaderFile.fx";
        Attrs.SearchDirectories = "shaders;shaders\\inc;";
        Attrs.EntryPoint = "MyPixelShader";
        Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
        Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;
        BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
        Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

        ShaderVariableDesc ShaderVars[] =
        {
            {"g_StaticTexture",  SHADER_VARIABLE_TYPE_STATIC},
            {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
            {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
        };
        Attrs.Desc.VariableDesc = ShaderVars;
        Attrs.Desc.NumVariables = _countof(ShaderVars);
        Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

        StaticSamplerDesc StaticSampler;
        StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
        StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
        StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
        StaticSampler.TextureName = "g_MutableTexture";
        Attrs.Desc.NumStaticSamplers = 1;
        Attrs.Desc.StaticSamplers = &StaticSampler;

        ShaderMacroHelper Macros;
        Macros.AddShaderMacro("USE_SHADOWS", 1);
        Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
        Macros.Finalize();
        Attrs.Macros = Macros;

        RefCntAutoPtr<IShader> pShader;
        m_pDevice->CreateShader(Attrs, &pShader);

     Creating the Pipeline State Object
     To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines the pipeline specifics, such as whether the pipeline is a compute pipeline, the number and format of render targets, and the depth-stencil format:

        // This is a graphics pipeline
        PSODesc.IsComputePipeline = false;
        PSODesc.GraphicsPipeline.NumRenderTargets = 1;
        PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
        PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;

     The structure also defines the depth-stencil, rasterizer, blend state, input layout, and other parameters. For instance, the rasterizer state can be defined as in the code snippet below:

        // Init rasterizer state
        RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
        RasterizerDesc.FillMode = FILL_MODE_SOLID;
        RasterizerDesc.CullMode = CULL_MODE_NONE;
        RasterizerDesc.FrontCounterClockwise = True;
        RasterizerDesc.ScissorEnable = True;
        //RasterizerDesc.MultisampleEnable = false; // do not allow MSAA (fonts would be degraded)
        RasterizerDesc.AntialiasedLineEnable = False;

     When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:

        m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

     Binding Shader Resources
     Shader resource binding in Diligent Engine is based on grouping variables into the three groups described above (static, mutable, and dynamic). Static variables are bound directly to the shader object:

        PixelShader->GetShaderVariable("g_tex2DShadowMap")->Set(pShadowMapSRV);

     Mutable and dynamic variables are bound via a new object called a Shader Resource Binding (SRB), which is created by the pipeline state:

        m_pPSO->CreateShaderResourceBinding(&m_pSRB);

     Dynamic and mutable resources are then bound through the SRB object:

        m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
        m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

     The difference between mutable and dynamic resources is that mutable ones can only be set once per instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as this may affect performance: static variables are generally the most efficient, followed by mutable, while dynamic variables are the most expensive from a performance point of view. This post explains shader resource binding in more detail.
     Setting the Pipeline State and Invoking Draw Commands
     Before any draw command can be invoked, all required vertex and index buffers, as well as the pipeline state, should be bound to the device context:

        // Clear render target
        const float zero[4] = {0, 0, 0, 0};
        m_pContext->ClearRenderTarget(nullptr, zero);

        // Set vertex and index buffers
        IBuffer *buffer[] = {m_pVertexBuffer};
        Uint32 offsets[] = {0};
        Uint32 strides[] = {sizeof(MyVertex)};
        m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
        m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
        m_pContext->SetPipelineState(m_pPSO);

     Also, all shader resources must be committed to the device context:

        m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

     When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() to execute a compute command. Note that for a draw command a graphics pipeline must be bound, and for a dispatch command a compute pipeline must be bound. Draw() takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced, or indirect, etc.). For example:

        DrawAttribs attrs;
        attrs.IsIndexed = true;
        attrs.IndexType = VT_UINT16;
        attrs.NumIndices = 36;
        attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
        pContext->Draw(attrs);

     Build Instructions
     Please visit this page for detailed build instructions.
     Samples
     The engine contains two graphics samples that demonstrate how the API can be used. The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface; it can also be thought of as Diligent Engine's "Hello World" example. The atmospheric scattering sample is a more advanced one. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc. The engine also includes an Asteroids performance benchmark based on this demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.
     Integration with Unity
     Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. GhostCubePlugin shows an example of how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.
  9. Isosurface extraction library in Rust

    Pictured are outputs of the Marching Cubes algorithm (left) and surface reconstruction via 'Deferred Rasterisation' (right). These are examples from a little library I wrote in Rust that provides various implementations of isosurface extraction from volume data. You can find the Apache-2.0 licensed source code on GitHub, or the Rust package on crates.io.
  10. Sorry for making a new thread about this, but I have a specific question which I couldn't find an answer to in any of the other threads I've looked at. I've been trying to get the method shown here to work for several days now, and I've run out of things to try. I've more or less resorted to using the barebones example shown there (with some very minor modifications, as it wouldn't run otherwise), but I still can't get it to work. Either I have misunderstood something completely, or there's a mistake somewhere. My shader code looks like this:
     Vertex shader:

        #version 330 core
        //Vertex shader

        //Half the size of the near plane {tan(fovy/2.0) * aspect, tan(fovy/2.0)}
        uniform vec2 halfSizeNearPlane;

        layout (location = 0) in vec3 clipPos;
        //UV for the depth buffer/screen access.
        //(0,0) in bottom left corner, (1,1) in top right corner
        layout (location = 1) in vec2 texCoord;

        out vec3 eyeDirection;
        out vec2 uv;

        void main()
        {
            uv = texCoord;
            eyeDirection = vec3((2.0 * halfSizeNearPlane * texCoord) - halfSizeNearPlane, -1.0);
            gl_Position = vec4(clipPos.xy, 0, 1);
        }

     Fragment shader:

        #version 330 core
        //Fragment shader
        layout (location = 0) out vec3 fragColor;

        in vec3 eyeDirection;
        in vec2 uv;

        uniform mat4 persMatrix;
        uniform vec2 depthrange;
        uniform sampler2D depth;

        vec4 CalcEyeFromWindow(in float windowZ, in vec3 eyeDirection, in vec2 depthrange)
        {
            float ndcZ = (2.0 * windowZ - depthrange.x - depthrange.y) / (depthrange.y - depthrange.x);
            float eyeZ = persMatrix[3][2] / ((persMatrix[2][3] * ndcZ) - persMatrix[2][2]);
            return vec4(eyeDirection * eyeZ, 1);
        }

        void main()
        {
            vec4 eyeSpace = CalcEyeFromWindow(texture(depth, uv).x, eyeDirection, depthrange);
            fragColor = eyeSpace.rbg;
        }

     Where my camera settings are:

        float fov = glm::radians(60.0f);
        float aspect = 800.0f / 600.0f;

     And my uniforms equal:

        uniform mat4 persMatrix = glm::perspective(fov, aspect, 0.1f, 100.0f)
        uniform vec2 halfSizeNearPlane = glm::vec2(glm::tan(fov/2.0) * aspect, glm::tan(fov/2.0))
        uniform vec2 depthrange = glm::vec2(0.0f, 1.0f)

     uniform sampler2D depth is a GL_DEPTH24_STENCIL8 texture which has depth values from an earlier pass (if I linearize it and set fragColor = vec3(linearizedZ), it shows up like it should, so nothing seems wrong there). I can confirm that it's wrong because it doesn't give me results similar to saving the position in the G-buffer or reconstructing it using inverse matrices. Is there something obvious I'm missing? To me the logic seems sound, and from the description on the Khronos wiki I can't see where I go wrong. Thanks!
  11. I am trying to rotate my scene by rotating the camera when the 'x' key is pressed. The scene is rendered correctly but fails to rotate. I am using my own transformation matrix (for good reasons), and it does work fine (it actually rotates the camera position about the x-axis), as indicated in the output below the code. But the camera position in the display() function is not updated. In fact, the problem is that the display() function is not called continuously, so the rendered scene doesn't reflect the new camera position. The printout below the code shows that the display function is not called again: the x0 print in the rotateAbtX(...) function shows the camera position changing, but the x1 print in display() appears just once and never again, so it is not updated. Why is this? How can I adjust the code so that the display function is called continuously and the camera position can be updated? Thanks

        public class Game extends JFrame implements GLEventListener, KeyListener {

            private static final long serialVersionUID = 1L;
            final private int width = 800;
            final private int height = 600;
            int right = -100, bottom = -100, top = 100, left = 100, numOfUnits;
            GLU glu = new GLU();
            List<CreateObjVertices> dataArray;
            ...
            ...

            public Game(int units, List<CreateObjVertices> vertXYZ) {
                super("Puzzle Game");
                Globals.camera = new Point3D(0.0f, 1.4f, 0.0f);
                Globals.view = new Point3D(0.0f, -1.0f, -3.0f);
                System.out.println("x " + Globals.camera.x + " y " + Globals.camera.y + " z " + Globals.camera.z);
                dataArray = vertXYZ;
                numOfUnits = units;
                t = new Transform3D();
                xAxis = new Vector3D(1, 0, 0);
                yAxis = new Vector3D(0, 1, 0);
                zAxis = new Vector3D(0, 0, 1);
                GLProfile profile = GLProfile.get(GLProfile.GL2);
                GLCapabilities capabilities = new GLCapabilities(profile);
                GLCanvas canvas = new GLCanvas(capabilities);
                canvas.addGLEventListener(this);
                canvas.addKeyListener(this);
                canvas.setFocusable(true); // To receive key events
                canvas.requestFocus();
                this.setName("Puzzle Game");
                this.getContentPane().add(canvas);
                this.setSize(width, height);
                this.setLocationRelativeTo(null);
                this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                this.setVisible(true);
                this.setResizable(false);
                canvas.requestFocusInWindow();
            }

            public void play() {
            }

            @Override
            public void init(GLAutoDrawable drawable) {
                GL2 gl = drawable.getGL().getGL2();
                gl.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
                gl.glClearDepthf(1.0f);
                gl.glEnable(GL2.GL_DEPTH_TEST);
                gl.glDepthFunc(GL2.GL_LEQUAL);
                gl.glHint(GL2.GL_PERSPECTIVE_CORRECTION_HINT, GL2.GL_NICEST);
                gl.glShadeModel(GL2.GL_SMOOTH);
                gl.glEnableClientState(GL2.GL_VERTEX_ARRAY);
            }

            @Override
            public void reshape(GLAutoDrawable drawable, int x, int y, int width, int height) {
                GL2 gl = drawable.getGL().getGL2();
                if (height == 0) height = 1;
                float aspect = (float) width / height;
                gl.glViewport(0, 0, width, height);
                gl.glMatrixMode(GL2.GL_PROJECTION);
                gl.glLoadIdentity();
                glu.gluPerspective(45, aspect, 0.1f, 100.0f);
                System.out.println("x2 " + Globals.camera.x + " y2 " + Globals.camera.y + " z2 " + Globals.camera.z);
            }

            @Override
            public void display(GLAutoDrawable drawable) {
                GL2 gl = drawable.getGL().getGL2();
                gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);
                gl.glMatrixMode(GL2.GL_MODELVIEW);
                gl.glLoadIdentity();
                glu.gluLookAt(Globals.camera.x, Globals.camera.y, Globals.camera.z,
                              Globals.view.x, Globals.view.y, Globals.view.z,
                              0.0f, 1.0f, 0.0f);
                System.out.println("x1 " + Globals.camera.x + " y1 " + Globals.camera.y + " z1 " + Globals.camera.z);
                gl.glTranslatef(0.0f, 0.0f, -3.0f);
                gl.glBegin(GL.GL_TRIANGLE_STRIP);
                //=========================== START ===========================
                // ************* DRAWING SCENE AND OBJECTS HERE ***************
                //============================= end ===========================
                gl.glFlush();
            }

            @Override
            public void dispose(GLAutoDrawable drawable) {
            }

            @Override
            public void keyPressed(KeyEvent e) {
                if (e.getKeyChar() == 'x') {
                    rotateAbtX(1);
                }
                if (e.getKeyChar() == 'X') {
                    rotateAbtX(-1);
                }
            }

            @Override
            public void keyReleased(KeyEvent e) {
            }

            @Override
            public void keyTyped(KeyEvent e) {
            }

            Transform3D t;
            Vector3D xAxis, yAxis, zAxis;
            float rotAng = 60.0f, cosAngle = 0.0f;
            Vector3D normalAxisVec = new Vector3D(0, 0, 0), vecObj = new Vector3D(0, 0, 0),
                     baseLineVec = new Vector3D(0, 0, 0);

            public void rotateAbtX(int direction) {
                t.rotateCamera(xAxis, rotAng * direction, Globals.view, 0, Globals.camera);
                System.out.println("x0 " + Globals.camera.x + " y0 " + Globals.camera.y + " z0 " + Globals.camera.z);
            }
        }

     Output:

        x* 0.0 y* 1.4 z* 0.0
        x2 0.0 y2 1.4 z2 0.0
        x1 0.0 y1 1.4 z1 0.0
        x0 0.0 y0 -2.3980765 z0 0.57846117
        x0 0.0 y0 -4.7980766 z0 -2.4215393
        x0 0.0 y0 -3.3999999 z0 -6.000001
        x0 0.0 y0 0.39807725 z0 -6.578461
        x0 0.0 y0 2.7980769 z0 -3.57846
        x0 0.0 y0 1.3999994 z0 1.4305115E-6
        x0 0.0 y0 -2.398078 z0 0.57846117
        x0 0.0 y0 -4.7980776 z0 -2.4215407
        x0 0.0 y0 -3.3999991 z0 -6.000002
        x0 0.0 y0 0.39807856 z0 -6.578461
        x0 0.0 y0 2.7980776 z0 -3.5784588
        x0 0.0 y0 1.3999987 z0 2.3841858E-6
        x0 0.0 y0 -2.3980794 z0 0.57846117
        x0 0.0 y0 -4.798078 z0 -2.4215417
  12. I am currently implementing UBOs and buffer textures in an effort to move away from glUniform calls, which perform quite decently, toward something organized per-frame and per-world-transition. I have a few thousand mesh chunks that are generated at coordinate (0, 0) so that I can translate them wherever I need. I'm currently doing the translation with a glUniform call each time I make a draw call. I would like to transition away from this by using UBOs (once per frame, to set up all per-frame data) and then using buffer textures to translate chunks and get better control over them.
     1. Is this new method much faster than before? If the answer is no, it might not be worth it for me.
     2. How do I make sure that a given mesh that knows nothing about itself can sample from the right index in the texture buffer, as sketched below? There might be a ray of hope here if each draw call could be numbered from 0 to N-1.
     EDIT: I baked a mesh ID into all meshes and used a vec3 buffer texture for the translations to avoid setting any kind of uniform data at all. I didn't notice any performance improvement.
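     For reference, the C++ side of a translation buffer texture might look like this (a sketch; numChunks and translations are stand-ins, and the shader still needs some per-draw chunk index, whether from a uniform, a baked-in vertex attribute like the poster's mesh ID, or gl_DrawID with glMultiDrawArraysIndirect on GL 4.6 / ARB_shader_draw_parameters):

        // One vec4 translation per chunk (RGBA32F is broadly supported for buffer textures).
        GLuint tbo, tboTex;
        glGenBuffers(1, &tbo);
        glBindBuffer(GL_TEXTURE_BUFFER, tbo);
        glBufferData(GL_TEXTURE_BUFFER, numChunks * 4 * sizeof(float), translations, GL_STATIC_DRAW);
        glGenTextures(1, &tboTex);
        glBindTexture(GL_TEXTURE_BUFFER, tboTex);
        glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo);
        // GLSL side: uniform samplerBuffer uTranslations;
        //            vec3 t = texelFetch(uTranslations, chunkIndex).xyz;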
  13. Hello all, I'm very new to OpenGL, and at this early stage I've found it very complex. I used to think C++ was the most complex language, but OpenGL beats it. Anyway, the code below is for rendering my first triangle. Please take a look:

        #include <glad/glad.h>
        #include <GLFW/glfw3.h>
        #include <C:\Users\Abbasi\Desktop\std_lib_facilities_4.h>
        using namespace std;

        int main()
        {
            glfwInit();
            glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
            glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

            GLFWwindow* window = glfwCreateWindow(800, 600, "The First Triangle", NULL, NULL);
            if (window == NULL)
            {
                cout << "Failed to create GLFW window" << endl;
                glfwTerminate();
                return -1;
            }
            glfwMakeContextCurrent(window);

            if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
            {
                cout << "Failed to initialize GLAD" << endl;
                return -1;
            }

            glViewport(0, 0, 700, 500);

            float vertices[] = {
                -0.5f, -0.5f, 0.5f,
                 0.5f, -0.5f, 0.5f,
                 0.0f,  0.5f, 0.0f
            };

            unsigned int VBO; // Creating a vertex buffer object
            glGenBuffers(1, &VBO);
            glBindBuffer(GL_ARRAY_BUFFER, VBO);
            glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

            // Creating the Vertex Shader
            const char* vertexShaderSource = "#version 330 core\nlayout (location = 0)"
                "in vec3 aPos;\n\nvoid main()\n{\ngl_Position ="
                "vec4(aPos.x, aPos.y, aPos.z, 1.0);\n}\n\0";
            unsigned int vertexShader = glCreateShader(GL_VERTEX_SHADER);
            glShaderSource(vertexShader, 1, &vertexShaderSource, nullptr);
            glCompileShader(vertexShader);

            // check the vertex shader compilation error(s)
            int success;
            char infoLog[512];
            glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &success);
            if (!success)
            {
                glGetShaderInfoLog(vertexShader, 512, nullptr, infoLog);
                cout << "ERROR::SHADER::VERTEX::COMPILATION_FAILED\n" << infoLog << endl;
            }

            // Creating the Fragment Shader
            const char* fragmentShaderSource = "#version 330 core\n"
                "out vec4 FragColor;\n\nvoid main()\n{\n"
                "FragColor = vec4(1.0f, 0.5f, 0.2f, 1.0f);\n}\n\0";
            unsigned int fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
            glShaderSource(fragmentShader, 1, &fragmentShaderSource, nullptr);
            glCompileShader(fragmentShader);

            // check the fragment shader compilation error(s)
            glGetShaderiv(fragmentShader, GL_COMPILE_STATUS, &success);
            if (!success)
            {
                glGetShaderInfoLog(fragmentShader, 512, nullptr, infoLog);
                cout << "ERROR::SHADER::FRAGMENT::COMPILATION_FAILED\n" << infoLog << endl;
            }

            // Linking both shaders into a shader program for rendering
            unsigned int shaderProgram = glCreateProgram();
            glAttachShader(shaderProgram, vertexShader);
            glAttachShader(shaderProgram, fragmentShader);
            glLinkProgram(shaderProgram);

            // check the shader program linking error(s)
            glGetProgramiv(shaderProgram, GL_LINK_STATUS, &success);
            if (!success)
            {
                glGetProgramInfoLog(shaderProgram, 512, nullptr, infoLog);
                cout << "ERROR::PROGRAM::SHADER::LINKING_FAILED\n" << infoLog << endl;
            }

            glUseProgram(shaderProgram);

            // We no longer need the prior shaders after the linking
            glDeleteShader(vertexShader);
            glDeleteShader(fragmentShader);

            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
            glEnableVertexAttribArray(0);

            unsigned int VAO;
            glGenVertexArrays(1, &VAO);
            glBindVertexArray(VAO);

            glDrawArrays(GL_TRIANGLES, 0, 3);
            system("pause");
            return 0;
        }

     The output is the following image. My questions are:
     1. Why doesn't the code render the triangle it is meant to?
     2. Apart from that, is the code standard? That is, is it the kind of well-written, good code a teacher would show a student?
  14. GL.glDrawArrays(GL.GL_TRIANGLES, 0, model1.vertexcount) gives me an error saying: ctypes.ArgumentError: argument 3: <class 'TypeError'>: wrong type, which I interpret as: the 3rd argument given to this function is of the wrong type. So I searched on the internet for what type the third argument should be; it wants a GLsizei, which is a non-negative binary integer. After that, I checked model1.vertexcount's type by doing print(type(model1.vertexcount)) in Python; it prints <class 'float'>. I tried to change it to an integer with int(model1.vertexcount), but that gives me an error: OSError: exception: access violation reading 0x0A56BAA8. I also tried to pass the value 6 myself, instead of letting the raw model class do the job, but it still outputs an error: OSError: exception: access violation reading 0x0CD5DD98. Notes: 1) The hexadecimal number at the end of the error differs on every run. 2) I am learning OpenGL from Thin Matrix. 3) I am using numpy to create the arrays. 4) If more information is needed, ask below in the replies. Hope someone can help me. Thanks.
  15. I am working on a large-scale mobile game, and one thing I notice is that mobile devices are very bad with textures. I started by making the game's textures 2048x2048, then dropped down to 1024x1024 because of memory limits. While doing basic profiling, I noticed that my textures are by far the biggest graphical impact, even when using Unity's mobile shaders. So I am thinking about discarding textures, or at least dropping them down to 256x256 or 128x128 for basic info only. I would use Unity materials with tiling textures to create material types, plus a screen-space ambient occlusion shader. No normal maps; I would instead increase the polycount, since I noticed that replacing my models with higher-poly models has very little impact. I could use very basic vector calculations to blend colors; think matcap shaders. Matcap shaders would be key to making this look good. My question is: would something like this even work? Cel-shaded games are the nearest thing I have seen to this concept, yet they often rely on heavy compositing that cancels out all the benefits. Wouldn't the extra draw calls reduce my performance gain? I could switch to Unreal to do this, as I noticed it has much better performance with complex shaders, and its instanced material system even works on animated objects.
  16. Hi all, I'm loading an FBX file using Assimp, and everything is OK, including the animation, but there is one particular FBX file where some of the meshes are not in the proper place. See the attached screenshot (not align mesh.png); to be correct, the blades should be at the sides of the character, not going through its center. I have used the Assimp viewer to load and view the FBX file, and it loads and displays correctly, which means Assimp can read this FBX format properly. Assimp Viewer output: the FBX file has 4 meshes; all the other meshes, including the entire body mesh, are positioned correctly except for the second mesh (index 1), which is the blade. I have been going through my code again and again and can't find the problem. What could I have missed? Here is my Assimp loader code (without the animation and bone processing parts, as this problem happens with static mesh loading too):

        bool AssimpMesh::Load(const std::string& Filename)
        {
            CleanUp();
            m_pScene = m_Importer.ReadFile(Filename.c_str(),
                aiProcess_Triangulate | aiProcess_GenSmoothNormals |
                aiProcess_FlipUVs | aiProcess_JoinIdenticalVertices);

            bool Ret = false;
            if (m_pScene)
            {
                m_GlobalInverseTransform = AiToGLMMat4(m_pScene->mRootNode->mTransformation);
                m_GlobalInverseTransform = glm::inverse(m_GlobalInverseTransform);
                Ret = InitFromScene(m_pScene, Filename);
            }
            else
            {
                printf("Error parsing '%s': '%s'\n", Filename.c_str(), m_Importer.GetErrorString());
            }
            return Ret;
        }

        bool AssimpMesh::InitFromScene(const aiScene* pScene, const std::string& Filename)
        {
            m_Entries.resize(pScene->mNumMeshes);
            m_Textures.resize(pScene->mNumMaterials);

            // Initialize the meshes in the scene one by one
            for (unsigned int i = 0; i < pScene->mNumMeshes; i++)
            {
                m_Entries[i].MaterialIndex = pScene->mMeshes[i]->mMaterialIndex;
                m_Entries[i].NumIndices = pScene->mMeshes[i]->mNumFaces * 3;

                const aiMesh* paiMesh = pScene->mMeshes[i];

                std::vector<glm::vec3> Positions;
                std::vector<glm::vec3> Normals;
                std::vector<glm::vec2> TexCoords;
                std::vector<unsigned int> Indices;
                std::vector<VertexBoneData> Bones;
                Bones.resize(pScene->mMeshes[i]->mNumVertices);

                InitMesh(i, paiMesh, Positions, Normals, TexCoords, Bones, Indices);

                MeshEntry* entry = &m_Entries[i];
                glGenVertexArrays(1, &entry->m_VAO);
                glBindVertexArray(entry->m_VAO);

                // Create the buffers for the vertex attributes
                glGenBuffers(ARRAY_SIZE_IN_ELEMENTS(entry->m_Buffers), entry->m_Buffers);

                // Generate and populate the buffers with vertex attributes and the indices
                glBindBuffer(GL_ARRAY_BUFFER, entry->m_Buffers[POS_VB]);
                glBufferData(GL_ARRAY_BUFFER, sizeof(Positions[0]) * Positions.size(), &Positions[0], GL_STATIC_DRAW);
                glEnableVertexAttribArray(POSITION_LOCATION);
                glVertexAttribPointer(POSITION_LOCATION, 3, GL_FLOAT, GL_FALSE, 0, 0);

                glBindBuffer(GL_ARRAY_BUFFER, entry->m_Buffers[TEXCOORD_VB]);
                glBufferData(GL_ARRAY_BUFFER, sizeof(TexCoords[0]) * TexCoords.size(), &TexCoords[0], GL_STATIC_DRAW);
                glEnableVertexAttribArray(TEX_COORD_LOCATION);
                glVertexAttribPointer(TEX_COORD_LOCATION, 2, GL_FLOAT, GL_FALSE, 0, 0);

                glBindBuffer(GL_ARRAY_BUFFER, entry->m_Buffers[NORMAL_VB]);
                glBufferData(GL_ARRAY_BUFFER, sizeof(Normals[0]) * Normals.size(), &Normals[0], GL_STATIC_DRAW);
                glEnableVertexAttribArray(NORMAL_LOCATION);
                glVertexAttribPointer(NORMAL_LOCATION, 3, GL_FLOAT, GL_FALSE, 0, 0);

                glBindBuffer(GL_ARRAY_BUFFER, entry->m_Buffers[BONE_VB]);
                glBufferData(GL_ARRAY_BUFFER, sizeof(Bones[0]) * Bones.size(), &Bones[0], GL_STATIC_DRAW);
                glEnableVertexAttribArray(BONE_ID_LOCATION);
                glVertexAttribIPointer(BONE_ID_LOCATION, 4, GL_INT, sizeof(VertexBoneData), (const GLvoid*)0);
                glEnableVertexAttribArray(BONE_WEIGHT_LOCATION);
                glVertexAttribPointer(BONE_WEIGHT_LOCATION, 4, GL_FLOAT, GL_FALSE, sizeof(VertexBoneData), (const GLvoid*)16);

                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, entry->m_Buffers[INDEX_BUFFER]);
                glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices[0]) * Indices.size(), &Indices[0], GL_STATIC_DRAW);

                glBindVertexArray(0);
            }

            if (!InitMaterials(pScene, Filename))
            {
                return false;
            }
            return true;
        }

        void AssimpMesh::InitMesh(unsigned int MeshIndex, const aiMesh* paiMesh,
                                  std::vector<glm::vec3>& Positions,
                                  std::vector<glm::vec3>& Normals,
                                  std::vector<glm::vec2>& TexCoords,
                                  std::vector<VertexBoneData>& Bones,
                                  std::vector<unsigned int>& Indices)
        {
            const aiVector3D Zero3D(0.0f, 0.0f, 0.0f);

            // Populate the vertex attribute vectors
            for (unsigned int i = 0; i < paiMesh->mNumVertices; i++)
            {
                const aiVector3D* pPos = &(paiMesh->mVertices[i]);
                const aiVector3D* pNormal = &(paiMesh->mNormals[i]);
                const aiVector3D* pTexCoord = paiMesh->HasTextureCoords(0) ?
                    &(paiMesh->mTextureCoords[0][i]) : &Zero3D;

                Positions.push_back(glm::vec3(pPos->x, pPos->y, pPos->z));
                Normals.push_back(glm::vec3(pNormal->x, pNormal->y, pNormal->z));
                TexCoords.push_back(glm::vec2(pTexCoord->x, pTexCoord->y));
            }

            LoadBones(MeshIndex, paiMesh, Bones);

            // Populate the index buffer
            for (unsigned int i = 0; i < paiMesh->mNumFaces; i++)
            {
                const aiFace& Face = paiMesh->mFaces[i];
                assert(Face.mNumIndices == 3);
                Indices.push_back(Face.mIndices[0]);
                Indices.push_back(Face.mIndices[1]);
                Indices.push_back(Face.mIndices[2]);
            }
        }

        void AssimpMesh::Render()
        {
            for (unsigned i = 0; i < m_Entries.size(); i++)
            {
                const unsigned MaterialIndex = m_Entries[i].MaterialIndex;
                assert(MaterialIndex < m_Textures.size());
                if (m_Textures[MaterialIndex])
                {
                    m_Textures[MaterialIndex]->Bind(/*GL_TEXTURE0*/);
                }

                MeshEntry* entry = &m_Entries[i];
                glBindVertexArray(entry->m_VAO);
                glDrawElements(GL_TRIANGLES, m_Entries[i].NumIndices, GL_UNSIGNED_INT, (void*)0);
                glBindVertexArray(0);
            }
        }

     I'm stuck on this one for hours already, as it is just basic static mesh loading, and I cannot find the problem. What could I have missed or overlooked? (I attached the FBX file as well.) Hellmech_Priest.FBX
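     One thing worth checking (an assumption, not a confirmed diagnosis): the loader above reads meshes straight out of pScene->mMeshes and never applies the aiNode hierarchy transforms, so a mesh whose node carries a local transform (common in FBX exports) would render relative to the origin instead of its intended place. A sketch of accumulating the node transforms:

        #include <assimp/scene.h>
        #include <vector>

        // Walk the node hierarchy, accumulating each node's transform, and record
        // the combined matrix for every mesh the node references.
        void GatherNodeTransforms(const aiNode* node, const aiMatrix4x4& parent,
                                  std::vector<aiMatrix4x4>& meshTransforms)
        {
            aiMatrix4x4 global = parent * node->mTransformation;
            for (unsigned int i = 0; i < node->mNumMeshes; i++)
                meshTransforms[node->mMeshes[i]] = global; // indices into pScene->mMeshes
            for (unsigned int i = 0; i < node->mNumChildren; i++)
                GatherNodeTransforms(node->mChildren[i], global, meshTransforms);
        }

        // Usage sketch:
        //   std::vector<aiMatrix4x4> meshTransforms(pScene->mNumMeshes);
        //   GatherNodeTransforms(pScene->mRootNode, aiMatrix4x4(), meshTransforms);
        //   ...then bake each matrix into the mesh's vertices, or pass it as a per-mesh model matrix.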
  17. In C++, how do you create a DLL with a function that gets the main window handle of the program that called it, and then renders OpenGL graphics to that window? On Windows, preferably the easiest method?
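     A rough sketch of one possible approach on the Windows side (not the only one; the exported function name is hypothetical, GetActiveWindow only works when called from the caller's UI thread, and you must link opengl32.lib, user32.lib, and gdi32.lib):

        #include <windows.h>
        #include <GL/gl.h>

        // Exported from the DLL: find the caller's active window and attach a GL context to it.
        extern "C" __declspec(dllexport) bool AttachGLToCallerWindow()
        {
            HWND hwnd = GetActiveWindow(); // active window of the calling thread
            if (!hwnd) return false;
            HDC dc = GetDC(hwnd);

            PIXELFORMATDESCRIPTOR pfd = {};
            pfd.nSize = sizeof(pfd);
            pfd.nVersion = 1;
            pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
            pfd.iPixelType = PFD_TYPE_RGBA;
            pfd.cColorBits = 32;
            pfd.cDepthBits = 24;
            SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

            HGLRC rc = wglCreateContext(dc); // legacy context; fine for basic drawing
            wglMakeCurrent(dc, rc);

            glClearColor(0.f, 0.f, 0.f, 1.f); // GL calls now target the caller's window
            glClear(GL_COLOR_BUFFER_BIT);
            SwapBuffers(dc);
            return true;
        }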
  18. Hi, I have found this paper dealing with how to compute the perfect bias when dealing with shadow maps. The idea is to:
     • get the texel used when sampling the shadow map
     • project the texel location back to eye space (ray tracing)
     • get the difference between your fragment's z and the intersection of your ray with the fragment's face.
     This way you have calculated the error, which serves as the appropriate bias against z-fighting. Now I am trying to implement it, but I'm running into some trouble. I am using an orthographic projection matrix, so I think I don't need to divide by w back and forth. I am good up until computing the ray intersection with the face: I have a lot of faces failing the test, and my bias is way too large. This is my fragment shader code:

        float getBias(float depthFromTexture)
        {
            vec3 n = lightFragNormal.xyz;

            // No need to divide by w, we have an ortho projection.
            // We are in NDC [-1,1]; we go to [0,1]
            //vec4 smTexCoord = 0.5 * shadowCoord + vec4(0.5, 0.5, 0.5, 0.0);
            vec4 smTexCoord = lightProjectionMatrix * lightFragmentCoord;
            smTexCoord = 0.5 * smTexCoord + vec4(0.5, 0.5, 0.5, 0.5);

            // From [0,1] to texture space [0, shadowMap.dimension] : [0,1024].
            // Get the nearest index in the shadow map, the texel corresponding to our fragment.
            // We use floor: (125.6, 237.9) -> (125, 237)
            vec2 delta = vec2(xPixelOffset, yPixelOffset);
            vec2 textureDim = vec2(1 / xPixelOffset, 1 / yPixelOffset);
            vec2 index = floor(smTexCoord.xy * textureDim);

            // Get the center of the current texel; add 0.5 to be in the middle: (125, 237) -> (125.5, 237.5).
            // Then go back from [0,1024] to [0,1]: (125.5, 237.5) -> (0.125, 0.235)
            vec2 nlsGridCenter = delta * (index + vec2(0.5f, 0.5f));

            // Go back to NDC: [0,1] -> [-1,1]
            vec2 lsGridCenter = 2.0 * nlsGridCenter - vec2(1.0);

            // Compute the light-space grid direction, multiplying by the inverse projection matrix
            vec4 lsGridCenter4 = inverse(lightProjectionMatrix) * vec4(lsGridCenter, -frustrumNear, 0);
            vec3 lsGridLineDir = vec3(normalize(lsGridCenter4));

            /** Plane/ray intersection **/
            // Locate the potential occluder for the shading fragment.
            // Compute the distance t we need to travel along the grid direction; the point is "t" far.
            float ls_t_hit = dot(n, lightFragmentCoord.xyz) / dot(n, lsGridLineDir);
            if (ls_t_hit <= 0) {
                return 0; // I get a lot of negative values; that shouldn't be the case
            }

            // Compute the intersection point p with the face
            vec3 ls_hit_p = ls_t_hit * lsGridLineDir;

            float intersectionDepth = lightProjectionMatrix * vec4(ls_hit_p, 1.0f).z / 2 + 0.5;
            float fragmentDepth = lightProjectionMatrix * lightFragmentCoord.z / 2 + 0.5;

            float result = abs(intersectionDepth - fragmentDepth);
            return result;
        }

     My intersectionDepth doesn't match my fragmentDepth; they should be really close. I am struggling with this line of code and the ray/plane intersection:

        vec4 lsGridCenter4 = inverse(lightProjectionMatrix) * vec4(lsGridCenter, -1.0, 0);

     I need to go from NDC space to eye space, and I don't know what my z and w components should be. I am starting from a point on the shadow map, so I think my z component should match the near plane: in NDC space it should be -1, but I am not sure. Same for w: it's a ray, so maybe 0 is a better choice than 1. I don't know if my plane/ray intersection is wrong, but from Wikipedia:
     \(d = \frac{(P_0 - O) \cdot n}{dir \cdot n}\)
     where:
     • dir = my normalized ray direction
     • \(P_0\) = a point belonging to the plane
     • O = a point belonging to the ray; the origins should match, and in eye space the origin should be my light position, so (0, 0, 0)?
     • n = the normal of the plane, i.e. the normal of my fragment in eye space
  19. Hi, I am trying to optimize shadow mapping for an RTS view (nearly top-down) with a directional light. My approach so far is to intersect the camera view frustum with the plane of the terrain on which all the units are moving, and fit a box around the four intersection points. Then I use the center of the box and the light direction to construct a view matrix, and the corners of my box to construct the orthographic projection matrix. With that I somehow do not get the wanted results: there are no shadows, and the shadow map is not correctly created for the camera's view. I think I am maybe missing some translation/rotation? Is there a better way for RTS views with a single shadow map? Thanks for your help beforehand!
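     For comparison, the usual construction with GLM looks roughly like this (a sketch; boxCenter, boxCorners, and lightDir stand in for the poster's fitted box, with lightDir pointing from the light toward the scene). One easy thing to miss is that the ortho extents must be expressed in light view space, not world space:

        #include <cfloat>
        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // View matrix: look at the box center from "behind" it along the light direction.
        glm::vec3 eye = boxCenter - lightDir * 100.0f; // 100 pushes the eye out of the scene
        glm::mat4 lightView = glm::lookAt(eye, boxCenter, glm::vec3(0, 1, 0));

        // Transform the box corners into light space and fit the ortho volume around them.
        glm::vec3 lo(FLT_MAX), hi(-FLT_MAX);
        for (const glm::vec3& corner : boxCorners) {
            glm::vec3 p = glm::vec3(lightView * glm::vec4(corner, 1.0f));
            lo = glm::min(lo, p);
            hi = glm::max(hi, p);
        }
        // Light space looks down -z, so near/far come from the z extents with flipped signs.
        glm::mat4 lightProj = glm::ortho(lo.x, hi.x, lo.y, hi.y, -hi.z, -lo.z);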
  20. Hi guys, with OpenGL not having a dedicated SDK, how were libraries like GLUT and the like ever written? Could someone these days write an OpenGL library from scratch? How would you even go about it? Obviously this question stems from the fact that there is no OpenGL SDK. DirectX is a bit different, as MS has the advantage of having relationships with the vendors and full access to the OS source code and the entire works. If I were to attempt to write the most absolutely basic library to access OpenGL on the GPU, how would I go about it?
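     To make the question concrete: on Windows the only "SDK" is opengl32.dll/opengl32.lib plus the vendor's driver, and loaders like GLEW or GLAD just pull modern function pointers out of the driver at runtime. A minimal sketch (the typedef is normally supplied by glext.h; it is written out here for illustration):

        #include <windows.h>
        #include <GL/gl.h>

        // Post-1.1 entry points aren't exported by opengl32.dll; ask the driver for them.
        typedef void (APIENTRY* PFNGLGENBUFFERSPROC)(GLsizei n, GLuint* buffers);
        PFNGLGENBUFFERSPROC glGenBuffers = nullptr;

        void LoadGLFunctions() // must be called while a GL context is current
        {
            glGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
            // GL 1.1 functions come from opengl32.dll itself, e.g.:
            //   GetProcAddress(GetModuleHandleA("opengl32.dll"), "glClear");
        }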
  21. Hello! As an exercise for delving into modern OpenGL, I'm creating a simple .obj renderer. I want to support things like varying degrees of specularity, geometry opacity, things like that, on a per-material basis. Different materials can also have different textures. Basic .obj necessities. I've done this in old-school OpenGL, but modern OpenGL has its own thing going on, and I'd like to conform as closely to the standards as possible so as to keep the program running correctly; I'm hoping to avoid picking up bad habits this early on. Reading around on the OpenGL wiki, one tip in particular really stands out to me on this page: For something like a renderer for .obj files, this sort of thing seems almost ideal, but according to the wiki, it's a bad idea. Interesting to note! So, here's the plan so far as far as loading goes: set up a type for materials so that materials can be created and destroyed. They will contain things like diffuse color, diffuse texture, and geometry opacity for each material in the .mtl file. Since .obj files are conveniently split up by material, I can load different groups of vertices/normals/UVs and triangles into different blocks of data for different models. When it comes to the rendering, I get a bit lost. I can either:
     • Between drawing triangle groups, call glUseProgram to use a different shader for that particular geometry (so a unique shader just for the material shared by this triangle group), or
     • Between drawing triangle groups, call glUniform a few times to adjust different parameters within the "master shader", such as specularity, diffuse color, and geometry opacity (sketched below).
     In both cases, I still have to call glBindTexture between drawing triangle groups in order to bind the material's diffuse texture, so there doesn't seem to be a way around having the CPU do *something* during the rendering process instead of letting the GPU do everything all at once. The second option here seems less cluttered, however: there are fewer shaders to keep up with while one "master shader" handles it all, and I don't have to duplicate any code or compile multiple shaders. Arguably, I could always embed the shader program for each material in the material itself and auto-generate it upon loading the material from the .mtl file. But this still leads to constantly calling glUseProgram, much more than is probably necessary to render the .obj properly. There seem to be a number of differing opinions on whether it's okay to use hundreds of shaders or best to use just tens of them. So, ultimately, what is the "right" way to do this? Does using a "master shader" (or a few variants of one) bog down the system compared to using hundreds of shader programs, each dedicated to its own corresponding material? Keeping in mind that the "master shader" would have to track these additional uniforms and potentially contain numerous if-branches, it's possible that the ifs will lead to additional and unnecessary processing. But would that be more expensive than constantly calling glUseProgram to switch shaders, or than storing all the shaders to begin with? With all these angles to consider, it's difficult to come to a conclusion. Both possible methods work, and both seem rather convenient for their own reasons, but which is the most performant? Please help this beginner/dummy understand. Thank you!
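     For the second option, the per-group work is just a handful of uniform and texture updates. A minimal sketch, assuming a hypothetical Material struct and uniform names (in practice you would cache the glGetUniformLocation results instead of querying every draw):

        struct Material {
            GLuint diffuseTexture;
            float  diffuseColor[3];
            float  specularity;
            float  opacity;
        };

        // The single "master shader" stays bound; only uniforms and the texture change per group.
        void DrawTriangleGroup(GLuint program, const Material& m, GLuint vao, GLsizei indexCount)
        {
            glUniform3fv(glGetUniformLocation(program, "uDiffuseColor"), 1, m.diffuseColor);
            glUniform1f(glGetUniformLocation(program, "uSpecularity"), m.specularity);
            glUniform1f(glGetUniformLocation(program, "uOpacity"), m.opacity);
            glBindTexture(GL_TEXTURE_2D, m.diffuseTexture); // needed in either approach
            glBindVertexArray(vao);
            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
        }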
  22. I want to make a professional Java 3D game with a server program and database, packet handling for multiplayer and client-server communication, map rendering, models, and so on, like Minecraft and World of Tanks. Which aspects of Java should I learn, and where can I learn Java, LWJGL, and OpenGL rendering?
  23. A friend of mine and I are making a 2D game engine as a learning experience, and hopefully to build upon that experience in the long run.
     What I'm using:
     • C++: since I'm learning this language while in college and it's one of the popular languages to make games with, why not.
     • Visual Studio: I'm using Windows, so yeah.
     • SDL or GLFW: I was thinking about SDL, since my research on it caught my interest, but I hear SDL is a huge package compared to GLFW, so I may start with GLFW, as I might get overwhelmed with SDL while learning.
     Questions:
     • Knowing what we want in the engine, what should our main focus be in terms of learning?
     • File management, with headers, functions, etc.: how can I properly organize files without confusing myself and my friend when sharing code?
     • Alternative to Visual Studio: my friend has a Mac and can't properly use Visual Studio; is there an alternative to it?
  24. Both functions have been available since 3.0, and I'm currently using glMapBuffer(), which works fine. But I was wondering if anyone has seen an advantage in using glMapBufferRange(), which lets you specify the range of the buffer to map. Could this be only a safety measure, or does it improve performance? Note: I'm not asking about glBufferSubData()/glBufferData(); those two are irrelevant in this case.
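     The usual argument for glMapBufferRange() is less about safety than about its access flags, which can let the driver skip synchronization and copies. A sketch (vbo, offset, size, and newData are stand-ins; GL_MAP_UNSYNCHRONIZED_BIT goes even further if you do your own fencing):

        #include <cstring> // memcpy

        // Map only the region we intend to overwrite, and promise not to read it,
        // so the driver can avoid stalling on the GPU or preserving old contents.
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, offset, size,
                                     GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT);
        memcpy(ptr, newData, size);
        glUnmapBuffer(GL_ARRAY_BUFFER);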
  25. Before using void glBindImageTexture(GLuint unit, GLuint texture, GLint level, GLboolean layered, GLint layer, GLenum access, GLenum format), do I need to make sure that the texture is complete?
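     For reference, one way to sidestep completeness worries is immutable storage, since glTexStorage2D allocates every requested level up front (a sketch; the size and formats are arbitrary examples):

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 512, 512); // immutable, 1 level
        // Bind level 0 of the texture to image unit 0 for read/write access in a shader.
        glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA8);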