OpenGL: techniques to create best possible star fields


Recommended Posts

Bottom line:  I'm trying to figure out the best technique to create extremely high-quality real-time star fields for simulations and games.


I compiled a catalog of the brightest 1.2 billion stars, complete to nearly magnitude 21, a brightness range of roughly 500,000,000 to 1.  The data includes position (micro-arc-seconds), spectral types, brightness in multiple colors (including 3 that correspond well enough to RGB), proper motion, parallax (distance), and so forth.


My goal is to be able to display accurate, realistic star backgrounds in simulations and games.  The display is not limited to what a human can see with their naked eye (without optical aid).  That would only be 10,000 stars or so.  The goal is to be able to display wide fields of view (30 ~ 150 degrees), but also narrow fields as visually observed or captured with CCDs through binoculars and telescopes, including large telescopes.  So the field of view can be as wide as 120 degrees or more, but also as narrow as 0.01 degrees or less.  The 3D engine is already capable of unlimited zoom factors (with 90-degrees vertical being designated zoom == 1.00000).


Finally, the display of the sky should look good (in fact, "awesome"), and not take too much CPU time from the simulation or game.


I am fairly certain these goals can be met, but implementation will not be trivial.  Some approaches I consider promising adopt aspects of OpenGL that I have little experience with.  So I invite anyone who thinks they can give good advice to do so.


Oh, one more requirement that makes certain techniques "not work".  Occultations of bright stars must be displayed correctly.  What does this mean?  For practical purposes, all stars are point sources.  In fact, the angular extent of most stars in the catalog is less than one-millionth of an arc-second.  A small percentage of stars that are close and large enough span a few thousandths of an arc-second.  Thus only a tiny sample of stars can even be displayed as larger than a literal point source with the very best modern telescopes and techniques.  So for practical purposes, essentially all stars (99.99999% of them) can be treated as literal point sources.  In fact, for the first implementation, all stars will be treated as literal point sources.


So what?  As you know, when you look at the night sky from a dark location, the brighter the star, the bigger the star looks to human vision (and photographs, and CCD images, etc).  This appearance exists not because brighter stars are larger, but because the light "blooms"... in the eyeball, on the film, on the CCD surface, etc.  Stated otherwise, not all light is absorbed by eyes or electronic sensors.  Some of the light is scattered, diffused, reflected and generally "bounces around".  This is true for all stars, but the scattering near bright stars is bright enough to be seen or photographed, and this is what makes them "appear larger".


So what is the significance of this when it comes to "occultations"?  An "occultation" simply means some object closer than the star passes between the star and the observer.  Consider the instant just before the object covers the star, when the point image of the star almost touches the object from the point of view of the observer.  In terms of pixels on the display, the star is within a fraction of a pixel of the object.


Because this bright star is a point source, the object blocks none of the star light.  The image of this bright star still "blooms" on the eye, film or CCD sensor.  So the image of the star may be 4, 8, 16, 32, maybe even 64 pixels in diameter on the screen, even though the star may be some infinitesimal fraction of a pixel away from the object.  In fact, if we are displaying a view through a large telescope, the bloom of the star might fill the entire display!


The point is, up to half the "image" of a star can cover the image of a nearby object on the display.  The object is completely opaque, but on our display, up to half of every star image may overlap the opaque objects next to it.


To choose a specific case, let's say this bright star image is (blooms to) 64-pixels in diameter (as far as we can see).  As the image of the star slowly moves closer and closer to the object on the display, first the leading edge of the 64-pixel diameter star image overlaps the object, then more and more of that 64-pixel star image overlaps the object... until...


At the instant when the exact center of the star image reaches the edge of the object, ALL light from the star vanishes... instantly.


That is how my engine must display such events.  As long as the exact center of a star is covered by another (closer) object, no light from the star reaches our eyes or sensors, and our display shows no evidence of the star.  And further, as long as the exact center of any star image is not covered by any other (closer) object, our display shows the entire circular image of the star, including 100% of the intensity of the star.


Remember the way this works, because it means certain common and/or easy techniques "just won't work properly".


Perhaps the most obvious technique that "bytes the dust" is the classic "skybox" technique, which I assume is always implemented with a "cube map".  Clearly if we display the sky with a cube map skybox, then any object that slowly moves in front of a 64-pixel diameter star image will gradually cover larger and larger portions of the 64-pixel image of the star.  Which is wrong, wrong, wrong!


My first "brilliant idea" was to omit the brightest 100 or so stars when I create the skybox, so every star on the skybox is only about 1 pixel in diameter.  Then I'd have the engine display the skybox first, then all objects, then individually draw the brightest 100 stars as a final "special step".  This would need to be a "special step", because the engine would have to read the pixel in the framebuffer at the exact center of the star image, then either draw or fail-to-draw the entire star image depending on the depth of that pixel.  If the depth was "infinity" or "frustum far plane", then the star is not covered by anything, and the engine draws the entire image.  Otherwise, it skips the star and repeats this process for every bright star.


The problem with this is fairly obvious.  Which stars bloom to more than 1 pixel in diameter is extremely variable!  Oh, if I was only making a trivial application that only had a single zoom factor and "sensor sensitivity" and no ability to "observe through optical instruments", then I might get away with that approach.  But clearly that technique does not support many situations, scenarios and cases my engine needs to support.  A star that is too faint to even register to the naked eye might bloom to hundreds of pixels in diameter through a large telescope!  And the star brightness at which blooming begins even varies as an observer cranks the sensitivity or integration knob on the application.


I tried to find tricky, then elaborate, then... well... just about psychotic ways to patch up this approach.  But in the end, I realized this approach was futile.


I won't elaborate on those crazy attempts, or mention other "image based" techniques (where "image based" just means we create images (of various sorts, in various forms) of portions of the sky in advance, then display them (and patch them up) as needed).


At this point, I'm fairly sure any successful technique will be entirely "procedural".  Which means, the GPU will execute procedures that draw every star within the frustum [that is detectable under the current circumstances, and is not obscured by any object].


Of course, this whole category of approaches has its own demons.


For example, culling!  For example, CPU/transfer efficiency!  For example, GPU efficiency!  Are we really going to keep a database of 1.2 billion stars in GPU memory?  I don't think so!  Especially since that database is over 150GB on my hard drive.  Maybe in 10~20 years nvidia will sell GPU cards with 256GB of video memory, but until then, not gonna happen!


Finally we're getting down to the nitty gritty, where I need to tap into the experiences others have had with niche parts of OpenGL, and niche aspects of game engines --- like fancy culling, perhaps?


At the moment, it seems clear that the engine needs to have one to several big, fixed-size static buffers in GPU memory that my engine "rolls stars through".  Let's say we allocate 1GB of GPU memory for stars.  I think my nvidia GTX-680 card has something like 4GB of RAM, so 1GB for stars seems plausible (especially since I'm designing this engine for applications to be completed 3~5 years from now, with whatever GPUs are medium to high end then).  If the data for each star consumes 64 bytes, that means we have room in GPU memory for roughly 16 million stars (1GB / 64 bytes).  Since a 2560x1600 screen has 4 million pixels, our 1GB of video memory can hold about 4 stars for each pixel.  That should be sufficient!  There's no point in turning the entire display white.


Of course, as the camera rotates we will be feeding new stars into video RAM, overwriting stars in "least recently accessed" regions of the sky.  In fact, depending on many factors, we might need to predictively feed stars into GPU memory 2 ~ 4 frames before they appear in the frustum.  So it is entirely possible only 1/4 of our 1GB of video memory is being actively displayed on the current frame.  And there might even be cases where that is optimistic, but let's hope not too much so.  After allowing for these memory inefficiencies, our 1GB of GPU memory still holds something like 1 to 2 in-frustum stars for each pixel on the display.  Which should still be sufficient.
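A minimal, untested sketch of the "rolls stars through" idea under those assumptions (a 1GB buffer carved into fixed-size slots, least-recently-used eviction, glBufferSubData for the overwrites; every name is illustrative, and an OpenGL loader header is assumed to be included):

#include <cstdint>
#include <vector>

struct StarVertex { float dir[3]; float colorMag[4]; };  // 28 bytes; pad to 32 in practice

struct RegionSlot {
    int      regionId      = -1;   // which sky cell currently lives here (-1 == free)
    uint64_t lastUsedFrame = 0;
};

const size_t kStarsPerSlot = 4096;
const size_t kSlotBytes    = kStarsPerSlot * sizeof(StarVertex);
const size_t kNumSlots     = (size_t(1) << 30) / kSlotBytes;   // ~1GB of stars

GLuint starVBO;   // created once: glBufferData(GL_ARRAY_BUFFER, 1GB, nullptr, GL_DYNAMIC_DRAW)
std::vector<RegionSlot> g_slots(kNumSlots);

void uploadRegion(int regionId, const StarVertex* stars, size_t count, uint64_t frame)
{
    // Only a few thousand slots exist, so a linear LRU scan is cheap.
    size_t victim = 0;
    for (size_t i = 1; i < g_slots.size(); ++i)
        if (g_slots[i].lastUsedFrame < g_slots[victim].lastUsedFrame)
            victim = i;
    g_slots[victim] = { regionId, frame };
    glBindBuffer(GL_ARRAY_BUFFER, starVBO);
    glBufferSubData(GL_ARRAY_BUFFER, GLintptr(victim * kSlotBytes),
                    GLsizeiptr(count * sizeof(StarVertex)), stars);
}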


This analysis raises a few questions.  Can the GPU render 4 to 8 million stars per frame [without consuming too much GPU bandwidth]?  Let's see, 8-million * 60 frames per second == roughly 500-million stars per second.  Can the GPU render half a billion stars per second without breaking a sweat... leaving enough GPU compute time for the rest of the simulation or game (including any GPGPU processing)?


Maybe this question is a bit more complicated.  For example, if 90% of the display is covered by other objects, the GPU can and will automatically discard a huge percentage of stars in the fragment shader on the basis of z-buffer test.  Even more interesting, it is my understanding that modern GPUs know how to discard even earlier in the pipeline.  Since the vast majority of stars are tiny... as in very, very tiny (as in 1 or 3 pixels diameter), they should be the easiest entities to discard early in the pipeline.  If this is true, then the average GPU resources consumed per pixel are greatly reduced.


In the extreme opposite case the entire display is covered by stars, with only 0% to 10% of the display covered by other objects (spacecraft and asteroids, perhaps).  In this case almost all stars in the frustum will be drawn, and consume the maximum GPU bandwidth.  However, this scenario presumes that very little else is happening in the engine, or at least, not much more than stars are being rendered by the GPU.


Which raises the question that is giving me the worst nightmares lately... culling.  Or to put this another way, how does the engine decide which stars need to be moved into GPU memory, and over which "old stars" in GPU memory should those "new stars" be written over?  AARG.


Though this question is driving me nuts, it may be a tiny bit less overwhelming today than last week.  It seems one benefit of trying in vain to find a way to implement the sky with a skybox is that it forced me to question the original (and current) organization of my star database.  Thinking like the lifelong astronomy and telescope nerd I am, I sliced and diced the sky into 5400 x 2700 "square" chunks of the sky, each of which is 4 arc-minutes square.  I refer to standard astronomy "polar coordinates" thinking, with 5400 chunks in "right-ascension" (west-to-east), and 2700 chunks in "declination" (south-to-north).  While this is a totally natural way for any astronomy type to think about the sky, it has some extremely annoying characteristics.  For example, at the north and south poles, each "square" chunk of the sky is actually extremely tiny in terms of how much area on the celestial sphere it contains.  At each pole, the full ring of 5400 "square" chunks together covers only an 8 arc-minute diameter circle on the sky... while a single "square" chunk at the equator covers a full 4 arc-minute square!


This organization of the database might seem silly and inconvenient.  But I originally created this database for an automated telescope, and for that application the organization is perfectly fine, natural, intuitive and plenty convenient in most ways (except chasing objects close to the polar regions, but not exactly across them).


Last night I realized a far more natural and efficient organization for this star database, for the purposes of this 3D engine, is a cube map.  If we imagine a cube map with 1K x 1K pixels on each face, then each square pixel covers a (360-degrees / 4096) "square" region of the sky.  So each pixel covers roughly a 5.2 arc-minute square of the sky, and all of these regions are far closer to the same actual size on the celestial sphere than in my old configuration.  So the cube map approach is not only more evenly distributed over the sky, but also contains no "problematic" pseudo-singularities like the north and south celestial poles.


In case the above isn't entirely clear, the organization is this.  All the stars that lie within the part of the sky covered by each pixel of the 1K x 1K pixel square cube map faces are stored together (one after the other) in the database.  Furthermore, for reasons too complex and specific to the telescope application, individual stars within each region were stored in west-to-east order in the current organization, but can be stored "brightest-to-faintest" in the new configuration.  This is extremely convenient and efficient for the 3D engine application, because for each region we always need to display "the brightest stars down through some faintness limit" determined by the sensor, zoom level, and other factors.
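For what it's worth, the indexing this organization implies is just the standard OpenGL cube-map face selection.  A small untested sketch (the face layout matches the usual +X/-X/+Y/-Y/+Z/-Z convention; the cell numbering is illustrative):

#include <algorithm>
#include <cmath>

// Map a unit direction to one of the 6 x 1024 x 1024 sky cells.
void dirToCubeCell(float x, float y, float z, int N, int& face, int& cell)
{
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    float sc, tc, ma;
    if (ax >= ay && ax >= az) { ma = ax; face = x > 0 ? 0 : 1; sc = x > 0 ? -z :  z; tc = -y; }
    else if (ay >= az)        { ma = ay; face = y > 0 ? 2 : 3; sc =  x;              tc = y > 0 ? z : -z; }
    else                      { ma = az; face = z > 0 ? 4 : 5; sc = z > 0 ?  x : -x; tc = -y; }
    int u = std::min(N - 1, int((sc / ma * 0.5f + 0.5f) * N));   // 0 .. N-1
    int v = std::min(N - 1, int((tc / ma * 0.5f + 0.5f) * N));
    cell = v * N + u;   // stars for (face, cell) stored brightest-first
}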


Nonetheless, I still have the same problem!  Which is a question for one of the math geniuses here!  That question is... how to quickly and efficiently compute which regions/chunks/cube-map-pixels are within the current frustum (and just outside it, in whatever direction we might be rotating the camera towards)?  If this seems like a brain-dead simple task, remember that the camera can be rotated and tipped in any orientation on the 3D axes.  There is no "up" direction, for example!  Oh no!  Not in space there isn't (though you are free to consider the north celestial pole to be "up" if you wish).  Clearly the stars and the camera are both in "world-coordinates"... or to be more precise in this case, "universe coordinates" (though claiming the natural coordinates of the universe somehow accidentally correspond to the rotational axis of earth is a delusion of grandeur to the point of being a bit wacko).


If you imagine a wide angle of the sky is currently displayed (perhaps well over 180-degrees in the horizontal or diagonal directions), the outline of the frustum might even cover some portion of all 6 faces of the cube map!  It is definitely not at all obvious to me how to quickly or conveniently determine which regions of sky (which pixels on those 1K x 1K faces of the cube map) are within the frustum.


Perhaps the quick and conventional answer is... "just have the CPU test all 6 million of those cube-map pixels against the frustum, and there you have your answer".  And yes, indeed we have... after making the CPU test 6 million projected squares against the frustum every frame.  Hear what we're saying here?  That's "the CPU" and "6 million tests against the frustum" and "every frame".  It sure would be a lot quicker (assuming someone is smart enough to figure out "how") to compute a list of "the first and final" pixel on each row of each face of the cube map that is within the frustum.  Then we simply include all those other pixels between the first and final on each row, and never need to test them against the frustum.  Does someone know how to walk the boundary of the frustum across the cube map faces to compile such a list of "first and finals"?  Or perhaps some other technique.  But this one requires someone a whole lot smarter and more intuitive in math than me.
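One technique that avoids testing all 6 million cells (sketched below, untested, with illustrative names): treat each 1K x 1K face as a quadtree and test nodes instead of pixels against the four side planes of the frustum (the planes pass through the eye, and since stars are at infinity only directions matter).  A node whose corners all lie outside one plane is culled with its whole subtree, a node inside all planes is accepted whole, and only nodes straddling the frustum boundary get subdivided... so the per-frame work is proportional to the length of the frustum outline, which is essentially the "first and final" list computed a different way.  Two caveats: the corner test is only trustworthy for nodes spanning well under 90 degrees (so start the recursion a level or two down each face), and side-plane culling as written assumes a field of view under 180 degrees.

// Sketch: classify one quadtree node of a cube face against the frustum.
struct Vec3 { float x, y, z; };

static float dot3(const Vec3& a, const Vec3& b)
{ return a.x * b.x + a.y * b.y + a.z * b.z; }

// planes:  inward-pointing unit normals of the 4 frustum side planes.
// corners: unit directions at the node's 4 corners (from its face u,v range).
// Returns 0 = cull whole subtree, 1 = accept whole subtree, 2 = subdivide.
int classifyNode(const Vec3 planes[4], const Vec3 corners[4])
{
    bool allInside = true;
    for (int p = 0; p < 4; ++p) {
        int behind = 0;
        for (int c = 0; c < 4; ++c)
            if (dot3(planes[p], corners[c]) < 0.0f)
                ++behind;
        if (behind == 4) return 0;   // every corner outside one plane
        if (behind != 0) allInside = false;
    }
    return allInside ? 1 : 2;
}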


A couple thoughts that I forgot to mention in the natural places above:


#1:  Note that we must draw all objects before we draw the stars.  Otherwise we waste our time drawing zillions of stars that will later be covered up by objects.  But more importantly, we must draw the objects first so the bloomed star images overlap the objects.  Remember, star images must overlap parts of objects, but objects must never overlap parts of star images (only extinguish them entirely).


#2:  When we draw stars into the framebuffer, we must add or accumulate intensity at each framebuffer pixel.  We must always fetch the current RGB intensity at that pixel in the framebuffer (whether from an object the star image overlaps, or from previously drawn stars), add the RGB intensity of the current star at that pixel, and write the sum back to the framebuffer.  In this way multiple stars can contribute to each pixel.  Remember, there may be zero, one, two, five, twenty, fifty or hundreds of stars within each individual pixel.  When many stars are within a framebuffer pixel, the intensity of that pixel is the sum of the intensity of all star images that contribute to that pixel.
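As a sketch of the OpenGL state point #2 implies (untested): fixed-function additive blending already performs the fetch-add-write per pixel, so the star fragment shader never has to read the framebuffer itself:

// Star pass state for the accumulate-per-pixel requirement (sketch):
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);   // dst = dst + src: star intensities add up
glDisable(GL_DEPTH_TEST);      // blooms must overlap foreground objects;
                               // occlusion is decided per star, at its center
glDepthMask(GL_FALSE);         // stars never write depth
// Accumulate into a GL_RGBA16F (or GL_RGBA32F) FBO so that dozens of
// faint stars landing in one pixel can sum past 1.0 without clamping.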


Finally, we slide into the question "how do we draw each 1, 3, 5, 7, 9... 59, 61, 63... (??? or larger ???) pixel star image?"


For stars near the limit of being capable of detection at the current settings (sensor sensitivity, integration time, etc), we only need to add a tiny quantity of intensity to one pixel.  Or is this even true?  To be sure, if the star is precisely in the center of a pixel, and the star is barely bright enough to contribute, there is no need to add any intensity to any other, adjacent pixels.  So we can say this is a 1-pixel star image.  But even this simple case raises questions.  What if a faint star falls on (or close to) the edges of two adjacent pixels... or even the corner of 4 pixels?  Do we contribute light to 2 or 4 pixels?


This is not a simple question for me to answer.  On the one hand, I want stars to be as tiny and sharp as possible, as they might look in a perfect optical system.  I'm not particularly interested in adding coma, astigmatism and other aberrations to images, though I certainly could given the fact that I've written optical design software.  So I don't want to blur images without cause.  Furthermore, consider this question.  Consider the situation I just mentioned.  We have 3 stars of exactly the same color and brightness.  One falls smack in the center of a pixel, another star close-by falls on the edge of two pixels, and another star close-by falls on the corner of 4 pixels.  Now, given the fact that all 3 stars are absolutely identical, should they not look identical?  Should one be 1/2 as bright and smeared into a 2-by-1 pixel horizontal smear?  Should another be 1/4 as bright and smeared into a 2-by-2 pixel square?  Or should all 3 stars appear the same?


My intuition says "they should look the same".  Which means, my intuition says "let 1-pixel stars contribute only to 1 pixel".  But... what other consequences might flow from this decision?  Would it appear like stars "jerk from pixel to pixel"?  Would we see any hint of "swimming effects" similar to what aliasing causes in extended objects?  My guess is... no (or not much, especially for fainter stars).  But that is only a guess.  Maybe some of you know.  I suspect that even if some negative effects are visible, they will not be substantial, if only because the typical framebuffer pixel will contain intensity contributions from multiple stars, and therefore the instantaneous transfer of intensity in or out of any given pixel by any one star will be less "jarring", since the pixel intensity itself is usually an average.  Or put another way, sky pixels won't often change from "black" to "white" (and the adjacent pixel from "white" to "black"), they will usually change from 30% intensity reddish-grey to 50% intensity yellowish-grey.  But I welcome any comments about the practical consequences of handling faint stars in various different ways.


Once the contribution of a star exceeds 1/2 or 3/4 or 7/8 of the maximum intensity a pixel can achieve, even brighter stars can only be represented by star images larger than 1 pixel in size.  Here we run into a whole new series of questions similar to the questions we asked about 1-pixel faint stars, except more complex.  For example, should we represent stars as only odd-pixel sizes (1, 3, 5, 7, 9... 59, 61, 63)?  Clearly we want to put most of the light intensity in the center pixel of a 3-pixel by 3-pixel star image, and spread a small quantity of light into the 8 pixels surrounding the central pixel.  This keeps brighter stars as bright and sharp as we can make them, but also lets them grow larger in a way that corresponds with the blooming star images in real sensors and the real world.  But if the position of the star is exactly on the edge between two pixels, or the corner of 4 pixels, we are clearly "cheating" slightly on the location of the center of star images... in order to get a more realistic appearance.  Or so it would seem... that contributing only 1/4 as much light to four adjacent pixels would make the star appear different than other stars nearby... and even significantly different than it would appear if the star is to move half a pixel away... to a pixel center.  So maybe not spreading light from a star into adjacent pixels causes as much or more "swimming effects" than always drawing stars on pixel centers!  Hmmm.  Strange!


A related practical question is... how should the engine draw stars?  For 1-pixel diameter stars, we have the option to draw them as literal points... no textures, just RGB intensity on some 1 pixel.  A question... are there certain modes in which OpenGL would draw a 1-pixel diameter "simple point" over more than 1 pixel?  I would guess not, but that is only a guess (having very limited experiences drawing points, or with fancy OpenGL modes).  Do NEAREST and LINEAR mean anything to 1-pixel points?


For larger than 1-pixel diameter stars, we can't just draw simple OpenGL points.  One option that sounds attractive to me based upon reading various places is OpenGL "point sprites".  I'm guessing we don't want to adopt any form of free "mipmapping" that OpenGL might offer, partly because we only want sprite textures that are an odd number of pixels on a side (so the bright center always falls in the middle of "one central pixel", not across "four almost central pixels").  So I'll ignore mipmapping unless someone drags me back to this option for some reason.


I guess one question is... do we really want to have separate 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43, 45, 47, 49, 51, 53, 55, 57, 59, 61, 63 pixel diameter textures in order to support those 32 star sizes?  I don't even think OpenGL provides a way to do this (have more than 32 texture units).  All textures in a texture array must be the same size, don't they?  Maybe we can figure out a way to put all 32 (or more) sizes of star images on a single texture, and somehow manipulate the texture-coordinates to the appropriate offset and scale based upon the star brightness.


While I'm not 100% certain how to make this happen yet, my general feeling is to "do everything procedurally" in the fragment shader (starting with "point sprites" primitives).  Somehow the fragment shader needs to know at every invocation (at every pixel in the [point sprite] image) how bright is the star, and "where are we" in the [point sprite] image.  As I understand it, OpenGL automatically sends texture coordinates into every invocation of a fragment shader processing a point sprite to indicate precisely where in the n-by-n pixel diameter sprite we currently are.  If we also know how large the [point sprite] star image is, we should be able to generate an intensity from the texture coordinates with some simple function.  For example, perhaps something like:


float ss = (s * 2.0) - 1.0;
float tt = (t * 2.0) - 1.0;
float intensity = 1.000 - ((ss * ss) + (tt * tt));   // center == 1.000 : edges == 0.000


That's not the "response function" shape I'd like to achieve, but some math wizard needs to tell me a fast, efficient equation for a response function given those texture-coordinate inputs.  The shape I refer to is a bit like one period of a sine function, minimum to minimum with the maximum in the center... except more concentrated in the central region, falling off sooner, with a wide, low, gradually decaying outer region.  Whatever the intensity at each pixel, we then multiply that per-pixel intensity by the star RGB color to get the final pixel color.
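
One candidate with roughly that shape (my suggestion, not something settled): astronomers commonly fit star images with a Moffat profile, I(r) = I0 * (1 + r^2/alpha^2)^(-beta), which has exactly a concentrated core plus wide, slowly decaying wings.  A sketch in GLSL, with parameter values that are only guesses to be tuned:

// Hypothetical Moffat-style response function.  alpha sets the core width,
// beta sets how quickly the wide outer skirt decays; typical fitted values
// for real atmospheric star images are beta around 2.5 to 4.5.
float moffat(vec2 pointCoord, float alpha, float beta)
{
    vec2 d = (pointCoord * 2.0) - 1.0;        // map [0, 1] to [-1, +1]
    float r2 = dot(d, d);                     // squared distance from sprite center
    return pow(1.0 + (r2 / (alpha * alpha)), -beta);
}
// for example:  float intensity = moffat(gl_PointCoord.st, 0.35, 3.0);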


I guess we can't have "simple 1-pixel points" for faint stars, and "point sprites" for brighter stars, because all stars need to be the same primitive and processed with the same OpenGL "mode/state settings".  So I guess all stars need to be "point sprites" (or something).


A procedural approach like this is very attractive to me for several reasons.  First, no GPU texture-memory accesses are required.  With thousands of GPU cores making simultaneous texture fetches to draw millions of stars... ehhh... I'd rather avoid that many simultaneous texture fetches!  Second, it is trivial to add "twinkle" in a procedural routine by simply adding a line or two that tweaks the intensity with the output of a noise function.  Third, it is also fairly simple to add "seeing" (atmospheric turbulence) in a procedural routine.  Optimally this requires two parts: one part to tweak the size of the point-sprite star somewhat larger and smaller with a noise function as time passes, and another part to distort the brightness at various locations within the star image with noise-like functions.  It is also natural and convenient in procedural routines to support adjustable contrast and other image-processing settings (though this can be done whether the rest of the routine is procedural or not).
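
For instance, the twinkle part might be as little as this sketch (hash21() is a stand-in for whatever cheap GPU noise/hash function we adopt; starId and time are assumed inputs):

// Hypothetical twinkle: modulate each star's intensity a few percent,
// changing roughly 30 times per second.  starId is a per-star value from
// the vertex shader; time is a per-frame uniform in seconds.
uniform float time;
flat in float starId;

float hash21(vec2 p)        // tiny hash, good enough for twinkle
{
    p = fract(p * vec2(123.34, 456.21));
    p += dot(p, p + 45.32);
    return fract(p.x * p.y);
}

// inside main(), after computing the base intensity:
//     intensity *= 0.85 + 0.15 * hash21(vec2(starId, floor(time * 30.0)));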


Well, I guess I've spent more time than I wanted describing my currently favored approach, and less time asking for opinions about other approaches that the many 3D graphics geniuses here might already know or invent on the spot.  Nonetheless, hopefully the extensive description of what I'm doing also states clearly what my requirements are.  That's important, because most "obvious" approaches are not capable of satisfying my requirements.


Oh, one more thing I forgot to mention.  I assume the engine will compute pixel intensities in 32-bit floating point variables.  Exactly how the GPU ends up converting those 32-bit values to displayable intensities is a question I'd love for someone to explain to me.
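
My current understanding (someone correct me if wrong): with a conventional 8-bit framebuffer, the GPU simply clamps each float component to [0.0, 1.0] and quantizes it to 0..255 for the DAC.  If we accumulate into a floating-point render target instead, we can control the conversion ourselves in a final full-screen pass, something like this sketch (the "exposure" uniform is a hypothetical knob standing in for sensor sensitivity and integration time):

#version 330 core
// Hypothetical final pass: map accumulated floating-point intensities
// down to the displayable [0, 1] range, which the hardware then
// quantizes to the display's integer DAC range.
uniform sampler2D hdrAccum;     // floating-point accumulation texture
uniform float exposure;         // plays the role of sensitivity / integration time
in vec2 uv;
out vec4 fragColor;

void main()
{
    vec3 hdr = texture(hdrAccum, uv).rgb;
    vec3 ldr = vec3(1.0) - exp(-hdr * exposure);   // simple exposure curve, never exceeds 1.0
    fragColor = vec4(ldr, 1.0);
}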



Better approaches, anyone?


Are OpenGL "point sprites" efficient for this application?


Do you know any efficient "response functions" that might work for me?


Any comments about aliasing or "swimming effect" problems I might have?


Any brilliant ways to efficiently perform the sky-region culling process that I mentioned, or a better alternative?


Any gotchas that make my approach not work, not achieve my goals and requirements, or be "just too damn slow"?




[quote name='maxgpgpu' timestamp='1358757683' post='5023810']
Bottom line: I'm trying to figure out the best technique to create extremely high-quality real-time star fields for simulations and games.
[/quote]

No: you get either extremely high quality or realtime, not both.  Not only because you need to compromise for performance reasons, but because there is no time to appreciate the extreme accuracy of a realtime, in-game animation; it is going to be indistinguishable from approximations.

In a game, you presumably want realistic and/or good looking but not exact starfields; slowly rendered, very high quality displays make sense, for example, in a rather scientific star map.


[quote name='maxgpgpu' timestamp='1358757683' post='5023810']
I compiled a catalog of the brightest 1.2 billion stars, complete to nearly magnitude 21, a brightnesses range of roughly 500,000,000 to 1. The data includes position (micro-arc-seconds), spectral types, brightness in multiple colors (including 3 that correspond well enough to RGB), proper motion, parallax (distance), and so forth.

My goal is to be able to display accurate, realistic star backgrounds in simulations and games.
[/quote]

Your catalog is good source material to render skyboxes, but what's the use of star motion data?

[quote name='maxgpgpu' timestamp='1358757683' post='5023810']
To choose a specific case, let's say this bright star image is (blooms to) 64-pixels in diameter (as far as we can see). As the image of the star slowly moves closer and closer to the object on the display, first the leading edge of the 64-pixel diameter star image overlaps the object, then more and more of that 64-pixel star image overlaps the object... until...

At the instant when the exact center of the star image reaches the edge of the object, ALL light from the star vanishes... instantly.

That is how my engine must display such events. As long as the exact center of a star is covered by another (closer) object, no light from the star reaches our eyes or sensors, and our display shows no evidence of the star. And further, as long as the exact center of any star image is not covered by any other (closer) object, our display shows the entire circular image of the star, including 100% of the intensity of the star.

Remember the way this works, because it means certain common and/or easy techniques "just won't work properly".
[/quote]

You are stating that your engine is tragically unable to draw alpha-blended sprites.  Treating a 64-pixel-wide star as a point is wrong, and the star isn't going to remain 64 pixels wide when you zoom in or out.  What's the reason for these limitations?  Can't you fix your engine?  It might be the crux of your problems, since drawing sprites correctly would let you build and cache detailed textures, to be used as billboards, from a properly indexed star catalog.


You also seem to miss the fact that you need a floating point frame buffer and/or floating point textures to add up very small contributions from very many stars.


Until proven otherwise, I will continue to assume I can generate "awesome" night skies.  As for "not noticing" when there's lots of action happening, that is somewhat true; but in all but the most wild and crazy games the action often slows or stops, and the operator has time to pause and notice.  That's certainly true in any game I'd be personally interested in.  Also, I intend for many settings to be adjustable, even in real time, so the number of stars (for example) can be reduced to free up time for other aspects of the application.


I originally created the star catalog (and the galaxy, cluster, nebula and other catalogs) for astronomical telescopes.  In fact, the first instance was a 1.5-meter-aperture telescope that chases satellites and debris, and needs to compute the exact position of the moving object every frame (15 to 30 per second) and re-compute its orbit to refine tracking.  This is one reason the data is so complete; compiling such a large and complete catalog is so much effort that including as much information as possible, in the greatest possible detail and precision, makes the catalog more likely to be helpful for other applications.  In practice, at the observatory, a new catalog is generated every day for the next evening.  That process computes precise positions for each night by performing precession and parallax computations on the original catalog data.  Clearly there's no need for that in a typical game!  However, there is a need for it in many astronomy applications, for example computing the exact positions and orbits of asteroids and other unknown objects when they are first observed.  In fact, just recognizing that a tiny dot in your field of view is not a cataloged object usually requires an extensive star/object catalog.


Where did I say my engine is "tragically unable to draw alpha-blended sprites"?  Where did you get the impression every star is 64 pixels in diameter?


My current assumption is that the vertex shader will set the star size based upon star brightness and settings that are constant for the current frame (including zoom factor, sensor sensitivity, integration time, etc).  Didn't I say stars would be 1, 3, 5, 7 ... 57, 59, 61, 63 pixels in diameter depending upon their brightness and other settings?  Hopefully I explained that all sizes would be odd numbers so the center of the star is always in the middle of a pixel.
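
Roughly like this sketch (the brightness-to-size mapping is a placeholder I haven't settled on; note that GL_PROGRAM_POINT_SIZE must be enabled for gl_PointSize writes to take effect in desktop OpenGL):

#version 330 core
// Hypothetical vertex shader: map per-star brightness to an odd
// point-sprite diameter (1, 3, 5 ... 63).  "sensitivity" stands in for
// the per-frame settings (zoom factor, sensitivity, integration time).
layout(location = 0) in vec3 position;
layout(location = 1) in float brightness;
uniform mat4 viewProjection;
uniform float sensitivity;

void main()
{
    gl_Position = viewProjection * vec4(position, 1.0);
    // crude logarithmic mapping of brightness to a size index 0..31
    float sizeIndex = clamp(floor(log2(1.0 + brightness * sensitivity)), 0.0, 31.0);
    gl_PointSize = 1.0 + (2.0 * sizeIndex);   // always odd: 1, 3, 5 ... 63
}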


Yes, I understand that I should accumulate in floating-point (RGB intensity) variables.  However, when it finally comes time to display any brightness, the physical display has only a limited integer range of brightnesses (for each RGB color).  It is my assumption that I should stay aware of this through the entire process, for various reasons: for example, to avoid ending up with an entirely white sky, and to avoid spending GPU time computing stars that will contribute only 0.01 to 0.10 DAC units to the final displayed result (depending on how many stars end up on each pixel, on average).
