Showing results for tags '3D' in content posted in Graphics and GPU Programming.
Found 261 results

  1. This is a follow-up to a previous post. MrHallows had asked me to post the project, so I am doing it in a fresh thread so that I can get the help I most need. I have put the class in the main .cpp to simplify things for your debugging purposes. My error is: C1189 #error: OpenGL header already included, remove this include, glad already provides it. I tried adding #define GLFW_INCLUDE_NONE, and tried adding it as a preprocessor definition too. I also tried to change the #ifdef/#endif guards, but I just couldn't get it working. The code repository URL is: https://github.com/Joshei/GolfProjectRepo/tree/combine_sources/GOLFPROJ The branch is: combine_sources The commit ID is: a4eaf31 The files involved are: shader_class.cpp, glad.h, glew.h. glad1.cpp was also in my project; I removed it to try to solve this problem. Here is the description of the problem at hand: except for glColor3f and glRasterPos2i(10, 10), the code works without glew.h. When glew is added, there is only the C1189 error shown above (note that C1189 is raised at compile time, not at runtime). I could really use some exact help, you know, like: "remove the include for gl.h on lines 50, 65, and 80, then delete the code at line 80 that states..." I hope that this is not too much to ask for; I really want to win at OpenGL. If I can't get help, I could use a much larger file to display the test values, or maybe it's possible to write to an open file and view the written data as it's output. Thanks in advance, Josheir
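A minimal sketch of the include order that usually resolves C1189 in a glad + GLFW project, under the assumption that GLEW can be dropped entirely (glad and GLEW are alternative loaders, and glad already provides the GL declarations GLEW would; note also that legacy calls like glColor3f require glad to have been generated with the compatibility profile):

#define GLFW_INCLUDE_NONE   // stop GLFW from pulling in gl.h on its own
#include <glad/glad.h>      // glad must come before any other GL-related header
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit()) return -1;
    GLFWwindow* window = glfwCreateWindow(640, 480, "GolfProj", nullptr, nullptr);
    if (!window) return -1;
    glfwMakeContextCurrent(window);
    // glad loads every GL entry point GLEW would have loaded, so no glewInit().
    if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)) return -1;
    // ... render loop ...
    glfwTerminate();
    return 0;
}

The key point is that glew.h and glad.h both define the GL API, so every translation unit must see exactly one of them, and before glfw3.h.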
  2. I'm looking to create a small game engine, though my main focus is the renderer. I'm trying to decide which of these techniques I like better: deferred texturing or volume tiled forward shading (https://github.com/jpvanoosten/VolumeTiledForwardShading). Which would you choose, if not something else? Here are my current goals: I want to keep middleware to a minimum. I want to use either D3D12 or Vulkan; however, I understand D3D best, so that is where I'm currently siding. I want to design for today's high-end GPUs and not worry too much about compatibility, as I'm assuming this is going to take a long time anyway. I'm only interested in real-time ray tracing if/when it can be done without an RTX-enabled card. I want a PBR pipeline that DOES NOT INCLUDE METALNESS; I feel there are better ways of doing this (hint: I like cavity maps). I want dynamic resolution scaling; I know it's simply a form of super-sampling, but I haven't found many ideal sources that explain super-sampling in a way that I would understand. I don't want to use any static lighting; I have good reasons, which I'd be happy to explain. So I guess what I'm asking you fine people is: if time were not a concern, or money, what type of renderer would you write, and more importantly, WHY? Thank you for your time.
  3. Hi, I have a C++ Vulkan-based project using the Qt framework. QVulkanInstance and QVulkanWindow do a lot of things for me, like validation, but because Vulkan is such a low-level API I can't figure out how to troubleshoot Vulkan errors. I am trying to render terrain using tessellation shaders, learning from Sascha Willems' tutorial on tessellation rendering. I think I am setting some value for the render pass wrong in MapTile.cpp, but I am unable to find which one, because I don't know how to troubleshoot it.
What's the problem? The app freezes on the second end-draw call.
Why? QVulkanWindow: Device lost
Validation layers debug output:
qt.vulkan: Vulkan init (vulkan-1.dll)
qt.vulkan: Supported Vulkan instance layers: QVector(QVulkanLayer("VK_LAYER_NV_optimus" 1 1.1.84 "NVIDIA Optimus layer"), QVulkanLayer("VK_LAYER_RENDERDOC_Capture" 0 1.0.0 "Debugging capture layer for RenderDoc"), QVulkanLayer("VK_LAYER_VALVE_steam_overlay" 1 1.1.73 "Steam Overlay Layer"), QVulkanLayer("VK_LAYER_LUNARG_standard_validation" 1 1.0.82 "LunarG Standard Validation Layer"))
qt.vulkan: Supported Vulkan instance extensions: QVector(QVulkanExtension("VK_KHR_device_group_creation" 1), QVulkanExtension("VK_KHR_external_fence_capabilities" 1), QVulkanExtension("VK_KHR_external_memory_capabilities" 1), QVulkanExtension("VK_KHR_external_semaphore_capabilities" 1), QVulkanExtension("VK_KHR_get_physical_device_properties2" 1), QVulkanExtension("VK_KHR_get_surface_capabilities2" 1), QVulkanExtension("VK_KHR_surface" 25), QVulkanExtension("VK_KHR_win32_surface" 6), QVulkanExtension("VK_EXT_debug_report" 9), QVulkanExtension("VK_EXT_swapchain_colorspace" 3), QVulkanExtension("VK_NV_external_memory_capabilities" 1), QVulkanExtension("VK_EXT_debug_utils" 1))
qt.vulkan: Enabling Vulkan instance layers: ("VK_LAYER_LUNARG_standard_validation")
qt.vulkan: Enabling Vulkan instance extensions: ("VK_EXT_debug_report", "VK_KHR_surface", "VK_KHR_win32_surface")
qt.vulkan: QVulkanWindow init
qt.vulkan: 1 physical devices
qt.vulkan: Physical device [0]: name 'GeForce GT 650M' version 416.64.0
qt.vulkan: Using physical device [0]
qt.vulkan: queue family 0: flags=0xf count=16 supportsPresent=1
qt.vulkan: queue family 1: flags=0x4 count=1 supportsPresent=0
qt.vulkan: Using queue families: graphics = 0 present = 0
qt.vulkan: Supported device extensions: QVector(QVulkanExtension("VK_KHR_8bit_storage" 1), QVulkanExtension("VK_KHR_16bit_storage" 1), QVulkanExtension("VK_KHR_bind_memory2" 1), QVulkanExtension("VK_KHR_create_renderpass2" 1), QVulkanExtension("VK_KHR_dedicated_allocation" 3), QVulkanExtension("VK_KHR_descriptor_update_template" 1), QVulkanExtension("VK_KHR_device_group" 3), QVulkanExtension("VK_KHR_draw_indirect_count" 1), QVulkanExtension("VK_KHR_driver_properties" 1), QVulkanExtension("VK_KHR_external_fence" 1), QVulkanExtension("VK_KHR_external_fence_win32" 1), QVulkanExtension("VK_KHR_external_memory" 1), QVulkanExtension("VK_KHR_external_memory_win32" 1), QVulkanExtension("VK_KHR_external_semaphore" 1), QVulkanExtension("VK_KHR_external_semaphore_win32" 1), QVulkanExtension("VK_KHR_get_memory_requirements2" 1), QVulkanExtension("VK_KHR_image_format_list" 1), QVulkanExtension("VK_KHR_maintenance1" 2), QVulkanExtension("VK_KHR_maintenance2" 1), QVulkanExtension("VK_KHR_maintenance3" 1), QVulkanExtension("VK_KHR_multiview" 1), QVulkanExtension("VK_KHR_push_descriptor" 2), QVulkanExtension("VK_KHR_relaxed_block_layout" 1), QVulkanExtension("VK_KHR_sampler_mirror_clamp_to_edge" 1), QVulkanExtension("VK_KHR_sampler_ycbcr_conversion" 1), QVulkanExtension("VK_KHR_shader_draw_parameters" 1), QVulkanExtension("VK_KHR_storage_buffer_storage_class" 1), QVulkanExtension("VK_KHR_swapchain" 70), QVulkanExtension("VK_KHR_variable_pointers" 1), QVulkanExtension("VK_KHR_win32_keyed_mutex" 1), QVulkanExtension("VK_EXT_conditional_rendering" 1), QVulkanExtension("VK_EXT_depth_range_unrestricted" 1), QVulkanExtension("VK_EXT_descriptor_indexing" 2), QVulkanExtension("VK_EXT_discard_rectangles" 1), QVulkanExtension("VK_EXT_hdr_metadata" 1), QVulkanExtension("VK_EXT_inline_uniform_block" 1), QVulkanExtension("VK_EXT_shader_subgroup_ballot" 1), QVulkanExtension("VK_EXT_shader_subgroup_vote" 1), QVulkanExtension("VK_EXT_vertex_attribute_divisor" 3), QVulkanExtension("VK_NV_dedicated_allocation" 1), QVulkanExtension("VK_NV_device_diagnostic_checkpoints" 2), QVulkanExtension("VK_NV_external_memory" 1), QVulkanExtension("VK_NV_external_memory_win32" 1), QVulkanExtension("VK_NV_shader_subgroup_partitioned" 1), QVulkanExtension("VK_NV_win32_keyed_mutex" 1), QVulkanExtension("VK_NVX_device_generated_commands" 3), QVulkanExtension("VK_NVX_multiview_per_view_attributes" 1))
qt.vulkan: Enabling device extensions: QVector(VK_KHR_swapchain)
qt.vulkan: memtype 0: flags=0x0
qt.vulkan: memtype 1: flags=0x0
qt.vulkan: memtype 2: flags=0x0
qt.vulkan: memtype 3: flags=0x0
qt.vulkan: memtype 4: flags=0x0
qt.vulkan: memtype 5: flags=0x0
qt.vulkan: memtype 6: flags=0x0
qt.vulkan: memtype 7: flags=0x1
qt.vulkan: memtype 8: flags=0x1
qt.vulkan: memtype 9: flags=0x6
qt.vulkan: memtype 10: flags=0xe
qt.vulkan: Picked memtype 10 for host visible memory
qt.vulkan: Picked memtype 7 for device local memory
qt.vulkan: Color format: 44 Depth-stencil format: 129
qt.vulkan: Creating new swap chain of 2 buffers, size 600x370
qt.vulkan: Actual swap chain buffer count: 2 (supportsReadback=1)
qt.vulkan: Allocating 1027072 bytes for transient image (memtype 8)
qt.vulkan: Creating new swap chain of 2 buffers, size 600x368
qt.vulkan: Releasing swapchain
qt.vulkan: Actual swap chain buffer count: 2 (supportsReadback=1)
qt.vulkan: Allocating 1027072 bytes for transient image (memtype 8)
QVulkanWindow: Device lost
qt.vulkan: Releasing all resources due to device lost
qt.vulkan: Releasing swapchain
I am not so sure this debug output helps somehow :( I don't want you to debug it for me; I just want to learn how I should debug it and find where the problem is located. Could you give me a guide, please?
Source code
Source code rendering just a few vertices (working)
Differences between the links are:
  • Moved from the Qt math libraries to glm
  • Moved from QImage to gli for the Texture class
  • Added tessellation shaders
  • Disabled window sampling
  • Rendering terrain using a heightmap and texture array (added normals and UV)
Thanks
  4. Hey guys! In my 3D terrain generator, I calculate simple texture coordinates based on the x, z coordinates (y being up/down), as if the terrain were flat: a simple planar projection. That of course introduces texture stretching on sloped parts of the terrain. Trying to solve that, I first implemented tri-planar mapping (like this), but it is really heavy on the pixel shader, and the results look very weird in some cases. Then I found another technique, which looks better and, most importantly, does the heavy work in a preprocess: generating an indirection map of the terrain, which is then used in the pixel shader to offset the UV coords: Indirection Mapping for Quasi-Conformal Relief Texturing. Has anyone ever implemented this solution and is willing to share some code for the indirection map generation (spring grid relaxation)? I couldn't find any implementation or sample, and I am really not sure how to go about it. Thanks!
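I can't speak for the paper's exact algorithm, but spring grid relaxation in general is straightforward to prototype. A generic Gauss-Seidel sketch (all names and the rest-length choice are illustrative assumptions, not the paper's method): treat each grid node as a particle, connect axis-aligned neighbors with springs whose rest lengths are proportional to the world-space surface distance between the nodes, and iterate until the grid settles, so the UV grid stretches where the terrain is steep.

#include <vector>
#include <cmath>

struct V2 { float x, y; };

// uv: W*H grid of UV sample positions, relaxed in place.
// worldDist(a, b): world-space surface distance between grid nodes a and b,
// rescaled so that flat terrain gives the grid's natural spacing.
void RelaxGrid(std::vector<V2>& uv, int W, int H,
               float (*worldDist)(int a, int b),
               int iterations, float stiffness = 0.1f)
{
    const int dx[4] = { 1, -1, 0, 0 };
    const int dy[4] = { 0, 0, 1, -1 };
    for (int it = 0; it < iterations; ++it)
    {
        for (int y = 1; y < H - 1; ++y)
        for (int x = 1; x < W - 1; ++x)
        {
            const int i = y * W + x;
            V2 force = { 0.0f, 0.0f };
            for (int k = 0; k < 4; ++k)
            {
                const int j = (y + dy[k]) * W + (x + dx[k]);
                V2 d = { uv[j].x - uv[i].x, uv[j].y - uv[i].y };
                float len = std::sqrt(d.x * d.x + d.y * d.y);
                if (len < 1e-6f) continue;
                float rest = worldDist(i, j);             // longer over slopes
                float s = stiffness * (len - rest) / len; // Hooke's law
                force.x += d.x * s;
                force.y += d.y * s;
            }
            uv[i].x += force.x; // boundary nodes stay pinned
            uv[i].y += force.y;
        }
    }
}

The relaxed grid would then be baked into the indirection texture, with each texel storing the offset from its original UV to its relaxed position.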
  5. Hello guys! So, I'm currently working on our senior game-dev project, and I'm tasked with implementing animations in DirectX. It's been a few weeks of debugging and I've gotten pretty far; only a few quirks are left to fix, but I can't figure this one out. What happens is that when a rotation becomes too big on a particular joint, it completely flips around. This seems to be an issue in the FBX data extraction, and I've isolated it to the key animation data. First off, here's what the animation looks like with a small rotation: Small rotation in Maya / Small rotation in Engine. Looks as expected! (Other than the flipped direction, which I'm not too concerned about at this point; however, if you think this is part of the issue, please let me know!) Now, here's an animation with a big rotation (+360 around Y, then back to 0): Big rotation in Maya / Big rotation in Engine. As you can see, the animation completely flips here and there. Here's how the local animation data for each joint is retrieved:

while (currentTime < endTime)
{
    FbxTime takeTime;
    takeTime.SetSecondDouble(currentTime);

    // #calculateLocalTransform
    FbxAMatrix matAbsoluteTransform = GetAbsoluteTransformFromCurrentTake(skeleton->GetNode(), takeTime);
    FbxAMatrix matParentAbsoluteTransform = GetAbsoluteTransformFromCurrentTake(skeleton->GetNode()->GetParent(), takeTime);
    FbxAMatrix matInvParentAbsoluteTransform = matParentAbsoluteTransform.Inverse();
    FbxAMatrix matTransform = matInvParentAbsoluteTransform * matAbsoluteTransform;

    // do stuff with matTransform
}

// GetAbsoluteTransformFromCurrentTake() returns:
// pNode->GetScene()->GetAnimationEvaluator()->GetNodeGlobalTransform(pNode, time);

This seems to work well, but on the keys where the flip happens it returns a matrix where the non-animated rotations (Y and Z in this case) have a value of 180 rather than 0, and the Y value also starts "moving" in the opposite direction. From the converter we save out the matrix components as T, R, S (R in Euler), and during import in the engine the rotation is converted to a quaternion for interpolation. I'm not sure what else I can share that might help give a clue as to what the issue is, but if you need anything to help me, just let me know! Any help/ideas are very much appreciated! ❤️ E. Finoli
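For what it's worth, the symptom matches a well-known representation ambiguity rather than corrupt data: the Euler triple (180, y - 180, 180) describes exactly the same rotation as (0, y, 0), and a matrix-to-Euler decomposition is free to return either one, so nothing is actually flipped until the values are interpolated. Since the rotations are converted to quaternions on import anyway, one common fix (a sketch under that assumption; Quat is a stand-in for the engine's quaternion type) is to enforce sign continuity between consecutive keys before interpolating:

#include <vector>

struct Quat { float x, y, z, w; }; // stand-in for the engine's quaternion type

static float Dot(const Quat& a, const Quat& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
}

// q and -q encode the same rotation; give every key the sign closest to its
// predecessor so interpolation takes the short way and never "flips".
static void MakeKeysContinuous(std::vector<Quat>& keys)
{
    for (size_t i = 1; i < keys.size(); ++i)
    {
        if (Dot(keys[i - 1], keys[i]) < 0.0f)
        {
            keys[i].x = -keys[i].x;
            keys[i].y = -keys[i].y;
            keys[i].z = -keys[i].z;
            keys[i].w = -keys[i].w;
        }
    }
}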
  6. So as I am toying around with lighting shaders, great-looking results can be achieved. However, I struggle to fully grasp the idea behind it; namely, the microfacet BRDF doesn't line up with how I intuitively understand the process. Expectedly, the perceived brightness on a surface is highest where the normal aligns with the half vector (NdotH), but this gets increased two-fold by the denominator as the L and V angles diverge. The implicit geometry term would cancel this out, but something like Smith-Schlick with a low roughness input would not do much in that department, making grazing angles very bright despite there being no Fresnel involved. The multiplication of the whole BRDF with NdotL then only partially cancels it out. Am I missing something, or should a relatively smooth metallic surface indeed have brighter highlights when staring at it with a punctual light near the horizon of said surface?
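For readers following along, the microfacet BRDF under discussion is the standard Cook-Torrance form (stated here from general knowledge, not quoted from the post):

f_r(\mathbf{l},\mathbf{v}) = \frac{D(\mathbf{h})\, F(\mathbf{v},\mathbf{h})\, G(\mathbf{l},\mathbf{v},\mathbf{h})}{4\,(\mathbf{n}\cdot\mathbf{l})\,(\mathbf{n}\cdot\mathbf{v})},
\qquad
G_{\text{implicit}} = (\mathbf{n}\cdot\mathbf{l})\,(\mathbf{n}\cdot\mathbf{v})

With the implicit term, G cancels the denominator exactly, which is the cancellation the post refers to; with Smith-Schlick at low roughness, G stays close to 1, so the 1/((n·l)(n·v)) growth toward grazing angles survives and only the final multiplication by n·l tempers it.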
  7. Hi, guys. I need to project a picture from a projector (maybe a camera) onto some meshes and save the result into the mesh's texture according to the mesh's unfolded UVs. It's just like a light map, which encodes the lighting info into the texture, except here it's the projected info instead. The following picture is an example (but it just projects, without writing into the texture). I noticed Blender actually has this function, which allows you to draw a texture onto a mesh, but I have no idea how to save those projected pixels into the mesh's texture. I think maybe I could finish this function if I had a better understanding of how light maps are produced. Any advice or materials that can help me out (any idea, any platform, or reference)?
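Light-map-style bakers usually work by rasterizing the mesh into the target texture using each vertex's unwrap UV as the output position, so every texel knows which surface point it belongs to; the pixel stage then only needs the projector lookup. A sketch of that lookup with assumed minimal math types (Vec2/Vec3/Vec4/Mat4 and Mul are illustrative; projViewProj is the projector's view-projection matrix applied to a column vector):

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

// M * v for a column vector.
static Vec4 Mul(const Mat4& M, const Vec4& v)
{
    return { M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
             M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
             M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
             M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w };
}

// Where a world-space surface point lands in the projector's image.
Vec2 ProjectorUV(const Vec3& worldPos, const Mat4& projViewProj)
{
    Vec4 clip = Mul(projViewProj, { worldPos.x, worldPos.y, worldPos.z, 1.0f });
    float invW = 1.0f / clip.w;                      // perspective divide
    float u = clip.x * invW * 0.5f + 0.5f;           // NDC [-1,1] -> [0,1]
    float v = 1.0f - (clip.y * invW * 0.5f + 0.5f);  // flip Y for texture space
    return { u, v };
}

Baking then amounts to: for each texel covered by the UV-space rasterization, recover its world position, call ProjectorUV, sample the projector image, and write the sample into the texture (plus a depth test against the projector if occlusion matters).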
  8. Hi guys, I wanted my roads to look a little more bumpy on my terrain, so I added bump mapping based on what I had working for the rest of the models. It works and looks nice enough (I'll need to fiddle with the normal map to get the pebbles looking just the right amount of sharp), but a problem cropped up that hadn't occurred to me: I don't want it applied to the whole terrain, just the roads. The road texture is simply added using a blend map, with green for grass, red for rock, and blue for road; so the more blue there is, the more the road texture is used. I don't want the other textures bump mapped. I mean, I guess I could, but for now I'd rather not. So the code is something like:

float3 normalFromMap = PSIn.Normal;
if (BumpMapping)
{
    // read the normal from the normal map
    normalFromMap = tex2D(RoadNormalMapSampler, PSIn.TexCoord * 4);
    // transform to [-1,1]
    normalFromMap = 2.0f * normalFromMap - 1.0f;
    // transform into world space
    normalFromMap = mul(normalFromMap, PSIn.WorldToTangentSpace);
}
else
{
    // transform to [-1,1]
    normalFromMap = 2.0f * normalFromMap - 1.0f;
}
// normalize the result
normalFromMap = normalize(normalFromMap);
// output the normal, in [0,1] space
Output.Normal.rgb = 0.5f * (normalFromMap + 1.0f);

I tried checking if the blend map's blue component was > 0 and then using the bump mapping, but that just makes a nasty line where it switches between using the plain vertex normal and the normal map. How do I blend between the two methods? Thanks
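A common way out (my suggestion, not from the thread): don't branch at all; compute both normals every pixel and blend them by the blue blend-map weight before normalizing, so the transition is exactly as soft as the blend map itself:

\mathbf{n} = \operatorname{normalize}\big((1 - b)\,\mathbf{n}_{\text{vertex}} + b\,\mathbf{n}_{\text{map}}\big), \qquad b = \text{blendMap.b}

If the transition still reads as too abrupt or too wide, remapping b with a smoothstep before the lerp tightens the blend region.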
  9. I'm trying to add some details like grass, rocks, trees, etc. to my little procedurally generated planet. The meshes for the terrain are created from a spherified cube which is split into chunks (chunked LOD). To do this, I've written a geometry shader that takes a mesh as input and uses its vertex positions as locations where the patches of grass will be placed (as textured quads). For an infinite flat world (not spherical) I'd use the terrain mesh as input to the geometry shader, but I've found that this won't work well on a sphere, since the vertex density is not homogeneous across the surface. So the main question would be: how do I create a point cloud for each terrain chunk whose points are evenly distributed across the chunk? Note: I've seen some examples where these points are calculated by intersecting a massive rain of totally random perpendicular rays from above... but I found this solution overkill, to say the least. Another related question would be: is there something better/faster than the geometry shader approach, maybe using compute shaders and instancing?
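One possibility for the point cloud (a sketch under the assumption that each chunk is an axis-aligned rectangle on a unit-cube face before spherification): sample the rectangle uniformly, push the samples onto the sphere, and rejection-sample against the area distortion of the normalization map, which scales as 1/|p|^3 for a face point p. Seeding the generator per chunk keeps the points stable across LOD switches.

#include <random>
#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };

// Roughly uniform surface points on the spherical patch that the rectangle
// [u0,u1]x[v0,v1] of the +Z unit-cube face maps to.
std::vector<Vec3> ScatterOnChunk(float u0, float u1, float v0, float v1,
                                 int count, unsigned chunkSeed)
{
    std::mt19937 rng(chunkSeed);
    std::uniform_real_distribution<float> U(0.0f, 1.0f);
    std::vector<Vec3> pts;
    while ((int)pts.size() < count)
    {
        float x = u0 + (u1 - u0) * U(rng);
        float y = v0 + (v1 - v0) * U(rng);
        float len = std::sqrt(x * x + y * y + 1.0f);  // |p| for p = (x, y, 1)
        if (U(rng) < 1.0f / (len * len * len))        // correct cube->sphere density
            pts.push_back({ x / len, y / len, 1.0f / len });
    }
    return pts;
}

This also composes with the second question: upload the per-chunk points once and draw the grass with instancing instead of a geometry shader, which tends to be faster on most hardware.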
  10. Hello, I am a university student. This year I am going to write a bachelor thesis about a Vulkan app that uses terrain to render real places based on e.g. Google Maps data. I played World of Warcraft for 10 years, and I did some research about its terrain rendering. It renders the map as a grid of tiles, where each tile has 4 textures available to paint with (now 8, since the Warlords of Draenor expansion). However, I found an issue with this implementation: gaps between tiles. Is there any technique that solves this problem? I read on Stack Overflow that people's only solution was to use a smoothing tool and fix it manually. Main questions: is this terrain rendering technique obsolete? Is there a newer technique that replaces rendering a large map as small tiles? Should I try to implement rendering terrain as a grid of tiles, or should I use some modern technique (and can you tell me which is modern for you)? If I should implement the terrain as one large map to prevent gaps between tiles, how are textures applied to such a large map? Thanks for any advice.
  11. We have an engine based on Direct3D11 that uses ID3D11Device::CreateTexture2D to create its textures, passing in whatever format we read from the DDS file header. We also have a previous version of our engine that uses the DX9 fixed-function bump map feature for bump maps. This feature takes bump map textures in U8V8 format as input, but it also takes textures in DXT5 and A8R8G8B8 and converts them into U8V8 (using D3DXCreateTextureFromFileInMemoryEx in d3dx9). Our current D3D11 engine handles the U8V8 textures just fine (I think it feeds them to CreateTexture2D as DXGI_FORMAT_R8G8_TYPELESS) and has some shader code that emulates the fixed-function bump map feature without problems. But now we want to add support for the DXT5 and A8R8G8B8 bump maps. Does anyone know where I can find code for Direct3D11 (or just plain code with no dependence on specific graphics APIs) that can convert the DXT5 or A8R8G8B8 texture data into U8V8 data in the same way as D3DXCreateTextureFromFileInMemoryEx and the other D3DX9 functions would do? (Someone out there must have written some code to convert between different texture formats, I am sure; I just can't find it.)
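For the uncompressed case, my understanding of the conversion (an assumption worth validating against D3DX9 reference output) is a plain bias from unsigned to signed: U comes from red and V from green, shifted from [0, 255] to [-128, 127]. DXT5 input would be block-decompressed to RGBA first and then run through the same per-pixel step. A sketch:

#include <cstdint>
#include <vector>

// src: one 0xAARRGGBB uint32 per pixel; returns 2 bytes per pixel (U, V),
// suitable for something like DXGI_FORMAT_R8G8_SNORM in D3D11.
std::vector<uint8_t> ConvertARGBToU8V8(const uint32_t* src, size_t pixelCount)
{
    std::vector<uint8_t> dst(pixelCount * 2);
    for (size_t i = 0; i < pixelCount; ++i)
    {
        uint8_t r = (src[i] >> 16) & 0xFF;    // U lives in red
        uint8_t g = (src[i] >> 8) & 0xFF;     // V lives in green
        dst[i * 2 + 0] = (uint8_t)(r - 128);  // unsigned bias -> two's complement
        dst[i * 2 + 1] = (uint8_t)(g - 128);
    }
    return dst;
}

For the DXT5 decompression itself, DirectXTex's Decompress() is a maintained, mostly API-agnostic implementation you can lift rather than writing a BC3 decoder from scratch.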
  12. Hello all. I am not sure if this topic is programming or more visual arts, but I am trying to figure out the texture image format of a game that is soon to be discontinued, in order not to lose the data once it is shut down, and I was hoping that, with what I have figured out so far, maybe someone recognizes a pattern they know an algorithm for. The file format itself is actually a container consisting of a header with the width, height, depth, etc., as well as data for the mip levels. There are over 20 different possible formats that can be indicated in the header; many of them are quite simple, like uncompressed ARGB, single-channel R, DXTn, and cube textures. However, I am struggling to understand how one of them could be interpreted. I have figured out the following things so far: In the game the textures are presented as DXT5, so I guess it's a compression over DXT (crunch?). It is variable rate; you can see massive differences in size between two images of the same dimensions. Only that format uses additional fields in the header; the most interesting part is 3 uint32 values that seem to actually be 4 values of 24 bits each. There are only a few patterns for these 4 values I have seen so far. One is 0x00004B, 0x00004B, 0x00004B, 0x00004B. Others are 0x00005C, 0x00005C, 0x00004B, 0x00004B and similar. Less often there is 0x00005C, 0x00005C, 0xFF014B, 0x00004B. 1x1 mip layers are usually 6 bytes (padded to 16 bytes), but for the last pattern above (with 0xFF014B) it's only 4 bytes. When comparing various 1x1 mip layers, there is no common pattern except that they are usually 6 bytes. I was thinking that it could be crunch over DXT5, as mentioned above, but I am not sure if that somehow fits the four 3-byte values I mentioned. It does not seem like the individual layers contain any additional information like a codebook for colors, dictionaries, or anything like that. Are there maybe any other ideas about what it could be? Thank you very much in advance, Cromon
  13. I've learned that triangle clipping in the rasterization process usually uses the Sutherland–Hodgman algorithm. I also found a technique called "guard-band clipping". I'm writing a software rasterizer, so I want to know which technique GPUs actually use; I want to implement it for study. Thanks! Update: what's the proper algorithm for triangulating the clipped polygon?
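My general understanding (hardware details vary by vendor and are mostly undocumented, so treat this as a sketch of the common software model): with a guard band, a rasterizer only needs to geometrically clip against the near plane (and w > 0); triangles poking out of the other frustum planes are handled by the guard band plus scissoring. A Sutherland-Hodgman step against the D3D-style near plane z = 0 in homogeneous clip space looks like this:

#include <vector>

struct Vec4 { float x, y, z, w; };

// Clip a convex polygon against z >= 0 in clip space (D3D convention;
// use z + w >= 0 for the OpenGL convention).
std::vector<Vec4> ClipNear(const std::vector<Vec4>& poly)
{
    std::vector<Vec4> out;
    const size_t n = poly.size();
    for (size_t i = 0; i < n; ++i)
    {
        const Vec4& a = poly[i];
        const Vec4& b = poly[(i + 1) % n];
        const bool inA = a.z >= 0.0f;
        const bool inB = b.z >= 0.0f;
        if (inA)
            out.push_back(a);
        if (inA != inB) // edge crosses the plane: emit the intersection
        {
            float t = a.z / (a.z - b.z); // parameter where z hits 0
            out.push_back({ a.x + t * (b.x - a.x),
                            a.y + t * (b.y - a.y),
                            0.0f,
                            a.w + t * (b.w - a.w) });
        }
    }
    return out;
}

As for the update: the clipped polygon is always convex, so a simple triangle fan (v0, vi, vi+1) is the standard triangulation.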
  14. Hello! I would like to introduce Diligent Engine, a project that I've recently been working on. Diligent Engine is a light-weight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front end for all supported platforms and provides interoperability with the underlying native API. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin, or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub.

Features:
  • True cross-platform: the exact same client code for all supported platforms and rendering backends. No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID), and no #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL). The exact same HLSL shaders run on all platforms and all backends.
  • Modular design: components are clearly separated logically and physically and can be used as needed. Only take what you need for your project (don't want to keep samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule). No 15,000-line source files.
  • Clear object-based interface.
  • No global states.

Key graphics features:
  • Automatic shader resource binding designed to leverage next-generation rendering APIs
  • Multithreaded command buffer generation: 50,000 draw calls at 300 fps with the D3D12 backend
  • Descriptor, memory and resource state management
  • Modern C++ features to make code fast and reliable

The following platforms and low-level APIs are currently supported:
  • Windows Desktop: Direct3D11, Direct3D12, OpenGL
  • Universal Windows: Direct3D11, Direct3D12
  • Linux: OpenGL
  • Android: OpenGLES
  • MacOS: OpenGL
  • iOS: OpenGLES

API Basics

Initialization

The engine can perform initialization of the API or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:

#include "RenderDeviceFactoryD3D12.h"
using namespace Diligent;

// ...
GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
// Load the dll and import the GetEngineFactoryD3D12() function
LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
auto *pFactoryD3D12 = GetEngineFactoryD3D12();

EngineD3D12Attribs EngD3D12Attribs;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
EngD3D12Attribs.NumCommandsToFlushCmdList = 64;

RefCntAutoPtr<IRenderDevice> pRenderDevice;
RefCntAutoPtr<IDeviceContext> pImmediateContext;
SwapChainDesc SwapChainDesc;
RefCntAutoPtr<ISwapChain> pSwapChain;
pFactoryD3D12->CreateDeviceAndContextsD3D12(EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0);
pFactoryD3D12->CreateSwapChainD3D12(pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain);

Creating Resources

Device resources are created by the render device.
The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, you need to populate the BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:

BufferDesc BuffDesc;
BuffDesc.Name = "Uniform buffer";
BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
BuffDesc.Usage = USAGE_DYNAMIC;
BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
m_pDevice->CreateBuffer(BuffDesc, BufferData(), &m_pConstantBuffer);

Similarly, to create a texture, populate the TextureDesc structure and call IRenderDevice::CreateTexture() as in the following example:

TextureDesc TexDesc;
TexDesc.Name = "My texture 2D";
TexDesc.Type = TEXTURE_TYPE_2D;
TexDesc.Width = 1024;
TexDesc.Height = 1024;
TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
TexDesc.Usage = USAGE_DEFAULT;
TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
m_pRenderDevice->CreateTexture(TexDesc, TextureData(), &m_pTestTex);

Initializing Pipeline State

Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline: one big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).

Creating Shaders

To create a shader, populate the ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:
  • SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes.
  • SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See the shader converter for details.
  • SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL-to-HLSL converter.

To allow grouping of resources based on the expected frequency of change, Diligent Engine introduces a classification of shader variables:
  • Static variables (SHADER_VARIABLE_TYPE_STATIC) are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attribute constant buffers.
  • Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change at per-material frequency. Examples may include diffuse textures, normal maps, etc.
  • Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

This post describes the resource binding model in Diligent Engine.
The following is an example of shader initialization:

ShaderCreationAttribs Attrs;
Attrs.Desc.Name = "MyPixelShader";
Attrs.FilePath = "MyShaderFile.fx";
Attrs.SearchDirectories = "shaders;shaders\\inc;";
Attrs.EntryPoint = "MyPixelShader";
Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;

BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

ShaderVariableDesc ShaderVars[] =
{
    {"g_StaticTexture",  SHADER_VARIABLE_TYPE_STATIC},
    {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
    {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
};
Attrs.Desc.VariableDesc = ShaderVars;
Attrs.Desc.NumVariables = _countof(ShaderVars);
Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

StaticSamplerDesc StaticSampler;
StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
StaticSampler.TextureName = "g_MutableTexture";
Attrs.Desc.NumStaticSamplers = 1;
Attrs.Desc.StaticSamplers = &StaticSampler;

ShaderMacroHelper Macros;
Macros.AddShaderMacro("USE_SHADOWS", 1);
Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
Macros.Finalize();
Attrs.Macros = Macros;

RefCntAutoPtr<IShader> pShader;
m_pDevice->CreateShader(Attrs, &pShader);

Creating the Pipeline State Object

To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines the pipeline specifics, such as whether the pipeline is a compute pipeline, and the number and format of render targets as well as the depth-stencil format:

// This is a graphics pipeline
PSODesc.IsComputePipeline = false;
PSODesc.GraphicsPipeline.NumRenderTargets = 1;
PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;

The structure also defines the depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, the rasterizer state can be defined as in the code snippet below:

// Init rasterizer state
RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
RasterizerDesc.FillMode = FILL_MODE_SOLID;
RasterizerDesc.CullMode = CULL_MODE_NONE;
RasterizerDesc.FrontCounterClockwise = True;
RasterizerDesc.ScissorEnable = True;
//RSDesc.MultisampleEnable = false; // do not allow msaa (fonts would be degraded)
RasterizerDesc.AntialiasedLineEnable = False;

When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:

m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

Binding Shader Resources

Shader resource binding in Diligent Engine is based on grouping variables into 3 different groups (static, mutable and dynamic). Static variables are expected to be set only once; they may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attribute constant buffers.
They are bound directly to the shader object:

PixelShader->GetShaderVariable("g_tex2DShadowMap")->Set(pShadowMapSRV);

Mutable and dynamic variables are bound via a new object called a Shader Resource Binding (SRB), which is created by the pipeline state:

m_pPSO->CreateShaderResourceBinding(&m_pSRB);

Dynamic and mutable resources are then bound through the SRB object:

m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

The difference between mutable and dynamic resources is that mutable ones can only be set once for every instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as this may affect performance: static variables are generally the most efficient, followed by mutable, while dynamic variables are the most expensive from a performance point of view. This post explains shader resource binding in more detail.

Setting the Pipeline State and Invoking Draw Commands

Before any draw command can be invoked, all required vertex and index buffers, as well as the pipeline state, should be bound to the device context:

// Clear render target
const float zero[4] = {0, 0, 0, 0};
m_pContext->ClearRenderTarget(nullptr, zero);

// Set vertex and index buffers
IBuffer *buffer[] = {m_pVertexBuffer};
Uint32 offsets[] = {0};
Uint32 strides[] = {sizeof(MyVertex)};
m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
m_pContext->SetPipelineState(m_pPSO);

Also, all shader resources must be committed to the device context:

m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() can be used to execute a compute command. Note that for a draw command a graphics pipeline must be bound, and for a dispatch command a compute pipeline must be bound. Draw() takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced, or indirect, etc.). For example:

DrawAttribs attrs;
attrs.IsIndexed = true;
attrs.IndexType = VT_UINT16;
attrs.NumIndices = 36;
attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
pContext->Draw(attrs);

Tutorials and Samples

The GitHub repository contains a number of tutorials and sample applications that demonstrate the API usage.
  • Tutorial 01 - Hello Triangle: shows how to render a simple triangle using the Diligent Engine API.
  • Tutorial 02 - Cube: demonstrates how to render an actual 3D object, a cube. It shows how to load shaders from files and create and use vertex, index and uniform buffers.
  • Tutorial 03 - Texturing: demonstrates how to apply a texture to a 3D object. It shows how to load a texture from file, create a shader resource binding object, and sample a texture in the shader.
  • Tutorial 04 - Instancing: demonstrates how to use instancing to render multiple copies of one object using a unique transformation matrix for every copy.
  • Tutorial 05 - Texture Array: demonstrates how to combine instancing with texture arrays to use a unique texture for every instance.
  • Tutorial 06 - Multithreading: shows how to generate command lists in parallel from multiple threads.
  • Tutorial 07 - Geometry Shader: shows how to use a geometry shader to render a smooth wireframe.
  • Tutorial 08 - Tessellation: shows how to use hardware tessellation to implement a simple adaptive terrain rendering algorithm.
  • Tutorial 09 - Quads: shows how to render multiple 2D quads, frequently switching textures and blend modes.
  • The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface.
  • The Atmospheric Scattering sample is a more advanced example. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc.

The repository also includes the Asteroids performance benchmark, based on the demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

Integration with Unity

Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. The GhostCubePlugin sample shows how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.
  15. Hi all! I'm trying to implement the technique Light Indexed Deferred Rendering. I modified the original demo:
1) Removed some UI elements.
2) Removed the view-space light calculation.
3) I fill the light indices during startup.
4) Optional: I tried to use a UBO instead of Texture1D (uncomment //#define USE_UBO).
My modified version of the demo. My implementation details: I use constant buffers instead of Texture1D for storing the light source information, and instead of OpenGL I use Direct3D11. My implementation is divided into the following parts:

1) Packing of the light indices for each light during startup:

void LightManager::LightManagerImpl::FillLightIndices()
{
    int n = static_cast<int>(lights.size());
    for (int lightIndex = n - 1; lightIndex >= 0; --lightIndex)
    {
        Vector4D& OutColor = lightIndices.push_back();
        // Set the light index color
        ubyte convertColor = static_cast<ubyte>(lightIndex + 1);
        ubyte redBit   = (convertColor & (0x3 << 0)) << 6;
        ubyte greenBit = (convertColor & (0x3 << 2)) << 4;
        ubyte blueBit  = (convertColor & (0x3 << 4)) << 2;
        ubyte alphaBit = (convertColor & (0x3 << 6)) << 0;
        OutColor = Vector4D(redBit, greenBit, blueBit, alphaBit);
        const float divisor = 255.0f;
        OutColor /= divisor;
    }
}

2) Optional/test implementation: updating the light positions (animation).

3) Rendering the light source geometry into an RGBA render target (the light sources buffer) using 2 shaders from the demo.

Pixel shader:

uniform float4 LightIndex : register(c0);

struct PS
{
    float4 position : POSITION;
};

float4 psMain(in PS ps) : COLOR
{
    return LightIndex;
};

Vertex shader:

uniform float4x4 ViewProjMatrix : register(c0);
uniform float4 LightData : register(c4);

struct PS
{
    float4 position : POSITION;
};

PS vsMain(in float4 position : POSITION)
{
    PS Out;
    Out.position = mul(float4(LightData.xyz + position.xyz * LightData.w, 1.0f), ViewProjMatrix);
    return Out;
}

These shaders are compiled in the 3D engine into C++ code.

4) Calculating the final lighting, using the prepared texture with light indices. The pixel shaders can be found in the attached project.
The final shaders:

Pixel:

DeclTex2D(tex1, 0);     // terrain first texture
DeclTex2D(tex2, 1);     // terrain second texture
DeclTex2D(BitPlane, 2); // light buffer

struct Light
{
    float4 posRange;       // pos.xyz + w - radius
    float4 colorLightType; // RGB color + light type
};

// The light list
uniform Light lights[NUM_LIGHTS];

struct VS_OUTPUT
{
    float4 Pos : POSITION;
    float2 texCoord : TEXCOORD0;
    float3 Normal : TEXCOORD1;
    float4 lightProjSpaceLokup : TEXCOORD2;
    float3 vVec : TEXCOORD3;
};

// Extract light indices
float4 GetLightIndexImpl(Texture2D BitPlane, SamplerState sBitPlane, float4 projectSpace)
{
    projectSpace.xy /= projectSpace.w;
    projectSpace.y = 1.0f - projectSpace.y;
    float4 packedLight = tex2D(BitPlane, projectSpace.xy);
    float4 unpackConst = float4(4.0, 16.0, 64.0, 256.0) / 256.0;
    float4 floorValues = ceil(packedLight * 254.5);
    float4 lightIndex;
    for (int i = 0; i < 4; i++)
    {
        packedLight = floorValues * 0.25;
        floorValues = floor(packedLight);
        float4 fracParts = packedLight - floorValues;
        lightIndex[i] = dot(fracParts, unpackConst);
    }
    return lightIndex;
}
#define GetLightIndex(tex, pos) GetLightIndexImpl(tex, s##tex, pos)

// Calculate the final lighting
float4 CalculateLighting(float4 color, float3 vVec, float3 Normal, float4 lightIndex)
{
    float3 ambient_color = float3(0.2f, 0.2f, 0.2f);
    float3 lighting = float3(0.0f, 0.0f, 0.0f);
    for (int i = 0; i < 4; ++i)
    {
        float lIndex = 255.0f * lightIndex[i];
        // read the light source data from the constant buffer
        Light light = lights[int(lIndex)];
        // get the vector from the light center to the surface
        float3 lightVec = light.posRange.xyz - vVec;
        // original from the demo doesn't work correctly
#if 0
        // scale based on the light radius
        float3 lVec = lightVec / light.posRange.a;
        float atten = 1.0f - saturate(dot(lVec, lVec));
#else
        float d = length(lightVec) / light.posRange.a;
        const float3 ConstantAtten = float3(0.4f, 0.01f, 0.01f);
        float atten = 1.0f / (ConstantAtten.x + ConstantAtten.y * d + ConstantAtten.z * d * d);
#endif
        lightVec = normalize(lightVec);
        float3 H = normalize(lightVec + vVec);
        float diffuse = saturate(dot(lightVec, Normal));
        float specular = pow(saturate(dot(lightVec, H)), 16.0);
        lighting += atten * (diffuse * light.colorLightType.xyz * color.xyz
                  + color.xyz * ambient_color
                  + light.colorLightType.xyz * specular);
    }
    return float4(lighting.xyz, color.a);
}

float4 psMain(in VS_OUTPUT In) : COLOR
{
    float4 Color1 = tex2D(tex1, In.texCoord);
    float4 Color2 = tex2D(tex2, In.texCoord);
    float4 Color = Color1 * Color2;
    float3 Normal = normalize(In.Normal);
    // get light indices from the light buffer
    float4 lightIndex = GetLightIndex(BitPlane, In.lightProjSpaceLokup);
    // calculate lighting
    float4 Albedo = CalculateLighting(Color, In.vVec, Normal, lightIndex);
    Color.xyz += Albedo.xyz;
    return Color;
}

Vertex shader:

uniform float4x4 ViewProjMatrix : register(c0);

struct VS_OUTPUT
{
    float4 Pos : POSITION;
    float2 texCoord : TEXCOORD0;
    float3 Normal : TEXCOORD1;
    float4 lightProjSpaceLokup : TEXCOORD2;
    float3 vVec : TEXCOORD3;
};

float4 CalcLightProjSpaceLookup(float4 projectSpace)
{
    projectSpace.xy = (projectSpace.xy + float2(projectSpace.w, projectSpace.w)) * 0.5;
    return projectSpace;
}

VS_OUTPUT VSmain(float4 Pos : POSITION, float3 Normal : NORMAL, float2 texCoord : TEXCOORD0)
{
    VS_OUTPUT Out;
    Out.Pos = mul(float4(Pos.xyz, 1.0f), ViewProjMatrix);
    Out.texCoord = texCoord;
    Out.lightProjSpaceLokup = CalcLightProjSpaceLookup(Out.Pos);
    Out.vVec = Pos.xyz;
    Out.Normal = Normal;
    return Out;
}

The result: we can show the light sources buffer (the texture with light indices) with the console command enableshowlightbuffer 1. If we try to show the light geometry, we will see the following result (console command enabledrawlights 1). And here is my demo of light indexed deferred rendering: https://www.dropbox.com/s/5t9f5vpg83sspfs/3DMove_multilighting_gd.net.7z?dl=0
1) Try to run the demo, moving over the terrain using W, A, S, D.
2) Try to show the light geometry (console command enabledrawlights 1) and the light buffer (console command enableshowlightbuffer 1).
What am I doing wrong? How do I fix the lighting calculation?
  16. So some popular PBR workflows parameterize both metalness and reflectance. Trying to grasp the paradigm myself, I wondered whether the latter is really necessary. Since the variance in reflectance among non-metals is pretty low, and the reflectance of metals is always high, what if these parameters were merged? As it currently is, they ostensibly do the same thing, with the caveat that metalness also affects the specular color, which could perhaps be compensated for.
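For context, this is how the metalness workflow typically derives the specular reflectance at normal incidence (a widely used convention, stated from general knowledge rather than the post):

\mathbf{F}_0 = (1 - m)\cdot 0.04 + m\cdot\mathbf{c}_{\text{base}}, \qquad \mathbf{c}_{\text{diffuse}} = (1 - m)\,\mathbf{c}_{\text{base}}

In this form, a separate reflectance input only modulates the narrow dielectric band around 0.04, which is exactly the redundancy the question points at; workflows that keep it usually do so to cover dielectric outliers such as water, gemstones, or crystal without giving them a metallic specular color.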
  17. I know this is a noob question, but between OpenGL 2.0 and OpenGL ES 2.0, which one gets better performance on desktop and/or mobile devices? I have read somewhere that OpenGL performance depends on the code, but for some games we can compare performance across OpenGL versions, so I don't know. Which of the two uses less CPU and GPU, i.e., which gets better performance? Thanks
  18. Hi, we know that it is possible to modify a pixel's depth value using the "system value" semantic SV_Depth, in this way:

struct PixelOut
{
    float4 color : SV_Target;
    float depth : SV_Depth;
};

PixelOut PS(VertexOut pin)
{
    PixelOut pout;
    // ... usual pixel work
    pout.color = float4(litColor, alpha);
    // set pixel depth in normalized [0, 1] range
    pout.depth = pin.PosH.z - 0.05f;
    return pout;
}

As many post effects require the depth value of the current pixel (such as fog or screen-space reflection), we need to acquire it in the PS. A common way to do that is to render the depth value to a separate texture and sample that texture in the PS. But I find this method a bit clumsy, because we already have the depth value stored in the depth-stencil buffer, so I wonder whether it is possible to read from the NATIVE depth buffer instead of ANOTHER depth texture. I found this on MSDN: https://docs.microsoft.com/en-us/windows/desktop/direct3dhlsl/dx-graphics-hlsl-semantics which mentions READING depth data in a shader. I tried this in Unity:

half4 frag (Vert2Frag v2f, float depth : SV_Depth) : SV_Target
{
    return half4(depth, depth, depth, 1);
}

However, it turns out to be a pure white image, which means the depth value in all pixels is 1. So is that MSDN page wrong? Is it possible to sample a NATIVE depth buffer? Thanks!
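My reading of the D3D11 rules (an interpretation, worth double-checking against the docs): SV_Depth is an output-only semantic in the pixel shader, so declaring it as an input doesn't fetch the current depth-buffer contents, which would explain the all-white result. The native depth buffer can still be read without a redundant depth copy by creating it with a typeless format and binding it as an SRV once depth writing is done; a D3D11 sketch:

#include <d3d11.h>

// Create a depth buffer that can also be sampled in a later pass.
// Typeless storage lets the same memory be viewed as a DSV for writing
// and as an SRV for reading (bind one at a time, never both).
HRESULT CreateReadableDepth(ID3D11Device* device, UINT width, UINT height,
                            ID3D11DepthStencilView** outDSV,
                            ID3D11ShaderResourceView** outSRV)
{
    D3D11_TEXTURE2D_DESC td = {};
    td.Width = width;
    td.Height = height;
    td.MipLevels = 1;
    td.ArraySize = 1;
    td.Format = DXGI_FORMAT_R24G8_TYPELESS;   // typeless so both views are legal
    td.SampleDesc.Count = 1;
    td.Usage = D3D11_USAGE_DEFAULT;
    td.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D* tex = nullptr;
    HRESULT hr = device->CreateTexture2D(&td, nullptr, &tex);
    if (FAILED(hr)) return hr;

    D3D11_DEPTH_STENCIL_VIEW_DESC dsvd = {};
    dsvd.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;      // view for depth writes
    dsvd.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
    hr = device->CreateDepthStencilView(tex, &dsvd, outDSV);

    D3D11_SHADER_RESOURCE_VIEW_DESC srvd = {};
    srvd.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;  // depth reads as .r in [0,1]
    srvd.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvd.Texture2D.MipLevels = 1;
    if (SUCCEEDED(hr))
        hr = device->CreateShaderResourceView(tex, &srvd, outSRV);

    tex->Release(); // the views hold their own references
    return hr;
}

(In Unity specifically, the engine exposes this same idea through its own camera depth texture mechanism rather than raw views.)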
  19. Hi, guys. I am developing a path-tracing baking renderer, based on OpenGL and OpenRL. It can bake scenes like the following, and I am glad it can bake bleeding diffuse color. :) I store the irradiance directly, like this: albedo * diffuse color has already entered the irradiance calculation when baking, both direct and indirect together. After baking, in the OpenGL fragment shader, I use the light map directly. I think I have got something wrong, since most game engines don't do it like this. Which kind of data should I choose to store in the light maps? I need diffuse only. Thanks in advance!
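The common convention, as far as I know (general practice, not something stated in the post): store the irradiance E in the light map without the receiving surface's albedo, and apply the albedo at runtime:

L_o = \frac{\rho}{\pi}\, E, \qquad E = \int_{\Omega} L_i(\omega)\,(\mathbf{n}\cdot\omega)\, d\omega

Here ρ is the albedo sampled from the material textures in the fragment shader. The albedo of the surfaces that light bounces off still participates during baking (that's what produces the color bleeding); only the final multiplication by the receiver's own albedo moves to runtime, which keeps texture detail at screen resolution while the light map stays low-resolution.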
  20. So the foolproof way to store information about emission would be to dedicate a full RGB data set to the job, but this is seemingly wasteful, and squeezing everything into a single buffer channel is desirable, and indeed common practice. The thing is that there doesn't seem to be one de facto standard technique to achieve this. A commonly suggested solution is to perform a simple glow * albedo multiplication, but it's not difficult to imagine instances where this strict interdependence becomes an impenetrable barrier. What are some other ideas?
  21. Hey, I have to cast camera rays through the near plane of the camera, and the first approach in the code below is the one I've come up with; I understand it precisely. However, I've come across a much more elegant and shorter solution, which seems to give exactly the same results (at least visually, in my app), and this is the "second approach" below.

struct VS_INPUT
{
    float3 localPos : POSITION;
};

struct PS_INPUT
{
    float4 screenPos : SV_POSITION;
    float3 localPos : POSITION;
};

PS_INPUT vsMain(in VS_INPUT input)
{
    PS_INPUT output;
    output.screenPos = mul(float4(input.localPos, 1.0f), WorldViewProjMatrix);
    output.localPos = input.localPos;
    return output;
}

float4 psMain(in PS_INPUT input) : SV_Target
{
    // First approach
    {
        const float3 screenSpacePos = mul(float4(input.localPos, 1.0f), WorldViewProjMatrix).xyw;
        const float2 screenPos = screenSpacePos.xy / screenSpacePos.z; // divide by w, taken above as the third component
        const float2 screenPosUV = screenPos * float2(0.5f, -0.5f) + 0.5f; // invert Y axis for the shadow map lookup later

        // fov is vertical
        float nearPlaneHeight = TanHalfFov * 1.0f; // near = 1.0f
        float nearPlaneWidth = AspectRatio * nearPlaneHeight;

        // position of the rendered point projected onto the near plane
        float3 cameraSpaceNearPos = float3(screenPos.x * nearPlaneWidth, screenPos.y * nearPlaneHeight, 1.0f);

        // transform the direction from camera to world space
        const float3 direction = mul(cameraSpaceNearPos, (float3x3)InvViewMatrix).xyz;
    }

    // Second approach
    {
        // UV for the shadow map lookup later in the code
        const float2 screenPosUV = input.screenPos.xy * rcp(renderTargetSize);
        const float2 screenPos = screenPosUV * 2.0f - 1.0f; // transform range 0->1 to -1->1

        // Ray's direction in world space; VIEW_LOOK/RIGHT/UP are camera basis vectors in world space
        // fov is vertical
        const float3 direction = (VIEW_LOOK + TanHalfFov * (screenPos.x * VIEW_RIGHT * AspectRatio - screenPos.y * VIEW_UP));
    }
    ...
}

I cannot understand what happens in the second approach, right at the first 2 lines. input.screenPos.xy is calculated in the VS and interpolated here, but it's still before the perspective divide, right? So, for example, the y coordinate of input.screenPos should be in the range -|w| <= y <= |w|, where w is the z coordinate of the point in camera space, so w can at most be Far and at least Near, right? How come dividing y by renderTargetSize above yields a result supposedly in the <0,1> range? Also, screenPosUV seems to already have an inverted Y axis, for some reason I also don't understand, and that is probably why there is a minus sign in the calculation of direction. In my setup, for example, renderTargetSize is (1280, 720), Far = 100, Near = 1.0f; I use a LH coordinate system, and the camera by default looks towards the positive Z axis. Both approaches, first and second, give me the same results, but I would like to understand this second approach. I would be very grateful for any help!
  22. I'm trying to use Perlin noise to paint landscapes on a sphere. So far I've been able to make this (the quad is just to get a flatter view of the height map). I'm not influencing the mesh vertices' height yet, but I am creating the noise map on the CPU and passing it to the GPU as a texture, which is what you see above. I've got 2 issues though.
Issue #1: If I get a bit close to the sphere, the detail in the landscapes looks bad. I'm aware that I can't get too close, but I also feel that I should be able to get better quality at the distance I show above. The detail in the texture looks blurry and stretched; it just looks bad, and I'm not sure what I can do to improve it.
Issue #2: I believe I know why this one occurs, but I don't know how to solve it. If I rotate the sphere, you'll notice something; click on the image for a better look (notice the seam?). What I think is going on is that some land/noise reaches the end of the UV/texture, and since the sphere texture is pretty much like paper wrapped around the sphere, the beginning and end of the texture map connect, and both sides have different patterns.
Solutions I have in mind for issue #2:
A) Maybe limiting the noise within a certain bounding box, making sure "land" isn't generated around the borders or poles of the texture; think islands. I just have no idea how to do that.
B) Finding a way to make the noise drawn at the end of the UV/texture continue at the beginning, so that the two sides connect seamlessly; but again, I have no idea how to do that.
I'm kind of rooting for solution A though; I would be able to make islands that way. Hope I was able to explain myself. If anybody needs any more information, let me know. I'll share the function in charge of making this noise below. The shader isn't doing anything special besides drawing the texture. Thanks!
CPU noise texture:

const width = 100;
const depth = 100;
const scale = 30.6;
const pixels = new Uint8Array(4 * width * depth);

let i = 0;
for (let z = 0; z < depth; z += 1) {
  for (let x = 0; x < width; x += 1) {
    const octaves = 8;
    const persistance = 0.5;
    const lacunarity = 2.0;
    let frequency = 1.0;
    let amplitude = 1.0;
    let noiseHeight = 0.0;
    for (let o = 0; o < octaves; o += 1) { // renamed from i to avoid shadowing the pixel index
      const sampleX = x / scale * frequency;
      const sampleZ = z / scale * frequency;
      const n = perlin2(sampleX, sampleZ);
      noiseHeight += n * amplitude;
      amplitude *= persistance;
      frequency *= lacunarity;
    }
    pixels[i] = noiseHeight * 255;
    pixels[i + 1] = noiseHeight * 255;
    pixels[i + 2] = noiseHeight * 255;
    pixels[i + 3] = 255;
    i += 4;
  }
}

GPU GLSL:

void main () {
  vec3 diffusemap = texture(texture0, uvcoords).rgb;
  color = vec4(diffusemap, 1.0);
}
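A third option worth considering (my suggestion; it assumes a 3D noise function such as the perlin3 that typically ships alongside perlin2): skip the 2D map entirely and sample noise in 3D at each point on the sphere's surface. Because the input is the 3D position itself, there is no wrap-around boundary, so issue #2 disappears by construction, and it also helps with issue #1, since the noise can be evaluated per vertex or per pixel at any zoom level instead of stretching a fixed 100x100 texture. A C++ sketch of the octave loop:

float perlin3(float x, float y, float z); // assumed 3D noise, returns [-1, 1]

// Fractal noise evaluated directly at a unit-sphere surface point (px, py, pz):
// neighboring surface points always receive neighboring 3D inputs, so there is
// no seam and no pole pinching.
float SphereNoise(float px, float py, float pz,
                  int octaves, float persistence, float lacunarity, float scale)
{
    float frequency = scale;
    float amplitude = 1.0f;
    float height = 0.0f;
    float norm = 0.0f;
    for (int o = 0; o < octaves; ++o)
    {
        height += perlin3(px * frequency, py * frequency, pz * frequency) * amplitude;
        norm += amplitude;
        amplitude *= persistence;
        frequency *= lacunarity;
    }
    return height / norm; // back to roughly [-1, 1]
}

Islands (solution A) then fall out naturally by multiplying the result with a radial falloff mask around chosen island centers, rather than fighting the texture borders.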
  23. I'm trying to use values generated with a 2D Perlin noise function to determine the height of each vertex on my sphere, just like a terrain height map but spherical. Unfortunately, I can't seem to figure it out. So far it's easy to push any particular vertex along its calculated normal, and that seems to work. As you can see in the following image, I'm pulling only one vertex along its normal vector. This was accomplished with the following code (no noise yet, btw):

// Happens after normals are calculated for every vertex in the model.
// xLen and yLen are the segments and rings of the sphere.
for (let x = 0; x <= xLen; x += 1) {
  for (let y = 0; y <= yLen; y += 1) {
    // Normals
    const nx = model.normals[index];
    const ny = model.normals[index + 1];
    const nz = model.normals[index + 2];

    let noise = 1.5;

    // Just pull one vert...
    if (x === 18 && y === 12) {
      // Verts
      model.verts[index] = nx * noise;
      model.verts[index + 1] = ny * noise;
      model.verts[index + 2] = nz * noise;
    }
    index += 3;
  }
}

But what if I want to use 2D Perlin noise values on my sphere to create mountains on top of it? I thought it would be easy to displace the sphere's vertices using its normals and Perlin noise, but clearly I'm way off. This horrible object was created with the following code:

// Happens after normals are calculated for every vertex in the model.
// xLen and yLen are the segments and rings of the sphere.
// Keep in mind I'm not using a height map image; I'm feeding the noise value directly.
for (let x = 0; x <= xLen; x += 1) {
  for (let y = 0; y <= yLen; y += 1) {
    const nx = model.normals[index];
    const ny = model.normals[index + 1];
    const nz = model.normals[index + 2];

    const sampleX = x * 1.5;
    const sampleY = y * 1.5;
    let noise = perlin2(sampleX, sampleY);

    // Likely bug: perlin2 returns values in roughly [-1, 1], so using the raw
    // value as the radius collapses or inverts vertices. Displace around the
    // base radius with a small amplitude instead (0.15 is an arbitrary pick):
    const height = 1.0 + 0.15 * noise;
    model.verts[index] = nx * height;
    model.verts[index + 1] = ny * height;
    model.verts[index + 2] = nz * height;
    index += 3;
  }
}

I have a feeling the direction I'm pulling the vertices in is okay; the problem might be the intensity. Perhaps I need to clamp the noise value? I've seen terrain planes where they create the mesh based on the height map image dimensions. In my case, the sphere model verts and normals are already calculated, and I want to add height afterwards (but before creating the VAO). Is there a way I could accomplish this so my sphere displays terrain-like geometry on it? Hope I was able to explain myself properly. Thanks!
  24. Hello, I have a custom binary image file; it is essentially a custom version of DDS, made up of 2 important parts:

struct FileHeader
{
    dword m_signature;
    dword m_fileSize;
};

struct ImageFileInfo
{
    dword m_width;
    dword m_height;
    dword m_depth;
    dword m_mipCount;  // at least 1
    dword m_arraySize; // at least 1
    SurfaceFormat m_surfaceFormat;
    dword m_pitch;     // length of a scanline
    dword m_byteCount;
    byte* m_data;
};

It uses a custom BinaryIO class I wrote to read and write binary data. The majority of the data is unsigned int, which is a dword, so I'll only show the dword functions:

bool BinaryIO::WriteDWord(dword value)
{
    if (!m_file || (m_mode == BINARY_FILEMODE::READ))
    {
        // log: file null, or you tried to write to a read-only file!
        return false;
    }

    byte bytes[4];
    bytes[0] = (value & 0xFF);
    bytes[1] = (value >> 8) & 0xFF;
    bytes[2] = (value >> 16) & 0xFF;
    bytes[3] = (value >> 24) & 0xFF;
    m_file.write((char*)bytes, sizeof(bytes));
    return true;
}

//-----------------------------------------------------------------------------
dword BinaryIO::ReadDword()
{
    if (!m_file || (m_mode == BINARY_FILEMODE::WRITE))
    {
        // log: file null, or you tried to read from a write-only file!
        return 0;
    }

    dword value;
    byte bytes[4];
    m_file.read((char*)&bytes, sizeof(bytes));
    value = (bytes[0] | (bytes[1] << 8) | (bytes[2] << 16) | bytes[3] << 24);
    return value;
}

So, as you can imagine, you end up with a loop for reading like this:

byte* inBytesIterator = m_fileInfo.m_data;
for (unsigned int i = 0; i < m_fileInfo.m_byteCount; i++)
{
    *inBytesIterator = binaryIO.ReadByte();
    inBytesIterator++;
}

And finally, to read it into DX11 buffer memory, we have the following:

// Pass the data to the GPU, remembering mips
D3D11_SUBRESOURCE_DATA* initData = new D3D11_SUBRESOURCE_DATA[m_mipCount];
ZeroMemory(initData, sizeof(D3D11_SUBRESOURCE_DATA));

// Used as an iterator
byte* source = texDesc.m_data;
byte* endBytes = source + m_totalBytes;

int index = 0;
for (int i = 0; i < m_arraySize; i++)
{
    int w = m_width;
    int h = m_height;
    int numBytes = GetByteCount(w, h);
    for (int j = 0; j < m_mipCount; j++)
    {
        if ((m_mipCount <= 1) || (w <= 16384 && h <= 16384))
        {
            initData[index].pSysMem = source;
            initData[index].SysMemPitch = GetPitch(w);
            initData[index].SysMemSlicePitch = numBytes;
            index++;
        }
        if (source + numBytes > endBytes)
        {
            LogGraphics("Too many Bytes!");
            return false;
        }
        // Divide by 2
        w = w >> 1;
        h = h >> 1;
        if (w == 0) { w = 1; }
        if (h == 0) { h = 1; }
    }
}

It seems rather slow, particularly for big textures. Is there any way I could optimize this? As the renderer grows to render multiple textured objects, the loading times may become problematic. At the moment it takes around 2 seconds to load a 4096x4096 texture; you can see the output in the attached images. Thanks.
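The likely bottleneck (my guess from the snippets shown): pulling the payload through ReadByte() means one stream call, mode check, and function call per byte, which is tens of millions of calls for a 4096x4096 texture. Reading the whole block with a single read() usually cuts that to negligible time. A sketch of a block-read method for the same BinaryIO class (member names assumed to match the post):

// Read 'count' bytes in one call instead of looping over ReadByte().
// std::ifstream::read copies straight into the caller's buffer, so the
// per-byte overhead disappears entirely.
bool BinaryIO::ReadBytes(byte* dst, size_t count)
{
    if (!m_file || (m_mode == BINARY_FILEMODE::WRITE))
    {
        // log: file null, or you tried to read from a write-only file!
        return false;
    }
    m_file.read(reinterpret_cast<char*>(dst), count);
    return m_file.good();
}

The per-byte loop then collapses to:

binaryIO.ReadBytes(m_fileInfo.m_data, m_fileInfo.m_byteCount);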
  25. Howdy! I was wondering: let's say you have a mesh whose most-blended vertex is attached to 4 bones (and so has 4 weights that aren't 0 or 1), so the rest of the mesh's vertices are also given 4 bone slots. However, suppose one vertex is attached to only a single bone, so it has a weight of 1.0. What do you attach that vertex's other 3 bone slots to? 1. The root bone, with 0.0 weights? Or 2. attach them to -1 ('no bone') and then use an if() statement in HLSL for the transformation calculation? Thanks for your input!
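For what it's worth, in the standard linear blend skinning sum, zero-weight terms contribute nothing (a textbook formulation, not quoted from the post):

\mathbf{v}' = \sum_{i=1}^{4} w_i\, M_{b_i}\, \mathbf{v}, \qquad \sum_i w_i = 1

So padding the unused slots with any valid bone index (commonly index 0, not necessarily the root) at weight 0.0 reproduces the single bone's transform exactly at no extra cost, while the per-bone if() of option 2 only adds branching to every vertex.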