
KarimIO

Member

  • Content count: 107
  • Joined
  • Last visited

Community Reputation: 271 Neutral

About KarimIO

  • Rank: Member

Personal Information

  • Interests: Art, Audio, Design, Programming
  1. I was considering using those kinds of approaches only on smaller areas, if at all (RSM, VoxelGI, and the like would be supported in volumes), because they're so expensive. But I'll check out your link, thanks!
  2. Hey guys, Are lightmaps still the best way to handle static diffuse irradiance, or is SH now used for both diffuse and specular irradiance? Also, do any modern games bake direct light into lightmaps, or is all direct lighting handled by shadow maps now? Finally, how is SH usually baked? Thanks!
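     For reference, a minimal C++ sketch of how baked 9-coefficient (L2) SH is commonly evaluated for diffuse irradiance at runtime, using the standard cosine-lobe convolution constants from Ramamoorthi & Hanrahan; the coefficient array and its ordering here are illustrative assumptions, not any particular engine's layout:

        #include <glm/glm.hpp>

        // Diffuse irradiance from 9 baked SH coefficients (RGB per coefficient)
        // in the direction of the normalized surface normal n. The constants fold
        // the SH basis terms together with the cosine-lobe factors pi, 2pi/3, pi/4.
        glm::vec3 EvaluateSHIrradiance(const glm::vec3 sh[9], const glm::vec3& n) {
            const float x = n.x, y = n.y, z = n.z;
            glm::vec3 e =
                  sh[0] * 0.886227f                             // L0,0
                + sh[1] * (1.023328f * y)                       // L1,-1
                + sh[2] * (1.023328f * z)                       // L1,0
                + sh[3] * (1.023328f * x)                       // L1,1
                + sh[4] * (0.858086f * x * y)                   // L2,-2
                + sh[5] * (0.858086f * y * z)                   // L2,-1
                + sh[6] * (0.247708f * (3.0f * z * z - 1.0f))   // L2,0
                + sh[7] * (0.858086f * x * z)                   // L2,1
                + sh[8] * (0.429043f * (x * x - y * y));        // L2,2
            return glm::max(e, glm::vec3(0.0f));                // clamp negative lobes
        }

     Baking then typically amounts to rendering a cubemap at each probe position and projecting it onto these nine basis functions (a weighted sum over all texels).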
  3. Hey guys, So I was wondering how modern terrain and water geometry works, both with and without tessellation. Essentially: 1) Is geoclipmapping still the best CPU tessellation technique? 2) Is geoclipmapping still used with hardware tessellation? 3) Is non-tessellated water just flat? Are there any other (reasonable) ways to simulate it? Do people use geoclipmapping for that too? Thanks!
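     For reference, the CPU side of geoclipmapping mostly comes down to re-centering nested grid rings on the camera each frame; a minimal sketch of the per-level snap (the base spacing and function name are illustrative):

        #include <glm/glm.hpp>

        // Level L has vertex spacing baseSpacing * 2^L. Snapping the level origin
        // to *twice* that spacing keeps consecutive levels nested, so their edge
        // vertices line up without cracks as the camera moves.
        glm::vec2 ClipmapLevelOrigin(const glm::vec2& cameraXZ, float baseSpacing, int level) {
            float snap = 2.0f * baseSpacing * float(1 << level);
            return glm::floor(cameraXZ / snap) * snap;
        }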
  4. DX11 Copy Z-Buffer in DirectX

    To clarify for those finding this later (and so someone can correct me if I'm wrong): this is a good solution, but you cannot use it with the default depth buffer in OpenGL. This can be solved by using an FBO in the middle, and simply doing something that doesn't output depth, such as post-processing, at the end.
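    A minimal sketch of that setup, assuming a depth texture m_depthTexture and two offscreen FBOs (the FBO names and the full-screen-pass helper are illustrative):

        // Attach the same depth texture to both offscreen FBOs so the opaque
        // pass and the later pass share one z-buffer.
        glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, transparencyFBO);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0);

        // ...render opaque geometry into sceneFBO, then transparents into transparencyFBO...

        // The default framebuffer's depth buffer cannot be attached or shared, so
        // finish with a pass that needs no scene depth (e.g. tonemapping).
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDisable(GL_DEPTH_TEST);
        glDepthMask(GL_FALSE);
        // drawFullscreenQuad(postShader);  // hypothetical helper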
  5. DX11 Copy Z-Buffer in DirectX

    Forgive me if this is marginally off topic, but with OpenGL, is that done this way?

        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0);

        glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0);
  6. DX11 Copy Z-Buffer in DirectX

    Are you suggesting sharing an image/texture (specifically the depth buffer) between two framebuffers? I had considered that but I thought it might have issues.
  7. Hey guys, I'm working on adding transparent objects to my deferred-rendered scene. The only issue is the z-buffer. As far as I know, the standard way to handle this is to copy the buffer. In OpenGL, I can just blit it. What's the alternative in DirectX? And are there any alternatives to copying the buffer? Thanks in advance!
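     For the copy itself, D3D11's closest equivalent to a depth blit is ID3D11DeviceContext::CopyResource between two identically sized and formatted depth textures; a minimal sketch (variable names are illustrative):

        #include <d3d11.h>

        // Duplicate the opaque pass's depth buffer so the transparent pass can
        // keep depth-testing against one copy while the other is sampled as an
        // SRV. CopyResource requires matching dimensions, format (e.g. a typeless
        // R24G8 format), and mip/array layout on both textures.
        void CopyDepth(ID3D11DeviceContext* ctx,
                       ID3D11Texture2D* depthCopy,     // destination texture
                       ID3D11Texture2D* depthSource)   // the bound depth buffer's texture
        {
            ctx->CopyResource(depthCopy, depthSource);
        }

     As an alternative to copying, a depth-stencil view created with the D3D11_DSV_READ_ONLY_DEPTH flag lets the same depth buffer be depth-tested and sampled simultaneously.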
  8. 3D Do models go inside a BSP?

    I didn't mean frustum culling of pixels, I'm aware that happens automatically; I meant culling of subtrees and entire objects or areas. Awesome, thanks! I'll check it out soon. And is this used in addition to the previously mentioned techniques, or instead of them? Thanks again, JoeJ and Hodgman!
  9. 3D Do models go inside a BSP?

    As I said, I currently use only models for geometry. I wanted to add CSG geometry for blocking things out, as well as for people coming from the Source Engine who might find it useful. This is pretty much what I planned to do before looking for more effective methods. Quick question, though: should frustum culling be done before or after occlusion culling? So an optimal approach would be to use the PVS for static objects and occlusion queries for dynamic ones, while checking the partitioning tree for where the dynamic objects are to reduce the number of occlusion queries, correct? I know nothing of Hi-Z pyramids, but I'll check them out!
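    For reference, a minimal sketch of that ordering, frustum-culling first because six plane tests per object are far cheaper than issuing occlusion queries (all types and names are illustrative):

        #include <glm/glm.hpp>
        #include <vector>

        struct AABB { glm::vec3 min, max; };

        // A frustum as six inward-facing planes: xyz = normal, w = distance.
        struct Frustum {
            glm::vec4 planes[6];
            bool Intersects(const AABB& b) const {
                for (const glm::vec4& p : planes) {
                    // Take the box corner furthest along the plane normal; if even
                    // that corner is behind the plane, the whole box is outside.
                    glm::vec3 corner(p.x > 0 ? b.max.x : b.min.x,
                                     p.y > 0 ? b.max.y : b.min.y,
                                     p.z > 0 ? b.max.z : b.min.z);
                    if (glm::dot(glm::vec3(p), corner) + p.w < 0.0f)
                        return false;
                }
                return true;
            }
        };

        // Cheap test first: only frustum survivors go on to occlusion queries.
        std::vector<const AABB*> FrustumCull(const Frustum& f, const std::vector<AABB>& objects) {
            std::vector<const AABB*> survivors;
            for (const AABB& box : objects)
                if (f.Intersects(box))
                    survivors.push_back(&box);
            return survivors;
        }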
  10. 3D Do models go inside a BSP?

    Alrighty, so basically I should use CSG geometry without the BSP partitioning. PVS is an all-encompassing term, right, not just an individual technology? But in regard to partitioning and visibility, I remember Frostbite used sphere-based partitioning. Is that still used today, or can you suggest another partitioning method?
  11. 3D Do models go inside a BSP?

    Thanks for responding quickly and helpfully as usual, Hodgman! Doesn't Source use BSP for practically everything under the sun, and Unreal use it for blocking? It's not just partitioning and visibility I'd like it for, but also that blocking. Yeah, that makes sense, but it shouldn't be difficult to overcome. How is this done, though? The only thing that comes to mind is having an interior and an exterior collision mesh, so anything inside the interior mesh would block other BSP elements, and anything in the exterior would represent which leaf the actual model is in.
  12. Hey guys, So I'm starting to think about BSP for my engine, which so far only supports models. And I've got a couple of stupid questions. How are static and dynamic models usually used with the BSP visibility side of things? As far as I'm aware, dynamic models only use the BSP tree as a way to cull before visibility culling, so I guess I can compare an OOBB to the BSP tree leaves - is this correct? How do static meshes affect the BSP tree in terms of blocking visibility? Thanks in advance!
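     For reference, a minimal sketch of that OOBB-versus-tree walk, simplified here to an AABB tested against the split planes; an object straddling a plane gets registered in every leaf it touches (all names are illustrative):

        #include <glm/glm.hpp>
        #include <vector>

        struct AABB { glm::vec3 min, max; };

        struct BSPNode {
            glm::vec4 plane;                          // split: xyz = normal, w = distance
            BSPNode* front = nullptr;
            BSPNode* back = nullptr;                  // both null => leaf
            std::vector<const AABB*> dynamicObjects;  // dynamic objects touching this leaf
        };

        // Record every leaf the box overlaps; those leaves' PVS data then decides
        // whether the dynamic object can possibly be visible this frame.
        void InsertDynamic(BSPNode* node, const AABB& box) {
            if (!node->front && !node->back) {        // leaf reached
                node->dynamicObjects.push_back(&box);
                return;
            }
            glm::vec3 n(node->plane);
            // Box corners with the largest / smallest projection onto the normal.
            glm::vec3 farC (n.x > 0 ? box.max.x : box.min.x,
                            n.y > 0 ? box.max.y : box.min.y,
                            n.z > 0 ? box.max.z : box.min.z);
            glm::vec3 nearC(n.x > 0 ? box.min.x : box.max.x,
                            n.y > 0 ? box.min.y : box.max.y,
                            n.z > 0 ? box.min.z : box.max.z);
            if (node->front && glm::dot(n, farC)  + node->plane.w > 0.0f)
                InsertDynamic(node->front, box);      // box reaches the front side
            if (node->back  && glm::dot(n, nearC) + node->plane.w < 0.0f)
                InsertDynamic(node->back, box);       // box reaches the back side
        }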
  13. In the main prepass vertex shader, I need to use #pragma pack_matrix( row_major ). Everything should be column-major by default, since I use GLM. Yet, for this one specific file, I need row_major.
  14. The problem is, I'm doing no such thing. GLM outputs the same column-major matrices everywhere, but for some reason one shader requires row-major and the other column-major. I understand everything you're talking about; it's just that DirectX likes it one way in one shader and another way in another shader.
  15. @Hodgman Sorry! Didn't see the response until now! Keep in mind that since I use GLM, it's column-major. Here's the code that works:

        ////////////////////////////////////////////////////////////////////////////////
        // Filename: mainVert.vs
        ////////////////////////////////////////////////////////////////////////////////

        /////////////
        // GLOBALS //
        /////////////
        #pragma pack_matrix( row_major )

        cbuffer MatrixBuffer
        {
            matrix worldMatrix;
            matrix viewMatrix;
            matrix projectionMatrix;
        };

        //////////////
        // TYPEDEFS //
        //////////////
        struct VertexInputType
        {
            float3 position : POSITION;
            float3 normal   : NORMAL;
            float3 tangent  : TANGENT;
            float2 texCoord : TEXCOORD0;
        };

        struct PixelInputType
        {
            float4 position      : SV_POSITION;
            float3 worldPosition : POSITION;
            float3 normal        : NORMAL;
            float3 tangent       : TANGENT;
            float2 texCoord      : TEXCOORD0;
        };

        ////////////////////////////////////////////////////////////////////////////////
        // Vertex Shader
        ////////////////////////////////////////////////////////////////////////////////
        PixelInputType main(VertexInputType input)
        {
            float4 position;
            PixelInputType output;

            // Extend the position vector to 4 components for proper matrix calculations.
            position = float4(input.position, 1.0f);

            // Transform the vertex by the world, view, and projection matrices.
            position = mul(position, worldMatrix);
            output.worldPosition = position.xyz;
            position = mul(position, viewMatrix);
            output.position = mul(position, projectionMatrix);

            // Direction vectors get w = 0 so the world matrix's translation is ignored.
            output.normal  = normalize(mul(float4(input.normal,  0.0), worldMatrix).xyz);
            output.tangent = normalize(mul(float4(input.tangent, 0.0), worldMatrix).xyz);

            output.texCoord = float2(input.texCoord.x, -input.texCoord.y);

            return output;
        }

        ////////////////////////////////////////////////////////////////////////////////
        // Filename: pointLightFrag.ps
        ////////////////////////////////////////////////////////////////////////////////
        #pragma pack_matrix( column_major )

        #include "inc_transform.hlsl"
        #include "inc_light.hlsl"

        //////////////
        // TYPEDEFS //
        //////////////
        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 texCoord : TEXCOORD0;
            float3 viewRay  : POSITION;
        };

        Texture2D shaderTexture[4];
        SamplerState SampleType[4];

        cbuffer MatrixInfoType
        {
            matrix invView;
            matrix invProj;
            float4 eyePos;
            float4 resolution;
        };

        cbuffer Light
        {
            float3 lightPosition;
            float  lightAttenuationRadius;
            float3 lightColor;
            float  lightIntensity;
        };

        ////////////////////////////////////////////////////////////////////////////////
        // Pixel Shader
        ////////////////////////////////////////////////////////////////////////////////
        float4 main(PixelInputType input) : SV_TARGET
        {
            // Reconstruct the world-space position from the depth buffer.
            float depth = shaderTexture[3].Sample(SampleType[0], input.texCoord).r;
            float3 Position = WorldPosFromDepth(invProj, invView, depth, input.texCoord);
            //return float4(Position, 1.0);

            /*float near = 0.1;
            float far = 100;
            float ProjectionA = far / (far - near);
            float ProjectionB = (-far * near) / (far - near);
            depth = ProjectionB / (depth - ProjectionA);
            float4 position = float4(input.viewRay * depth, 1.0);*/

            // Convert to World Space:
            // position = mul(invView, position);

            // Fetch the G-buffer inputs.
            float3 Albedo   = shaderTexture[0].Sample(SampleType[0], input.texCoord).rgb;
            float3 Normal   = shaderTexture[1].Sample(SampleType[0], input.texCoord).rgb;
            float4 Specular = shaderTexture[2].Sample(SampleType[0], input.texCoord);

            // Shade the point light, then apply the HDR/gamma transform.
            float3 lightPow = lightColor * lightIntensity;
            float3 outColor = LightPointCalc(Albedo, Position, Specular, Normal,
                                             lightPosition, lightAttenuationRadius,
                                             lightPow, eyePos.xyz);
            return float4(hdrGammaTransform(outColor), 1.0f);
        }
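     For anyone hitting the same confusion: this is most likely not DirectX being inconsistent, but the two mul() conventions meeting untransposed GLM data. The vertex shader above uses mul(vector, matrix), while the includes in the pixel shader (presumably inc_transform.hlsl) likely use mul(matrix, vector), hence the different pragmas. A sketch of the reasoning, with an assumed cbuffer upload helper:

        #include <cstring>
        #include <glm/glm.hpp>

        // Copy a GLM matrix into a mapped cbuffer without transposing on the CPU.
        // glm::mat4 is column-major in memory and built for M * v math.
        //
        // - With #pragma pack_matrix(row_major), HLSL reinterprets those bytes as
        //   the transposed matrix, so mul(v, worldMatrix) computes (M * v): correct.
        // - With the default column_major packing, HLSL sees M itself, so the same
        //   math must be written mul(worldMatrix, v) instead.
        //
        // A shader (or include) that uses the other mul() order is the usual reason
        // one file "needs" a different pack_matrix pragma.
        void WriteMatrix(void* mappedCBuffer, const glm::mat4& m) {
            std::memcpy(mappedCBuffer, &m[0][0], sizeof(glm::mat4));
        }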