# DX11 GPU Raytracing

Hi everybody!

I am currently trying to write my own GPU Raytracer.

I am using DirectX 11 and a compute shader.

Here is what I've tried so far:

RayTracer.hlsl

```hlsl
SamplerState gWrapSampler : register(s0);
TextureCube gObjBackground : register(t0); // Not used right now
RWTexture2D<float4> gResult : register(u0);

cbuffer cbCamera : register(b0)
{
    Camera gCamera;
};

cbuffer cbGeneralInfo : register(b1)
{
    uint gWidth;
    uint gHeight;
    uint gMaxRecursion; // Not used right now
    float gRayMaxLength;
};

[numthreads(16, 16, 1)] // 16x16 thread groups; size must match the Dispatch call
void main(uint3 DTid : SV_DispatchThreadID)
{
    gResult[DTid.xy] = float4(0, 0, 0, 1);
    float u = (float) DTid.x / (float) gWidth;
    float v = (float) DTid.y / (float) gHeight;

    // Interpolate across the image plane between the camera corners.
    float4 currentRayDirection = gCamera.TopLeftCorner
        + u * (gCamera.TopRightCorner - gCamera.TopLeftCorner)
        + v * (gCamera.BottomLeftCorner - gCamera.TopLeftCorner);
    currentRayDirection.w = 0.0f;
    currentRayDirection = normalize(currentRayDirection);

    Ray r;
    r.Origin = gCamera.CamPosition.xyz;
    r.Direction = currentRayDirection.xyz;
    r.length = gRayMaxLength;

    Sphere s1;
    s1.Position = float3(0, 0, 10);
    if (s1.Intersect(r))
    {
        float4 reflectedColor = float4(1, 0, 0, 1);
        gResult[DTid.xy] = reflectedColor;
    }
}
```

```hlsl
class Camera
{
    float4 CamPosition;
    float4 TopLeftCorner;
    float4 TopRightCorner;
    float4 BottomLeftCorner;
};

class Ray
{
    float3 Origin;
    float3 Direction;
    float length;
};

class Sphere
{
    float3 Position;

    bool Intersect(inout Ray r) // inout so the shortened ray length is kept
    {
        // Project the origin-to-center vector onto the ray direction.
        float3 OriginToSphere = Position - r.Origin;
        float projection = dot(OriginToSphere, r.Direction);
        float3 distanceVector = OriginToSphere - projection * r.Direction;
        float distanceVectorLengthSQ = dot(distanceVector, distanceVector);
        float radiusSQ = 1.0f; // radius 1 as in the post; worth promoting to a member
        if (distanceVectorLengthSQ > radiusSQ) // ray misses the sphere entirely
            return false;

        // Nearest intersection along the ray.
        float newLength = projection - sqrt(radiusSQ - distanceVectorLengthSQ);
        if (newLength < 0.0f || newLength > r.length)
            return false;

        r.length = newLength;
        return true;
    }
};
```

But the result is not what I expected.

For example, the sphere is located at (0,0,10) with radius 1, but this is the result when CamPos is 4.5, which I think is wrong.

Also, for some reason, when I rotate the camera, the sphere expands.


Are TopLeftCorner, BottomLeftCorner, and TopRightCorner relative to the camera origin, or are they positions in the 3D world? If they are locations in the 3D world, you'll have to subtract CamPosition from currentRayDirection.

34 minutes ago, iedoc said:

Are TopLeftCorner, BottomLeftCorner, and TopRightCorner relative to the camera origin, or are they positions in the 3D world? If they are locations in the 3D world, you'll have to subtract CamPosition from currentRayDirection.

They are locations in the 3D world.

I took your advice and it worked. Thank you very, very much!