Baemz

Help with frustum culling


Hello,

I've been working on some culling techniques for a project. We've built our own engine, so pretty much everything is written from scratch. I've set up a frustum with the following code, assuming a FOV of 90 degrees.

float angle = CU::ToRadians(45.f); // half of the 90-degree FOV

// Near and far planes, with normals pointing out of the frustum.
Plane<float> nearPlane(Vector3<float>(0, 0, aNear), Vector3<float>(0, 0, -1));
Plane<float> farPlane(Vector3<float>(0, 0, aFar), Vector3<float>(0, 0, 1));

// Side planes through the camera origin, tilted 45 degrees off the view axis.
Plane<float> right(Vector3<float>(0, 0, 0), Vector3<float>(angle, 0, -angle));
Plane<float> left(Vector3<float>(0, 0, 0), Vector3<float>(-angle, 0, -angle));
Plane<float> up(Vector3<float>(0, 0, 0), Vector3<float>(0, angle, -angle));
Plane<float> down(Vector3<float>(0, 0, 0), Vector3<float>(0, -angle, -angle));

myVolume.AddPlane(nearPlane);
myVolume.AddPlane(farPlane);
myVolume.AddPlane(right);
myVolume.AddPlane(left);
myVolume.AddPlane(up);
myVolume.AddPlane(down);
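
One thing worth noting about this setup: (angle, 0, -angle) points the right way for a 45-degree half-angle only because both components happen to be equal, and its length is angle * sqrt(2), about 1.11, not 1. If the Plane constructor does not normalize the normal it is given, a sketch of the same setup with explicitly unit-length side normals (std::cos/std::sin from <cmath>) would look like this:

const float halfFov = CU::ToRadians(45.f);
const float c = std::cos(halfFov); // equals sin(halfFov) for 45 degrees
const float s = std::sin(halfFov);

// c*c + s*s == 1, so these normals are unit length by construction.
Plane<float> right(Vector3<float>(0, 0, 0), Vector3<float>(c, 0, -s));
Plane<float> left(Vector3<float>(0, 0, 0), Vector3<float>(-c, 0, -s));
Plane<float> up(Vector3<float>(0, 0, 0), Vector3<float>(0, c, -s));
Plane<float> down(Vector3<float>(0, 0, 0), Vector3<float>(0, -c, -s));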

When checking the intersections I am using a bounding sphere for each model, computed by taking the average position of all vertices as the center and the distance from that center to the farthest vertex as the radius.
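
For reference, a minimal sketch of that construction, assuming a std::vector of Vector3<float> vertices and the usual operators on the vector types (the function name is hypothetical; <cmath>, <vector> and <algorithm> assumed included):

SFrustumCollider BuildFrustumCollider(const std::vector<Vector3<float>>& someVertices)
{
	SFrustumCollider collider;

	// Center = average of all vertex positions.
	Vector3<float> sum(0, 0, 0);
	for (const Vector3<float>& vertex : someVertices)
	{
		sum += vertex;
	}
	collider.myCenter = sum * (1.f / someVertices.size());

	// Radius = distance from the center to the farthest vertex, so the
	// sphere always encloses the mesh. This errs on the large side, which
	// would cull too late rather than too early.
	float maxDistSq = 0.f;
	for (const Vector3<float>& vertex : someVertices)
	{
		Vector3<float> diff = vertex - collider.myCenter;
		maxDistSq = std::max(maxDistSq, diff.Dot(diff));
	}
	collider.myRadius = std::sqrt(maxDistSq);
	return collider;
}
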
The actual intersection test looks like this, where myFrustum90 is the frustum described above and myOrientationInverse is the view matrix.

bool CFrustum::Intersects(const SFrustumCollider& aCollider)
{
	// Bring the sphere center into view space. The radius is used as-is,
	// which assumes myOrientationInverse (the view matrix) contains no scale.
	CU::Vector4<float> position = CU::Vector4<float>(aCollider.myCenter.x, aCollider.myCenter.y, aCollider.myCenter.z, 1.f) * myOrientationInverse;
	return myFrustum90.Inside({ position.x, position.y, position.z }, aCollider.myRadius);
}
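
For context, the test is driven once per model along these lines (the loop and names here are hypothetical):

for (const CModel& model : myModels)
{
	// Skip drawing anything whose bounding sphere is fully outside.
	if (myFrustum.Intersects(model.GetFrustumCollider()))
	{
		model.Render();
	}
}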

The Inside() function looks like this.

template <typename T>
bool PlaneVolume<T>::Inside(Vector3<T> aPosition, T aRadius) const
{
	// The sphere is outside as soon as it lies completely in front of any plane.
	for (size_t i = 0; i < myPlaneList.size(); ++i)
	{
		if (myPlaneList[i].ClassifySpherePlane(aPosition, aRadius) > 0)
		{
			return false;
		}
	}
	return true;
}

And this is the ClassifySpherePlane() function. (The plane is stored as a Vector4 called myABCD, where ABC is the normal, referenced below as myNormal, and W is the plane's distance term.)

template <typename T>
inline int Plane<T>::ClassifySpherePlane(Vector3<T> aSpherePosition, T aSphereRadius) const
{
	T distance = aSpherePosition.Dot(myNormal) - myABCD.w;

	// completely on the front side
	if (distance >= aSphereRadius)
	{
		return 1;
	}
	// completely on the back side (aka "inside")
	if (distance <= -aSphereRadius)
	{
		return -1;
	}
	// sphere intersects the plane
	return 0;
}
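
Note that aSpherePosition.Dot(myNormal) - myABCD.w is only a true signed distance when myNormal is unit length. With the (angle, 0, -angle) construction above, |myNormal| = angle * sqrt(2), about 1.11, unless the Plane constructor normalizes; every comparison against the radius is then scaled up by roughly 11 percent and the front-side test fires early, which is consistent with models popping away too soon. A defensive variant that normalizes on the fly (assuming myABCD.w was computed as point-dot-normal with the same stored normal) could look like this:

template <typename T>
inline int Plane<T>::ClassifySpherePlane(Vector3<T> aSpherePosition, T aSphereRadius) const
{
	// Dividing by the normal's length turns the plane-equation value into
	// a true signed distance even when myNormal is not unit length.
	const T length = std::sqrt(myNormal.Dot(myNormal));
	const T distance = (aSpherePosition.Dot(myNormal) - myABCD.w) / length;

	if (distance >= aSphereRadius) // completely in front (outside)
	{
		return 1;
	}
	if (distance <= -aSphereRadius) // completely behind (inside)
	{
		return -1;
	}
	return 0; // intersecting the plane
}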

 

Please bear in mind that this code is neither optimized nor well written; I'm just looking to get it working.
The result of this culling is that the models seem to be culled a bit too early, so the culling is visible and the models pop away.
How do I get the culling to work properly?
I have tried different techniques but haven't gotten any of them to work.
If you need more code or explanations, feel free to ask.

Thanks.

 

