  • Similar Content

    • By ucfchuck
I am feeding 16-bit unsigned integer data into a compute shader, and I need to compute a standard deviation.
So I read in a series of samples and push them into float arrays:
float vals1[9], vals2[9], vals3[9], vals4[9];
int x = 0, y = 0;
for (x = 0; x < 3; x++)
{
    for (y = 0; y < 3; y++)
    {
        vals1[3 * x + y] = (float) (asuint(Input1[threadID.xy + int2(x - 1, y - 1)].x));
        vals2[3 * x + y] = (float) (asuint(Input2[threadID.xy + int2(x - 1, y - 1)].x));
        vals3[3 * x + y] = (float) (asuint(Input3[threadID.xy + int2(x - 1, y - 1)].x));
        vals4[3 * x + y] = (float) (asuint(Input4[threadID.xy + int2(x - 1, y - 1)].x));
    }
}
I can send these values out directly and the data is as expected:
Output1[threadID.xy] = (uint) (vals1[4]);
Output2[threadID.xy] = (uint) (vals2[4]);
Output3[threadID.xy] = (uint) (vals3[4]);
Output4[threadID.xy] = (uint) (vals4[4]);
However, if I do anything to that data, it is destroyed.
If I add a
      vals1[4] = vals1[4]/2; 
      or a
vals1[4] = vals1[1] - vals1[4];
the data is gone and everything comes back as 0.
How does one go about converting a uint to a float, performing operations on it, and then converting it back to a rounded uint?
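To make the question concrete, this is the kind of round trip I am after, reduced to one input/output pair (a minimal, untested sketch; it assumes the resources really are declared as uint, so a plain value cast applies rather than an asuint bit reinterpretation):
// Untested sketch: value-convert uint -> float, do float math, round back.
Texture2D<uint>   Input1  : register(t0);
RWTexture2D<uint> Output1 : register(u0);

[numthreads(8, 8, 1)]
void CSMain(uint3 threadID : SV_DispatchThreadID)
{
    float v = (float)Input1[threadID.xy];   // value conversion, not a bit cast
    v = v / 2.0f;                           // arbitrary float math
    Output1[threadID.xy] = (uint)round(v);  // round to nearest, convert back
}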
    • By fs1
I have been trying to understand the ID3DInclude interface and how its Open and Close methods work.
      I would like to add a custom path for the D3DCompile function to search for some of my includes.
I have not found any working example. Could someone show me how to implement these functions? I would like D3DCompile to look in a custom C:\Folder path for some of the include files.
      Thanks
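To make this concrete, here is my untested sketch of how I understand the interface should be implemented (the C:\Folder path is from my example above; the Open/Close signatures are the documented ID3DInclude ones):
#include <d3dcompiler.h>
#include <cstdio>
#include <cstdlib>
#include <string>

// Untested sketch: resolve #include file names against one fixed directory.
class FolderInclude : public ID3DInclude
{
public:
    HRESULT __stdcall Open(D3D_INCLUDE_TYPE /*IncludeType*/, LPCSTR pFileName,
                           LPCVOID /*pParentData*/, LPCVOID* ppData, UINT* pBytes) override
    {
        std::string path = std::string("C:\\Folder\\") + pFileName;
        FILE* file = nullptr;
        if (fopen_s(&file, path.c_str(), "rb") != 0 || !file)
            return E_FAIL;                       // compiler reports the missing include
        fseek(file, 0, SEEK_END);
        long size = ftell(file);
        fseek(file, 0, SEEK_SET);
        void* data = malloc(size);
        fread(data, 1, size, file);
        fclose(file);
        *ppData = data;                          // buffer handed to the compiler
        *pBytes = static_cast<UINT>(size);
        return S_OK;
    }

    HRESULT __stdcall Close(LPCVOID pData) override
    {
        free(const_cast<void*>(pData));          // release the buffer from Open
        return S_OK;
    }
};
An instance of this would then be passed as the pInclude argument of D3DCompile. Is this roughly the right shape?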
    • By stale
I'm continuing to learn more about terrain rendering, and so far I've managed to load in a heightmap and render it as a tessellated wireframe (following Frank Luna's DX11 book). However, I'm getting some really weird behavior where a large section of the wireframe is rendered with a yellow color, even though my pixel shader is hard-coded to output white.

The parts of the mesh that are discolored change as well, as pictured below (the mesh is being clipped by the far plane).

Here is my pixel shader. As mentioned, I simply hard-code it to output white (note the return type must be float4 to match the returned value; declaring it as float implicitly truncates the output):
float4 PS(DOUT pin) : SV_Target
{
    return float4(1.0f, 1.0f, 1.0f, 1.0f);
}
I'm completely lost on what could be causing this, so any help in the right direction would be greatly appreciated. If I can help by providing more information, please let me know.
    • By evelyn4you
      Hello,
I am trying to implement voxel cone tracing in my game engine.
I have read many publications about this, but some crucial parts are still not clear to me.
As a first step I am trying to implement the easiest "poor man's" method:
a. my test scene "Sponza Atrium" is voxelized completely into a static 128^3 voxel grid (a structured buffer contains the albedo)
b. I don't care about "conservative rasterization" and don't use any sparse voxel access structure
c. every voxel has the same color on every side (top, bottom, front, ...)
d. one directional light injects light into the voxels (another structured buffer)
I will try to state what I think is correct (please correct me).
GI lighting of a given vertex, the ideal method:
A. we would shoot many (e.g. 1000) rays into the hemisphere oriented according to the normal of that vertex
B. we would take into account every occluder (which is a lot of work) and sample the color at the hit point
C. according to the angle between the ray and the vertex normal, we would weight the color (cosine), sum up all samples, and divide by the count of rays (written out as a formula below)
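Written as a formula, I believe step C is the Monte Carlo estimate of the irradiance (assuming the rays are distributed uniformly over the hemisphere, so the pdf is 1/2π):

E(x) \approx \frac{2\pi}{N} \sum_{i=1}^{N} L_{in}(x, \omega_i) \cos\theta_i

i.e. summing the cosine-weighted samples and dividing by the ray count, as in C, matches this up to the constant factor 2π (and the albedo/π of a diffuse BRDF).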
      Voxel GI lighting
In principle we want to do the same thing with our voxel structure.
Even if we knew the correct hit points for the vertex, we would still have the task of calculating the weighted sum of many voxels.
Saving time on the weighted summing of voxel colors
To save this time, we build bricks or clusters.
Every 8 neighbouring voxels make a "cluster voxel" of level 1 (this is done recursively for many levels).
The color of a side of a "cluster voxel" is the average of the colors of the four contained voxel sides with the same orientation.

After having done this, we can sample the far-away parts just by sampling the corresponding "cluster voxel" at the corresponding level and get the summed-up color.
In practice this is done by mip-mapping a texture that contains the colors of the voxels, which also places the colors of neighbouring voxels next to each other in the texture.
Cone tracing, how to?
Here my understanding is confused: how is the voxel structure traced efficiently?
I simply cannot understand how the occlusion problem is solved quickly, so that we know which single voxel or "cluster voxel" of which level we have to sample.
Suppose I am in a dark room filled with many boxes of different sizes, and I have a pocket lamp, e.g. with a pyramid-shaped light cone:
- I would see some single voxels, near or far
- I would also see many different kinds of boxes ("clustered voxels") of different sizes which are partly occluded
How do I compute a weighted sum over this lit area? (my attempt at the sampling loop is sketched below)
e.g. if I want to sample a "clustered voxel" at level 4, I have to take into account what percentage of the area of this "clustered voxel" is occluded.
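From the publications, my current understanding of the sampling loop is the following (my own untested sketch; it assumes the voxel colors and opacities have been mip-mapped into a 3D texture, with the world mapped to [0,1]^3 over my 128^3 base grid). If I understand correctly, there is no explicit per-voxel occlusion test at all; the accumulated alpha plays that role:
Texture3D<float4> VoxelTex    : register(t0);  // rgb = radiance, a = opacity
SamplerState      LinearClamp : register(s0);

float4 TraceCone(float3 origin, float3 dir, float coneAngle, float maxDist)
{
    const float voxelSize = 1.0f / 128.0f;
    float3 color = 0;
    float  alpha = 0;                 // occlusion accumulated so far
    float  dist  = voxelSize;         // start one voxel out to avoid self-sampling

    while (dist < maxDist && alpha < 0.99f)
    {
        // The cone footprint grows with distance; pick the mip level whose
        // "cluster voxel" size matches the footprint.
        float diameter = max(voxelSize, 2.0f * tan(coneAngle * 0.5f) * dist);
        float mip      = log2(diameter / voxelSize);

        float4 s = VoxelTex.SampleLevel(LinearClamp, origin + dir * dist, mip);

        // Front-to-back compositing: nearer samples occlude later ones via
        // the (1 - alpha) factor - the "how much is occluded" bookkeeping.
        color += (1.0f - alpha) * s.a * s.rgb;
        alpha += (1.0f - alpha) * s.a;

        dist += diameter * 0.5f;      // step size proportional to footprint
    }
    return float4(color, alpha);
}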
Please be patient with me, I am really trying to understand, but maybe I need some more explanation than others.
best regards, evelyn
    • By Endemoniada

Hi guys, when I do picking followed by ray-plane intersection, the results are all wrong. I am pretty sure my ray-plane intersection is correct, so I'll just show the picking part. Please take a look:

// get projection_matrix
DirectX::XMFLOAT4X4 mat;
DirectX::XMStoreFloat4x4(&mat, projection_matrix);

float2 v;
v.x = (((2.0f * (float)mouse_x) / (float)screen_width) - 1.0f) / mat._11;
v.y = -(((2.0f * (float)mouse_y) / (float)screen_height) - 1.0f) / mat._22;

// get inverse of view_matrix
DirectX::XMMATRIX inv_view = DirectX::XMMatrixInverse(nullptr, view_matrix);
DirectX::XMStoreFloat4x4(&mat, inv_view);

// create ray origin (camera position)
float3 ray_origin;
ray_origin.x = mat._41;
ray_origin.y = mat._42;
ray_origin.z = mat._43;

// create ray direction
float3 ray_dir;
ray_dir.x = v.x * mat._11 + v.y * mat._21 + mat._31;
ray_dir.y = v.x * mat._12 + v.y * mat._22 + mat._32;
ray_dir.z = v.x * mat._13 + v.y * mat._23 + mat._33;
That should give me a ray origin and direction in world space, but when I do the ray-plane intersection the results are all wrong.
If I click on the bottom half of the screen, ray_dir.z becomes negative (more so the lower I click). I don't understand how that can be; shouldn't it always point down the z-axis?
I had this working in the past but I can't find my old code.
      Please help. Thank you.
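As a sanity check, one option is to compare against DirectXMath's own unprojection (an untested sketch; it assumes the viewport covers the whole window, and mouse_x, view_matrix etc. are the same variables as above):
#include <DirectXMath.h>
using namespace DirectX;

// Unproject the near- and far-plane points under the cursor, then form the ray.
XMVECTOR nearPt = XMVector3Unproject(
    XMVectorSet((float)mouse_x, (float)mouse_y, 0.0f, 0.0f),   // z = 0: near plane
    0.0f, 0.0f, (float)screen_width, (float)screen_height, 0.0f, 1.0f,
    projection_matrix, view_matrix, XMMatrixIdentity());
XMVECTOR farPt = XMVector3Unproject(
    XMVectorSet((float)mouse_x, (float)mouse_y, 1.0f, 0.0f),   // z = 1: far plane
    0.0f, 0.0f, (float)screen_width, (float)screen_height, 0.0f, 1.0f,
    projection_matrix, view_matrix, XMMatrixIdentity());
XMVECTOR rayOrigin = nearPt;
XMVECTOR rayDir    = XMVector3Normalize(XMVectorSubtract(farPt, nearPt));
Also worth noting: the hand-rolled ray_dir above is never normalized, so if the intersection t is later read as a distance it will be scaled.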

DX11 Where to set meshes, materials etc?


Recommended Posts

Hi all,

I've arrived at the point where I need to choose 'where' to implement the functions for setting a mesh, material etc. in my 3D engine.

For sure I'll be using a MeshRenderer system/class that will be a member of the Renderer class. The Renderer class will have a DrawMesh function that is forwarded to the MeshRenderer, something like the sketch below.
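A minimal sketch of that forwarding (all names illustrative):

class Mesh;      // API-independent mesh data
class Material;  // API-independent material data

class MeshRenderer
{
public:
	void DrawMesh(const Mesh& pMesh, const Material& pMaterial);  // real draw logic lives here
};

class Renderer
{
public:
	void DrawMesh(const Mesh& pMesh, const Material& pMaterial)
	{
		mMeshRenderer.DrawMesh(pMesh, pMaterial);  // thin forwarding call
	}

private:
	MeshRenderer mMeshRenderer;
};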

The engine works with inheritance:

- IDevice base class -> DX11Device derived class

- IBuffer base class -> DX11Buffer derived class (with a virtual GetAPIPtr function)

etc.

I've thought of 2 approaches:

1. IDevice gets member functions for SetMesh (i.e. buffers), SetMaterial etc., all taking abstracted class objects (API-independent).
The low-level API calls are all done within IDevice (virtual), implemented in DX11Device (SetMesh etc.).
For example: IDevice::SetMesh -> DX11Device::SetMesh, which would do something like mContext->IASetIndexBuffer((ID3D11Buffer*)pBuffer->GetAPIPtr(), DXGI_FORMAT_R32_UINT, 0).
The MeshRenderer is API-independent and makes its calls through IDevice.
 
2. I create an IMeshRenderer with all virtual functions (SetMesh, SetMaterial etc.) and a child class: DX11MeshRenderer.
The derived class does the low-level API calls via the IDevice pointer's GetAPIPtr(). Here too, the MeshRenderer is a member of Renderer (a rough sketch of this option follows after the class listing below).
 
From a design point of view I'm leaning towards option 1.
But the con there is that IDevice will end up with quite a long list of member functions; even without this, it already looks like the class below.
 
Any input is appreciated.
 
class IDevice 
{
public:
	IDevice(CRendererSettings *pSettings, const CR_BUFFER_FORMAT pBackBufferFormat, const CR_BUFFER_FORMAT pDepthBufferFormat);
	virtual ~IDevice();

	template<typename T> IBuffer* CreateBuffer(const CR_BUFFER_TYPE pType, const T* pData, const size_t pNrElements, const bool pDynamic, const bool pGPUWrite)
	{
		return OnCreateBuffer(pType, pData, sizeof(T)*pNrElements, pDynamic, pGPUWrite);
	}

	virtual void* GetAPIDevicePtr()		const = 0;
	virtual void* GetAPIContextPtr()	const = 0;

	virtual bool Startup() = 0;
	virtual bool UpdateFullscreenState() = 0;
	virtual bool Resize() = 0;
	virtual bool CheckAASupport(const CR_BUFFER_FORMAT pFormat, const uint pNrSamples) = 0;

	virtual bool SetRenderTargets(const IRenderTarget **pRT, const uint pNumRt) = 0;
	virtual void ResetRenderTargets() = 0;
	virtual bool Present(const IRenderTarget &pRT) = 0;

	virtual bool ClearRTView(const IRenderTarget &pRT, const float pColor[4]) = 0;
	virtual bool ClearDepthView() = 0;

	virtual bool SetViewports(const CViewport **pViewports, const uint pNumVp) = 0;

	virtual IBuffer* OnCreateBuffer(const CR_BUFFER_TYPE pType, const void* pData, const size_t pBufferSize, const bool pDynamic, const bool pGPUWrite) = 0;

	// Rendering and drawing
	// OR IMeshRenderer???

protected:
	virtual bool Create() = 0;

	virtual bool CreateSwapchain() = 0;
	virtual bool ResizeSwapchain() = 0;
	virtual bool CreateDepthBuffer() = 0;

	CRendererSettings	*mSettings;

	CR_BUFFER_FORMAT	mBackBufferFormat;
	CR_BUFFER_FORMAT	mDepthBufferFormat;
};
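For comparison, a rough sketch of what option 2 could look like (IMaterial and the member names are hypothetical; only IDevice and IBuffer come from the code above):

#include <d3d11.h>

class IMaterial;  // hypothetical API-independent material type

class IMeshRenderer
{
public:
	virtual ~IMeshRenderer() {}
	virtual void SetMesh(IBuffer* pVertexBuffer, IBuffer* pIndexBuffer) = 0;
	virtual void SetMaterial(IMaterial* pMaterial) = 0;
	virtual void DrawMesh() = 0;
};

class DX11MeshRenderer : public IMeshRenderer
{
public:
	explicit DX11MeshRenderer(IDevice* pDevice) : mDevice(pDevice) {}

	void SetMesh(IBuffer* pVertexBuffer, IBuffer* pIndexBuffer) override
	{
		// The low-level calls live here, keeping IDevice free of draw-state members.
		ID3D11DeviceContext* ctx =
			static_cast<ID3D11DeviceContext*>(mDevice->GetAPIContextPtr());
		ctx->IASetIndexBuffer(
			static_cast<ID3D11Buffer*>(pIndexBuffer->GetAPIPtr()),
			DXGI_FORMAT_R32_UINT, 0);
		// ... IASetVertexBuffers etc. for pVertexBuffer
	}

	void SetMaterial(IMaterial* pMaterial) override { /* bind shaders/textures here */ }
	void DrawMesh() override { /* ctx->DrawIndexed(...) */ }

private:
	IDevice* mDevice;
};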
