Deadly_kom

DX11 The native side of HoloLens


Good day, forum members!
    I don't think many people here have tried their hand at developing for HoloLens, but I decided to write anyway. In any case, all work with HoloLens comes down to UWP and DirectX 11. So I wrote a small prototype for easy initialization of primitives and so on in the scene, and started testing on the actual hardware... and the performance tests upset me a lot.
       
    A short digression for those not familiar with the subject: rendering for HoloLens naturally means producing a stereo image, so I use DrawInstanced with pre-built shaders. The shaders are very simple: a transform to world coordinates followed by the view projection, with the color set directly in the shader. On top of that, to route the output to the slices of a Texture2DArray (one per eye) I use a geometry shader. That's basically all.
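
    For reference, here is a stripped-down sketch of that stereo setup (simplified, with placeholder names and an assumed back-buffer format, not the exact prototype code): the render target view covers both slices of the Texture2DArray, and one instanced call draws the mesh twice, once per eye.

    // Render-target view over both slices of the stereo Texture2DArray.
    D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
    rtvDesc.Format        = DXGI_FORMAT_B8G8R8A8_UNORM;   // assumed back-buffer format
    rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2DARRAY;
    rtvDesc.Texture2DArray.MipSlice        = 0;
    rtvDesc.Texture2DArray.FirstArraySlice = 0;
    rtvDesc.Texture2DArray.ArraySize       = 2;           // slice 0 = left eye, slice 1 = right eye
    device->CreateRenderTargetView(stereoTexture, &rtvDesc, &stereoRTV);

    context->OMSetRenderTargets(1, &stereoRTV, depthStencilView);

    // One call renders the mesh for both eyes; the geometry shader copies
    // (SV_InstanceID % 2) into SV_RenderTargetArrayIndex to pick the eye slice.
    context->DrawInstanced(vertexCount, 2, 0, 0);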

   For the test I generated meshes of 10k - 30k triangles and drew them about 10 times each... with the 10k mesh and 10 draw calls I get 30 fps; with the biggest one, about 18 - 20 fps. Sad, I thought, and decided that my hands must grow from the wrong place and my brain just doesn't want to work any more...
   So I went looking for flaws in my own mess... I measured the time for the 10 draw calls - about 800 ticks, which doesn't seem that bad. I slightly optimized the sorting by material, i.e. minimized switching between shaders, but the loop still updates the shader constants (the transform) for every object. FPS grew by 2-3, but I didn't measure the frame time again...
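
    Roughly, that per-object update loop looks like this (a simplified sketch with placeholder names, not the exact prototype code): one Map/Unmap of a dynamic constant buffer plus one draw per object, which is the pattern that starts to eat CPU time as the object count grows.

    for (const Object& obj : visibleObjects)
    {
        // Write this object's transform into a dynamic constant buffer.
        D3D11_MAPPED_SUBRESOURCE mapped = {};
        context->Map(perObjectCB, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
        memcpy(mapped.pData, &obj.worldTransform, sizeof(obj.worldTransform));
        context->Unmap(perObjectCB, 0);

        context->VSSetConstantBuffers(0, 1, &perObjectCB);
        context->DrawInstanced(obj.vertexCount, 2, 0, 0);   // x2 instances for stereo
    }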
   Unity, though, reportedly holds around 1.2 - 1.3 million triangles at 10 - 15 fps on this device... I haven't checked that myself, I'm taking people's word for it.

    So, maybe someone can tell me what might be done crookedly here and what could be patched up...

 

P.S. The depth buffer is 16-bit, and the indices are 16-bit as well. Thanks in advance.


He meant not to use the 'Formatted' font (it's for code anyway).

fixed.

  • Similar Content

    • By esenthel
      Just finished making the latest WebGL demo for my game engine:
      http://esenthel.com/?id=live_demo
      Let me know what you think.
      As of now only Chrome and Firefox can run it;
      Edge, Safari and Opera have some unresolved bugs at the moment.
    • By ramirofages
      Hello everyone, I was following this article:
      https://mattdesl.svbtle.com/drawing-lines-is-hard#screenspace-projected-lines_2
      And I'm trying to understand how the algorithm works. I'm currently testing it in Unity3D to get a grasp of it first, and I'll port it to WebGL later.
      What I'm having problems with is the space in which the calculations take place. First the author computes the position in NDC and takes the aspect ratio of the screen into account. Later, he computes a displacement vector which he calls "offset" and adds it to the position that is still in projective space, with the offset having a W value of 1. What's going on here? Why can you add a vector in NDC to the result of the projection? What's the relation there? Also, what is that value of 1 in W doing? Shouldn't it be 0?
      Supposedly this algorithm makes the thickness of the line independent of the depth, but I'm failing to see why.
      Any help is appreciated. Thanks. (A small sketch of the clip-space scaling idea appears after the last item in this list.)
    • By reders
      Hi, everyone!
      I "finished" building my first game. Obviously Pong.
      It's written in C++ on Visual Studio with SFML.
      Pong.cpp
      What do you think? What should I consider doing to improve the code?
      Thank you very much.
       
      EDIT: added some screenshots and a .zip file of the playable game
       
      Pong.zip


    • By GreenGodDiary
      I'm attempting to implement some basic post-processing in my "engine". I think I've understood the HLSL part of the compute shader, but I'm at a loss as to how to actually get/use its output for rendering to the screen.
      Assume I'm doing something to a UAV in my CS:
      RWTexture2D<float4> InputOutputMap : register(u0);
      I want that texture to essentially "be" the backbuffer.
       
      I'm pretty certain I'm doing something wrong when I create the views (what I think I'm doing is binding the backbuffer as a render target as well as a UAV, and then using it in my CS):
       
      DXGI_SWAP_CHAIN_DESC scd;
      ZeroMemory(&scd, sizeof(DXGI_SWAP_CHAIN_DESC));
      scd.BufferCount       = 1;
      scd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
      scd.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT | DXGI_USAGE_SHADER_INPUT | DXGI_USAGE_UNORDERED_ACCESS;
      scd.OutputWindow      = wndHandle;
      scd.SampleDesc.Count  = 1;
      scd.Windowed          = TRUE;

      HRESULT hr = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, NULL, NULL, NULL,
                                                 D3D11_SDK_VERSION, &scd, &gSwapChain, &gDevice, NULL, &gDeviceContext);

      // get the address of the back buffer
      ID3D11Texture2D* pBackBuffer = nullptr;
      gSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&pBackBuffer);

      // use the back buffer address to create the render target
      gDevice->CreateRenderTargetView(pBackBuffer, NULL, &gBackbufferRTV);

      // set the render target as the back buffer
      CreateDepthStencilBuffer();
      gDeviceContext->OMSetRenderTargets(1, &gBackbufferRTV, depthStencilView);

      // UAV for compute shader
      D3D11_UNORDERED_ACCESS_VIEW_DESC uavd;
      ZeroMemory(&uavd, sizeof(uavd));
      uavd.Format             = DXGI_FORMAT_R8G8B8A8_UNORM;
      uavd.ViewDimension      = D3D11_UAV_DIMENSION_TEXTURE2D;
      uavd.Texture2D.MipSlice = 1;
      gDevice->CreateUnorderedAccessView(pBackBuffer, &uavd, &gUAV);
      pBackBuffer->Release();
      After I render the scene, I dispatch like this:
      gDeviceContext->OMSetRenderTargets(0, NULL, NULL);
      m_vShaders["cs1"]->Bind();
      gDeviceContext->CSSetUnorderedAccessViews(0, 1, &gUAV, 0);
      gDeviceContext->Dispatch(32, 24, 0); // hard coded
      ID3D11UnorderedAccessView* nullview = { nullptr };
      gDeviceContext->CSSetUnorderedAccessViews(0, 1, &nullview, 0);
      gDeviceContext->OMSetRenderTargets(1, &gBackbufferRTV, depthStencilView);
      gSwapChain->Present(0, 0);
      Worth noting: the scene is rendered as usual, but I don't get any results from the CS (a simple Gaussian blur).
      I'm sure it's something fairly basic I'm doing wrong; perhaps my understanding of render targets / views / what have you is just completely wrong and my approach makes no sense. (A sketch of one possible setup appears after the last item in this list.)

      If someone with more experience could point me in the right direction I would really appreciate it!

      On a side note, I'd really like to learn more about this kind of stuff. I can really see the potential of the CS, as well as of rendering to textures and using them for whatever in the engine, so I would love it if you could share some good resources I can read on this!

      Thank you <3
       
      P.S. I excluded the .hlsl since I can't imagine that being the issue, but if you think you need it to help me, just ask.

      P.P.S. As you can see this is my first post; I do have another account, but I can't log in with it because gamedev.net just keeps asking me to accept the terms and then logs me out when I do, over and over.
    • By noodleBowl
      I was wondering if anyone could explain the depth buffer and the depth stencil state comparison function to me, as I'm a little confused.
      I have set up a depth stencil state where DepthFunc is set to D3D11_COMPARISON_LESS, but what am I actually comparing here? What actually gets written to the buffer, the pixel that should show up in front?
      I have these 2 quad faces, a Red Face and a Blue Face. The Blue Face is further away from the viewer, with a Z value of -100.0f, while the Red Face is close to the viewer, with a Z value of 0.0f.
      When DepthFunc is set to D3D11_COMPARISON_LESS, the Red Face shows up in front of the Blue Face, as it should based on the Z values. BUT if I change DepthFunc to D3D11_COMPARISON_LESS_EQUAL, the Blue Face shows in front of the Red Face. That does not make sense to me; I would think that with D3D11_COMPARISON_LESS_EQUAL the Red Face would still show up in front of the Blue Face, since its Z is still closer to the viewer.
      Am I thinking of this comparison function all wrong? (A short depth-stencil sketch appears just below the vertex data.)
      Vertex data just in case
      // Vertex data that makes up the 2 faces
      Vertex verts[] =
      {
          // Red face
          Vertex(Vector4(0.0f,   0.0f,   0.0f), Color(1.0f, 0.0f, 0.0f)),
          Vertex(Vector4(100.0f, 100.0f, 0.0f), Color(1.0f, 0.0f, 0.0f)),
          Vertex(Vector4(100.0f, 0.0f,   0.0f), Color(1.0f, 0.0f, 0.0f)),
          Vertex(Vector4(0.0f,   0.0f,   0.0f), Color(1.0f, 0.0f, 0.0f)),
          Vertex(Vector4(0.0f,   100.0f, 0.0f), Color(1.0f, 0.0f, 0.0f)),
          Vertex(Vector4(100.0f, 100.0f, 0.0f), Color(1.0f, 0.0f, 0.0f)),

          // Blue face
          Vertex(Vector4(0.0f,   0.0f,   -100.0f), Color(0.0f, 0.0f, 1.0f)),
          Vertex(Vector4(100.0f, 100.0f, -100.0f), Color(0.0f, 0.0f, 1.0f)),
          Vertex(Vector4(100.0f, 0.0f,   -100.0f), Color(0.0f, 0.0f, 1.0f)),
          Vertex(Vector4(0.0f,   0.0f,   -100.0f), Color(0.0f, 0.0f, 1.0f)),
          Vertex(Vector4(0.0f,   100.0f, -100.0f), Color(0.0f, 0.0f, 1.0f)),
          Vertex(Vector4(100.0f, 100.0f, -100.0f), Color(0.0f, 0.0f, 1.0f)),
      };
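
Regarding noodleBowl's question above: the depth test compares the incoming fragment's depth (SV_Position.z after the projection and viewport transform, i.e. a value in 0..1, not the raw vertex Z) against the value already stored in the depth buffer, and the fragment passes if DepthFunc evaluates to true; whether a vertex Z of 0.0f or -100.0f ends up as the smaller stored depth depends entirely on the projection matrix, so that is usually the first thing to check. A minimal sketch of such a state, assuming plain device and context pointers:

    // Pass when the incoming depth is less than the stored depth,
    // and write the new depth on pass.
    D3D11_DEPTH_STENCIL_DESC dsDesc = {};
    dsDesc.DepthEnable    = TRUE;
    dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
    dsDesc.DepthFunc      = D3D11_COMPARISON_LESS;

    ID3D11DepthStencilState* dsState = nullptr;
    device->CreateDepthStencilState(&dsDesc, &dsState);
    context->OMSetDepthStencilState(dsState, 0);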
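
On GreenGodDiary's compute-shader question: one common arrangement (sketched below with assumed names such as device, context, backBuffer, swapChain and the back-buffer dimensions; this is not the code from the post) is to let the compute shader work on an intermediate texture created with a UAV bind flag and then copy the result into the backbuffer, which sidesteps the question of whether the swap-chain buffer itself may be bound as a UAV. Note also that Dispatch takes thread-group counts in all three dimensions, so the Z argument has to be at least 1 for any work to run.

    // Intermediate texture the compute shader can read and write through a UAV.
    D3D11_TEXTURE2D_DESC td = {};
    td.Width            = backbufferWidth;
    td.Height           = backbufferHeight;
    td.MipLevels        = 1;
    td.ArraySize        = 1;
    td.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;   // must match the swap-chain format
    td.SampleDesc.Count = 1;
    td.Usage            = D3D11_USAGE_DEFAULT;
    td.BindFlags        = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D*           csTarget = nullptr;
    ID3D11UnorderedAccessView* csUAV    = nullptr;
    device->CreateTexture2D(&td, nullptr, &csTarget);
    device->CreateUnorderedAccessView(csTarget, nullptr, &csUAV);   // default view: mip 0

    // Copy the freshly rendered scene into the CS input/output texture.
    context->CopyResource(csTarget, backBuffer);

    // One thread group per 8x8 pixel tile here; must match [numthreads] in the .hlsl.
    context->CSSetUnorderedAccessViews(0, 1, &csUAV, nullptr);
    context->Dispatch((backbufferWidth + 7) / 8, (backbufferHeight + 7) / 8, 1);

    ID3D11UnorderedAccessView* nullUAV = nullptr;
    context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);

    // Blit the blurred result into the swap-chain buffer and present.
    context->CopyResource(backBuffer, csTarget);
    swapChain->Present(0, 0);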
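
On ramirofages's question about the offset and the W of 1: I can't speak for the exact code in the article, but the usual trick for depth-independent thickness is to express the offset in NDC units and scale it by the clip-space w before adding it, because the perspective divide by w happens afterwards and cancels the scaling, leaving a displacement that is constant on screen regardless of depth. A tiny sketch of that idea, using DirectXMath types as stand-ins for shader vectors:

    #include <DirectXMath.h>
    using namespace DirectX;

    // clipPos  : vertex position already multiplied by the projection matrix (clip space)
    // ndcOffset: desired displacement in NDC units (e.g. thickness / viewport size)
    // After the perspective divide (xy / w) the displacement equals ndcOffset exactly,
    // no matter how far the vertex is from the camera.
    XMFLOAT4 ExtrudeClipSpace(XMFLOAT4 clipPos, XMFLOAT2 ndcOffset)
    {
        clipPos.x += ndcOffset.x * clipPos.w;
        clipPos.y += ndcOffset.y * clipPos.w;
        return clipPos;   // w is left unchanged, so the projection itself is not altered
    }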