
nicolas.bertoa

Member Since 24 Feb 2008

#5302872 [D3D12] Swapchain::present() Glitches

Posted by nicolas.bertoa on 27 July 2016 - 11:13 PM

Yes, I think I am using the fences correctly. To be sure, I replaced my fence mechanism with FlushCommandQueue() (which does not begin rendering the next frame until the current frame has been completely executed by the GPU), and I had the same problem.
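
For reference, this is roughly what I mean by that flush; a minimal sketch with hypothetical names (commandQueue, fence, fenceValue), error handling omitted:

#include <d3d12.h>

// Block the CPU until the GPU has executed everything submitted so far.
void FlushCommandQueue(ID3D12CommandQueue* commandQueue,
                       ID3D12Fence* fence, UINT64& fenceValue)
{
    // Mark all commands submitted up to this point with a new fence value.
    ++fenceValue;
    commandQueue->Signal(fence, fenceValue);

    // If the GPU has not reached that fence point yet, wait on an event.
    if (fence->GetCompletedValue() < fenceValue)
    {
        HANDLE eventHandle = CreateEventEx(nullptr, nullptr, 0, EVENT_ALL_ACCESS);
        fence->SetEventOnCompletion(fenceValue, eventHandle);
        WaitForSingleObject(eventHandle, INFINITE);
        CloseHandle(eventHandle);
    }
}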




#5300479 [D3D12] Multithread Architecture - A First Approach

Posted by nicolas.bertoa on 13 July 2016 - 12:22 AM

Hi community

 

I want to share a new post about a basic multithread architecture for DirectX 12.

 

https://nbertoa.wordpress.com/2016/07/13/directx12-multithread-architecture-a-first-approach/

 

The purpose of the post is to show my progress and, most importantly, to receive comments and suggestions about what I did.

 

Thanks!




#5299602 Is DirectXMath thread safe?

Posted by nicolas.bertoa on 07 July 2016 - 09:04 AM

@Hodgman:

Yes, I share the data once it is properly initialized.

I did several tests yesterday, reading different DirectXMath types from different threads, and there were no problems (crashes, modified data, etc.).


#5299405 Is DirectXMath thread safe?

Posted by nicolas.bertoa on 06 July 2016 - 05:13 PM

Ok. Thanks for the answers.

I think I should pass that matrix by value instead, so each thread has its own copy and we avoid potential problems.
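
For example, a minimal sketch of the pass-by-value idea with hypothetical names: the matrix is kept in the unaligned XMFLOAT4X4 storage type, and each thread loads its own private copy.

#include <DirectXMath.h>
#include <thread>
using namespace DirectX;

void LaunchWorker(const XMFLOAT4X4& viewProjection)
{
    // Capture a copy by value, so no thread reads memory another could write.
    std::thread worker([localCopy = viewProjection]()
    {
        // Load the private copy into SIMD registers for this thread only.
        const XMMATRIX viewProj = XMLoadFloat4x4(&localCopy);
        // ... use viewProj in this thread's transforms ...
    });
    worker.join();
}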


#5299052 [D3D11] Deferred Shading Presentation

Posted by nicolas.bertoa on 04 July 2016 - 03:33 PM

Hi community

 

I want to share a post where I implemented Deferred Shading with DirectX 11. I wrote the post some months ago, but I have now added a video presentation explaining in detail how I solved the problem.

 

https://nbertoa.wordpress.com/2016/01/25/directx11-deferred-shading/

 

 




#5297284 New Post about Gamma Correction

Posted by nicolas.bertoa on 20 June 2016 - 02:16 AM

Hi community

 

I just finished a new post about Gamma Correction.

 

https://nbertoa.wordpress.com/2016/06/20/gamma-correction/

 

 




#5289385 Multithreading exercise - Bouncing particles

Posted by nicolas.bertoa on 30 April 2016 - 12:49 AM

Hi community

 

I want to share a post I wrote about a multithreading exercise.

 

https://nbertoa.wordpress.com/2016/03/10/multithreading-exercises-1-bouncing-particles/

 

 

Hope it is useful :)




#5289383 [D3D11] Vertex Shader vs Instancing vs Geometry Shader

Posted by nicolas.bertoa on 30 April 2016 - 12:42 AM

Hi community

 

I compared 3 techniques for drawing the same geometry at different locations. These are the results:

 

https://nbertoa.wordpress.com/2016/02/02/instancing-vs-geometry-shader-vs-vertex-shader/

 

https://nbertoa.wordpress.com/2016/02/04/instancing-vs-geometry-shader-vs-vertex-shader-round-2/




#5249611 [D3D12] Direct3D 12 Documentation in PDF :)

Posted by nicolas.bertoa on 29 August 2015 - 03:11 PM

Hi community

 

I converted the MSDN DirectX 12 documentation into a PDF. I attached the PDF to this thread and also uploaded it here:

 

www.mediafire.com/view/ezaroryt3eaicnx/DirectX12_Documentation.pdf

 

 

I hope you find it useful





#5157831 [DirectX] Particle Systems - Compute Shader

Posted by nicolas.bertoa on 03 June 2014 - 09:05 AM

Hi Jason

 

I am using 2 ID3D11Buffers, where I store information for all the particles (positions and velocities). The particles live forever; they are created once.

 

Those buffers are bound to the Compute Shader through an Unordered Access View and to the Vertex Shader through a Shader Resource View.

 

I issue a Draw call with the number of particles. SV_VertexID is used in the Vertex Shader to identify which particle to process.

 

Particles are represented as points and expanded into camera-facing quads inside the Geometry Shader.
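
Roughly, the per-frame binding and draw look like this; a sketch with hypothetical names, not the exact code from the demos:

#include <d3d11.h>

// Assumes the two views were created from buffers that have both
// D3D11_BIND_UNORDERED_ACCESS and D3D11_BIND_SHADER_RESOURCE.
void UpdateAndDrawParticles(ID3D11DeviceContext* context, UINT particleCount,
                            ID3D11UnorderedAccessView* particlesUAV,
                            ID3D11ShaderResourceView* particlesSRV)
{
    // Compute pass: update positions/velocities through the UAV
    // (assumes particleCount is a multiple of the 64-thread group size).
    context->CSSetUnorderedAccessViews(0, 1, &particlesUAV, nullptr);
    context->Dispatch(particleCount / 64, 1, 1);

    // Unbind the UAV so the same buffer can be read as an SRV.
    ID3D11UnorderedAccessView* nullUAV = nullptr;
    context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);

    // Render pass: one point per particle, indexed by SV_VertexID in the
    // vertex shader and expanded into a quad in the geometry shader.
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST);
    context->VSSetShaderResources(0, 1, &particlesSRV);
    context->Draw(particleCount, 0);
}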

 

 

I plan to use Append/Consume buffers in the future to create/destroy particles dynamically.

 

 

Thanks!




#5157722 [DirectX] Particle Systems - Compute Shader

Posted by nicolas.bertoa on 02 June 2014 - 10:43 PM

Hi community

 
I want to share some demos of particle systems I have been working on.

I implemented the systems with DirectX 11 and the Compute Shader.

I was working on Visual C++ 2013, Windows 8.1, Phenom II x4 965, 8 GB RAM, Radeon HD 7850. The demos run at 60 FPS (I capped the frame rate at 60).

Please watch the demos in HD 1080p.
 
 
Video 1:
There are 1,000,000 particles in this demo, spread across 5 different areas.
As the demo progresses, you can see how the particles in each area begin to organize.
 
 
 
Video 2:
There are 640,000 particles forming a sphere. The particles move slowly toward the center of the sphere.
 
 
 
Video 3:
There are 640,000 particles organized in 100 layers of 6,400 particles each. The particles move at random speeds and directions within the plane of their layer.
 
 
 
Extras
I tested the demos with 10,000,000 particles and they run at approximately 30 FPS. Alpha blending and z-tests appear to slow them down, because the FPS increases when the particles are spread apart.
 
Future work
Find the bottlenecks and learn how to make this faster. I know that on AMD video cards the number of threads per group should be a multiple of 64 (see the dispatch sketch below).
Implement systems where the particles interact physically with each other or follow more complex AI. I was using steering behaviors in demo 1.
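
A minimal sketch of the dispatch sizing I mean, with hypothetical names:

// The compute shader would declare [numthreads(64, 1, 1)], so each full
// group maps exactly onto one AMD wavefront of 64 threads.
const UINT kThreadsPerGroup = 64;
const UINT groupCount = (particleCount + kThreadsPerGroup - 1) / kThreadsPerGroup; // round up
context->Dispatch(groupCount, 1, 1);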
 
Web



#5040083 DirectCompute - CUDA - OpenCL are they used?

Posted by nicolas.bertoa on 06 March 2013 - 01:04 PM

I also found the following:

 

http://www.tomshardware.com/reviews/directcompute-opencl-gpu-acceleration,3146-5.html

 

"Civilization 5
Civilization 5 uses DirectX 11 and DirectCompute to leverage a variable bit rate texture codec algorithm. The algorithm is so efficient that 2 GB of leader textures compress down to less than 150 MB of disk storage.

DiRT 3
DiRT 3 employs DirectCompute for its high-definition ambient occlusion (HDAO) effect. Unfortunately, there is no equivalent effect in the game based on pixel shading, so we can’t compare the two directly.

Metro 2033
The advanced depth of field (DOF) effect in Metro 2033 needs three rendering passes. Two of these employ pixel shading, while the third uses DirectCompute."




#5031475 [D3D11] Displacement Mapping

Posted by nicolas.bertoa on 12 February 2013 - 11:38 AM

Thanks riuthamus!

 

I was trying to get a good displacement mapping effect using cubes, but I need to improve the tessellation at the edges. Basically, I am using the same algorithm for all the shapes. You can check it in my repository; the project is called DisplacementMapping.




#5031419 [D3D11] Displacement Mapping

Posted by nicolas.bertoa on 12 February 2013 - 08:27 AM

Hi community. I finished a new mini project about Displacement Mapping using DirectX 11.

 

DESCRIPTION:
A normal map is a texture, but instead of storing RGB data at each texel, we store a compressed x-coordinate, y-coordinate, and z-coordinate in the red, green, and blue components, respectively. These coordinates define a normal vector; thus a normal map stores a normal vector at each pixel.

The strategy of normal mapping is to texture our polygons with normal maps. We then have per-pixel normals which capture the fine details of a surface, like bumps, scratches, and crevices. We use these per-pixel normals from the normal map in our lighting calculations instead of the interpolated vertex normal.

Normal mapping only improves the lighting detail; it does not improve the detail of the actual geometry. So in a sense, normal mapping is just a lighting trick.

The idea of displacement mapping is to utilize an additional map, called a heightmap, which describes the bumps and crevices of the surface. In other words, whereas a normal map has three color channels to yield a normal vector (x, y, z) for each pixel, a heightmap has a single color channel to yield a height value h at each pixel. Visually, a heightmap is just a grayscale image (gray because there is only one color channel), where each pixel is interpreted as a height value; it is basically a discrete representation of a 2D scalar field h = f(x, z). When we tessellate the mesh, we sample the heightmap in the domain shader to offset the vertices in the normal vector direction, adding geometric detail to the mesh.

While tessellating geometry adds triangles, it does not add detail on its own. That is, if you subdivide a triangle several times, you just get more triangles that lie on the original triangle's plane. To add detail, you need to offset the tessellated vertices in some way. A heightmap is one input source that can be used to displace the tessellated vertices.
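
As a sketch of that offset: the demo does it in the domain shader in HLSL, but the same math, written here with DirectXMath and a hypothetical scale parameter, is simply:

#include <DirectXMath.h>
using namespace DirectX;

// position: tessellated vertex position; normal: its interpolated normal;
// h: the heightmap sample in [0, 1]; scale: a hypothetical displacement scale.
XMVECTOR DisplaceVertex(FXMVECTOR position, FXMVECTOR normal, float h, float scale)
{
    // Offset the vertex along its normal by the sampled height.
    return XMVectorMultiplyAdd(XMVectorReplicate(h * scale), normal, position);
}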

To generate heightmaps, you could use the NVIDIA Photoshop plugin or CrazyBump, for example.

BIBLIOGRAPHY:
Introduction to 3D Game Programming with DirectX 11
Real-Time Rendering

VIDEO DEMONSTRATION:


SOURCE CODE:
http://code.google.com/p/dx11/




#5031054 [D3D11] Normal Mapping

Posted by nicolas.bertoa on 11 February 2013 - 09:01 AM

Hi community. I finished a new demo about normal mapping using DirectX 11.

DESCRIPTION:
When we apply a brick texture to a cone-shaped column, the specular highlights look unnaturally smooth compared to the bumpiness of the brick texture. This is because the underlying mesh geometry is smooth, and we have merely applied the image of bumpy bricks over the smooth cylindrical surface. However, the lighting calculations are performed based on the mesh geometry (in particular, the interpolated vertex normals) and not the texture image. Thus the lighting is not completely consistent with the texture.

The strategy of normal mapping is to texture our polygons with normal maps. We then have per-pixel normals which capture the fine details of a surface, like bumps, scratches, and crevices. We use these per-pixel normals from the normal map in our lighting calculations instead of the interpolated vertex normal.

A normal map is a texture, but instead of storing RGB data at each texel, we store a compressed x-coordinate, y-coordinate, and z-coordinate in the red, green, and blue components, respectively. These coordinates define a normal vector; thus a normal map stores a normal vector at each pixel.

To generate normal maps we can use an NVIDIA Photoshop plugin or CrazyBump, for example.

The coordinates of the normals in a normal map are relative to the texture-space coordinate system. Consequently, to do lighting calculations, we need to transform the normal from texture space to world space so that the lights and normals are in the same coordinate system. The TBN basis (Tangent, Bitangent, Normal) built at each vertex facilitates the transformation from texture space to world space.
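
As a sketch of that transform, written here with DirectXMath (in the demo this happens in the pixel shader, and the names are hypothetical):

#include <DirectXMath.h>
using namespace DirectX;

// sampledNormal: the RGB normal-map sample with components in [0, 1];
// T, B, N: the interpolated tangent, bitangent, and normal in world space.
XMVECTOR TangentToWorld(FXMVECTOR sampledNormal, FXMVECTOR T, FXMVECTOR B,
                        GXMVECTOR N)
{
    // Decompress from [0, 1] back to [-1, 1]: n = 2 * c - 1.
    const XMVECTOR n = XMVectorSubtract(XMVectorScale(sampledNormal, 2.0f),
                                        XMVectorSplatOne());

    // worldNormal = n.x * T + n.y * B + n.z * N
    XMVECTOR world = XMVectorMultiply(XMVectorSplatX(n), T);
    world = XMVectorMultiplyAdd(XMVectorSplatY(n), B, world);
    world = XMVectorMultiplyAdd(XMVectorSplatZ(n), N, world);
    return XMVector3Normalize(world);
}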

BIBLIOGRAPHY:
Introduction to 3D Game Programming with DirectX 11
Real-Time Rendering

VIDEO DEMONSTRATION:


SOURCE CODE:
http://code.google.com/p/dx11/





