
Nikko_Bertoa

Member Since 24 Feb 2008
Offline Last Active Dec 05 2014 08:18 AM

#5157831 [DirectX] Particle Systems - Compute Shader

Posted by Nikko_Bertoa on 03 June 2014 - 09:05 AM

Hi Jason

 

I am using two ID3D11Buffers to store information for all of the particles (positions and velocities). The particles live forever; they are created once.

 

Those buffers are bound to the compute shader through an Unordered Access View and to the vertex shader through a Shader Resource View.
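
For reference, here is a minimal sketch of how such a buffer can be created with both views. The Particle layout and all names are assumptions for illustration, not the demo's actual code.

#include <d3d11.h>
#include <DirectXMath.h>

// Hypothetical particle layout; the demo may store these differently.
struct Particle
{
    DirectX::XMFLOAT3 position;
    DirectX::XMFLOAT3 velocity;
};

// Create a structured buffer that the compute shader writes through a UAV
// and the vertex shader reads through an SRV.
HRESULT CreateParticleBuffer(ID3D11Device* device, UINT particleCount,
                             ID3D11Buffer** buffer,
                             ID3D11ShaderResourceView** srv,
                             ID3D11UnorderedAccessView** uav)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth           = particleCount * sizeof(Particle);
    desc.Usage               = D3D11_USAGE_DEFAULT;
    desc.BindFlags           = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
    desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    desc.StructureByteStride = sizeof(Particle);

    HRESULT hr = device->CreateBuffer(&desc, nullptr, buffer);
    if (FAILED(hr)) return hr;

    // A null view description works for structured buffers: the runtime
    // derives the element count from the stride.
    hr = device->CreateShaderResourceView(*buffer, nullptr, srv);
    if (FAILED(hr)) return hr;

    return device->CreateUnorderedAccessView(*buffer, nullptr, uav);
}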

 

I issue a Draw call with the number of particles as the vertex count. SV_VertexID is used in the vertex shader to identify which particle to process.
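
The per-frame flow might look roughly like the sketch below; all names are illustrative and the objects are assumed to have been created elsewhere.

#include <d3d11.h>

// Update the particles in the compute shader, then draw them as points.
void UpdateAndDrawParticles(ID3D11DeviceContext* context,
                            ID3D11ComputeShader* updateCS,
                            ID3D11UnorderedAccessView* particleUAV,
                            ID3D11ShaderResourceView* particleSRV,
                            UINT particleCount, UINT groupCount)
{
    // Compute pass: each thread updates one particle through the UAV.
    context->CSSetShader(updateCS, nullptr, 0);
    context->CSSetUnorderedAccessViews(0, 1, &particleUAV, nullptr);
    context->Dispatch(groupCount, 1, 1);

    // Unbind the UAV so the same buffer can be read as an SRV below.
    ID3D11UnorderedAccessView* nullUAV = nullptr;
    context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);

    // Render pass: one point per particle; SV_VertexID picks the particle.
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST);
    context->VSSetShaderResources(0, 1, &particleSRV);
    context->Draw(particleCount, 0);
}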

 

Particles are represented as points and expanded into camera-facing quads inside the geometry shader.
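
A sketch of what that expansion can look like in HLSL, embedded here as a C++ string as it might be handed to D3DCompile. The constant buffer layout and every name are assumptions, not the demo's actual shader.

// Hypothetical geometry shader: expands each point into a camera-facing
// quad of half-size gHalfSize, emitted as a 4-vertex triangle strip.
const char* kBillboardGS = R"hlsl(
cbuffer PerFrame : register(b0)
{
    float4x4 gViewProj;
    float3   gCameraRightW; float gHalfSize;
    float3   gCameraUpW;    float gPad;
};

struct GSIn  { float3 posW : POSITION; };
struct GSOut { float4 posH : SV_Position; float2 uv : TEXCOORD0; };

[maxvertexcount(4)]
void GS(point GSIn input[1], inout TriangleStream<GSOut> stream)
{
    float3 right = gCameraRightW * gHalfSize;
    float3 up    = gCameraUpW    * gHalfSize;
    float3 p     = input[0].posW;

    // Quad corners in triangle-strip order.
    float3 corners[4] = { p - right - up, p - right + up,
                          p + right - up, p + right + up };
    float2 uvs[4] = { float2(0, 1), float2(0, 0), float2(1, 1), float2(1, 0) };

    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        GSOut v;
        v.posH = mul(float4(corners[i], 1.0f), gViewProj);
        v.uv   = uvs[i];
        stream.Append(v);
    }
}
)hlsl";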

 

 

I plan to use append/consume buffers in the future to create/destroy particles dynamically.

 

 

Thanks!




#5157722 [DirectX] Particle Systems - Compute Shader

Posted by Nikko_Bertoa on 02 June 2014 - 10:43 PM

Hi community

 
I want to share some demos of particle systems I have been working on.
 
I implemented these systems with DirectX 11 and compute shaders.
 
I worked with Visual C++ 2013 on Windows 8.1, with a Phenom II X4 965, 8 GB of RAM and a Radeon HD 7850. The demos run at 60 FPS (I capped the frame rate at 60).
 
Please watch the demos in 1080p HD.
 
 
Video 1:
There are 1,000,000 particles in this demo, spread across 5 different areas.
As the demo progresses, you can see how the particles in each area begin to organize themselves.
 
 
 
Video 2:
There are 640,000 particles forming a sphere. The particles move slowly toward the center of the sphere.
 
 
 
Video 3:
There are 640,000 particles organized into 100 layers of 6,400 particles each. Each particle moves with a random speed and direction within the plane of its layer.
 
 
 
Extras
I tested these demos with 10,000,000 particles and they run at approximately 30 FPS. Alpha blending and depth testing appear to be what slows them down, because the frame rate increases when the particles are spread apart.
 
Future work
Find the bottlenecks and learn how to make these demos faster. I know that on AMD video cards the number of threads per group should be a multiple of 64; a sketch of that tuning point is below.
Implement systems where the particles interact physically with each other or follow more complex AI. I used steering behaviors in demo 1.
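
For reference, a minimal C++ sketch of sizing the dispatch around an assumed group size of 256 threads (a multiple of the 64-thread AMD wavefront); the names are illustrative, not the demo's actual code.

#include <d3d11.h>

// Must match the [numthreads(256, 1, 1)] attribute in the compute shader.
const UINT THREADS_PER_GROUP = 256; // a multiple of AMD's 64-thread wavefront

void DispatchParticleUpdate(ID3D11DeviceContext* context, UINT particleCount)
{
    // Round up so every particle is covered by some thread.
    const UINT groupCount = (particleCount + THREADS_PER_GROUP - 1) / THREADS_PER_GROUP;
    context->Dispatch(groupCount, 1, 1);
}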
 
Web



#5040083 DirectCompute - CUDA - OpenCL are they used?

Posted by Nikko_Bertoa on 06 March 2013 - 01:04 PM

I also found the following:

 

http://www.tomshardware.com/reviews/directcompute-opencl-gpu-acceleration,3146-5.html

 

"Civilization 5
Civilization 5 uses DirectX 11 and DirectCompute to leverage a variable bit rate texture codec algorithm. The algorithm is so efficient that 2 GB of leader textures compress down to less than 150 MB of disk storage.

DiRT 3
DiRT 3 employs DirectCompute for its high-definition ambient occlusion (HDAO) effect. Unfortunately, there is no equivalent effect in the game based on pixel shading, so we can’t compare the two directly.

Metro 2033
The advanced depth of field (DOF) effect in Metro 2033 needs three rendering passes. Two of these employ pixel shading, while the third uses DirectCompute."




#5031475 [D3D11] Displacement Mapping

Posted by Nikko_Bertoa on 12 February 2013 - 11:38 AM

Thanks riuthamus!

 

I was trying to get a good displacement mapping effect using cubes, but I need to improve the tessellation at the edges. Basically I am using the same algorithm for all the shapes. You can check it in my repository; the project is called DisplacementMapping.




#5031419 [D3D11] Displacement Mapping

Posted by Nikko_Bertoa on 12 February 2013 - 08:27 AM

Hi community. I finished a new mini project about Displacement Mapping using DirectX 11.

 

DESCRIPTION:
A normal map is a texture, but instead of storing RGB data at each texel,
we store a compressed x-coordinate, y-coordinate and z-coordinate in
the red component, green component, and blue component, respectively.
These coordinates define a normal vector, thus a normal map stores a
normal vector at each pixel.

The strategy of normal mapping is to texture our polygons with normal
maps. We then have per-pixel normals which capture the fine details of a
surface like bumps, scratches and crevices. We then use these per-pixel
normals from the normal map in our lighting calculations, instead of
the interpolated vertex normal.

Normal mapping just improves the lighting detail, but it does not
improve the detail of the actual geometry. So in a sense, normal mapping
is just a lighting trick.

The idea of displacement mapping is to utilize an additional map, called a heightmap, which describes the
bumps and crevices of the surface. In other words, whereas a normal map
has three color channels to yield a normal vector (x, y, z) for each
pixel, the heightmap has a single color channel to yield a height value h
at each pixel. Visually, a heightmap is just a grayscale image (grays
because there is only one color channel), where each pixel is
interpreted as a height value; it is basically a discrete representation
of a 2D scalar field h = f(x, z). When we tessellate the mesh, we
sample the heightmap in the domain shader to offset the vertices in the
normal vector direction to add geometric detail to the mesh.

While tessellating geometry adds triangles, it does not add detail
on its own. That is, if you subdivide a triangle several times, you just
get more triangles that lie on the original triangle's plane. To add
detail, you need to offset the tessellated vertices in some way. A
heightmap is one input source that can be used to displace the
tessellated vertices.
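
A sketch of that domain-shader step, embedded as a C++ string as it might be handed to D3DCompile; the struct layouts and all names are assumptions, not the project's actual shader.

// Hypothetical domain shader: samples the heightmap and offsets each
// tessellated vertex along its interpolated normal.
const char* kDisplacementDS = R"hlsl(
Texture2D    gHeightMap : register(t0);
SamplerState gSamLinear : register(s0);

cbuffer PerObject : register(b0)
{
    float4x4 gViewProj;
    float    gHeightScale;
};

struct PatchTess { float edge[3] : SV_TessFactor; float inside : SV_InsideTessFactor; };
struct HullOut   { float3 posW : POSITION; float3 normalW : NORMAL; float2 uv : TEXCOORD0; };
struct DomainOut { float4 posH : SV_Position; };

[domain("tri")]
DomainOut DS(PatchTess pt, float3 bary : SV_DomainLocation,
             const OutputPatch<HullOut, 3> tri)
{
    // Interpolate the patch attributes with the barycentric coordinates.
    float3 posW    = bary.x * tri[0].posW + bary.y * tri[1].posW + bary.z * tri[2].posW;
    float3 normalW = normalize(bary.x * tri[0].normalW +
                               bary.y * tri[1].normalW +
                               bary.z * tri[2].normalW);
    float2 uv      = bary.x * tri[0].uv + bary.y * tri[1].uv + bary.z * tri[2].uv;

    // Single-channel height sample; SampleLevel is required outside the pixel shader.
    float h = gHeightMap.SampleLevel(gSamLinear, uv, 0.0f).r;
    posW += gHeightScale * h * normalW;

    DomainOut dout;
    dout.posH = mul(float4(posW, 1.0f), gViewProj);
    return dout;
}
)hlsl";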

To generate heightmaps you could use, for example, the NVIDIA Photoshop plugin or CrazyBump.

BIBLIOGRAPHY:
Introduction to 3D Game Programming with DirectX 11
Real-Time Rendering

VIDEO DEMONSTRATION:


SOURCE CODE:
http://code.google.com/p/dx11/




#5031054 [D3D11] Normal Mapping

Posted by Nikko_Bertoa on 11 February 2013 - 09:01 AM

Hi community. I finished a new demo about normal mapping using DirectX 11.

DESCRIPTION:
When we apply a brick texture to a cone-shaped column, the specular
highlights look unnaturally smooth compared to the bumpiness of the
brick texture. This is because the underlying mesh geometry is smooth,
and we have merely applied the image of bumpy bricks over the smooth
cylindrical surface. However, the lighting calculations are performed
based on the mesh geometry (in particular the interpolated vertex
normals), and not the texture image. Thus the lighting is not completely
consistent with the texture.

 

The strategy of normal mapping is to texture our polygons with normal
maps. We then have per-pixel normals which capture the fine details of a
surface like bumps, scratches and crevices. We then use these per-pixel
normals from the normal map in our lighting calculations, instead of
the interpolated vertex normal.

A normal map is a texture, but instead of storing RGB data at each texel,
we store a compressed x-coordinate, y-coordinate and z-coordinate in
the red component, green component, and blue component, respectively.
These coordinates define a normal vector, thus a normal map stores a
normal vector at each pixel.
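
For instance, the usual decompression from a color channel back to a coordinate looks like the sketch below; the function name is mine, not from the demo.

#include <cstdint>

// A coordinate in [-1, 1] is stored in a color channel as an integer
// in [0, 255]; this reverses that mapping.
float DecompressChannel(std::uint8_t c)
{
    return 2.0f * (static_cast<float>(c) / 255.0f) - 1.0f;
}

// Example: a texel of (128, 128, 255) decompresses to roughly (0, 0, 1),
// the "straight up" normal that gives normal maps their blue tint.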

To generate normal maps we can use the NVIDIA Photoshop plugin or CrazyBump.

The coordinates of the normals in a normal map are relative to the
texture space coordinate system. Consequently, to do lighting
calculations, we need to transform the normal from the texture space to
the world space so that the lights and normals are in the same
coordinate system. The TBN basis (Tangent, Bitangent, Normal) built at
each vertex facilitates the transformation from texture space to world
space.
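
A minimal DirectXMath sketch of that change of basis (the function and its names are illustrative): the rows of the matrix are the T, B and N axes, so a tangent-space row vector times the matrix lands in world space.

#include <DirectXMath.h>
using namespace DirectX;

// Transform a tangent-space normal (sampled from the normal map) to world
// space. All input vectors are assumed unit length and in world space.
XMVECTOR TangentToWorld(FXMVECTOR normalT, FXMVECTOR tangentW,
                        FXMVECTOR bitangentW, GXMVECTOR normalW)
{
    XMMATRIX tbn(tangentW, bitangentW, normalW,
                 XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f));
    return XMVector3Normalize(XMVector3TransformNormal(normalT, tbn));
}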

BIBLIOGRAPHY:
Introduction to 3D Game Programming with DirectX 11
Real-Time Rendering

VIDEO DEMONSTRATION:


SOURCE CODE:
http://code.google.com/p/dx11/




#4991819 accessing data from other .cpp files

Posted by Nikko_Bertoa on 19 October 2012 - 10:43 AM

If your vector in game.cpp is

std::vector<someType> someName;

you should write in inputSystem.cpp

extern std::vector<someType> someName;
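
A complete two-file sketch, using a hypothetical element type and name:

// game.cpp -- the one and only definition of the vector.
#include <vector>
std::vector<int> gScores;

// inputSystem.cpp -- declares the same object; the linker resolves it
// to the definition in game.cpp.
#include <vector>
extern std::vector<int> gScores;

void OnKeyPress()
{
    gScores.push_back(1); // uses the vector defined in game.cpp
}

In practice the extern declaration usually goes in a shared header that both .cpp files include, so the two declarations cannot drift apart.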



#4991462 SFML is shift pressed check

Posted by Nikko_Bertoa on 18 October 2012 - 10:09 AM

My previous solution processes the event fired when the user presses the Shift key. With sf::Keyboard::isKeyPressed you can instead check the state of that key in real time, for example while the user keeps it held down.

I can see that you are using 2.0, because sf::Keyboard is not present in 1.6.
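
A minimal SFML 2.0 sketch showing both approaches (the window setup is illustrative):

#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(640, 480), "Shift demo");

    while (window.isOpen())
    {
        // Event-based: fires once per key press, with modifier info attached.
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();
            if (event.type == sf::Event::KeyPressed && event.key.shift)
            {
                // A key was pressed while Shift was held.
            }
        }

        // Real-time: true on every frame while the key is held down.
        if (sf::Keyboard::isKeyPressed(sf::Keyboard::LShift) ||
            sf::Keyboard::isKeyPressed(sf::Keyboard::RShift))
        {
            // Shift is currently down.
        }

        window.clear();
        window.display();
    }
    return 0;
}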


#4991460 SFML is shift pressed check

Posted by Nikko_Bertoa on 18 October 2012 - 10:03 AM

First you need to check if a key was pressed:

Event.Type == sf::Event::KeyPressed

then you can check if Shift was pressed by doing this:

Event.Key.Shift == true


#4991458 SFML is shift pressed check

Posted by Nikko_Bertoa on 18 October 2012 - 09:57 AM

Which version are you using, 1.6 or 2.0?


#4991454 SFML is shift pressed check

Posted by Nikko_Bertoa on 18 October 2012 - 09:52 AM

KeyEvent has an enum Code inside. Try EventList.KeyEvent::Shift


#4929664 Blending Water and Fog Effect

Posted by Nikko_Bertoa on 09 April 2012 - 02:39 PM

Hi GameDev community.

I want to show a blending water effect and a fog effect that I implemented using DirectX 10 and C++.



https://sites.google.com/site/nicolasbertoa/blending-water-and-fog-effect

Thanks!


#4771737 Design Pattern in Game Development

Posted by Nikko_Bertoa on 08 February 2011 - 11:36 PM

Hi community. I am developing a little game framework, and because I do not want to reinvent the wheel, I am looking for the most common design patterns used in game development.

Do you know of any books or articles about this?



