Beginner Questions

Started by
3 comments, last by Dawoodoz 13 years, 7 months ago
I am starting to develop my own mini 3D graphics engine using Direct3D 9, to use for games (C++, Win32 API).

I have read some e-books about it, and I understand how Direct3D works (vertex buffers, index buffers, textures, materials, lights, point sprites [particles], and the camera). I also managed to get a simple application up and running.

But I have some questions...

I saw on the forums that there are some differences between DirectX 9, 10 and 11. What are those differences, in case I want to support future versions of D3D?

Also, about the lighting system: is there an easy and efficient way to use multiple lights (more than 8), cast shadows, do reflections (mirrors), per-pixel lighting, and optionally refractions and an underwater effect (if there are lots of objects in the game)? (I know that fixed-function lighting is per vertex.)
Can this be achieved only through shaders?

Another question: how is this implemented in games, for example?
http://img535.imageshack.us/img535/1079/screenshotdoom201009261.png
When I shoot and the projectile hits the wall, a "bullet" texture appears on it.
How is this implemented efficiently (both as a texture and in 3D), on very large game maps and also when there are a lot of bullet marks?

For a 2D GUI, the e-books I have seen use sprites. Is it efficient if I instead use D3DFVF_XYZRHW + D3DXMatrixOrthoOffCenterLH(0, Width - 1, 0, Height - 1, 1, 1000), flip the Y axis to point down using matrices, and use vertex buffers to store the X,Y positions of the "windows", with the Z position storing the Z order?
For example :
window1 : Z = 5
window2 : Z = 6
window1 is drawn on top of window2
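Something like this is what I have in mind (a plain C++ sketch without D3D headers; the struct layout mimics a D3DFVF_XYZRHW vertex, and the names are made up):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Mimics a pretransformed (D3DFVF_XYZRHW) vertex: x/y in screen
// pixels, z used for draw ordering, rhw normally set to 1.0f.
struct GuiVertex {
    float x, y, z, rhw;
};

struct GuiWindow {
    float x, y, w, h;
    float zOrder;            // smaller z = closer to the viewer
};

// Sort back-to-front so windows with a larger z are drawn first and
// windows with a smaller z end up on top (matching the example:
// window1 with z = 5 is drawn over window2 with z = 6).
inline void SortForDrawing(std::vector<GuiWindow>& windows) {
    std::sort(windows.begin(), windows.end(),
              [](const GuiWindow& a, const GuiWindow& b) {
                  return a.zOrder > b.zOrder;   // far windows first
              });
}
```

(Back-to-front ordering would also be needed anyway if the windows use alpha blending.)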

I will ask more questions if I find something I do not know how to implement.

Small sample source code would be useful, and please try to make the answers easy to understand, I am just beginning graphics programming. [smile]
Quote:Original post by xlad
I saw on the forums that there are some differences between DirectX 9, 10 and 11. What are those differences, in case I want to support future versions of D3D?

There are some major differences between DirectX 9 and DirectX 10. In DirectX 10 you have to write shaders to do even the most basic things, like lighting; even to render a simple box you will have to write a shader, because the fixed-function pipeline was removed. But don't worry, it is not that hard, and it brings a lot of nice new features. DirectX 11 also has some differences; I have never worked with it, but I know it adds things like the Compute Shader.

Quote:Original post by xlad
Also, about the lighting system: is there an easy and efficient way to use multiple lights (more than 8), cast shadows, do reflections (mirrors), per-pixel lighting, and optionally refractions and an underwater effect (if there are lots of objects in the game)? (I know that fixed-function lighting is per vertex.)
Can this be achieved only through shaders?

To use many lights (hundreds) you will have to implement a deferred or light pre-pass renderer (google it), and yes, you will have to use shaders, but lighting will be done per pixel rather than per vertex.
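To give a feel for the idea, here is the per-pixel accumulation at the heart of it, as a plain C++ sketch (not real GPU code; on the GPU this loop runs in a pixel shader over positions read back from a G-buffer, and the attenuation formula here is just one simple choice):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct PointLight {
    Vec3  position;
    float radius;     // light contributes nothing beyond this distance
    float intensity;
};

// Simple distance-based falloff: full intensity at the light,
// fading linearly to zero at the radius.
inline float Attenuate(const PointLight& l, const Vec3& p) {
    float dx = p.x - l.position.x;
    float dy = p.y - l.position.y;
    float dz = p.z - l.position.z;
    float d  = std::sqrt(dx*dx + dy*dy + dz*dz);
    if (d >= l.radius) return 0.0f;
    return l.intensity * (1.0f - d / l.radius);
}

// The core of deferred lighting: for one pixel whose world position
// was read from the G-buffer, sum the contribution of every light.
// Hundreds of lights stay affordable because each light only touches
// the pixels it can actually reach.
inline float ShadePixel(const Vec3& pixelPos,
                        const std::vector<PointLight>& lights) {
    float total = 0.0f;
    for (const PointLight& l : lights)
        total += Attenuate(l, pixelPos);
    return total;
}
```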

Quote:Original post by xlad
Another question: how is this implemented in games, for example?
http://img535.imageshack.us/img535/1079/screenshotdoom201009261.png
When I shoot and the projectile hits the wall, a "bullet" texture appears on it.
How is this implemented efficiently (both as a texture and in 3D), on very large game maps and also when there are a lot of bullet marks?

Google for billboards and ray picking.
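In the bullet-hole case, ray picking boils down to intersecting the shot's ray with the wall's plane to find where to place the mark. A minimal self-contained C++ sketch (plain structs instead of the D3DX types; names are made up):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

inline float Dot(const Vec3& a, const Vec3& b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Intersect a ray (origin + t*dir) with a plane given by a point on
// it and its normal. Returns false if the ray is parallel to the
// plane or the intersection lies behind the ray origin.
inline bool RayPlaneHit(const Vec3& origin, const Vec3& dir,
                        const Vec3& planePoint, const Vec3& planeNormal,
                        Vec3* hit) {
    float denom = Dot(planeNormal, dir);
    if (std::fabs(denom) < 1e-6f) return false;      // parallel to the wall
    Vec3 toPlane = { planePoint.x - origin.x,
                     planePoint.y - origin.y,
                     planePoint.z - origin.z };
    float t = Dot(planeNormal, toPlane) / denom;
    if (t < 0.0f) return false;                      // wall is behind the gun
    *hit = { origin.x + t*dir.x,
             origin.y + t*dir.y,
             origin.z + t*dir.z };
    return true;
}
```

The resulting hit point is where you would spawn the decal.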


The bullet "holes" in the screenshot are made with billboards?
Aren't billboards the 2D images, like the "shells box" in the image?

In the game, if I shoot a wall, the wall changes its texture; the bullet hole is drawn as if it is part of the wall.
Quote:Original post by xlad
The bullet "holes" in the screenshot are made with billboards?
Aren't billboards the 2D images, like the "shells box" in the image?


No, not necessarily. Billboards can be oriented toward the camera,
and thus be orthogonal to the view direction.
They don't have to be, though; they can also be projected cylindrically,
axis-aligned, or facing any direction.

I call this type of billboard a decal. That's what they use in HL and Source.

Quote:Original post by xlad
In the game, if I shoot a wall, the wall changes its texture; the bullet hole is drawn as if it is part of the wall.


Changing the individual textures would be very heavy on memory,
although it's doable -> search for texture atlases and streaming textures.

For a zDoom-style effect (also used in many other places), it's common to just multiply a decal texture with the surface below it to achieve this.
Just like billboarded particles, decals are typically kept in a list and
have a limited lifetime and/or are cycled FIFO-style.
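A minimal sketch of such a decal list in C++ (the cap, the age limit, and all names are made up), recycled oldest-first:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

struct Decal {
    float x, y, z;        // position on the surface
    float age;            // seconds since it was spawned
};

// Keeps at most maxDecals alive; the oldest is dropped to make room,
// and decals past maxAge expire on their own.
class DecalList {
public:
    DecalList(std::size_t maxDecals, float maxAge)
        : maxDecals_(maxDecals), maxAge_(maxAge) {}

    void Add(const Decal& d) {
        if (decals_.size() == maxDecals_)
            decals_.pop_front();          // FIFO: recycle the oldest
        decals_.push_back(d);
    }

    void Update(float dt) {
        for (Decal& d : decals_) d.age += dt;
        while (!decals_.empty() && decals_.front().age > maxAge_)
            decals_.pop_front();          // oldest expires first
    }

    std::size_t Count() const { return decals_.size(); }

private:
    std::deque<Decal> decals_;
    std::size_t maxDecals_;
    float maxAge_;
};
```

A deque works here because decals are only ever added at the back and removed at the front.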
Light:
For point lights, use a fixed-size array of light positions and a variable telling how many of them are in use. Place the data in a global constant buffer. For each pixel, measure the distance to each light. If you combine this with a culling system, so that each object is only given the lights in its area, you can have many lights without shadows, e.g. for gunshots. Casting shadows from more than 10 light sources is demanding no matter what method you use. If your lighting is mostly static, you can use a lightmap and project a single shadow from dynamic objects onto the floor.
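A CPU-side sketch of that culling step in plain C++ (the array size and all names are made up), picking which lights to upload for one object:

```cpp
#include <cassert>
#include <vector>

const int MAX_LIGHTS = 8;   // size of the fixed shader-side array

struct Vec3 { float x, y, z; };

struct PointLight {
    Vec3  position;
    float radius;           // no effect beyond this distance
};

// Fills outLights with up to MAX_LIGHTS lights whose radius reaches
// the object, and returns the count that would be uploaded next to
// the array in the constant buffer.
inline int CullLightsForObject(const Vec3& objectPos,
                               const std::vector<PointLight>& all,
                               PointLight outLights[MAX_LIGHTS]) {
    int count = 0;
    for (const PointLight& l : all) {
        if (count == MAX_LIGHTS) break;
        float dx = objectPos.x - l.position.x;
        float dy = objectPos.y - l.position.y;
        float dz = objectPos.z - l.position.z;
        float distSq = dx*dx + dy*dy + dz*dz;
        if (distSq <= l.radius * l.radius)
            outLights[count++] = l;   // this light can reach the object
    }
    return count;
}
```

(A real engine would test against the object's bounding volume rather than a single point, but the idea is the same.)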

Decals:
1. There are many methods in use, but a fast one is to create a quad of two triangles that is rendered with a depth offset, to avoid overlap artifacts from rounding errors in the depth buffer.
2. If you need larger decals, you can copy the subset of the hit object's triangles that collide with the brush, give it a texture with transparent edges, and clamp in the texture sampler so that the texture is not repeated. Cutting triangles instead of pixels takes too much time when decals are created frequently.
3. The most demanding way is to generate UV coordinates in the shader and supply the decals as a texture atlas. Each decal on the model is described in a global constant buffer, with data about its projection and its area in the texture atlas. Clamping UV coordinates to a subset of the atlas can be done in the pixel shader before the UV is passed to the sampler. This method has the advantage that decals can move around without any geometry having to be generated.
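For method 1, here is a sketch of building the quad's corners from the hit point and the surface normal (plain C++; the reference-axis trick for deriving a tangent is a common one, any vector not parallel to the normal works):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

inline Vec3 Cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

inline Vec3 Normalize(const Vec3& v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Builds the 4 corners of a decal quad lying on the surface, centered
// on the hit point. The quad should then be rendered with a small
// depth offset so it doesn't z-fight with the wall.
inline void BuildDecalQuad(const Vec3& hit, const Vec3& normal,
                           float halfSize, Vec3 corners[4]) {
    // Pick any axis not parallel to the normal to derive a tangent.
    Vec3 ref = (std::fabs(normal.y) < 0.99f) ? Vec3{0.0f, 1.0f, 0.0f}
                                             : Vec3{1.0f, 0.0f, 0.0f};
    Vec3 tangent   = Normalize(Cross(ref, normal));
    Vec3 bitangent = Cross(normal, tangent);
    for (int i = 0; i < 4; ++i) {
        float u = (i == 1 || i == 2) ? halfSize : -halfSize;
        float v = (i >= 2)           ? halfSize : -halfSize;
        corners[i] = { hit.x + u*tangent.x + v*bitangent.x,
                       hit.y + u*tangent.y + v*bitangent.y,
                       hit.z + u*tangent.z + v*bitangent.z };
    }
}
```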

I recommend Shader Model 4 and DirectX 10.0 as the minimum requirement, since everyone will have it by the time your engine's first game is released, and I don't care for detail tessellation since deep bump mapping can look better.

