About lwm

Community Reputation: 2518 Excellent

Personal Information
  • Role: Artificial Intelligence

  1.   Game engines are usually separated into two parts. The simulation, where all the game objects live, and a renderer, which pulls all necessary information from the simulation to draw what the camera sees. The simulation works in "world space" and doesn't even need to know that a renderer exists. A camera is just another object within the simulation. When the renderer wants to draw the world as seen by a camera, it constructs a view matrix for the camera. This matrix is what conceptually moves the world around the camera. But you don't do this yourself. That's the GPU's job, and it is very good at it. You may want to read a bit about world, view and projection matrices.
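To make the "move the world around the camera" idea concrete, here is a minimal CPU-side sketch of building a look-at view matrix. This is not from the original post; `Vec3`, `Mat4` and `lookAt` are helper names invented for the example, and the math follows the common left-handed convention.

```cpp
#include <array>
#include <cmath>

// Minimal 3D vector helpers for the example.
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Row-major 4x4 matrix; transforms world-space points into view space.
using Mat4 = std::array<std::array<float, 4>, 4>;

Mat4 lookAt(Vec3 eye, Vec3 target, Vec3 up) {
    Vec3 f = normalize(sub(target, eye)); // forward axis (+z into the screen)
    Vec3 r = normalize(cross(up, f));     // right axis
    Vec3 u = cross(f, r);                 // corrected up axis
    // The rotation rows are the camera axes; the translation column
    // moves the camera position to the view-space origin.
    return {{
        {r.x, r.y, r.z, -dot(r, eye)},
        {u.x, u.y, u.z, -dot(u, eye)},
        {f.x, f.y, f.z, -dot(f, eye)},
        {0.0f, 0.0f, 0.0f, 1.0f},
    }};
}
```

Multiplying a world-space position by this matrix expresses it relative to the camera, which is exactly the transformation the renderer hands to the GPU.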
  2. One thing I would like to add: Learn how to write unit tests and how to write software that is properly unit testable. This is valuable for two reasons: Firstly, "Test-Driven Development" is one of those buzzwords that look good on a CV. But more importantly, thinking about the testability of the code you write almost automatically nudges you in the direction of the SOLID principles and can be a good place to start thinking about software architecture. From my experience in the "big business" world, having unit tests is the first step towards avoiding spaghetti code.
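As a tiny illustration of the point above: a pure function with no global state is trivial to unit test. The function and its rules are invented for this example, not taken from any real codebase.

```cpp
// Hypothetical game-logic function: pure, no global state,
// so a unit test can exercise it in complete isolation.
int computeDamage(int baseDamage, int armor) {
    int damage = baseDamage - armor;
    return damage > 0 ? damage : 0; // armor can absorb a hit, never heal
}
```

Each test case is then a single assertion, e.g. `computeDamage(10, 3) == 7`, and a framework like a unit-test runner only adds reporting on top of that idea.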
  3. Are you using some framework that we're not aware of? I suspect our definitions of the term "mesh" differ somewhat.

     Each frame does basically the same thing:
     - clear the entire screen
     - for each "thing" you want to draw:
       - set up the pipeline (probably at least a vertex buffer, a vertex shader and a pixel shader)
       - call draw
     - repeat

     If you want to use a different pixel shader for a specific object, you just use that pixel shader when setting up the pipeline for that object. To start with, I suggest setting up the pipeline from scratch for each object. Optimizing for performance comes later.
  4. Each draw call uses the shaders (and other pipeline state) that were set on the device context previously.

         Set Vertex Shader V1
         Set Pixel Shader P1
         Draw Mesh 1 (uses V1 and P1)

         Set Pixel Shader P2
         Draw Mesh 2 (uses V1 and P2)
  5. A modern variation on this would be tiled forward rendering or "Forward+", where you iterate over a list of lights per screen tile instead of per mesh.
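To illustrate the per-tile idea, here is a CPU-side sketch of the light-culling step behind Forward+ (in a real renderer this usually runs in a compute shader, and lights would be culled against view-space frusta rather than screen-space circles). All names are invented for the example.

```cpp
#include <cmath>
#include <vector>

// A light already projected to screen space, with a bounding radius in pixels.
struct ScreenLight { float x, y, radius; };

// For every screen tile, collect the indices of the lights that touch it.
// The per-pixel shading pass then only iterates over its tile's list.
std::vector<std::vector<int>> cullLightsPerTile(
    const std::vector<ScreenLight>& lights,
    int screenWidth, int screenHeight, int tileSize) {
    int tilesX = (screenWidth + tileSize - 1) / tileSize;
    int tilesY = (screenHeight + tileSize - 1) / tileSize;
    std::vector<std::vector<int>> tileLists(tilesX * tilesY);
    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            float minX = float(tx * tileSize), maxX = minX + tileSize;
            float minY = float(ty * tileSize), maxY = minY + tileSize;
            for (int i = 0; i < (int)lights.size(); ++i) {
                // Closest point of the tile rectangle to the light centre.
                float cx = std::fmax(minX, std::fmin(lights[i].x, maxX));
                float cy = std::fmax(minY, std::fmin(lights[i].y, maxY));
                float dx = lights[i].x - cx, dy = lights[i].y - cy;
                if (dx * dx + dy * dy <= lights[i].radius * lights[i].radius)
                    tileLists[ty * tilesX + tx].push_back(i);
            }
        }
    }
    return tileLists;
}
```

The payoff is that a pixel only pays for the handful of lights overlapping its tile instead of every light in the scene.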
  6. Is it C# Territory?

     - Save time by going with C#
     - Use saved time for pushing compute-heavy algorithms to GPU instead

     This is exactly what is happening at my job currently (MRI scanner). Most of the user-facing code is moving from C++ to C#. More and more of the high-performance code is moving from C++ to CUDA and the like.
  7. From DirectX's point of view, the vertex and pixel shaders are completely independent. The only information DX uses to send data from one pipeline stage to the next is the semantics. Whatever your vertex shader writes to the variable with the TEXCOORD0 semantic will be linearly interpolated across the triangle by the rasterizer and sent to the pixel shader input variable with the same semantic.

     Most people [citation needed] will simply reuse the same struct for both the VS output and the PS input, however.
  8. The input and output structs don't have to be identical. The input semantics specify the fields you want to read from the vertex buffers per vertex. Most of the time, there is a field that represents the position of a vertex, but this is not required. The input can also be a subset of the fields in your model's vertices, or even empty if you only need system-generated values.

     The output specifies the fields that are sent to the rasterizer. Here you are actually required to provide a field with the position semantic, so that the rasterizer knows what the triangles you want to draw look like. You can also output additional values from the vertex shader. For example, you might want to calculate the output vertices' texture coordinates from the input vertices' positions.

     Assume that you want to draw a terrain mesh, deform it using a height map and generate a color for each vertex based on its height:

         struct VS_INPUT
         {
             float3 positionModelSpace : POSITION;
             float2 textureCoordinate  : TEXCOORD0;
         };

         struct VS_OUTPUT
         {
             float4 positionProjSpace : SV_POSITION; // D3D10+; use POSITION in D3D9
             float4 color             : COLOR0;
         };

         float4x4 gWorldMatrix;
         float4x4 gViewMatrix;
         float4x4 gProjectionMatrix;

         VS_OUTPUT vs_main(VS_INPUT Input)
         {
             VS_OUTPUT Output;

             // Promote the float3 position to float4 before the matrix multiply.
             float4 positionWorldSpace = mul(float4(Input.positionModelSpace, 1.0f), gWorldMatrix);

             // get_height_from_texture and get_color_from_height are placeholders
             // for your own sampling/color functions.
             float height = get_height_from_texture(Input.textureCoordinate);
             positionWorldSpace.y += height;
             Output.color = get_color_from_height(height);

             float4 positionViewSpace = mul(positionWorldSpace, gViewMatrix);
             Output.positionProjSpace = mul(positionViewSpace, gProjectionMatrix);
             return Output;
         }
  9. 3D Grass and Depth Problem

     You can do this by setting the Depth-Stencil-State to:

         DepthEnable = true
         DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO
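For reference, a sketch of that state in actual D3D11 code. This assumes you already have a valid `device` and immediate `context`; error handling is omitted, and the state object should be created once and reused, not rebuilt every frame.

```cpp
#include <d3d11.h>

// Depth testing stays enabled, but depth writes are turned off.
D3D11_DEPTH_STENCIL_DESC desc = {};
desc.DepthEnable    = TRUE;
desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
desc.DepthFunc      = D3D11_COMPARISON_LESS;

ID3D11DepthStencilState* depthReadOnly = nullptr;
device->CreateDepthStencilState(&desc, &depthReadOnly);
context->OMSetDepthStencilState(depthReadOnly, 0);
```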
  10. 3D Grass and Depth Problem

     Instead of alpha blending, you probably want alpha testing for the grass billboards, so that no depth value is written for the transparent pixels.

     You can do something like this in your pixel shader:

         float4 color = ...  // sampled grass texture ("texture" itself is a reserved word in HLSL)
         if (color.a < 0.5)
             discard;
  11. You can set the default culture for a thread:

          Thread.CurrentThread.CurrentCulture = CultureInfo.InvariantCulture;
  12. Basic constant buffer question

     Yes, the last SetShader call determines which shader will run. Methods like SetConstantBuffer bind things to the pipeline (device context), not to a specific shader.

     You can do this, for example, and both draw calls will use the same constant buffer:

         SetConstantBuffer(0, B1)
         SetShader(S1)
         Draw()
         SetShader(S2)
         Draw()
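The pseudocode above maps onto D3D11 roughly as follows. This is a sketch, not a complete program: `context`, `constantBuffer`, `shaderA`, `shaderB` and `vertexCount` are assumed to exist already.

```cpp
#include <d3d11.h>

// The buffer stays bound to slot 0 of the vertex-shader stage across both draws.
context->VSSetConstantBuffers(0, 1, &constantBuffer);

context->VSSetShader(shaderA, nullptr, 0);
context->Draw(vertexCount, 0);

context->VSSetShader(shaderB, nullptr, 0); // only the shader binding changes
context->Draw(vertexCount, 0);             // still sees the same constant buffer
```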
  13. This is one of the most important points in my experience. Per-frame heap allocations are really not that bad if the newly created objects have short lifetimes and can be cleaned up by a generation 0 collection. The majority of objects should either be created once at load time and kept around "forever", or created for one specific frame and only that frame.
  14. Tessellation happens before the Geometry Shader, so I'm pretty sure you can use the Stream-Out stage to write your tessellated mesh to a buffer and read that back to the CPU.
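For the read-back half of that approach, the usual D3D11 pattern is to copy the stream-out buffer into a staging buffer, which the CPU can map. A sketch, assuming `device`, `context`, an existing stream-out buffer `soBuffer` and its size `byteSize`; error handling omitted:

```cpp
#include <d3d11.h>

// Staging resources are CPU-readable; GPU-visible buffers are not.
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth      = byteSize;
desc.Usage          = D3D11_USAGE_STAGING;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;

ID3D11Buffer* staging = nullptr;
device->CreateBuffer(&desc, nullptr, &staging);
context->CopyResource(staging, soBuffer);

D3D11_MAPPED_SUBRESOURCE mapped = {};
context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
// mapped.pData now points at the tessellated vertex data.
context->Unmap(staging, 0);
```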
  15. Need help for game. C#

      If GDI works for you, that's totally fine as a first step. Just be aware of the limitations, such as the lack of hardware acceleration. Direct2D was designed as a replacement for GDI and is the more modern option.