
lwm

Members
  • Content count: 102
  • Joined
  • Last visited

Community Reputation: 2518 Excellent

About lwm
  • Rank: Member
  1. Game engines are usually separated into two parts: the simulation, where all the game objects live, and the renderer, which pulls all necessary information from the simulation to draw what the camera sees. The simulation works in "world space" and doesn't even need to know that a renderer exists; a camera is just another object within the simulation. When the renderer wants to draw the world as seen by a camera, it constructs a view matrix for that camera. This matrix is what conceptually moves the world around the camera. But you don't do this yourself; that's the GPU's job, and it is very good at it. You may want to read a bit about world, view and projection matrices.
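To make "moving the world around the camera" concrete, here is a minimal, API-agnostic Python sketch (illustrative only; the helper names are made up, and real engines build a full look-at or rotation-plus-translation matrix). For a camera that only translates, the view matrix is simply the inverse of the camera's world transform, i.e. a translation by the negated camera position:

```python
# Minimal sketch: a camera that only translates.
# The view matrix is the inverse of the camera's world matrix,
# so it moves the whole world opposite to the camera.

def translation_matrix(x, y, z):
    """4x4 row-major translation matrix (translation in the last row)."""
    return [
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [x, y, z, 1],
    ]

def view_matrix_for_camera(cam_x, cam_y, cam_z):
    # The inverse of a pure translation is translation by the negated offset.
    return translation_matrix(-cam_x, -cam_y, -cam_z)

def transform_point(m, p):
    """Multiply the point (x, y, z, 1) by a 4x4 row-major matrix."""
    x, y, z = p
    v = [x, y, z, 1.0]
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

# A world-space point at (10, 0, 0) seen by a camera at (4, 0, 0)
# ends up at (6, 0, 0) in view space:
view = view_matrix_for_camera(4, 0, 0)
print(transform_point(view, (10, 0, 0))[:3])  # [6.0, 0.0, 0.0]
```

On real hardware this multiplication happens per vertex in the vertex shader, with the view matrix supplied as a constant.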
  2. One thing I would like to add: learn how to write unit tests and how to write software that is properly unit testable. This is valuable for two reasons. Firstly, "Test-Driven Development" is one of those buzzwords that look good on a CV. But more importantly, thinking about the testability of the code you write almost automatically nudges you in the direction of the SOLID principles and can be a good place to start thinking about software architecture. From my experience in the "big business" world, having unit tests is the first step towards avoiding spaghetti code.
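As an illustration of how testability nudges you toward SOLID (a made-up Python example, not from the post): hiding a concrete dependency behind a parameter is dependency inversion in miniature, and it is exactly what makes the code unit testable.

```python
import datetime

# Hard to test: if greet() read the system clock directly, its result
# would depend on when the test runs. Injecting the clock fixes that.

def greeting(hour):
    """Pure function: trivially unit testable on its own."""
    return "good morning" if hour < 12 else "good afternoon"

class SystemClock:
    """Production dependency."""
    def current_hour(self):
        return datetime.datetime.now().hour

def greet(clock):
    # Depend on "anything with current_hour()", not on the concrete
    # system clock.
    return greeting(clock.current_hour())

class FakeClock:
    """Test double: lets a unit test pin the time deterministically."""
    def __init__(self, hour):
        self.hour = hour
    def current_hour(self):
        return self.hour

# A unit test can now exercise greet() without touching the real clock:
assert greet(FakeClock(9)) == "good morning"
assert greet(FakeClock(15)) == "good afternoon"
```

In production you would call `greet(SystemClock())`; the tests never need the real clock.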
  3. Are you using some framework that we're not aware of? I suspect our definitions of the term "mesh" differ somewhat.

     Each frame does basically the same thing:
     - clear the entire screen
     - for each "thing" you want to draw:
       - set up the pipeline (probably at least a vertex buffer, a vertex shader and a pixel shader)
       - call draw
     - repeat

     If you want to use a different pixel shader for a specific object, you just use that pixel shader when setting up the pipeline for that object. To start with, I suggest setting up the pipeline from scratch for each object. Optimizing for performance comes later.
  4. Each draw call uses the shaders (and other pipeline state) that were set on the device context previously.

     Set Vertex Shader V1
     Set Pixel Shader P1
     Draw Mesh 1 (uses V1 and P1)

     Set Pixel Shader P2
     Draw Mesh 2 (uses V1 and P2)
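The "current state" behaviour can be modelled in a few lines of Python (a toy stand-in for the device context, not the real D3D API): a draw call simply snapshots whatever is bound at that moment.

```python
class DeviceContext:
    """Toy model of an immediate context: draws use the current state."""
    def __init__(self):
        self.vertex_shader = None
        self.pixel_shader = None
        self.draws = []

    def set_vertex_shader(self, vs):
        self.vertex_shader = vs

    def set_pixel_shader(self, ps):
        self.pixel_shader = ps

    def draw(self, mesh):
        # Each draw uses whatever was bound most recently.
        self.draws.append((mesh, self.vertex_shader, self.pixel_shader))

ctx = DeviceContext()
ctx.set_vertex_shader("V1")
ctx.set_pixel_shader("P1")
ctx.draw("Mesh 1")          # uses V1 and P1
ctx.set_pixel_shader("P2")  # V1 stays bound
ctx.draw("Mesh 2")          # uses V1 and P2
print(ctx.draws)
```

Note that nothing "belongs" to a draw call: state persists on the context until something overwrites it.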
  5. A modern variation on this would be tiled forward rendering or "Forward+", where you iterate over a list of lights per screen tile instead of per mesh.
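The per-tile idea can be sketched in Python (a toy CPU version; real Forward+ implementations do this on the GPU in a compute shader, and the tile size and light representation here are arbitrary assumptions):

```python
# Toy sketch of Forward+-style light binning: the screen is divided into
# tiles, and each tile gets the list of lights whose screen-space circle
# of influence overlaps it. Shading then only iterates that short list.

TILE = 16  # tile size in pixels (16x16 is a common choice)

def bin_lights(screen_w, screen_h, lights):
    """lights: list of (x, y, radius) in screen space."""
    tiles_x = (screen_w + TILE - 1) // TILE
    tiles_y = (screen_h + TILE - 1) // TILE
    tiles = [[[] for _ in range(tiles_x)] for _ in range(tiles_y)]
    for index, (x, y, r) in enumerate(lights):
        # Conservative AABB overlap test, clamped to the screen.
        x0 = max(int((x - r) // TILE), 0)
        x1 = min(int((x + r) // TILE), tiles_x - 1)
        y0 = max(int((y - r) // TILE), 0)
        y1 = min(int((y + r) // TILE), tiles_y - 1)
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                tiles[ty][tx].append(index)
    return tiles

# One small light in the top-left corner, one big light covering a lot:
tiles = bin_lights(64, 64, [(8, 8, 4), (32, 32, 40)])
print(tiles[0][0])  # tile (0,0) sees both lights: [0, 1]
print(tiles[3][3])  # a far tile only sees the big light: [1]
```

The payoff is that a pixel in tile (3,3) never even looks at light 0, whereas classic forward rendering would test every light against every mesh.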
  6. - Save time by going with C#
     - Use saved time for pushing compute-heavy algorithms to GPU instead

     This is exactly what is happening at my job currently (MRI scanner). Most of the user-facing code is moving from C++ to C#. More and more of the high-performance code is moving from C++ to CUDA and the likes.
  7. From DirectX's point of view, the vertex and pixel shaders are completely independent. The only information DX uses to send data from one pipeline stage to the next are the semantics. Whatever your vertex shader writes to the variable with the TEXCOORD0 semantic will be linearly interpolated across the triangle by the rasterizer and sent to the pixel shader input variable with the same semantic. However, most people [citation needed] will simply reuse the same struct for both the VS output and the PS input.
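To see what "linearly interpolated across the triangle" means, here is a tiny Python sketch of what the rasterizer does with a vertex-shader output before it reaches the pixel shader (illustrative only; the weights are the pixel's barycentric coordinates, which the hardware computes for you):

```python
# Any interpolated attribute (here a TEXCOORD0-style uv) is blended
# from the three corner values using barycentric weights that sum to 1.

def interpolate(attr_a, attr_b, attr_c, wa, wb, wc):
    """Barycentric interpolation of a per-vertex attribute."""
    return tuple(wa * a + wb * b + wc * c
                 for a, b, c in zip(attr_a, attr_b, attr_c))

# Texture coordinates written by the vertex shader at the three corners:
uv0, uv1, uv2 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)

# A pixel whose barycentric weights are (0.5, 0.25, 0.25):
print(interpolate(uv0, uv1, uv2, 0.5, 0.25, 0.25))  # (0.25, 0.25)
```

The pixel shader then receives `(0.25, 0.25)` in its TEXCOORD0 input without ever knowing which vertex shader produced it; only the semantic has to match.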
  8. The input and output structs don't have to be identical. The input semantics specify the fields you want to read from the vertex buffers per vertex. Most of the time there is a field that represents the position of a vertex, but this is not required. The input can also be a subset of the fields in your model's vertices, or even empty if you only need system-generated values.

     The output specifies the fields that are sent to the rasterizer. Here you are actually required to provide a field with the position semantic, so that the rasterizer knows what the triangles you want to draw look like. You can also output additional values from the vertex shader. For example, you might want to calculate the output vertices' texture coordinates from the input vertices' positions.

     Assume that you want to draw a terrain mesh, deform it using a height map, and generate a color for each vertex based on its height (get_height_from_texture and get_color_from_height stand in for your own functions):

     struct VS_INPUT
     {
         float3 positionModelSpace : POSITION;
         float2 textureCoordinate : TEXCOORD0;
     };

     struct VS_OUTPUT
     {
         float4 positionProjSpace : POSITION;
         float4 color : COLOR0;
     };

     float4x4 gWorldMatrix;
     float4x4 gViewMatrix;
     float4x4 gProjectionMatrix;

     VS_OUTPUT vs_main(VS_INPUT Input)
     {
         VS_OUTPUT Output;

         // Promote the float3 model-space position to float4 for the matrix multiply.
         float4 positionWorldSpace = mul(float4(Input.positionModelSpace, 1.0f), gWorldMatrix);

         float height = get_height_from_texture(Input.textureCoordinate);
         positionWorldSpace.y += height;
         Output.color = get_color_from_height(height);

         float4 positionViewSpace = mul(positionWorldSpace, gViewMatrix);
         Output.positionProjSpace = mul(positionViewSpace, gProjectionMatrix);

         return Output;
     }
  9. You can do this by setting the depth-stencil state to:

     DepthEnable = true
     DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO
  10. Instead of alpha blending, you probably want alpha testing for the grass billboards, so that no depth value is written for the transparent pixels.

      You can do something like this in your pixel shader:

      float4 texColor = ...;
      if (texColor.a < 0.5)
          discard;
  11. You can set the default culture for a thread:

      Thread.CurrentThread.CurrentCulture = CultureInfo.InvariantCulture;
  12. Yes, the last SetShader call will determine which shader will run. Methods like SetConstantBuffer bind things to the pipeline (device context), not to a specific shader.

      You can do this, for example, and both draw calls will use the same constant buffer:

      SetConstantBuffer(0, B1)
      SetShader(S1)
      Draw()
      SetShader(S2)
      Draw()
  13.   This is one of the most important points in my experience. Per-frame heap allocations are really not that bad if the newly created objects have short lifetimes and can be cleaned up by a generation 0 collection. The majority of objects should either be created once at load-time and kept around "forever", or for a specific frame and only this specific frame.
  14. Tessellation happens before the Geometry Shader, so I'm pretty sure you can use the Stream-Out stage to write your tessellated mesh to a buffer and read that back to the CPU.
  15. If GDI works for you, that's totally fine as a first step. Just be aware of the limitations, such as the lack of hardware acceleration. Direct2D was designed as a replacement for GDI and is the more modern option.