jajcek

Members
  • Content count: 36
  • Joined

  • Last visited

Community Reputation: 274 Neutral

About jajcek

  • Rank: Member
  1. I think I will go with the MRT solution, because I will probably need it anyway when doing the soft-edges feature, so it will be good to study this topic a bit. I do have a possibly silly question before I start, though: does MRT work with the back buffer as well? I mean, is it possible to render simultaneously to the back buffer and some other render target, or can you only render simultaneously to two other render targets where neither of them is the main back buffer?
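     For reference, this is the shader side of what I am picturing (an untested sketch; it assumes the back buffer's render target view and the S texture's render target view are bound together with OMSetRenderTargets, and PS_INPUT / shadePixel stand in for the existing shader code):

     struct PS_MRT_OUTPUT
     {
         float4 backBuffer : SV_Target0;   // written to the back buffer RTV bound in slot 0
         float4 sceneCopy  : SV_Target1;   // written to the offscreen S texture RTV bound in slot 1
     };

     PS_MRT_OUTPUT psScene( PS_INPUT input )
     {
         PS_MRT_OUTPUT output;
         float4 color = shadePixel( input );   // hypothetical shading function, stands for the existing lighting code
         output.backBuffer = color;            // the same color goes to both targets
         output.sceneCopy  = color;
         return output;
     }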
  2. Hey,   I want to optimize my water rendering based on this chapter: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter19.html   However, I don't understand what the last step should look like when rendering the final scene. Three approaches came to mind, but I don't know which one is the correct way to go. Could you give me some suggestions, please?

     I
     1. Render everything except the water to the S texture.
     2. Render the water to the same S texture, using (for refractions) the texture you are rendering to (is this even possible?).
     3. Render the texture on a plane orthogonal to the camera.

     II
     1. Render everything except the water to the S texture; the alpha channel marks which parts of the water are visible.
     2. Render the texture on a plane orthogonal to the camera.
     3. Render the water to the main back buffer (in front of the plane), using the mask from the alpha channel to clip what is not visible.

     III
     1. Render everything (without the water) to the S texture and the back buffer at once using MRT.
     2. Render the water directly to the back buffer, using the texture for refractions.

     Thanks!
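     To make approach III concrete, this is roughly the water pixel shader I imagine for its step 2 (untested sketch; sceneTexture is the S texture from step 1, and screenSize / input.waterNormal are assumed to come from a constant buffer and the vertex shader respectively):

     Texture2D sceneTexture : register( t0 );        // the S texture rendered in step 1
     SamplerState linearSampler : register( s0 );

     float4 waterPS( PS_INPUT input ) : SV_Target
     {
         // screen-space UV of this pixel (input.position is SV_POSITION)
         float2 screenUV = input.position.xy / screenSize;
         // offset the lookup with the water surface normal to fake refraction
         float2 refractionUV = screenUV + input.waterNormal.xz * 0.02f;
         float3 refracted = sceneTexture.Sample( linearSampler, refractionUV ).rgb;
         return float4( refracted, 1.0f );
     }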
  3. I wonder how you do the frustum culling with only 2 draw calls. In the shader? It must be inefficient to cull per vertex.
  4. The terrain tiles are not instanced*; every tile has its own vertex buffer, so SV_InstanceID won't help me, but I will play with that and check different approaches.   *I don't even see the point of instancing them, as every tile has different geometry, although the number of vertices is the same.
  5. I really appreciate you taking the time to answer my question and giving the example. The SV_VertexID approach looks very promising.   I actually shouldn't put my data into a texture, as I am generating an infinite terrain, so I think sending a texture to the shader would be heavier than sending an array of Y values (and additionally I would need to sample the texture inside the shader).   I am pre-calculating index buffers and switching between them when necessary to create the LOD effect (unfortunately it has popping artifacts).   But I am still wondering what the performance impact of translating the chunks will be (from my investigation, the translation will be called around 50-100 times). I think the best way will be to use a constant buffer holding an origin position and add it to the final vertex position in the shader, but I would need to update that constant buffer 50-100 times during one frame; or maybe put 100 origins in it at once, but then which origin to use for which chunk... aww! I don't know yet, I will try to find the best way. A rough sketch of the per-chunk constant buffer idea follows below.
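     A minimal sketch of that idea (untested; ChunkOrigin is a small extra cbuffer updated once per chunk draw, and worldViewProjMatrix is assumed to no longer contain the per-chunk translation):

     cbuffer ChunkOrigin : register( b1 )
     {
         float3 chunkOrigin;   // world-space origin of the chunk currently being drawn
         float  pad;
     };

     PS_INPUT main( VS_INPUT input )
     {
         PS_INPUT output;
         float3 worldPos = input.position + chunkOrigin;   // shift the shared grid to this chunk's origin
         output.position = mul( float4( worldPos, 1.0f ), worldViewProjMatrix );
         // ... rest of the vertex shader unchanged
         return output;
     }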
  6. Hello,   I am currently working on terrain generation (such a common problem that you're probably bored already :)). I recently read in a gamedev.net topic (can't find it anymore :/) that it is possible to have one buffer containing the X and Z positions (which is always the same, since the real positioning is done later via translation) and send only the Y values to the shader. I have been wondering about this method for a while, but I can't find any way to merge these buffers in a shader. If I send only Y values to the shader, how can I obtain the X and Z values?   One thing that came to my mind while writing this question was to send the X and Z values through the constant buffer. Is this the correct way?   Thank you for any help.
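     For what it's worth, this is how I currently picture the shader side of the two-buffer idea (untested sketch; it assumes the shared XZ grid is bound in vertex buffer slot 0 and the per-chunk heights in slot 1, with the input layout declaring one element from each slot):

     struct VS_INPUT
     {
         float2 gridXZ : POSITION0;   // from the shared grid buffer in slot 0
         float  height : POSITION1;   // from the per-chunk height buffer in slot 1
     };

     PS_INPUT main( VS_INPUT input )
     {
         PS_INPUT output;
         float3 localPos = float3( input.gridXZ.x, input.height, input.gridXZ.y );
         output.position = mul( float4( localPos, 1.0f ), worldViewProjMatrix );
         // ... color, normal, texcoords as before
         return output;
     }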
  7. Okay, I have solved it. According to http://mathworld.wolfram.com/BinormalVector.html the binormal is cross(t, n), so I just changed the code to:

     // x = cross( b, n );
     b = cross( t, n );
     b = normalize( b );

     and it works.
  8. Hello,   I have recently added normal mapping to my terrain engine, but after diagnosing a bit, it looks like there is some problem with the binormal (it is somehow too sharp in some places). Here are some screenshots:   Full (there are visible artifacts in some places): http://img833.imageshack.us/img833/89/lzwv.png Tangent color: http://img23.imageshack.us/img23/6396/8bs.png Binormal color (compare it with the full screenshot): http://img543.imageshack.us/img543/3219/dyu7.png Normal color: http://img541.imageshack.us/img541/5279/85ss.png   Why does this happen? This is the code responsible for generating the TBN:   - The terrain has shared vertices. - vertexPosition below is a position not transformed by any matrix. - normal is calculated on the CPU and is sent (not transformed by anything) to the VS, which passes it to the PS as is.

     float3 computeNormalWithTBN( float3 vertexPosition, float2 texCoord, float3 normal )
     {
         float3 p_dx = ddx(vertexPosition);
         float3 p_dy = ddy(vertexPosition);
         float2 tc_dx = ddx(texCoord);
         float2 tc_dy = ddy(texCoord);
         float3 t = normalize( tc_dy.y * p_dx - tc_dx.y * p_dy );
         float3 b = normalize( tc_dy.x * p_dx - tc_dx.x * p_dy );
         float3 n = normalize(normal);
         float3 x = cross(n, t);
         t = cross(x, n);
         t = normalize(t);
         x = cross(b, n);
         b = cross(n, x);
         b = normalize(b);
         float4 detail = normalMap.Sample( SampleType, texCoord );
         detail = (detail * 2.0f) - 1.0f;
         detail *= 6.0f;
         return normalize( normal + detail.x * t + detail.y * b );
     }

     main PS function:
     // ...
         input.normal = computeNormalWithTBN( input.rawPosition.xyz, input.tex.xy, input.normal );
         float light = saturate( dot( input.normal, float3( 0, 0.73, -0.69 ) ) );
         float4 color = 0.3f;
         color += light;
         return color;
     }

     Thank you for any hints!
  9. Hey, thanks for the interest.   I think it's in world space, but I will explain how it is calculated so I don't make a stupid mistake.   The normal is calculated on the CPU and is sent to the VS. To calculate the normal for the "5" vertex in this grid:

     1 2 3
     4 5 6
     7 8 9

     I calculate:

     XMFLOAT3 normal;
     XMVECTOR v1 = XMVectorSet(  2.0f, height_value_of_3 - height_value_of_7, -2.0f, 0.0f );
     XMVECTOR v2 = XMVectorSet( -2.0f, height_value_of_1 - height_value_of_9, -2.0f, 0.0f );
     XMStoreFloat3( &normal, XMVector3Normalize( XMVector3Cross( v1, v2 ) ) );
     return normal;

     The VS only passes the normals through to the PS as is.
  10. Hi,   I've got a problem: my normal mapping is view-dependent. It looks like this (sorry for the watermarks): http://www.youtube.com/watch?v=-V_2Pp4kiLM&feature=youtu.be   The normal mapping is calculated in the pixel shader as follows (it is actually taken from http://stackoverflow.com/questions/5255806/how-to-calculate-tangent-and-binormal):   - vertexPosition is the position from the SV_POSITION semantic - vertices are shared between the triangles

     float3 computeNormalWithTBN( float3 vertexPosition, float2 texCoord, float3 normal )
     {
         float3 p_dx = ddx(vertexPosition);
         float3 p_dy = ddy(vertexPosition);
         float2 tc_dx = ddx(texCoord);
         float2 tc_dy = ddy(texCoord);
         float3 t = normalize( tc_dy.y * p_dx - tc_dx.y * p_dy );
         float3 b = normalize( tc_dy.x * p_dx - tc_dx.x * p_dy );
         float3 n = normalize(normal);
         float3 x = cross(n, t);
         t = cross(x, n);
         t = normalize(t);
         x = cross(b, n);
         b = cross(n, x);
         b = normalize(b);
         float4 detail = normalMap.Sample( SampleType, texCoord );
         detail = (detail * 2.0f) - 1.0f;
         detail *= 6.0f;
         return normalize( normal + detail.x * t + detail.y * b );
     }

     // in main function
     // ...
         input.normal = computeNormalWithTBN( input.position.xyz, input.tex.xy, input.normal );
         float light = saturate( dot( input.normal, float3( 0, 0.73, -0.69 ) ) );
         float4 color = 0.3f;
         color += light;
         return color;
     }

     Why does this happen?   Thanks for help.
  11. I have changed the input layout as you said:

     { "POSITION",  0U, DXGI_FORMAT_R32G32B32_FLOAT,    0U, 0U,                           D3D11_INPUT_PER_VERTEX_DATA, 0U },
     { "COLOR",     0U, DXGI_FORMAT_R32G32B32A32_FLOAT, 0U, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0U },
     { "NORMAL",    0U, DXGI_FORMAT_R32G32B32_FLOAT,    0U, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0U },
     { "TEXCOORD",  0U, DXGI_FORMAT_R32G32_FLOAT,       0U, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0U },
     { "TESTCOORD", 0U, DXGI_FORMAT_R32G32_FLOAT,       0U, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0U }

     and my vertex shader:

     cbuffer WVP
     {
         matrix worldViewProjMatrix;
     };

     struct VS_INPUT
     {
         float3 position : POSITION;
         float4 color    : COLOR;
         float3 normal   : NORMAL;
         float2 tex      : TEXCOORD0;
         float  t[2]     : TESTCOORD0;
     };

     struct PS_INPUT
     {
         float4 position : SV_POSITION;
         float4 color    : COLOR;
         float3 normal   : NORMAL;
         float3 tex      : TEXCOORD0;
     };

     PS_INPUT main( VS_INPUT input )
     {
         PS_INPUT output;
         output.position = mul( float4( input.position, 1.0f ), worldViewProjMatrix );
         output.color    = input.color;
         output.normal   = input.normal;
         output.tex      = float3( input.tex.x, input.tex.y, input.position.y / 50.0f );
         return output;
     }

     but I still get the same effect with the same error.   I don't know where to look; the value is not even used in the shader, I am just trying to pass something there for testing.   To sum up: when I change float t[2] : TESTCOORD0; to float2 t : TESTCOORD0; in the shader, and float t[2]; to DirectX::XMFLOAT2 t; in the vertex structure, it works. But I'd like it to be an array.
  12. Hello,   I'm looking for some flexibility in sending data for each vertex. In my situation I'd like to pass an array with every vertex to specify which texture it should use (the array won't be too big, like 5, 6, maybe 7 values; I cannot send only one value with a texture id because I need blending between them), like:

     [1.0, 0.0, 0.0 ...] - first texture
     [0.0, 1.0, 0.0 ...] - second texture

     and so on.   Currently I am packing these values into some TEXCOORD etc., but it is not flexible, because when I want to add the next texture I need to change the vertex structure and try to pack the value somewhere in the shader.   So, short question: is it possible to send an array with each vertex in DirectX/HLSL?   EDIT:   I have this structure (the variable t is unused, but it works with it):

     struct TerrainVertex
     {
         DirectX::XMFLOAT3 position;
         DirectX::XMFLOAT4 color;
         DirectX::XMFLOAT3 normals;
         DirectX::XMFLOAT2 texture;
         DirectX::XMFLOAT2 t;
     };

     and changed it to (for testing):

     struct TerrainVertex
     {
         DirectX::XMFLOAT3 position;
         DirectX::XMFLOAT4 color;
         DirectX::XMFLOAT3 normals;
         DirectX::XMFLOAT2 texture;
         float t[2];
     };

     but my whole terrain disappears and I get random artifacts on the screen (plus "Microsoft C++ exception: _com_error at memory location 0x002EF170.")

     Input layout:

     D3D11_INPUT_ELEMENT_DESC inputLayout[] =
     {
         { "POSITION", 0U, DXGI_FORMAT_R32G32B32_FLOAT,    0U, 0U,                           D3D11_INPUT_PER_VERTEX_DATA, 0U },
         { "COLOR",    0U, DXGI_FORMAT_R32G32B32A32_FLOAT, 0U, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0U },
         { "NORMAL",   0U, DXGI_FORMAT_R32G32B32_FLOAT,    0U, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0U },
         { "TEXCOORD", 0U, DXGI_FORMAT_R32G32_FLOAT,       0U, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0U },
         { "TEXCOORD", 1U, DXGI_FORMAT_R32G32_FLOAT,       0U, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0U }
     };

     Thanks for help!
  13. It's hard to do without such interpolation, since my terrain is infinite and automatically generated using Perlin noise.   I actually have no idea how to use the texture array to interpolate between the slices ([0] - [1] - [2] - ... - [N] and so on) without sending N weights with every vertex to tell the shader which texture it should sample.   Could you help me with that, please?   // edit   Is it possible to give the texture index a float value and get blending that way? E.g.:

     Texture2DArray myTex;
     myTex.Sample( sampler, float3( u, v, 0.5f ) );
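     In case the slice index is not filtered automatically (I am not sure it is), I suppose a manual fallback would look roughly like this (untested sketch): sample the two nearest slices and blend by the fractional part:

     float4 sampleBlended( Texture2DArray texArray, SamplerState samp, float2 uv, float slice )
     {
         float  sliceLow  = floor( slice );
         float  sliceHigh = sliceLow + 1.0f;
         float  blend     = slice - sliceLow;    // frac( slice )
         float4 colorLow  = texArray.Sample( samp, float3( uv, sliceLow ) );
         float4 colorHigh = texArray.Sample( samp, float3( uv, sliceHigh ) );
         return lerp( colorLow, colorHigh, blend );
     }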
  14. What about interpolating between them? With a 3D texture I could use simple weights like:

     1.0 - rock
     0.5 - grass
     0.0 - sand

     and 0.25 would be a mix of grass and sand.
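     In shader terms I picture it roughly like this (untested sketch; materialVolume is a hypothetical Texture3D whose slices along w are sand, grass and rock, so the filtering along w does the mixing):

     Texture3D materialVolume : register( t1 );
     SamplerState linearSampler : register( s0 );

     float4 sampleMaterial( float2 uv, float weight )
     {
         // weight = 0.0 samples sand, 0.5 grass, 1.0 rock;
         // values in between blend the two neighbouring slices
         return materialVolume.Sample( linearSampler, float3( uv, weight ) );
     }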
  15. @GameCreator By the DDS plugin for Photoshop I meant the one you are talking about.   @Nik02 Is that so? I thought it was a bug, not a feature. But searching a bit more, I found this: http://msdn.microsoft.com/en-us/library/windows/desktop/bb205579(v=vs.85).aspx   I wonder how such a texture will look on a mesh. This approach of interpolating mipmaps is quite dangerous, I think; some unexpected textures might become visible when viewed from farther away.