About a2ps

Community Reputation: 139 Neutral

  1. Wow, I can't believe I fell for that. Well, thanks a lot for the help!
  2. Hello. I'm trying to make a simple effect on a fullscreen triangle (the triangle is generated in the vertex shader using SV_VertexID), and in the pixel shader I'm trying to color the triangle depending on its texture coords (also generated in the vertex shader). When I compare uv.x or uv.y against a magic number (like 0.3333, for example) it works just fine, but when I use generic expressions (like 1 / 3 instead of 0.3333) it doesn't work. This is a simple example that shows the problem:
  [CODE]
  struct VSOutput
  {
      float4 position : SV_POSITION; // screen space
      float2 uv : TEXCOORD;          // [0,1] coords
  };

  struct PSOutput
  {
      float4 diffuse : SV_TARGET;
  };

  PSOutput PS(VSOutput input)
  {
      PSOutput output = (PSOutput)0;
      float oneThird = 1 / 3;
      float twoThird = 2 / 3;
      float x = input.uv.x;
      if(x < oneThird)
          output.diffuse = float4(1.0, 0.0, 0.0, 1.0);
      else if(x > twoThird)
          output.diffuse = float4(0.0, 0.0, 1.0, 1.0);
      else
          output.diffuse = float4(0.0, 1.0, 0.0, 1.0);
      return output;
  }
  [/CODE]
  This outputs blue for the entire fullscreen triangle (meaning that only the "else if(x > twoThird)" test passes). BUT if I change "x < oneThird" and "x > twoThird" to "x < 0.3333" and "x > 0.6666", it outputs 3 color stripes and everything works great. Conclusion: using 0.3333 instead of 1 / 3 and 0.6666 instead of 2 / 3 works great; the other way around it doesn't. Is there something I'm missing? Thanks in advance.
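  For what it's worth, the symptom is consistent with integer division: HLSL follows C-family rules here, so 1 / 3 truncates to 0 before the value is ever converted to float. A minimal C++ sketch of the same arithmetic (an illustration only, not shader code):

  ```cpp
  #include <cassert>

  int main() {
      // Both operands of 1 / 3 are ints, so the division truncates to 0
      // before the conversion to float happens.
      float oneThird = 1 / 3;        // 0.0f
      float twoThird = 2 / 3;        // 0.0f
      // Making either operand floating-point yields the intended value.
      float oneThirdF = 1.0f / 3.0f;
      assert(oneThird == 0.0f && twoThird == 0.0f);
      assert(oneThirdF > 0.333f && oneThirdF < 0.334f);
      return 0;
  }
  ```

  With oneThird and twoThird both 0, every fragment with x > 0 takes the x > twoThird branch, which would explain the all-blue triangle; writing 1.0 / 3.0 in the shader should restore the three stripes.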
  3. Thanks for the help, Tsus. Yes, the way I'm setting up my mesh, the normals already have the w component at 0.0f. I took your advice and rendered the G-buffer into the main render target (the window) so I could check the results. The reason I didn't do that before was that I thought PIX showed the "graphics pixel history" feature for all buffers, but I was wrong. The moment I rendered to the main render target I could immediately see that the draw call gets called but nothing shows, so the transformations had to be wrong. I hard-coded some matrices to check, and at first it didn't work, but after I applied a transpose to them, it worked! The only problem now is that of the 4 spheres I draw, the first 3 stay at the origin (as if no transformation were applied) and only the 4th sphere moves, probably some bug in my code that I'll have to find. Sorry I haven't replied earlier, but my work doesn't leave me much time for my personal projects (Java EE for a living is NOT fun). PS: as soon as I finished writing this post, I managed to solve the "only one sphere moves" problem; it was just a bug due to ctrl+c ctrl+v of code. Now everything works fine! Now off to build my lighting buffer and pass. Once again, thanks so much for the help [img]http://public.gamedev.net//public/style_emoticons/default/smile.png[/img]
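  The transpose fix lines up with how the two sides store matrices: DirectXMath builds them row-major on the CPU, while HLSL constant buffers default to column-major packing, so matrices are typically transposed before upload (or declared row_major in the shader). A small C++ sketch of the layout difference, using plain arrays rather than the real XMMATRIX type:

  ```cpp
  #include <cassert>

  // Transpose a 4x4 matrix stored as 16 contiguous floats.
  void transpose4x4(const float in[16], float out[16]) {
      for (int r = 0; r < 4; ++r)
          for (int c = 0; c < 4; ++c)
              out[c * 4 + r] = in[r * 4 + c];
  }

  int main() {
      // A translation matrix as DirectXMath lays it out (row-major):
      // the translation (5, 6, 7) lives in the last row.
      float m[16] = {
          1, 0, 0, 0,
          0, 1, 0, 0,
          0, 0, 1, 0,
          5, 6, 7, 1
      };
      float t[16];
      transpose4x4(m, t);
      // After the transpose the translation sits in the last column,
      // which is what column-major cbuffer packing expects.
      assert(t[3] == 5 && t[7] == 6 && t[11] == 7);
      return 0;
  }
  ```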
  4. Hello. In the process of learning Direct3D 11 I decided to implement a small framework where it's easy to switch render type (light pre-pass, deferred, forward, etc.), and now I'm using that framework to implement a light pre-pass renderer. I'm still at the stage of rendering normals and depth into the G-buffer. The problem: the G-buffer clears fine (grey in the normal buffer, white in the depth buffer), BUT when I try to render 4 spheres (just for testing) into the G-buffer, nothing changes; it just stays grey for normals and white for depth. At least that's what PIX shows (I'm not rendering to the window just yet, just using PIX to see if it works). It could be wrong transformations, wrong shaders, wrong constant buffers... I've already lost 3 days on this and it's becoming really frustrating, especially because I thought I was starting to understand how Direct3D works. I'll try to post every piece of relevant code.

  My clear-G-buffer shader:
  [source lang="cpp"]
  // simple vertex and pixel shader that clears the NormalDepth buffer to default values

  ////////// Vertex Shader //////////
  struct VSInput
  {
      float4 Position : SV_POSITION0;
  };

  struct VSOutput
  {
      float4 Position : SV_POSITION0;
  };

  VSOutput VS(VSInput input)
  {
      VSOutput output = (VSOutput)0;
      output.Position = input.Position;
      return output;
  }

  ////////// Pixel Shader //////////
  struct PSOutput
  {
      float4 Normal : SV_TARGET0;
      float4 Depth : SV_TARGET1;
  };

  PSOutput PS(VSOutput input)
  {
      PSOutput output = (PSOutput)0;
      // set normals (for the normal buffer) to 0.5f (0.5f remapped into [-1, 1] is 0.0f, a good default value)
      output.Normal = float4(0.5f, 0.5f, 0.5f, 1.0f);
      // set depth (for the depth buffer) to white (maximum depth)
      output.Depth = 1.0f;
      return output;
  }
  [/source]

  My shader that renders object normals and depth into the G-buffer:
  [source lang="cpp"]
  // simple vertex and pixel shader that fills the NormalDepth buffer with the transformed geometry

  ////////// Vertex Shader //////////
  cbuffer Parameters : register(b0)
  {
      matrix WorldMatrix;
      matrix ViewMatrix;
      matrix ProjectionMatrix;
  };

  struct VSInput
  {
      float4 Position : SV_POSITION0;
      float4 Normal : NORMAL0;
  };

  struct VSOutput
  {
      float4 Position : SV_POSITION0;
      float3 Normal : TEXCOORD0;
      float2 Depth : TEXCOORD1;
  };

  VSOutput VS(VSInput input)
  {
      VSOutput output = (VSOutput)0;
      // apply position transformations (into screen space)
      output.Position = mul(input.Position, WorldMatrix);
      output.Position = mul(output.Position, ViewMatrix);
      output.Position = mul(output.Position, ProjectionMatrix);
      // apply normal transformations (into world space)
      output.Normal = mul(WorldMatrix, input.Normal);
      // pass on the depth position
      output.Depth.x = input.Position.z;
      output.Depth.y = input.Position.w;
      return output;
  }

  ////////// Pixel Shader //////////
  struct PSOutput
  {
      float4 Normal : SV_TARGET0;
      float4 Depth : SV_TARGET1;
  };

  PSOutput PS(VSOutput input)
  {
      PSOutput output = (PSOutput)0;
      // remap the normal from [-1, 1] to [0, 1]
      output.Normal.rgb = 0.5f * (normalize(input.Normal) + 1.0f);
      // output.Normal = ((input.Normal + 1.0f) / 2.0f);
      // just pass the depth values
      output.Depth = input.Depth.x / input.Depth.y;
      return output;
  }
  [/source]

  Calculation of the view matrix:
  [source lang="cpp"]
  DirectX::XMMATRIX D3DCamera::GetViewMatrix()
  {
      // create position vector
      auto positionVector = DirectX::XMVectorSet(positionX, positionY, positionZ, 0.0f);
      // default lookAt is along the Z axis
      auto lookAtVector = DirectX::XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f);
      // default up is along the Y axis
      auto upVector = DirectX::XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
      // create the rotation matrix, roll = Z, pitch = X, yaw = Y
      auto rotationMatrix = DirectX::XMMatrixRotationRollPitchYaw(rotationX, rotationY, rotationZ);
      // transform the lookAt and up vectors using the rotation matrix
      lookAtVector = DirectX::XMVector3TransformCoord(lookAtVector, rotationMatrix);
      upVector = DirectX::XMVector3TransformCoord(upVector, rotationMatrix);
      // update the lookAt vector from the new position
      lookAtVector = DirectX::XMVectorAdd(positionVector, lookAtVector);
      // finally create the view matrix using the updated vectors
      return DirectX::XMMatrixLookAtLH(positionVector, lookAtVector, upVector);
  }
  [/source]

  Calculation of the world matrix for each object:
  [source lang="cpp"]
  virtual DirectX::XMMATRIX GetWorldMatrix()
  {
      // TODO: rotate around a point; for now it only rotates around itself
      auto scaling = DirectX::XMMatrixScaling(scaleX, scaleY, scaleZ);
      auto rotation = DirectX::XMMatrixRotationRollPitchYaw(rotationX, rotationY, rotationZ);
      auto translation = DirectX::XMMatrixTranslation(positionX, positionY, positionZ);
      auto scale_rotation = DirectX::XMMatrixMultiply(scaling, rotation);
      return DirectX::XMMatrixMultiply(scale_rotation, translation);
  }
  [/source]

  Now setting the shader parameters and calling the draws:
  [source lang="cpp"]
  void D3DMaterialFillNormalDepth::SetParameters(const DirectX::XMMATRIX& worldMatrix, const DirectX::XMMATRIX& viewMatrix, const DirectX::XMMATRIX& projectionMatrix)
  {
      cbParameters.WorldMatrix = worldMatrix;
      cbParameters.ViewMatrix = viewMatrix;
      cbParameters.ProjectionMatrix = projectionMatrix;
      d3d->DeviceContext()->UpdateSubresource(cbBuffer, 0, nullptr, &cbParameters, 0, 0);
  }

  // then for each object:
  d3d->DeviceContext()->OMSetRenderTargets(renderTargetViews.size(), &renderTargetViews[0], d3d->DepthStencilView());
  d3d->DeviceContext()->IASetInputLayout(pInputLayout);
  d3d->DeviceContext()->VSSetShader(pVertexShader, nullptr, 0);
  d3d->DeviceContext()->PSSetShader(pPixelShader, nullptr, 0);
  d3d->DeviceContext()->VSSetConstantBuffers(0, 1, &cbBuffer);
  d3d->DeviceContext()->PSSetConstantBuffers(0, 1, &cbBuffer);

  // activate this geometry on the device context
  unsigned int stride = sizeof(VertexType);
  unsigned int offset = 0;
  d3d->DeviceContext()->IASetVertexBuffers(0, 1, &pVertexBuffer, &stride, &offset);
  d3d->DeviceContext()->IASetIndexBuffer(pIndexBuffer, DXGI_FORMAT_R16_UINT, 0);
  d3d->DeviceContext()->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
  d3d->DeviceContext()->DrawIndexed(indices.size(), 0, 0);
  [/source]

  PIX does say it draws around 1200 indices for each sphere, so I assume things are kind of working, but the resulting buffer is just grey for normals and white for depth, exactly as it is after clearing. Sorry for the wall of code; I'm trying to post every piece I think is relevant. Feel free to ask for more if needed, and thanks in advance.
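  As a sanity check on the encode step in the pixel shader above, the [-1, 1] to [0, 1] normal remap can be verified in plain C++ (an illustration of the arithmetic only, not the HLSL itself):

  ```cpp
  #include <cassert>

  // Remap a unit-normal component from [-1, 1] into the [0, 1] range a
  // color render target can store, and the inverse decode step a
  // lighting pass would apply.
  float encode(float n) { return 0.5f * (n + 1.0f); }
  float decode(float c) { return 2.0f * c - 1.0f; }

  int main() {
      assert(encode(-1.0f) == 0.0f);
      assert(encode(0.0f) == 0.5f);   // matches the 0.5f clear value
      assert(encode(1.0f) == 1.0f);
      // round-trips exactly for values representable in float
      assert(decode(encode(0.25f)) == 0.25f);
      return 0;
  }
  ```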
  5. [quote name='kauna' timestamp='1324502838' post='4896313'] Let me take a guess : you haven't bound your constant buffer to pixel shader resources, only to the vertex shader resources. You'll need to call PSSetConstantBuffers too on the same constant buffer. Best regards! [Edit] You should enable Direct3D debug output which would help you track these kind of issues [/quote] That was it, that solved it! I had debug turned on in the DX control panel, but the "filter info" checkbox was checked, so it didn't output any info (corrected now). Thank you so much for your help!
  6. [quote name='BornToCode' timestamp='1324497341' post='4896280'] Did you try to debug it with Pix and see what is store in your color. [/quote] Thanks for the answer. Yes, I just debugged it using PIX and it says that outputColor = float4(0.0f, 0.0f, 0.0f, 0.0f). That's so weird, because I don't even initialize it with that value, let alone update the constant buffer with it. I set it to (1.0f, 0.0f, 0.0f, 1.0f), but for some reason in the pixel shader it's equal to 0 :\ EDIT: when I inspect the buffer in PIX, it has the correct value! I've even recompiled with a different value and it showed up correctly in the buffer... but in the pixel shader it says the value is 0. Since the DX tutorial runs fine on my computer, I don't think it's a driver issue or anything like that. This is really frustrating.
  7. Hi. I'm currently on tutorial 6 of the DX SDK (lighting), and I'm trying to adapt it to my little framework. The problem is that the constant buffer seems to update only some variables (at least the world matrix) but not others. I've removed all the lighting and only want to output a solid color, and it still doesn't work: when I hard-code the color in the pixel shader it works; when I use a color from the constant buffer it doesn't. It's even weirder because all my attempts at the previous tutorials worked just fine, and even passing a color through the constant buffer worked fine. I'm tired of going through all my code; I can't find anything wrong, but it still outputs black (as if the constant buffer color were 0.0f). I'll try to post all the relevant code.

  The relevant structs:
  [code]
  // the normal vertex I'm using
  struct NormalVertex
  {
      XMFLOAT3 Pos;
      XMFLOAT3 Normal;
  };

  // my constant buffer:
  struct ConstantBuffer
  {
      XMMATRIX World;
      XMMATRIX View;
      XMMATRIX Projection;
      XMFLOAT4 lightDir[2];
      XMFLOAT4 lightCol[2];
      XMFLOAT4 outputColor;
  };
  [/code]

  At the setup of my cube I create the constant buffer from an empty ConstantBuffer struct:
  [code]
  // create constant buffer:
  array<ConstantBuffer, 1> tmpCB = { ConstantBuffer() };
  inst.CreateBuffer(D3DEngine::Constant, tmpCB, &constantBuffer);

  // the CreateBuffer method:
  template <typename T, size_t Size>
  void CreateBuffer(BufferType bufferType, const std::array<T, Size>& source, ID3D11Buffer** bufferOut)
  {
      HRESULT hr = S_OK;

      D3D11_BUFFER_DESC bd;
      ZeroMemory(&bd, sizeof(bd));
      bd.Usage = D3D11_USAGE_DEFAULT;
      bd.ByteWidth = sizeof(T) * source.size();
      bd.BindFlags = bufferType;
      bd.CPUAccessFlags = 0;

      D3D11_SUBRESOURCE_DATA initData;
      ZeroMemory(&initData, sizeof(initData));
      initData.pSysMem = source.data();

      hr = device->CreateBuffer(&bd, &initData, bufferOut);
      if(FAILED(hr))
          throw std::exception("ID3D11Device::CreateBuffer() -> Failed.");
  }
  [/code]

  This worked just fine when I tried a simple 2D triangle. My render function for the cube:
  [code]
  ConstantBuffer cb;
  cb.World = world;
  cb.View = view;
  cb.Projection = projection;
  cb.outputColor = XMFLOAT4(1.0f, 0.0f, 0.0f, 1.0f);
  inst.UpdateBuffer(constantBuffer, cb);
  inst.DrawRenderData(36); // 36 vertices

  // the UpdateBuffer method:
  template <typename T>
  void UpdateBuffer(ID3D11Buffer* buffer, const T& source)
  {
      deviceContext->UpdateSubresource(buffer, 0, nullptr, &source, 0, 0);
  }
  [/code]

  I set all the buffers, shaders, etc. on the device context at cube setup. Now my shader:
  [code]
  cbuffer ConstantBuffer : register(b0)
  {
      matrix World;
      matrix View;
      matrix Projection;
      float4 lightDir[2];
      float4 lightCol[2];
      float4 outputColor;
  };

  struct VS_INPUT
  {
      float4 Pos : POSITION;
      float3 Normal : NORMAL;
  };

  struct PS_INPUT
  {
      float4 Pos : SV_POSITION;
      float3 Normal : NORMAL;
  };

  PS_INPUT VS(VS_INPUT input)
  {
      PS_INPUT output = (PS_INPUT)0;
      output.Pos = mul(input.Pos, World);
      output.Pos = mul(output.Pos, View);
      output.Pos = mul(output.Pos, Projection);
      output.Normal = mul(input.Normal, World);
      return output;
  }

  float4 PS(PS_INPUT input) : SV_TARGET
  {
      return outputColor;
  }
  [/code]

  I KNOW it updates some constant buffer variables, because I can rotate the cube, so at least the world matrix is being updated, but it draws the cube black. If in the pixel shader I do return float4(1.0f, 0.0f, 0.0f, 1.0f); it works! This is driving me mad; if anyone can throw me a tip, I'll really appreciate it. Thanks.
  8. [quote name='Tsus' timestamp='1324314280' post='4895357'] So, in conclusion: You do it already right. Hope this helps you a little. [/quote] It helped a lot! Thanks so much for taking the time to read my post and reply to it.
  9. Hi. I'm educating myself about D3D11 and shader development. I have a small framework that does all the initialization work as well as buffer and shader creation. For a small test I build a small square (2 triangles) where every vertex has a different color in the original buffer. This is my initialization code:
  [code]
  array<string, 2> tmpInputLayout = { string("POSITION"), string("COLOR") };

  array<D3DEngine::ColorVertex, 4> tmpVertexBuffer = {
      XMFLOAT3(-0.5, 0.5f, 0.5f),  XMFLOAT4(1.0f, 0.0f, 0.0f, 1.0f),
      XMFLOAT3(0.5, 0.5f, 0.5f),   XMFLOAT4(0.0f, 1.0f, 0.0f, 1.0f),
      XMFLOAT3(-0.5, -0.5f, 0.5f), XMFLOAT4(0.0f, 0.0f, 1.0f, 1.0f),
      XMFLOAT3(0.5, -0.5f, 0.5f),  XMFLOAT4(0.0f, 0.0f, 1.0f, 1.0f)
  };

  array<unsigned short, 6> tmpIndexBuffer = { 0, 1, 2, 2, 1, 3 };

  inst.CreateBuffer(D3DEngine::Vertex, tmpVertexBuffer, &vertexBuffer);
  inst.CreateBuffer(D3DEngine::Index, tmpIndexBuffer, &indexBuffer);
  inst.CreateVertexShader(vsBlob, tmpInputLayout, &inputLayout, &vertexShader);
  inst.CreatePixelShader(psBlob, &pixelShader);
  [/code]
  Pretty simple, right? And it works fine. Now I want to add a simple effect: change those vertex colors every second. Using a constant buffer it's easy: I check whether a second has passed and, if it has, I set a new color and pass it to the shader through a constant buffer.

  Now my question: is there a way I can detect that a second has passed and compute the new color on the shader itself? It seems to me that I'm doing work on the CPU that I should be doing on the GPU. The way I'm doing things, the shader does nothing but apply the new color (from the constant buffer). The problem is, if I pass my timeDelta to the shader so it can detect that a second has passed, how can I keep persistent data between shader invocations? The way I do it on the CPU is to accumulate timeDelta until addedTimeDelta reaches 1.0f, at which point I know a second has passed. But on the shader such a variable would reset on every invocation (or am I wrong about this?), and global variables are constant, so I don't really know how to change that. Also, the shader always receives the same input, right? The data that was originally set in the buffer? That means it always receives the initial color and NOT the color from the previous shader invocation (the previous frame, that is), so how can I deal with that?

  On a side note, is passing data between the CPU and GPU only possible through constant buffers, or is there another way? The reason I ask is that constant buffers take more work to initialize and have to be at least 16 bytes long; if I want to pass just one float, I have to create at least a float4. Thanks in advance.
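  One common way to sidestep the persistence problem: instead of accumulating a delta on the GPU, pass the total elapsed time in the constant buffer each frame and derive the current state from it, so nothing has to survive between shader invocations. A C++ sketch of the idea (the colorIndex name is just for illustration; the same arithmetic works in HLSL given a totalSeconds value in a constant buffer):

  ```cpp
  #include <cassert>

  // Derive "which color is active this second" purely from total
  // elapsed time, instead of accumulating per-frame deltas.
  int colorIndex(float totalSeconds, int colorCount) {
      return static_cast<int>(totalSeconds) % colorCount;
  }

  int main() {
      assert(colorIndex(0.5f, 3) == 0);  // during the first second
      assert(colorIndex(1.2f, 3) == 1);  // during the second second
      assert(colorIndex(2.9f, 3) == 2);
      assert(colorIndex(3.1f, 3) == 0);  // wraps around
      return 0;
  }
  ```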
  10. Well, if you're already comfortable with C++ and the Windows API, you can try Direct2D: http://msdn.microsoft.com/en-us/library/dd370990(VS.85).aspx It really beats GDI for 2D rendering and will be the mainstream choice for high-performance 2D rendering on Windows. Or you can try SDL (http://www.libsdl.org/) or SFML (http://www.sfml-dev.org/). I recommend SFML, since it takes an OOP approach and is also easier to use.
  11. Why don't you read the key-up event instead? That way it only registers once per key press and prevents it from reading the same key repeatedly for as long as you keep it pressed.
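  The same effect can also be had by edge-detecting the key state yourself, comparing this frame's state against the last frame's. A small C++ sketch of the idea (the KeyEdge name is just for illustration):

  ```cpp
  #include <cassert>

  // Report a key "press" only on the frame the key transitions from up
  // to down, so holding the key does not retrigger the action.
  struct KeyEdge {
      bool wasDown = false;
      bool pressed(bool isDown) {
          bool edge = isDown && !wasDown;
          wasDown = isDown;
          return edge;
      }
  };

  int main() {
      KeyEdge key;
      assert(key.pressed(true) == true);    // first frame down: fires
      assert(key.pressed(true) == false);   // held: does not repeat
      assert(key.pressed(false) == false);  // released
      assert(key.pressed(true) == true);    // pressed again: fires
      return 0;
  }
  ```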
  12. It will ship with DirectX 10 or 10.1, which is backwards compatible with DirectX 9, so no worries. Even the latest SDK includes the DirectX 9 development files alongside the DirectX 10 ones.
  13. If that's really the case, you'll probably find the examples on some website; try to Google them and see if you have any luck.
  14. Quote:Original post by mososky I don't understand where the formual kxy came from or what it represents. Can you elaborate? Thanks. I didn't write the post, but I would say K is a coefficient value of your own choosing, and X, Y and Z are the coordinates of the vertex you're currently transforming. Correct me if I'm wrong.
  15. The error is probably here: Heights = new float*[Length]; or here: Heights[Position - 1][i] = toFloat(getWord(Line, i)); Make sure you're not writing out of bounds of the array, and check that it's being properly initialized. Use the debugger to see which line is actually blowing up the program. PS: this would probably fit better in the "General Programming" section.
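  For context, a two-level array like that needs every row allocated before it is indexed; allocating only the row-pointer array is a classic cause of exactly this kind of crash. A minimal C++ sketch of a correct allocation pattern (Length and Width are made-up values for illustration):

  ```cpp
  #include <cassert>

  int main() {
      const int Length = 4, Width = 3;
      // Allocating the row-pointer array alone is not enough:
      float** Heights = new float*[Length];
      // every row must also be allocated before Heights[r][c] is touched.
      for (int r = 0; r < Length; ++r)
          Heights[r] = new float[Width]();  // () zero-initializes
      Heights[Length - 1][Width - 1] = 1.5f;  // last valid index pair
      assert(Heights[Length - 1][Width - 1] == 1.5f);
      assert(Heights[0][0] == 0.0f);
      for (int r = 0; r < Length; ++r) delete[] Heights[r];
      delete[] Heights;
      return 0;
  }
  ```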