
MessiahAndrw
Members

  • Content count: 33
  • Joined
  • Last visited

Community Reputation: 123 Neutral

About MessiahAndrw
  • Rank: Member
  1. As a lone developer, I find one of the hardest parts of prototyping a new game is creating the player's character model. I can whip up simple static programmer art - walls, chairs, crates, trash cans, simple tanks, guns, etc. - but for the life of me I cannot make even a barely recognizable 3D representation of a human, let alone animate one so it walks or holds a gun. I have briefly worked with XNA on the Xbox 360, and I loved being able to call:

     AvatarDescription avatarDescription = AvatarDescription.CreateRandom();

     and quickly populate my world with random characters of all shapes and sizes that can walk, run, crawl, etc. I love the simplicity of the Xbox Avatars - I have seen many XBLA indie games mix avatars with simple programmer art and still look polished, while letting the developer focus on other aspects of the game. I would love the ability to do a similar thing on the PC. Does such an SDK or library exist for the PC? I'm not talking about a full-on character customization system, just a quick CreateRandom() equivalent to populate my game worlds with. If there isn't one, how do you populate your game world with characters when you don't have the art resources or talent to do so?
  2. It seems less personal to see the message "Someone from your clan is in trouble!" than "Your brother is in trouble!". But it depends on your theme - it would perhaps fit best in a mafia-themed game, where bloodline is an important part of your identity.
  3. Using hex tiles has the advantage that the distance to the centre of each of the 6 touching tiles is exactly the same. On a square map, the distance to the centre of a diagonally-touching tile is ~1.41 times the distance to the centre of a non-diagonal neighbour.
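     To spell out the arithmetic for unit-spaced grids: on a square grid, orthogonal neighbours sit at distance $1$ while diagonal neighbours sit at $\sqrt{1^2 + 1^2} = \sqrt{2} \approx 1.414$; on a hex grid, all six neighbours sit at the same distance.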
  4. That would be really cool. The problem with family ties only being created through marriage is that only a small percentage of players will marry in-game, and it also requires an equal gender ratio of characters. In real life, everyone has family. So you could let the user choose a unique first name and then be automatically placed into a family along with brothers and sisters. It would help develop a special bond between players. Married couples could have children, and if there is an uneven gender ratio then perhaps you could have prostitutes randomly accuse a single male of being the father of a child they bore. Having children gives senior players a reason to mentor new players. If you have a parent-child hierarchy you have to think about who is at the top (an NPC?), what happens if your parent ignores you and you're left without a mentor, and the likelihood - once your MMO gets 10,000+ players - that some 50-year-old guy runs into his great-great-great-great-great-great grandparent, who turns out to be a 15-year-old kid.
  5. Quote: Original post by ArKano22
     i use forward lighting in 1/4 original resolution, SSAO in 1/4 too, then blending these two passes together (more light, less ssao)

     So you're rendering a 640x360 buffer for a 1280x720 screen? Wouldn't that give noticeable blurring?
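     (To spell out my arithmetic: I'm reading "1/4 resolution" as half the width and half the height, i.e. $\frac{1280}{2} \times \frac{720}{2} = 640 \times 360$, which is one quarter of the pixel count.)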
  6. I took a stab at stippling after being inspired by inferred shading, which uses it to deal with alpha. You can see my result here, with 100 unsorted semi-transparent quads: http://img691.imageshack.us/img691/9728/stipple.jpg For the stipple I generate a 16x16 texture filled with random values and use it as a greater-than mask to compare against the alpha value. I also give each quad a unique random offset that updates per frame, which prevents two quads of the same opacity from getting the same pixels masked out. The problem is, no matter what technique I use to blur the image, you can still notice the moiré pattern.
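     For anyone trying this, here's a minimal CPU-side sketch of the mask and the per-fragment test as I described them (the C++ framing and names are illustrative - in my engine the compare actually runs in the pixel shader):

     #include <cstdint>
     #include <cstdlib>

     const int kMaskSize = 16;

     // 16x16 tiled threshold mask: each texel holds a random alpha threshold.
     void FillStippleMask(uint8_t mask[kMaskSize][kMaskSize])
     {
         for (int y = 0; y < kMaskSize; ++y)
             for (int x = 0; x < kMaskSize; ++x)
                 mask[y][x] = (uint8_t)(rand() % 256);
     }

     // The greater-than test each fragment performs: a fragment survives only
     // if its alpha beats the mask value sampled at the offset screen position.
     // offsetX/offsetY are re-randomised per quad, per frame, so two quads with
     // the same opacity don't mask out the same pixels.
     bool FragmentSurvives(const uint8_t mask[kMaskSize][kMaskSize],
                           int screenX, int screenY,
                           int offsetX, int offsetY,
                           float alpha)  // quad opacity in [0,1]
     {
         uint8_t threshold = mask[(screenY + offsetY) % kMaskSize]
                                 [(screenX + offsetX) % kMaskSize];
         return alpha * 255.0f > threshold;
     }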
  7. My D3D engine is randomly freezing. It was all working fine until I added deferred rendering - basically I created 2 render targets and changed my material system (all it does is set shader/texture states and shader parameters). While I was debugging under NVPerfHUD, my engine seemed to run smoothly. Now, when I run my engine outside of NVPerfHUD (either through the VS debugger or standalone), every 5 or so seconds everything freezes for about a second, and this keeps repeating. It also randomly flashes black (does the occasional frame not render?). I've run it through the VS debugger and hit pause as soon as it freezes, and it always seems to break deep inside a call in the NVIDIA driver:

     nvd3dum.dll!619cd44c()
     ^ nvd3dum.dll!6197cc3c()
     ^ nvd3dum.dll!61966c26()
     ^ d3d9.dll!6f964dbc()
     ^ d3d9.dll!6f94a674()
     ^ d3dDevice->Present(0,0,0,0);

     All attempts to google this problem or search MSDN or NVIDIA just tell me to update my drivers (advice obviously directed at gamers, not programmers). I'm thinking I'm setting the texture or shader states wrong somewhere? But that doesn't explain why my engine runs smoothly and glitch-free under NVPerfHUD.
  8. I rendered the world-space position to its own MRT, and my framerate dropped by around 25% in a relatively complex scene in RenderMonkey. So I'm sticking with reconstructing from depth.
  9. No, each camera has its own position/orientation and projection matrix, so I'm re-rendering all passes (normal->light->final) for each camera. As for rendering out the positions, I'd require a 2nd render target. Would the overhead of MRT be outweighed by the math required to reconstruct the world position from depth? [Edited by - MessiahAndrw on July 1, 2009 7:05:14 PM]
  10. To calculate a ray in my engine (using lighting pre-pass) I output the object's Z value to a 32-bit floating point render target. Then in the lighting pass, in the vertex shader, I construct a ray as such:

      float3 pixel = float3(
          (OUT.Position.x / OUT.Position.w) / mProjection[0][0],
          (OUT.Position.y / OUT.Position.w) / mProjection[1][1],
          1);
      OUT.Ray = mul(pixel, (float3x3)mViewInverse) * OUT.Position.w;

      Then in the pixel shader I get the world-space position by multiplying the ray by the depth I previously wrote out:

      float3 position = vViewPosition + (IN.Ray / IN.Position.w) * depthValue;

      This works perfectly for perspective projection. However, it utterly fails for orthographic projection, because it assumes all pixels emit a ray from a common centre point. I'm looking for a method that works with both perspective and orthographic projection, because I'm using a combination of projection methods across multiple cameras rendering the same scene, and it is much simpler (and more elegant) if they can all share the same code. What is the best unified method of retrieving the world-space position (short of multiplying by the inverse of the proj*view matrix for every pixel)?
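      To make the shape of what I'm after concrete (this is my guess at a unified form, not something I have working): store linear view-space depth $z$ and reconstruct

      $p = o(u,v) + d(u,v) \cdot \frac{z}{z_{\text{far}}}$

      where for perspective the ray origin is fixed at the eye, $o(u,v) \equiv e$, and $d(u,v)$ is interpolated from the eye-to-far-corner vectors (so $\hat{v} \cdot d = z_{\text{far}}$); for orthographic the direction is fixed, $d(u,v) \equiv \hat{v} \, z_{\text{far}}$, and the origin $o(u,v)$ is interpolated across the view rectangle. Both cases would then share the same shader code.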
  11. I'm writing a light pre-pass renderer, and at the moment I'm implementing the required shaders in RenderMonkey before using them in my engine. All my normal and lighting calculations work correctly; it's just recreating the position that I'm having trouble with. I was previously calculating the world position by multiplying the depth by the inverse of the view*projection matrix per pixel, which worked, but in my engine it became slow when working with a lot of lights. I'm now trying to recreate the world-space position based on the method described in the second comment on the following post: http://diaryofagraphicsprogrammer.blogspot.com/2008/09/calculating-screen-space-texture.html The viewVecUpperLeft etc. in my light pass vertex shader will be passed in from my engine as a float4x4, but since I can't (or haven't worked out how to) simulate such a thing in RenderMonkey, I'm calculating them in the lighting pass. Also, I'm using geometry for my lights (currently a sphere), but in the future it may be a box or a cone. I'm rendering my normals/depths to an A32B32G32R32F render target, which might be a waste of memory but reduces the need to pack bits (I might lower it to A16B16G16R16F if the precision is still acceptable). My lighting pass is supposed to be outputting the world-space position as the colour, so when working "correctly" it should output red on one half, green on the top half, and blue on the bottom half. It's kind of doing this now (from front on), but when I rotate around the model it completely distorts.

      My normal and depth pass:

      float4x4 matWorldViewProjection;
      float4x4 matWorldView;
      float4x4 matWorld;

      struct VS_INPUT
      {
          float4 Position : POSITION0;
          float3 Normal : NORMAL0;
          float3 Tangent : TANGENT0;
          float2 TexCoord : TEXCOORD0;
      };

      struct VS_OUTPUT
      {
          float4 Position : POSITION;
          float4 WorldPos : TEXCOORD0;
          float3 Normal : TEXCOORD1;
          float2 TexCoord : TEXCOORD2;
          float3 Tangent : TEXCOORD3;
          float3 Binormal : TEXCOORD4;
      };

      VS_OUTPUT vs_main( VS_INPUT Input )
      {
          VS_OUTPUT Output;
          Output.Position = mul( Input.Position, matWorldViewProjection );
          Output.Normal = mul(Input.Normal, matWorldView);
          Output.TexCoord = Input.TexCoord;
          Output.Tangent = mul(Input.Tangent, matWorldView);
          Output.Binormal = cross(Output.Normal, Output.Tangent);
          Output.WorldPos = mul(Input.Position, matWorld);
          return( Output );
      }

      sampler2D normalmap;
      float fFarClipPlane;
      float4 vViewPosition;
      float4 vViewDirection;
      float4x4 matView;

      struct PS_INPUT
      {
          float4 WorldPos : TEXCOORD0;
          float3 Normal : TEXCOORD1;
          float2 TexCoord : TEXCOORD2;
          float3 Tangent : TEXCOORD3;
          float3 Binormal : TEXCOORD4;
      };

      struct PS_OUTPUT
      {
          float4 NormalMap : COLOR0;
      };

      PS_OUTPUT ps_main( PS_INPUT Input )
      {
          PS_OUTPUT Output;
          float3 Normal = tex2D(normalmap, Input.TexCoord) - 0.5;
          Normal = Normal.x * -Input.Tangent + Normal.y * -Input.Binormal + Normal.z * Input.Normal;
          float3 vEye = normalize(vViewDirection) / fFarClipPlane;
          Input.WorldPos.xyz /= Input.WorldPos.w;
          float depth = dot(vEye, (Input.WorldPos.xyz - vViewPosition.xyz));
          Output.NormalMap = float4(normalize(Normal), depth);
          return Output;
      }

      Light pass:

      float4x4 matWorldViewProjection;
      float4x4 matWorld;
      float2 fInverseViewportDimensions;
      float fFarClipPlane;
      float4 vViewPosition;
      float4 vViewDirection;
      float4 vViewSide;
      float4 vViewUp;
      float fFOV;
      float2 fViewportDimensions;

      struct appin
      {
          float4 Position : POSITION;
      };

      struct vertout
      {
          float4 Position : POSITION;
          float3 ScreenCoords : TEXCOORD0;
          float3 FarClipPlane : TEXCOORD1;
          float4 WorldPos : TEXCOORD2;
      };

      vertout vs_main( appin IN )
      {
          vertout OUT;
          OUT.Position = mul(IN.Position, matWorldViewProjection);
          OUT.ScreenCoords.x = (OUT.Position.x / OUT.Position.w) + 1;
          OUT.ScreenCoords.y = 1 - (OUT.Position.y / OUT.Position.w);
          OUT.ScreenCoords.z = OUT.Position.w;
          OUT.ScreenCoords.xy += fInverseViewportDimensions;
          OUT.ScreenCoords.xy *= 0.5f;

          /* calculate corners */
          // This will be a matrix calculated in my game engine and passed through as a matrix,
          // but while using RenderMonkey I must calculate it in the shader:
          // ------------------------------ begin
          float3 fc = vViewPosition.xyz + vViewDirection.xyz * fFarClipPlane;
          float tang = tan((3.14159265358979323846/180.0) * fFOV * 0.5);
          float HFar = tang * fFarClipPlane;
          float WFar = HFar * (fViewportDimensions.x / fViewportDimensions.y);
          float3 viewVecUpperLeft = fc + (vViewUp * HFar / 2) - (vViewSide * WFar / 2);
          float3 viewVecUpperRight = fc + (vViewUp * HFar / 2) + (vViewSide * WFar / 2);
          float3 viewVecLowerLeft = fc - (vViewUp * HFar / 2) - (vViewSide * WFar / 2);
          float3 viewVecLowerRight = fc - (vViewUp * HFar / 2) + (vViewSide * WFar / 2);
          // ------------------------------ end

          OUT.WorldPos = mul(IN.Position, matWorld);
          float3 upper = lerp(viewVecUpperLeft, viewVecUpperRight, OUT.ScreenCoords.x);
          float3 lower = lerp(viewVecLowerLeft, viewVecLowerRight, OUT.ScreenCoords.x);
          OUT.FarClipPlane = lerp(upper, lower, OUT.ScreenCoords.y);
          OUT.ScreenCoords.xy *= OUT.Position.w;
          return OUT;
      }

      sampler NormalMap;
      float3 LightPosition;
      float LightRange;
      float3 LightColour;
      float LightAttenuation;
      float SpecularPower;
      float4 vViewPosition;
      float2 fInverseViewportDimensions;
      float fFarClipPlane;

      struct fragin
      {
          float3 ScreenCoords : TEXCOORD0;
          float3 FarClipPlane : TEXCOORD1;
          float4 WorldPos : TEXCOORD2;
      };

      struct fragout
      {
          float4 Colour : COLOR0;
      };

      inline float3 getDistanceVectorToPlane(in float negFarPlaneDotEye, in float3 direction, in float3 plane)
      {
          float denum = dot(plane, direction);
          float t = negFarPlaneDotEye / denum;
          return direction * t;
      }

      fragout ps_main(fragin IN) : COLOR0
      {
          fragout OUT;
          IN.ScreenCoords.xy /= IN.ScreenCoords.z;
          float4 normalMap = tex2D(NormalMap, IN.ScreenCoords.xy);
          float3 eyeRay = getDistanceVectorToPlane(dot(IN.FarClipPlane, vViewPosition), IN.WorldPos.xyz / IN.WorldPos.w, IN.FarClipPlane);
          float3 position = vViewPosition + (eyeRay - vViewPosition) * normalMap.w;
          // Lighting calculations here, but I've snipped this out just to focus on the position
          OUT.Colour = float4(position, 0);
          return OUT;
      }
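      For clarity, the depth value the normal/depth pass writes out is the view-space distance along the view direction, normalized by the far plane - that is what the vEye dot product in ps_main computes:

      $d = \frac{\hat{v} \cdot (p_{\text{world}} - e)}{z_{\text{far}}}$

      where $\hat{v}$ is the normalized view direction, $e$ is the camera position, and $z_{\text{far}}$ is the far clip distance.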
  12. Quote: Original post by wolf
      You might also want to look into depth peeling and two ShaderX7 articles that describe how to use depth layers with the DX10 MRT to achieve several depth layers that can help you to do this. Obviously this requires DX10 hardware as a min standard.

      Thanks. I looked into depth peeling (I saw an nVidia publication on it a while ago, and reread it after you mentioned it). The advantages of depth peeling over my system (which I'll call depth sheets) are:
      - No need to depth-sort the transparent items.
      - No need to group them into non-overlapping bounding boxes (probably saving a lot of CPU processing power).
      - Transparent parts of the same mesh that fold back over themselves will render correctly.
      - Since you're not splitting things into groups, you can render everything (in the case of particles) in a single batch.

      The advantages of my system:
      - You don't need multiple render targets.
      - You don't need DX10 or nVidia-specific OpenGL extensions.
      - You're not limited to just 4 (or however many RTs you have) layers, and you don't light/shade unused RTs.
      - You don't need to perform multiple passes (though you potentially have a higher batch count).

      There are a few other optimisations I thought of, including drawing unlit groups of particles directly to the screen (still in order), and other tricks for fitting as many particles as possible onto a single sheet (improving batch count). Anyway, I'm not convinced otherwise yet - I will try implementing depth sheets and see how the performance goes.
  13. I meant lights affecting the particles. To rephrase: my idea was to split the particles into non-overlapping groups, light each group with the same lights that affect the scene, then overlay the lit group onto the scene. There's no reason why you couldn't do this for transparent convex geometry as well (all you need is a screen-space bounding box). I like the idea of having a single lighting system for all transparent and opaque objects. Even doing forward multi-pass lighting on alpha-blended (not additive) particles introduces the same problems if they overlap. You could also render these sheets front to back to create a shadow map. [Edited by - MessiahAndrw on May 23, 2009 10:25:19 PM]
  14. I'm trying to work out the best way to integrate particle rendering into my deferred shading engine. My particles are 2D point sprites. My idea is to render particles in groups called 'sheets'. Assume I've already rendered the opaque objects onto my primary render target, and I have a depth-sorted list of particles waiting to be rendered. I'll have a second render target (the size of the screen) - an intermediate target that the lights render the particles onto, before they are rendered on top of the scene (the primary render target).

      To render a sheet, you build a list of particles to render on that sheet. You create this list by iterating through the list of unrendered particles (starting from the furthest away). If the unrendered particle does not collide with any other particle in the list you've built so far (a point particle is just a bounding box with a size, though it has to be checked against EVERY particle already added to the list), you add it to the sheet. When you finally reach an overlapping particle, you render that batch into your G-buffer, perform lighting (rendering to the intermediate target), then render the intermediate target on top of the screen (using whatever blending you'd like - additive, alpha, subtractive). Then you clear the list and repeat with a new sheet until all particles have been rendered (a rough sketch of this grouping step follows below). In the best case, all on-screen particles can be rendered in 1 sheet; the worst case is O(N^2) complexity from checking each particle against all others. I know this isn't the most efficient particle rendering idea ever, but it's consistent in a deferred rendering environment.

      Some optimisation ideas:
      - Use a sheet-space quadtree (or a self-balancing 2D binary tree) for checking against other particles, rather than checking one by one.
      - Have a maximum number of particles in a batch (e.g. 500), so you don't get too many particles to check against.
      - To reduce the number of batches, when you hit an overlapping particle, continue going through the list (storing the overlapping particle in a separate list so you still check others against it), fitting as many non-overlapping particles as you can on a single sheet (perhaps with a timeout).
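      Here's a minimal CPU-side sketch of the grouping step, assuming a simple Particle struct with a screen-space AABB (the names are illustrative, not from my engine):

      #include <algorithm>
      #include <vector>

      // Hypothetical screen-space bounding box for a point sprite.
      struct AABB
      {
          float minX, minY, maxX, maxY;
          bool Overlaps(const AABB& o) const
          {
              return minX < o.maxX && maxX > o.minX &&
                     minY < o.maxY && maxY > o.minY;
          }
      };

      struct Particle
      {
          AABB bounds;  // screen-space quad, from position + sprite size
          float depth;  // view-space depth, used for back-to-front sorting
      };

      // Splits a particle list into "sheets": groups in which no two particles
      // overlap on screen. The caller renders each sheet into the G-buffer,
      // lights it, and composites it before moving on to the next sheet.
      std::vector<std::vector<const Particle*>> BuildSheets(std::vector<Particle>& particles)
      {
          // Furthest-first, so the compositing order stays correct.
          std::sort(particles.begin(), particles.end(),
                    [](const Particle& a, const Particle& b) { return a.depth > b.depth; });

          std::vector<std::vector<const Particle*>> sheets;
          sheets.emplace_back();
          for (const Particle& p : particles)
          {
              // Worst case O(N^2): test against everything already on the sheet.
              const auto& sheet = sheets.back();
              bool overlaps = std::any_of(sheet.begin(), sheet.end(),
                  [&](const Particle* q) { return p.bounds.Overlaps(q->bounds); });
              if (overlaps)
                  sheets.emplace_back();  // start a new sheet at the first overlap
              sheets.back().push_back(&p);
          }
          return sheets;
      }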
  15. So I render the anti-portal quads into the Z buffer, and if there are any, then as I walk the octree I run an occlusion query with each octree node's AABB? I think it would be overkill to run an occlusion query against each object inside an octree node (in a loose octree), since an occlusion query is still realistically a render call and also requires uploading a matrix to the graphics card. So effectively, I'll insert the anti-portals into their own octree (allowing shapes other than quads so I can do corners in a single render call) and render that first. The problem arises if I have too many complex anti-portals (e.g. a lot of buildings, each with its own collection of anti-portals) - nothing is occluding the occluders! But that would be a general level design issue. One optimisation I thought of: is it possible to tell if EVERY pixel passed the occlusion test (excluding back faces)? Because if it did, I know not to check that node's children. (A minimal query sketch follows below.)
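      For reference, here's a minimal D3D9 occlusion query sketch of the pattern I'm describing (error handling mostly omitted; OctreeNode and RenderNodeAABB are placeholders for whatever your engine uses):

      #include <d3d9.h>

      struct OctreeNode;  // your octree node type
      void RenderNodeAABB(IDirect3DDevice9* device, const OctreeNode& node);  // draws the node's AABB

      // Returns how many pixels of the node's AABB passed the Z test against
      // the already-rendered anti-portal geometry. 0 means the node (and all
      // of its contents) can be skipped entirely.
      DWORD CountVisiblePixels(IDirect3DDevice9* device, const OctreeNode& node)
      {
          IDirect3DQuery9* query = NULL;
          if (FAILED(device->CreateQuery(D3DQUERYTYPE_OCCLUSION, &query)))
              return ~0u;  // queries unsupported: treat the node as visible

          query->Issue(D3DISSUE_BEGIN);
          RenderNodeAABB(device, node);  // ideally with colour and Z writes disabled
          query->Issue(D3DISSUE_END);

          // Spin until the GPU result is ready. In production you'd interleave
          // other work (or reuse last frame's result) instead of stalling here.
          DWORD visiblePixels = 0;
          while (query->GetData(&visiblePixels, sizeof(visiblePixels),
                                D3DGETDATA_FLUSH) == S_FALSE)
              ;

          query->Release();
          return visiblePixels;
      }

      As for telling whether EVERY pixel passed: the query only reports how many pixels passed, so one way (an idea, untested) would be to compare that count against a second query issued with the depth test disabled, which gives the AABB's unoccluded pixel count.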