
Lemmi

Members
  • Content count: 17
  • Joined
  • Last visited

Community Reputation: 126 Neutral

About Lemmi

  • Rank: Member
  1. You're right and I agree! I'll also go ahead and assume that my thoughts surrounding the sorting approach are at least somewhat on the right track. I'll carefully re-read all the posts before acting.
  2. Okay! I had no internet access for about a week (feels like a year). This is all very good advice and I will definitely take it into consideration moving forward. However, it just occurred to me that I'm unsure how to produce and store scene depth during the pre-render transform pass. I'm fairly sure I can't use the normal transforms as they are, because they're in world space or object space. Should I, as an extra step, transform everything by the camera's view matrix to get the depth from the camera's point of view, and then store that and use it for sorting? My intent is to store my depth buffer linearly, the way MJP describes in his excellent tutorials.

     Last time I did something like this, I did no sorting at all: I did all transforms in the vertex shader, multiplying every vertex by WVP or whatever was needed, and just let the painter's algorithm sort out depth.

     To roughly sketch out what I'm imagining here (a more concrete version is sketched below):

     Game's pre-render update pass:

     for(each renderable)
     {
        // EITHER: transform the renderable to camera view space and take the depth from the camera's point of view
        float renderableDepth = (renderable.transform * camera.viewMatrix).z

        // OR: simply compute a rough distance between the camera and the renderable. By this point we've already
        // culled everything outside the camera frustum, so it should be pretty OK?
        float renderableDepth = vector3Distance(renderable.position, camera.position) // returns a length as a single float

        // Either way, finally encode the depth somewhere within the flags variable.
        // (Don't pay too much attention to how I pack it; I can never remember bit-shifting syntax without looking it up.)
        renderable.flags = (renderable.flags & ~0x0000ffff) | renderableDepth;
     }

     Then later on:

     void Sort(all the renderables)
     {
       // First pass: sort based on... the flag? Just straight up sort on whichever value is lowest,
       // so that the lowest value also indicates the first textures/materials/meshes?
       // Perhaps first compare on textures, then meshes, etc., as was suggested above?

       // Second pass: after it's been sorted once by the first 32 bits of the flag
       // (where we'd possibly store all those things), sort again by depth?
     }

     I'm sorry for being so dense. ;) I'm of course aware that I'll have to profile this to see whether I gain anything at all by sorting, but I want to try building a system like this either way, since I think the techniques are very useful to understand.
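     For concreteness, here is a minimal sketch of the packing step, assuming the view-space depth comes from multiplying the renderable's world position by the camera's view matrix and taking .z. The helper name, the 16-bit slot, and the near/far range are placeholders for illustration rather than anything standard.

     [CODE]
     #include <algorithm>
     #include <cstdint>

     // Quantize a camera-space depth into the low 16 bits of an integer sort key.
     // nearZ/farZ define the range that maps to [0, 65535]; everything outside is clamped.
     uint64_t PackDepthIntoKey(uint64_t key, float viewSpaceDepth, float nearZ, float farZ)
     {
         float t = (viewSpaceDepth - nearZ) / (farZ - nearZ);
         t = std::min(std::max(t, 0.0f), 1.0f);            // clamp to [0, 1]
         uint64_t depthBits = static_cast<uint64_t>(t * 65535.0f);
         return (key & ~0xFFFFull) | depthBits;            // clear the low 16 bits, write depth there
     }
     [/CODE]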
  3. I like the idea of packing different (small) indices into a 64-bit structure. Regarding all the shaders, textures and such, I'm currently thinking about using the flyweight pattern, which is pretty much what you're describing, I think. It's also what I used last time and it worked fine.

     Am I to understand that you suggest I do something like this?

     struct Renderable
     {
       InstanceData transform; // either pos + quat or a 4x4 matrix, I guess; possibly other things?
                               // Locally stored copy, right? Because we want optimal data locality.
       long key;               // or potentially even a 128-bit structure
     };

     struct RenderPass
     {
       vector<Renderable> renderables;
       vector<shaderIndices> shaders; // or shader pointers
     };

     And then each render pass would: 1) sort each renderable based on, for example, its transform's camera depth, and 2) apply each shader to each renderable, fetching from localized and sorted mesh/texture/material arrays somewhere else, using the handles extracted from the renderable's 'key' variable.

     Or you could sort all renderables once per frame based on their camera depth and THEN insert them, going from back to front, into every render pass they've been flagged for. That's probably better. I guess that would mean the actual entity also needs another set of flags, so that the renderer knows which passes I want to insert it into.

     Why would I sort by the key and not the transform? To optimize texture/resource usage and to bundle render calls? How would I go about doing that on the fly? Rebuilding and merging vertex buffers every frame and doing some sort of semi-instancing? Or do you just mean sorting them by which textures/materials they use, so that I can bind those resources once and then render several meshes without changing anything? I can't remember if that was possible in DirectX. I'm going to be using OpenGL, by the way, if that is in any way relevant to the discussion.

     Edit: Oh, yeah. Of course. You want to sort by the keys because then you know you'd constantly be accessing the nearest meshes/textures/materials every time you move to the next renderable. (A rough key layout is sketched below.)

     By the way, are you implying that I should sort by both transform and key? Say, first sort by keys, and then, within all renderables that have an identical key (unlikely), sort again by transforms?
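     For illustration, here is one possible way that 64-bit key could be laid out; the field widths and names are completely arbitrary and only show the "one integer, compared once" idea:

     [CODE]
     #include <cstdint>

     // bits 56..63  render pass / layer
     // bits 40..55  shader index
     // bits 24..39  material index
     // bits 16..23  texture index
     // bits  0..15  quantized depth
     uint64_t MakeSortKey(uint8_t layer, uint16_t shader, uint16_t material,
                          uint8_t texture, uint16_t depthBits)
     {
         return (uint64_t(layer)    << 56) |
                (uint64_t(shader)   << 40) |
                (uint64_t(material) << 24) |
                (uint64_t(texture)  << 16) |
                 uint64_t(depthBits);
     }

     // Sorting renderables by this single integer groups identical shaders together first,
     // then materials, then textures, with depth acting as the final tie-breaker.
     [/CODE]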
  4. Yes! Okay. Thank you for this advice. I'm fully aware that premature optimization is wrong, but the last time I wrote a "graphics engine", I spent a year and a half regretting a bunch of stupid design mistakes that were really hard to fix, so this time around I'd rather overthink than underthink! :) Also, I'm a dummy: by data redundancy, do you mean that you actually keep several instances of the same object in different places for better data locality?
  5. Hi, so I'm building a graphics engine for fun, and I've been thinking about how to approach renderable sorting for the different passes (I'm doing deferred rendering). I'd heard that you can make huge gains by sorting everything so that access is linear for each pass. The problem for me comes when I want to re-use the same renderables for several different passes during the same frame.

     First of all, I want to say that my knowledge of how the modern CPU cache actually works is very rudimentary, so I'm mostly going off assumptions here; please do correct me if I'm wrong at any point. Also, don't hesitate to ask for clarification if I'm making no sense.

     My current idea is to keep a large, preallocated buffer where I store all the renderables (transforms and meshes, bundled with material and texture handles, flyweight-pattern style) that got through culling each frame update. Then I would keep different index/handle "lists" (not necessarily actual lists) -- one list per render pass -- with handles or direct indices into the renderable array. This way I can access the same renderable from several different passes without having to copy or move the renderables around. I'd just pass a pointer to the renderables array and then, for each pass, access all the relevant renderables through the index lists. This essentially means I never sort the actual renderables array, only the index lists, for things like depth and translucency (depending on the pass). A rough sketch of this layout follows below.

     Now comes my question: would this be inefficient because I'd essentially be randomly accessing different indices in the big renderable array? The cache would have no good way to predict where I'll be accessing next, so I'd probably get tons of cache misses. I just feel that, despite this, it's a flexible and hopefully workable approach. How do real, good engines deal with this sort of thing? Should I just not bother thinking about how the cache handles it?
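     As a concrete (if simplified) sketch of the layout described above -- all names and the exact fields are invented for illustration:

     [CODE]
     #include <algorithm>
     #include <cstdint>
     #include <vector>

     struct Renderable
     {
         float    depth          = 0.0f; // camera-space depth, filled in during the pre-render pass
         uint32_t meshHandle     = 0;    // flyweight handles into shared mesh/material/texture pools
         uint32_t materialHandle = 0;
         // transform, flags, etc. would live here too
     };

     struct RenderPass
     {
         std::vector<uint32_t> indices;  // indices into the shared renderables array below
     };

     // One big array survives the whole frame; the passes only shuffle their small index lists.
     std::vector<Renderable> renderables;           // refilled once per frame after culling
     RenderPass gBufferPass, shadowPass, translucentPass;

     // Each pass sorts only its index list, never the renderables themselves.
     void SortPassByDepth(RenderPass& pass)
     {
         std::sort(pass.indices.begin(), pass.indices.end(),
                   [](uint32_t a, uint32_t b) {
                       return renderables[a].depth < renderables[b].depth;
                   });
     }
     [/CODE]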
  6. [quote name='kauna' timestamp='1355066091' post='5008804']
     I don't have an answer to your problem, but your code seems complicated in some parts:

     output.Position = mul(float4(input.Position, 1.0f), World);

     could be written as:

     output.Position = mul(input.Position, World);

     if you define your input.Position as float4. It isn't necessary to provide the 4th component from the program.

     float2 texCoord = postProjToScreen(input.LightPosition);
     float4 baseColor = textures[0].Sample(pointSampler, texCoord);

     Since you are using D3D 10 or 11, that part of the code could be replaced with:

     int3 Index = int3(input.Position.x, input.Position.y, 0);
     float4 baseColor = textures[0].Load(Index);
     float4 normalData = textures[1].Load(Index);

     Cheers!
     [/quote]

     Hi! The input position needs to be cast to a float4 with 1.0f added as the last component, otherwise you get some really weird undefined behaviour, unless you rewrite the model class vertex struct, which I see no reason to do. The .Load function was neat, though -- fun to learn about new things. Question: do you know if it's faster than using the sampler, or if it brings any other advantage?

     Going back to the subject, I'm starting to suspect it actually isn't the attenuation that is the problem, because I've scoured the entire net and tried so many different attenuation methods, and they all have the same problem. It's as if the depth value gets screwed up by my inverted view-projection. This is what I do:

     viewProjection = viewMatrix * projectionMatrix;
     D3DXMatrixInverse(&invertedViewProjection, NULL, &viewProjection);

     Then, when I send it into the shader, I transpose it. I'm honestly not sure what transposition does, so I'm not sure if it can cause this kind of problem where it screws up my position's Z axis when multiplied with it.

     Another oddity I found was that when I looked through the code in PIX, I'd get depth values that were negative, like -1.001f and other values. That doesn't seem right? The depth is stored the normal way in the g-buffer pixel shader:

     output.Depth = input.Position.z / input.Position.w;
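     On the transpose question above: HLSL constant buffers default to column-major matrix packing while D3DX matrices are row-major, which is the usual reason for transposing before upload. A minimal sketch of the CPU side, assuming the D3DX math library already used above -- the helper name is made up:

     [CODE]
     #include <d3dx9math.h> // or whichever D3DX math header the project already uses

     // Builds the matrix that the pixel shader's InvViewProjection slot expects:
     // inverse of (view * projection), transposed so that mul(position, InvViewProjection)
     // in HLSL matches the row-vector math used on the CPU.
     D3DXMATRIX BuildInvViewProjectionForShader(const D3DXMATRIX& view, const D3DXMATRIX& projection)
     {
         D3DXMATRIX viewProjection = view * projection;

         D3DXMATRIX invViewProjection;
         D3DXMatrixInverse(&invViewProjection, NULL, &viewProjection);

         D3DXMATRIX transposed;
         D3DXMatrixTranspose(&transposed, &invViewProjection);
         return transposed; // copy this into the PixelMatrixBufferType constant buffer
     }
     [/CODE]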
  7. Yeah, I changed it to position.xy = texCoord.xy; a while back. It didn't change anything. The reason I had it that way was because I had seen it done like that in some other samples. Pretty much just changing values around and crossing my fingers at this point.
  8. I have to admit, I don't really understand all of the math behind this, so please do assume that the math is wrong, I think that's for the best. I tried what you suggested and it resulted in virtually nothing being rendered whatsoever, but hey, that solved my first problem! ;) It's uh, hard to explain, but it did render if I had my camera in a very special angle and was looking at it with the edge of my screen. You are of course right about the division by zero thing, that was silly of me.
  9. [b]*Update*[/b] I fixed it. It was not at all related to anything in my shader; my attenuation is perfectly fine. I had my depth stencil set up all wrong! I found the right solution here: https://developer.nvidia.com/sites/default/files/akamai/gamedev/docs/6800_Leagues_Deferred_Shading.pdf at page 15, if anyone is having similar problems! Sorry for necroing. Thanks for all the help, you people!

     Hi, sorry for the vague title. I'm going by Catalin Zima's deferred renderer pretty heavily, so my setup is very similar. My problem is that my point light lighting doesn't fall off based on the attenuation like it should. It's a little hard to explain exactly what's wrong, so I frapsed it: [url="http://youtu.be/zabfS59bhc0"]http://youtu.be/1AY2xpmImgc[/url]

     Upper left is the color map, right of that is the normal map, and furthest to the right is the depth map. The light map is the bottom left one, so look at that one. Basically, they color things that are outside of the light radius. I strongly suspect there's something wrong with the projected texture coordinates. I've double-checked that all the values I send into the shaders actually get assigned, and I've looked through everything in PIX and it seems to be fine.

     When I draw the sphere model that represents the point light, I scale the translation matrix with a (LightRadius, LightRadius, LightRadius) matrix. I use additive blending for my lighting phase, and change the rasterizer state depending on whether I'm inside the light volume or not. I use a separate render target for my depth; I haven't bothered trying to use my depth stencil as a render target, as I've seen some people do.

     Here's how the shader looks:

     Vertex shader:
     [CODE]
     cbuffer MatrixVertexBuffer
     {
         float4x4 World;
         float4x4 View;
         float4x4 Projection;
     }

     struct VertexShaderInput
     {
         float3 Position : POSITION0;
     };

     struct VertexShaderOutput
     {
         float4 Position : SV_Position;
         float4 LightPosition : TEXCOORD0;
     };

     VertexShaderOutput LightVertexShader(VertexShaderInput input)
     {
         VertexShaderOutput output;

         output.Position = mul(float4(input.Position, 1.0f), World);
         output.Position = mul(output.Position, View);
         output.Position = mul(output.Position, Projection);

         output.LightPosition = output.Position;

         return output;
     }
     [/CODE]

     Pixel shader:
     [CODE]
     cbuffer LightBufferType
     {
         float3 LightColor;
         float3 LightPosition;
         float LightRadius;
         float LightPower;
         float4 CameraPosition;
         float4 Padding;
     }

     cbuffer PixelMatrixBufferType
     {
         float4x4 InvViewProjection;
     }

     //==Structs==
     struct VertexShaderOutput
     {
         float4 Position : SV_Position;
         float4 LightPosition : TEXCOORD0;
     };

     //==Variables==
     Texture2D textures[3]; //Color, Normal, Depth
     SamplerState pointSampler;

     //==Functions==
     float2 postProjToScreen(float4 position)
     {
         float2 screenPos = position.xy / position.w;
         return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
     }

     half4 LightPixelShader(VertexShaderOutput input) : SV_TARGET0
     {
         float2 texCoord = postProjToScreen(input.LightPosition);

         float4 baseColor = textures[0].Sample(pointSampler, texCoord);

         //I cull early if the pixel is completely black, meaning there really isn't anything to light here.
         if(baseColor.r + baseColor.g + baseColor.b < 0.0f)
         {
             return half4(0.0f, 0.0f, 0.0f, 0.0f);
         }

         //get normal data from the normalMap
         float4 normalData = textures[1].Sample(pointSampler, texCoord);

         //transform normal back into [-1,1] range
         float3 normal = 2.0f * normalData.xyz - 1.0f;

         //read depth
         float depth = textures[2].Sample(pointSampler, texCoord);

         //compute screen-space position
         float4 position;
         position.x = texCoord.x;
         position.y = -(texCoord.x);
         position.z = depth;
         position.w = 1.0f;

         //transform to world space
         position = mul(position, InvViewProjection);
         position /= position.w;

         //surface-to-light vector
         float3 lightVector = position - input.LightPosition;

         //compute attenuation based on distance - linear attenuation
         float attenuation = saturate(1.0f - max(0.01f, lightVector)/(LightRadius/2)); //max(0.01f, lightVector) to avoid divide by zero!

         //normalize light vector
         lightVector = normalize(lightVector);

         //compute diffuse light
         float NdL = max(0, dot(normal, lightVector));
         float3 diffuseLight = NdL * LightColor.rgb;

         //reflection vector
         float3 reflectionVector = normalize(reflect(-lightVector, normal));

         //camera-to-surface vector
         float3 directionToCamera = normalize(CameraPosition - position);

         //compute specular light
         float specularLight = pow(saturate(dot(reflectionVector, directionToCamera)), 128.0f);

         //take into account attenuation and lightIntensity.
         return attenuation * half4(diffuseLight.rgb, specularLight);
     }
     [/CODE]

     Sorry if it's messy; I'm pretty loose about standards and commenting while experimenting. Thank you for your time, it's really appreciated.
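     Since the eventual fix was the depth-stencil setup rather than the shader: for reference, a commonly used state for additively blended light volumes is to disable depth writes and, when rendering the back faces of the light sphere, use a GREATER_EQUAL depth test. This is not necessarily exactly what the linked slides describe, just an illustrative sketch assuming D3D11 (the D3D10 equivalent is analogous):

     [CODE]
     #include <d3d11.h>

     // Hypothetical helper: creates the depth-stencil state used when drawing the
     // back faces of a point-light sphere in the lighting pass.
     ID3D11DepthStencilState* CreateLightVolumeDepthState(ID3D11Device* device)
     {
         D3D11_DEPTH_STENCIL_DESC dsDesc = {};
         dsDesc.DepthEnable    = TRUE;
         dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;     // light volumes must not write depth
         dsDesc.DepthFunc      = D3D11_COMPARISON_GREATER_EQUAL;  // passes where scene geometry lies in front of the volume's back face

         ID3D11DepthStencilState* state = nullptr;
         device->CreateDepthStencilState(&dsDesc, &state);
         return state;
     }
     [/CODE]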
  10. fixed it, thank you very much! Was fully expecting it to be something complicated. =)

      EDIT: I found a new error, this is really baffling me because I can't see what's wrong.

      // loop through all objects in list
      for (UINT i=0; i<(*pC); i++)
          if (ppLob[i] == pLob)
              break;

      // did we find the one we came for?
      if (i>=(*pC))
          return;   <--- Error appears here.

      SAFE_DELETE(ppLob[i]);

      and I keep on getting "error C2065: 'i' : undeclared identifier". I tried putting an int i but it doesn't help. Tried about 5 different combinations but it just doesn't want to compile without errors. (See the sketch below.)

      [Edited by - Lemmi on March 14, 2010 11:06:13 AM]
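      The likely cause, for anyone finding this later: in standard C++ a loop variable declared inside the for statement goes out of scope when the loop ends (older MSVC versions let it leak out), so using i after the loop triggers C2065. Declaring it before the loop should fix it -- same variable names as above:

      [CODE]
      UINT i = 0;                       // declare the index outside the loop so it survives it
      for (i = 0; i < (*pC); i++)
          if (ppLob[i] == pLob)
              break;

      // did we find the one we came for?
      if (i >= (*pC))
          return;

      SAFE_DELETE(ppLob[i]);
      [/CODE]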
  11. Hi, I'm working on compiling this HUGE project that came with the 3D Game Programming Book by Stefan Zerbst. I am honestly in over my head because I don't understand 1/10th of it, but I just want to get it compiled to see how it looks, and the only error I'm getting is this:

      1>c:\documents and settings\<name>\desktop\zfx3d\chap_15\include\cgameentity.h(29) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int

      Yeah, so it's just a header file. I'm going to paste the whole thing; I've marked row 29.

      // FILE: CGameEntity.h
      #ifndef CGameEntity_H
      #define CGameEntity_H

      #include <windows.h>
      #include <stdio.h>
      #include "zfx.h"
      #include "CGamePortal.h"
      #include "CGameLevel.h"

      class CGameLevel;
      class CGamePortal;

      class CGameEntity
      {
      public:
          CGameEntity(void);
          virtual ~CGameEntity(void);

          virtual HRESULT Render(ZFXRenderDevice*)=0;
          virtual void    Update(float)=0;
          virtual bool    TouchAndUse(const ZFXVector&)=0;
          virtual bool    TestCollision(const ZFXAabb&, ZFXPlane*)=0;
          virtual bool    TestCollision(const ZFXRay&, float, float*)=0;
          virtual bool    Load(FILE*);
          virtual ZFXAabb GetAabb(void) { return m_Aabb; }
 29---->  virtual IsOfType(ZFXENTITY e) { return (e==m_Type); }

      protected:
          ZFXENTITY m_Type;
          ZFXAabb   m_Aabb;
          VERTEX   *m_pVerts;
          WORD     *m_pIndis;
          UINT      m_NumVerts;
          UINT      m_NumIndis;
          UINT      m_nSkin;
      }; // class
      typedef class CGameEntity *LPGAMEENTITY;
      /*----------------------------------------------------------------*/

      class CGameDoor : public CGameEntity
      {
      public:
          CGameDoor(void);
          virtual ~CGameDoor(void);

          virtual HRESULT Render(ZFXRenderDevice*);
          virtual void    Update(float);
          virtual bool    Load(FILE*);
          virtual bool    TouchAndUse(const ZFXVector&);
          virtual bool    TestCollision(const ZFXAabb&, ZFXPlane*);
          virtual bool    TestCollision(const ZFXRay&, float, float*);
          virtual bool    IsActive(void) { return m_bActive; }
          virtual bool    ConnectToPortals(CGameLevel*);

      private:
          ZFXVector    m_vcT;
          ZFXAXIS      m_Axis;
          float        m_fSign;
          float        m_fTime;
          float        m_fDist;
          float        m_fPause;
          bool         m_bActive;
          bool         m_bOpening;
          bool         m_bPausing;
          UINT         m_Portal[2];
          CGamePortal* m_pPortal_A;
          CGamePortal* m_pPortal_B;

          bool LoadMesh(FILE *pFile);
      }; // class
      typedef class CGameDoor *LPGAMEDOOR;
      /*----------------------------------------------------------------*/

      #endif

      Thanks in advance.
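      The fix for that error, for anyone else hitting it: the marked line declares a function with no return type, and modern C++ no longer assumes int in that case. Giving IsOfType an explicit return type (bool matches how it is used) makes it compile:

      [CODE]
      virtual bool IsOfType(ZFXENTITY e) { return (e==m_Type); }
      [/CODE]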
  12. Hi. I'm trying to learn 3D programming with a book released circa 2002, and compiling any of its code doesn't work in a modern compiler (say, Visual Studio 2005 or 2008). I can't find VS2003 anywhere -- Microsoft doesn't seem to offer it anymore, and I don't want to turn to pirate sites out of principle. Also, I'm working with .dsw files; does anyone know of plugins for compilers other than the Microsoft ones that can handle those? I'm self-taught, so please don't bash me for not knowing the terminology. :P Thanks in advance!

      P.S. If it helps, it's the 3D Game Engine Programming book by Stefan Zerbst, if anyone has worked with it before.

      Edit: OKAY, thanks a lot! I solved the problem. As usual it was just me being a total noob! :) Turns out I'm using the wrong version of VC++ 2k8; trying to find the MFC package now.

      [Edited by - Lemmi on March 6, 2010 12:32:56 PM]