
About alagtriste

  1. [url="http://www.positiveatheism.org/hist/twainlfe.htm#0"]Letters from the Earth[/url]. If someone from the 19th century could see it, I don't know how it is possible for anyone with sense not to see the fallacy in the One Book (all of them). God as defined by the religions is just a JOKE. A really bad joke.
  2. Quote: Original post by SteveDeFacto
     Quote: Original post by alagtriste
     I don't get the code posted. You are resizing a vector to accommodate point data and then growing it with that same data. Why not just do something along those lines: *** Source Snippet Removed ***

     I see nowhere in your code that it compares the vertices, so you are completely missing the point of the code, and when I run that code it loads for 5 minutes, expands my application's memory by over 600 MB, then crashes... Here is my code now btw: *** Source Snippet Removed ***

     Yes, I could have missed the point of the code. In fact I don't get it. Why are you allocating the vertex array with default values and then growing it with data?

     std::vector<AxVertex> vertices(ControlPointCount);
     ...
     if( vertices[vi] != vertex && vertices[vi] != ZeroVertex )
     {
         indices[i] = vertices.size();
         vertices.push_back( vertex );
     }
     else
     {
         indices[i] = (DWORD)vi;
         vertices[vi] = vertex;
     }

     So if the vertex is not equal to the stored value and not equal to the default one, insert it; otherwise just overwrite the equal value (?) or the default value. That piece of code makes no sense to me. Isn't FBX supposed to be aware of equal vertices already (if not, why does it use indices for the polygons)? Why are you checking for equality? If FBX says they are equal (same index), it is because they are the same vertex. The fact that my code does not compare float values was the whole point of my thread. I've made a little test and it worked as expected (but I don't really know if it is the same thing you are trying to achieve, because your code makes no sense to me).
  3. I don't get the code posted. You are resizing a vector to accommodate point data and then growing it with that same data. Why not just do something along those lines:

     int ControlPointCount = FBXMesh->GetControlPointsCount();
     KFbxVector4* ControlPoints = FBXMesh->GetControlPoints();
     std::vector<AxVertex> vertices;
     std::vector<AxFace> faces;
     std::vector<int> attributes;
     std::vector<int> indicesMap(ControlPointCount, -1);

     for( int p = 0; p < FBXMesh->GetPolygonCount(); p++ )
     {
         int indices[4]; // polygons can be triangles or quads
         for( int i = 0; i < FBXMesh->GetPolygonSize(p); i++ )
         {
             int vi = FBXMesh->GetPolygonVertex( p, i );

             AxVertex vertex = {0};
             KFbxVector4 normal;
             FBXMesh->GetPolygonVertexNormal( p, i, normal );
             vertex.x = ControlPoints[vi][0];
             vertex.y = ControlPoints[vi][1];
             vertex.z = ControlPoints[vi][2];
             vertex.nx = normal[0];
             vertex.ny = normal[1];
             vertex.nz = normal[2];

             if (indicesMap[vi] == -1)
             {
                 int vertexIdx = vertices.size();
                 vertices.push_back(vertex);
                 indices[i] = vertexIdx;
                 indicesMap[vi] = vertexIdx;
             }
             else
             {
                 indices[i] = indicesMap[vi];
             }
         }

         if( FBXMesh->GetPolygonSize(p) == 4 )
         {
             // second triangle of the quad
             AxFace face;
             face.indices[0] = indices[0];
             face.indices[1] = indices[2];
             face.indices[2] = indices[3];
             faces.push_back( face );
             attributes.push_back(FBXMesh->GetPolygonGroup(p));
         }

         AxFace face;
         face.indices[0] = indices[0];
         face.indices[1] = indices[1];
         face.indices[2] = indices[2];
         faces.push_back( face );
         attributes.push_back(FBXMesh->GetPolygonGroup(p));
     }

     [Edited by - alagtriste on December 20, 2010 3:11:04 PM]
  4. Some random thoughts, not about the game design itself but about the distribution model. Instead of making another pay-per-card CCG, you could try to make it free to play. Players play against each other and winners are awarded cards based on a ranking system (card bets between players could also be in place). Then, if you develop a game that is entertaining and it gains popularity, you could develop an eBay-like card trading system (where you take a cut of the transfer money). What's more, if the game is fast paced and you can come up with a tournament system (maybe with multiplayer rules), then there could be free and paid tournaments (think of sit-and-go poker tournaments). So, you make the game, and if people enjoy it, you can come up with profits.
  5. Did you use any of Assimp's post-process flags? For example, aiProcess_JoinIdenticalVertices should take care of duplicates where possible. If that's not enough, maybe you should combine it with aiProcess_RemoveComponent. P.S. It's obvious I didn't read your post carefully. Maybe if you just strip all the other vertex components (UVs, normals, etc.), the join process gives you the mesh without duplicates. Of course, if you need that data, then you'd have to parse the file twice (if there are no performance requirements).
  6. I've looked at the D3DXQuaternionSlerp documentation and it doesn't specify whether it performs a shortest-way check, so maybe it isn't doing one. Try to perform a hemisphere check, something along these lines:

     D3DXQUATERNION q0 = BeginAnimJoint.Rotation;
     D3DXQUATERNION q1 = EndAnimJoint.Rotation;
     if (D3DXQuaternionDot(&q0, &q1) < 0.0f)
     {
         q0 = -q0;
     }
  7. The bind pose is a static pose, the pose used to skin the mesh. A pose is defined by its joints, and each joint is a reference frame. Usually a joint reference frame is defined with respect to its parent joint's space, but the bind pose (as the static pose used to compute vertex positions) is defined in model space. That means each joint reference frame goes from joint space to model space, so that reference frame can be used to compute the vertex position with respect to joint space. A skinned vertex position could thus be computed as:

     v' = Inverse(bind_pose_joint_i) * v

     (In case more than one joint affects a vertex, a weighted average is used.) Then, given an animated pose, we can compute the vertex position as:

     v' = animated_joint_i * Inverse(bind_pose_joint_i) * v

     Normally a matrix is used to define the joint orientation in the bind pose, but a quaternion could be used for convenience: if your skinning code works with quaternions, you need less memory/fewer registers to store a pose.
  8. The transformation T is your function (transformation) in the original frame F1. So in your example the transformation T is your axis-angle rotation; you just need to convert it to matrix form and perform the basis change. v is just a generic vector (the ones you are going to transform at runtime). The idea is that you have a function (transformation) T that takes vectors/points in F1 as input and outputs vectors/points in F1, transformed. But your engine wants vectors/points in F2 as input and output in F2 space, so you apply the basis change to your function so that it accepts vectors/points in F2 and outputs them in F2 too. That process is the one explained by apatriarca, known as a change of basis in linear algebra. In the change-of-basis process what you need to calculate is M and its inverse. For a right-handed to left-handed conversion, M is as simple as a reflection transform (which is its own inverse).
  9. Hi. First of all, excuse my English. You are almost there: with the concepts of Draw Atom and Stage you describe, you could easily implement a deferred renderer. You just need the command buffer abstraction. If I'm not misinterpreting you, the Stages are the ones submitting render orders. Think of those orders as just commands written into a local command buffer... all your stages could then be rendered in parallel. So, for example:

     DrawAtom
     {
         BoundingVolume volume;
         KeyInt key;
         Shader* shader;
         ShaderParamsBuffer* shaderParamsBuffer;
         VertexStreams vertexStreams;
         PrimitiveAssembler primitiveAssembler;
     }

     DrawCmdData
     {
         KeyInt key;
         Shader* shader;
         ShaderParamsBuffer* shaderParamsBuffer;
         VertexStreams vertexStreams;
         PrimitiveAssembler primitiveAssembler;
     }

     NormalStage
     {
         Frustum viewFrustum;
         FrameBuffer fb;
         Int stageKey;
         vector<DrawAtom> visibleAtoms;
         vector<DrawCmdData> visibleCmds;

         void CollectVisibleDrawAtoms(SpatialOrganizer<DrawAtom> scene)
         {
             visibleAtoms.clear();
             scene.Cull(viewFrustum, visibleAtoms);

             visibleCmds.clear();
             foreach (DrawAtom da in visibleAtoms)
             {
                 DrawCmdData data(da);
                 data.key = AlterStageKey(data.key, stageKey);
                 visibleCmds.insert(data);
             }
         }

         void Render(CommandBuffer& cb)
         {
             Int currentShaderKey = InvalidKey;
             Int currentParamsKey = InvalidKey;

             sort(visibleCmds);
             cb.SetFrameBuffer(fb);
             cb.ClearFrameBuffer(...);

             foreach (DrawCmdData cmd in visibleCmds)
             {
                 Int shaderKey = GetShaderKey(cmd.key);
                 Int paramsKey = GetParamsKey(cmd.key);
                 if (shaderKey != currentShaderKey)
                 {
                     cb.SetShader(cmd.shader);
                     currentShaderKey = shaderKey;
                 }
                 if (paramsKey != currentParamsKey)
                 {
                     cb.SetShaderParamsBuffer(cmd.shaderParamsBuffer);
                     currentParamsKey = paramsKey;
                 }
                 cb.Draw(cmd.primitiveAssembler);
             }
         }
     }

     ShadowMapStage
     {
         ...
         Int shadowMapShaderKey;
         Int shadowMapParamsKey;
         Shader* shadowMapShader;
         ShaderParamsBuffer* shadowMapShaderParamsBuffer;

         void CollectVisibleDrawAtoms(SpatialOrganizer<DrawAtom> scene)
         {
             visibleAtoms.clear();
             scene.Cull(viewFrustum, visibleAtoms);

             visibleCmds.clear();
             foreach (DrawAtom da in visibleAtoms)
             {
                 DrawCmdData data(da);
                 data.key = AlterStageKey(data.key, stageKey);
                 data.key = AlterShaderKey(data.key, shadowMapShaderKey);
                 data.key = AlterShaderParamsKey(data.key, shadowMapParamsKey);
                 data.shader = shadowMapShader;
                 data.shaderParamsBuffer = shadowMapShaderParamsBuffer;
                 visibleCmds.insert(data);
             }
         }
         ...
     }

     The key is essentially an indexing mechanism (not fully exploited in the example) used to sort by material, transparency, etc. It could be exploited further so that all the info is encoded in the key and the real data (Shader, ShaderParamsBuffer, etc.) is accessed by indexing data tables (like in the c0de517e link). The command buffer just stores the data to be executed later, in a format ready to use by the API.

     CommandBuffer
     {
         Command
         {
             Int type;
             union
             {
                 ...
                 SetFrameBufferCmd setFrameBuffer;
                 SetShaderCmd setShader;
                 SetShaderConstantCmd setShaderConstant;
                 SetSamplerStateCmd setSamplerState;
                 DrawPrimitiveCmd drawPrimitive;
                 ...
             }
         }

         vector<Command> commands;
         ...

         void SetShader(Shader* shader)
         {
             Command cmd;
             cmd.type = SetShaderCmdType;
             cmd.setShader.shader = shader;
             commands.insert(cmd);
         }

         void SetShaderParamsBuffer(ShaderParamsBuffer* paramsBuffer)
         {
             foreach (ShaderParam param in paramsBuffer->params)
             {
                 Command cmd;
                 cmd.type = SetShaderConstantCmdType;
                 cmd.setShaderConstant.data = param.data;
                 commands.insert(cmd);
             }
             foreach (SamplerState sampler in paramsBuffer->samplers)
             {
                 Command cmd;
                 cmd.type = SetSamplerStateCmdType;
                 cmd.setSamplerState.texture = sampler.texture;
                 cmd.setSamplerState.filter = sampler.filter;
                 ...
                 commands.insert(cmd);
             }
         }

         void DrawPrimitive(PrimitiveAssembler pa)
         {
             Command cmd;
             cmd.type = DrawPrimitiveType;
             cmd.drawPrimitive.primitive = pa.primitive;
             cmd.drawPrimitive.numPrimitives = pa.numPrimitives;
             commands.insert(cmd);
         }
         ...
     }

     One thing to take into account is that the commands generated inside the command buffer are ready to use by the underlying API (OpenGL, Direct3D 9, Direct3D 10, the consoles' own command/push buffer formats, etc.). So for example in D3D10 SetShaderParamsBuffer could translate to setting a constant buffer, while in D3D9 a command setting a shader constant is submitted per param. The command buffer could also be pregenerated to render a fixed scene, so no time is wasted culling, sorting calls, etc. Finally, the renderer interprets the command buffer and sends the commands to the API.

     Renderer
     {
         void Render(CommandBuffer& cb)
         {
             foreach (Command cmd in cb.commands)
             {
                 switch (cmd.type)
                 {
                     ...
                     case SetFrameBufferType:
                     {
                         glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, cmd.setFrameBuffer.bufferId);
                         break;
                     }
                     case DrawPrimitiveType:
                     {
                         glDrawElements(cmd.drawPrimitive.primitive, cmd.drawPrimitive.numPrimitives, GL_UNSIGNED_INT, ...);
                         break;
                     }
                     ...
                 }
             }
         }
     }
  10. Pointers are unique ids. If the difference in lifetimes is a problem, just keep separate containers per entity lifetime:

      EntitiesContainer game;   // From app startup until shutdown
      EntitiesContainer level;  // From level init until level unload
      EntitiesContainer normal; // Entities spawning and dying with variable frequency
      ShortLivedEntityContainer shortLived; // Very high-frequency deletion

      With that you now have to keep track of container + position in container, but if you know which container each entity belongs to, then you are done. That could be particularized per entity type; for example, you could have a dedicated Projectile pool container. The containers could be any of the ones others have suggested. A vector plus a free list is a good approach; the free list can be implemented as a start index plus a next index stored inside each bucket. If the entity is not a POD, just remove the union, if you can live with 2/4 bytes of overhead:

      struct Bucket
      {
          union
          {
              Entity entity;
              int nextFree;
          };
      };
      std::vector<Bucket> entities;
      int freeEntity;

      That said, I think the best approach is to keep special cases as special cases. If it happens that you need to create/delete projectiles at high frequency, then make a projectile container that best fits the usage: pools plus a specialized ADT like segmented tables, unrolled linked lists, etc.
  11. I'm implementing an animation system based on animation trees, similar to Unreal Engine 3 (and, I guess, Morpheme and Havok Behavior), and I'm trying to think about the best (performance-efficient, memory-efficient and parallelizable) execution algorithm for the tree. I know there is the MechWarrior-like flattening algorithm, but it's not correct (no normalization after each blend, as if the tree were a single N-way blend node) and it doesn't handle additive animations. The good thing about that algorithm is that it is simple and easily parallelizable (the animation evaluation operations). The naive depth-first execution uses a buffer per active blend node plus one per animation player, and those buffers have to be allocated each tick (I suppose memory pools could be used for this). Then there is the option of traversing the tree twice (cache problems): once to update weights and once to execute operations. Each blend node is now a dependency level, so it's hard to parallelize (blend + normalize operations are level-dependent). I have been thinking about a post-fix-like operations buffer, but then there is the problem of context-dependent operations (the good thing is that a flattened context dependency graph could be pre-generated for a static tree).
      E.g. (B: N-way blend node, P: playback node), the tree in depth-first order B P P B B P P P:

      B
        P
        P
        B
          B
            P
            P
          P

      Operations (Context_Operation). Eval: animation evaluation, Blend: weighted accumulation, Normalize: normalize quaternions.

      0_Eval 0_Blend 0_Eval 0_Blend 2_Eval 2_Blend 2_Eval 2_Blend 2_Normalize 1_Blend 1_Eval 1_Blend 1_Normalize 0_Blend 0_Normalize

      Sorted by context:

      0_Eval 0_Blend 0_Eval 0_Blend 0_Blend 0_Normalize
      1_Blend 1_Eval 1_Blend 1_Normalize
      2_Eval 2_Blend 2_Eval 2_Blend 2_Normalize

      With the dependency graph, each context could be executed like this:

      ExecuteCtx(ctx, dependencyGraph)
      {
          LookupOpStartAndEnd(operationsBuffer, start, end);
          if (start == end)
              return;

          foreach (dependantCtx in dependencyGraph[ctx])
          {
              ExecuteCtx(dependantCtx, dependencyGraph)
          }

          for (i = start; i != end; ++i)
          {
              switch (operationsBuffer[i])
              {
                  case Operation_Eval: ...
                  case Operation_Blend: ...
                  case Operation_Normalize: ...
              }
          }
      }

      It is easy to parallelize, but how efficient the parallelization is depends on tree complexity. Each context needs an accumulation buffer that has to be allocated, initialized and deallocated each tick, and it's hard to know when to allocate and when to deallocate.
  12. D3D10 HLSL Render States. That same post contains a link to the HLSL D3D9 info.
  13. A link to an article about memory management.
  14. I ran it on a GeForce 9600M (laptop) with 186.03 drivers and see the same effect as you. Both techniques look the same if I don't move the window. If I move the window, the spritebatch version looks correct at all resolutions.
  15. FPS is not a good metric, and 10 objects is probably not a good test case. Try a complex scene, or a scene where the bottleneck is the CPU-GPU bus; then your numbers will probably change. That reminds me of the classic "immediate-mode rendering is faster than VBOs"... P.S.: It could be that the drivers are not up to it (really weird, given how long DX10 constant buffers have been around).