robert.leblanc

Members
  • Content count: 7
  • Joined
  • Last visited

Community Reputation

140 Neutral

About robert.leblanc

  • Rank: Newbie
  1. So do they take a device type parameter or a render type class? Here's an example I found of a CVertexBuffer class. Is this similar to what you meant? It's D3D9, but I'm assuming the ideas are the same. I'm less interested in having the flexibility to change 3D APIs, although I know that's a big advantage of this approach; I'm more interested in how it helps structure the program itself. Maybe that's the wrong reason to go this route. Also, this VertexBuffer class.
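To show what I'm picturing, here's a rough sketch of a wrapper that takes the device as a parameter; the method names and members are my own guesses, not taken from the class I found:

[CODE]
#include <d3d11.h>

// Minimal vertex-buffer wrapper: owns an ID3D11Buffer and remembers its stride.
class CVertexBuffer
{
public:
    CVertexBuffer() : mBuffer(nullptr), mStride(0) {}
    ~CVertexBuffer() { if (mBuffer) mBuffer->Release(); }

    // Create the GPU buffer from raw vertex data; the device is passed in,
    // so the wrapper never has to know who owns it.
    HRESULT Create(ID3D11Device* device, const void* vertices, UINT vertexSize, UINT vertexCount)
    {
        mStride = vertexSize;

        D3D11_BUFFER_DESC bd = {};
        bd.Usage     = D3D11_USAGE_DEFAULT;
        bd.ByteWidth = vertexSize * vertexCount;
        bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;

        D3D11_SUBRESOURCE_DATA initData = {};
        initData.pSysMem = vertices;

        return device->CreateBuffer(&bd, &initData, &mBuffer);
    }

    // Bind to the input assembler before a draw call.
    void Bind(ID3D11DeviceContext* context, UINT slot = 0) const
    {
        UINT offset = 0;
        context->IASetVertexBuffers(slot, 1, &mBuffer, &mStride, &offset);
    }

private:
    ID3D11Buffer* mBuffer;
    UINT mStride;
};
[/CODE]

Is passing the device into Create() like this the kind of "device type parameter" you meant, or would you hide the device behind the render class entirely?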
  2. Can we dissect a little pseudocode of something extremely simple to see if I can understand this better? It will likely help others who are new like me.

[CODE]
MyGameObject( .. ) : GameObject
{
    ...
    init( .. )
    inputModule
    physicsModule
    modelModule
    graphicsModule
    otherModules
    ...
}
[/CODE]

I don't think this is exactly what you were describing, so hopefully you can clear up my misconceptions. It seems there is some degree of coupling between the graphics and model modules, since you'd have to get the vertices or the mesh from the model and pass it to the graphics module to be turned into buffers of various sorts. I guess that makes sense. Currently I'm working with the simplest thing possible, a few vertices in the shape of a cube, so I guess that would be the model. In the future I imagine I'll create models using software like Blender (I'm too poor for the likes of 3D Studio Max), so then you'd be parsing a file (I used Assimp in the past, but back then I wasn't doing any decoupling whatsoever, so it was messy). The model, I suppose, would also contain references to materials, texture files, and other data that would be the starting point of what eventually gets put on the screen. Most of the periods in this paragraph could be replaced with question marks, except for this one.

[CODE]
modelModule
{
    init( .. path-to-model-file / ID-of-model-to-look-up / something-that-lets-this-class-load-the-model )
    {
        // do the loading by parsing... or whatever this implementation decides on
    }
    vertices
    mesh
    textures
    materials
    general-model-data-variables
    etc.
}
[/CODE]

How the graphicsModule gets its hands on this I'm not quite sure. Maybe the model module implements an interface and the graphics module gets a handle to it during MyGameObject's initialization? Then the graphics module can call things like getMesh() and getWhateverItNeeds(), knowing those functions will be there because they are declared virtual in the modelModule interface. In any case, some black magic happens and the graphicsModule could maybe look like:

[CODE]
graphicsModule : someInterfaceThatMakesSenseForTheRenderEngine
{
    init( handle-to-modelModule, whatever-else-it-needs )
    {
        // This is what I'm wondering: does the following sort of stuff go in here?
        // For example, build some shaders based on data from the model
        // (presumably models define their own shaders); the engine can deal with the result.
        D3DCompileFromFile( szFileName, nullptr, nullptr, szEntryPoint, szShaderModel,
                            dwShaderFlags, 0, ppBlobOut, &pErrorBlob );

        // Define the input layout
        D3D11_INPUT_ELEMENT_DESC layout[] =
        {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
        };
        UINT numElements = ARRAYSIZE( layout );

        // Vertex buffer description
        D3D11_BUFFER_DESC bd;
        ZeroMemory( &bd, sizeof(bd) );
        bd.Usage = D3D11_USAGE_DEFAULT;
        bd.ByteWidth = sizeof( SimpleVertex ) * 3;
        bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        bd.CPUAccessFlags = 0;

        D3D11_SUBRESOURCE_DATA InitData;
        ZeroMemory( &InitData, sizeof(InitData) );
        InitData.pSysMem = vertices;

        // Description ready to be passed along to the engine
        ...
    }
}
[/CODE]

Making an actual vertex buffer requires a handle to the D3D device, and I guess creating a VB maybe isn't considered rendering? So does it make sense to just get a handle to the engine, give it the buffer description, and get back a VB, then hang on to it in this class? You mentioned several classes: CVertexBuffer, CInputLayout, ...

Are these classes within the graphicsModule? Are they initialized during the parsing of the model data?

If any of what I have written is even close to the right track (and I doubt it is), I picture the engine using the interface when a GameObjectManager calls RenderEngine.render(MyGameObject) to access all the predefined methods like getVB, getInputLayouts, getShaders, getWhateverStates, and getWhateverElseIsDefinedByTheInterface. By the way, is there a difference between the words "component" and "module"?

Also, I'm mostly interested in simple ideas here that let me decouple for organization. I'm currently not terribly interested (although I recognize its importance) in optimizing speed or fancy-dancy scene management; I'll do that once I get a bunch of simpler objects interacting and doing the basic things I want them to do. I figure even to get there I need at least a semblance of structure to my code. That's why I'm going this route. To be clear: I'm not building a commercial rendering engine here.
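To make that last question concrete, here is a minimal sketch of what I mean by handing the engine a buffer description and getting a vertex buffer back; the engine would own the device, and the class and method names are placeholders of mine, not anything you mentioned:

[CODE]
#include <d3d11.h>

// Hypothetical engine-side helper: the render engine owns the ID3D11Device,
// so modules hand it a filled-in description plus data and get a buffer back.
class RenderEngine
{
public:
    explicit RenderEngine(ID3D11Device* device) : mDevice(device) {}

    // The graphics module fills in the D3D11_BUFFER_DESC and initial data;
    // the engine is the only thing that ever touches the device.
    ID3D11Buffer* CreateVertexBuffer(const D3D11_BUFFER_DESC& desc,
                                     const D3D11_SUBRESOURCE_DATA& initData)
    {
        ID3D11Buffer* buffer = nullptr;
        if (FAILED(mDevice->CreateBuffer(&desc, &initData, &buffer)))
            return nullptr;
        return buffer; // the caller (graphicsModule) hangs on to it
    }

private:
    ID3D11Device* mDevice;
};
[/CODE]

Is that roughly the split you had in mind, or should the engine keep the buffer itself?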
  3. I'm fairly new to 3D programming, but I've got a grasp on some of the basics. My last attempt fizzled out mainly because my ability to manage assets, and the code in general, was compromised by poor program architecture. I've been reading about various design patterns, and it has spawned a few good ideas (nothing overly concrete, though). I'm programming with C++ and the Direct3D 11 API (Windows 8 SDK), and I've created a class that I'm calling a RenderEngine (I'm not entirely convinced that I actually know what a render engine is, but you'll see soon enough what I'm leaning toward).

The idea of having a 'game object' that consists of components like 'graphics', 'physics', 'control', etc. seems appealing for its flexibility. In my mind, at least part of the idea behind a RenderEngine is that the engine could do things like optimization, but also let you program objects without putting a bunch of rendering calls in the game object itself.

So, for example, say you have a game object that is a ball (or whatever). I'm interested in what would be considered its graphical part, and in how the RenderEngine would interpret that graphics part. For example, would you have member variables for, say, texture, vertices, color, sprite, etc.? (What other useful things would go here?) Then somewhere in a game loop you'd have some type of GameObjectManager that would give a game object to the RenderEngine and ask it to render it. Presumably the GraphicsComponent would be a predefined interface, so that when the RenderEngine looks at it, it "knows" it can call something like getVertices(), getColor(), and so on. Some of these could obviously be null, because not every GraphicsComponent would be the same. That's at least part of how I thought a rendering engine would work (probably a very small part, but I'm only going for a simple game at this point).

However, I don't see how the object could be truly decoupled from the Direct3D API, because there are so many specific things that need to be set up to bind the buffers and layouts to the pipeline: vertex descriptions, layout descriptions, multisampling, and so on. So that makes me wonder: does it make sense to include that in the GraphicsComponent as well? So along with textures, vertices, colors, etc., you'd have DESC_-type structures that would get passed along to the RenderEngine? I don't see how the engine could be programmed to figure that stuff out on its own.

Or does it make sense to have an initializing class that has all these specific needs coded for the game object in question? The initializingComponent could pass this data along to the RenderEngine, and the engine could store it in some kind of STL container. Perhaps an objectID would also be passed along, so that the next time the game asks the render engine to render that game object, it would use the ID to locate the stored data in the container and fire that into the pipeline.

The question for me is this: with an initializing class as described above, am I maintaining a decoupled state even though I'd have a ton of highly specific commands to create buffer descriptions and the like? You certainly couldn't write the initializeComponent and the GraphicsComponent separately. Well, you could probably write the GraphicsComponent, but someone (me) would have to make sure the stuff in the initializeComponent made sense for what was in the GraphicsComponent, which seems to me is not decoupled.

Also, suppose I went the initialization route: when something like a vertex buffer is created, would it make more sense for the render engine to store it, or should it be fired back into the GraphicsComponent? If it lives in the RenderEngine, you'd need to tell the RenderEngine when an object gets destroyed so you could release that memory. If it lives in the GraphicsComponent, you'd need to add many placeholder variables to the interface for things that could possibly get initialized. I have no idea if either of those options is good, or if they even make much sense.

So what parts of anything I just wrote made sense? What am I out to lunch on? And where might I find examples of this type of architecture that I could learn from?
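To make the "store it in some kind of STL container keyed by an objectID" idea concrete, here's a rough sketch of what I'm imagining; all the names (RenderData, Register, Unregister, and so on) are placeholders of mine, not an established design:

[CODE]
#include <d3d11.h>
#include <unordered_map>

// Hypothetical per-object GPU data the engine keeps after initialization.
struct RenderData
{
    ID3D11Buffer*      vertexBuffer = nullptr;
    ID3D11InputLayout* inputLayout  = nullptr;
    UINT               vertexCount  = 0;
};

class RenderEngine
{
public:
    // Called once by the object's initializing component.
    void Register(unsigned objectID, const RenderData& data)
    {
        mObjects[objectID] = data;
    }

    // Called each frame by the GameObjectManager.
    void Render(ID3D11DeviceContext* context, unsigned objectID)
    {
        auto it = mObjects.find(objectID);
        if (it == mObjects.end())
            return;

        const RenderData& rd = it->second;
        UINT stride = sizeof(float) * 3; // assumes position-only vertices
        UINT offset = 0;
        context->IASetInputLayout(rd.inputLayout);
        context->IASetVertexBuffers(0, 1, &rd.vertexBuffer, &stride, &offset);
        context->Draw(rd.vertexCount, 0);
    }

    // Called when the game object is destroyed, so GPU memory gets released.
    void Unregister(unsigned objectID)
    {
        auto it = mObjects.find(objectID);
        if (it == mObjects.end())
            return;
        if (it->second.vertexBuffer) it->second.vertexBuffer->Release();
        if (it->second.inputLayout)  it->second.inputLayout->Release();
        mObjects.erase(it);
    }

private:
    std::unordered_map<unsigned, RenderData> mObjects;
};
[/CODE]

If that's roughly reasonable, my remaining question is still whether the RenderData should live in the engine like this or be pushed back into the GraphicsComponent.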
  4. I have been working with DirectX 10 for a couple of months now and have a lot of the fundamentals figured out. I have a 3D world with a simple cube for a ship that flies around via thrust, a third-person camera that rotates in all directions, etc. Oh yeah, did I mention that my ship is a cube? WTF?!

Many articles I have read point to using Blender, Maya, or 3ds Max as a model editor. Great; I've played around with Blender and I feel I could make a decent model or two. And then all of a sudden... pow... DirectX has been around for how long now, and it's still a huge pain in the ass to load data from a widely supported format into a mesh. I get that many people will have different uses for the data stored in their particular model format, but is there not some base functionality that could be implemented, such as loading vertices, normals, UV coordinates, and indices? I seriously have to go write a loader from scratch? Or apparently learn the poorly documented Assimp API?

So my question is: where is the most stable loader library for a .blend, an .obj, or anything else Blender can export (or a tutorial)? I'm not about to drop three grand on Maya or 3ds Max. Jesus. I suppose I can write one if I have to, but at this point I couldn't care less about how the mesh is actually loaded; I don't need to know the inner workings of a mesh loader to satisfy a craving for knowledge. Just give me something that says loadMesh(...), getVertices(...), getIndices(...), etc. Help a guy out. My flying cube is pretty sweet and all, but I also want to add planets with gravity and such, and I don't want to write out the vertices by hand or apply some ungodly function to generate them.
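For reference, here's the bare minimum of what I'm after. It's a rough sketch of my own (LoadObj and Vec3 are made-up names) that only handles the 'v' and 'f' lines of a triangulated .obj file, with no normals, UVs, or materials, so it's nowhere near a production parser:

[CODE]
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

// Minimal .obj reader: positions and triangle indices only.
// Assumes the mesh was triangulated on export (e.g. from Blender).
bool LoadObj(const std::string& path,
             std::vector<Vec3>& vertices,
             std::vector<unsigned>& indices)
{
    std::ifstream file(path);
    if (!file)
        return false;

    std::string line;
    while (std::getline(file, line))
    {
        std::istringstream ss(line);
        std::string tag;
        ss >> tag;

        if (tag == "v")                      // vertex position: "v x y z"
        {
            Vec3 v;
            ss >> v.x >> v.y >> v.z;
            vertices.push_back(v);
        }
        else if (tag == "f")                 // triangle face: "f 1 2 3" or "f 1/1/1 2/2/2 3/3/3"
        {
            for (int i = 0; i < 3; ++i)
            {
                std::string vert;
                ss >> vert;
                // keep only the position index before the first '/'
                unsigned idx = (unsigned)std::stoul(vert.substr(0, vert.find('/')));
                indices.push_back(idx - 1);  // .obj indices are 1-based
            }
        }
    }
    return true;
}
[/CODE]

If a library already does exactly this (plus normals and UVs), I'd much rather use that.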
  5. I have read many different articles on creating a first-person camera; most of them have been XNA stuff. I am using C++ and have been going through Frank D. Luna's DirectX 10 book, which does not cover first-person cameras. Anyway, I want to be able to control the camera with the mouse. I've read that DirectInput is deprecated, so I'm not using that. Is raw input what I should be using, or is what I have fine for mouse input?

Here is the section of code dealing with the view transformation in relation to the camera; everything else in the program works fine. It's taken directly from Luna's crate example, and I'm just changing the view functionality. I believe once I get the rotation part figured out, moving the camera around should be relatively easy.

[CODE]
GetCursorPos(&curMousePoint);

int xCenter = (int)mClientWidth/2 + mWinPosX;
int yCenter = (int)mClientHeight/2 + mWinPosY;

D3DXVECTOR3 lUp  = D3DXVECTOR3(0.0f, 1.0f, 0.0f); // local up vector
D3DXVECTOR3 lDir = D3DXVECTOR3(0.0f, 0.0f, 1.0f); // local direction vector

if (curMousePoint.x != xCenter)
{
    deltaX += (curMousePoint.x - xCenter)/500.0f;

    // Yaw (around y axis)
    D3DXMATRIX yRotMatrix;
    D3DXMatrixRotationY(&yRotMatrix, deltaX);
    D3DXVec3TransformCoord(&wDir, &lDir, &yRotMatrix); // wDir is world direction vector
}

if (curMousePoint.y != yCenter)
{
    deltaY += (curMousePoint.y - yCenter)/500.0f;

    // Pitch (around x axis)
    //D3DXMATRIX xRotMatrix;
    //D3DXMatrixRotationX(&xRotMatrix, deltaY);
    //D3DXVec3TransformCoord(&wUp, &lUp, &xRotMatrix); // wUp is world up vector
}

target = mEyePos + wDir; // Should point in direction of target
D3DXMatrixLookAtLH(&mView, &mEyePos, &target, &lUp);

SetCursorPos((int)mClientWidth/2 + mWinPosX, (int)mClientHeight/2 + mWinPosY);
[/CODE]

I haven't worried about over-rotating yet. What I would like to know is: is this on the right track? What should be changed? I have the pitch portion commented out for simplicity.
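In case it helps show what I'm aiming for, here is a rough sketch of how I might combine the yaw and pitch into one rotation before building the view matrix. It uses the same D3DX helpers and the same deltaX, deltaY, mEyePos, and mView as the snippet above; pitchLimit is my own addition, so treat this as a guess rather than the "right" way:

[CODE]
// Clamp accumulated pitch so the camera can't flip over the top.
const float pitchLimit = D3DX_PI * 0.49f;      // stop just short of straight up/down
if (deltaY >  pitchLimit) deltaY =  pitchLimit;
if (deltaY < -pitchLimit) deltaY = -pitchLimit;

// Build one rotation from both angles: yaw about Y, pitch about X.
D3DXMATRIX rot;
D3DXMatrixRotationYawPitchRoll(&rot, deltaX, deltaY, 0.0f);

// Rotate the local forward and up vectors into world space.
D3DXVECTOR3 lDir(0.0f, 0.0f, 1.0f);   // local forward
D3DXVECTOR3 lUp(0.0f, 1.0f, 0.0f);    // local up
D3DXVECTOR3 wDir, wUp;
D3DXVec3TransformCoord(&wDir, &lDir, &rot);
D3DXVec3TransformCoord(&wUp,  &lUp,  &rot);

// Look from the eye position along the rotated forward vector.
D3DXVECTOR3 target = mEyePos + wDir;
D3DXMatrixLookAtLH(&mView, &mEyePos, &target, &wUp);
[/CODE]

If that's the wrong way to combine the two rotations, that's exactly the kind of thing I'd like to know.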
  6. OK, so just to make sure I'm getting it right:

[CODE]
D3DXCOLOR::operator UINT () const
{
    UINT dwR = r >= 1.0f ? 0xff : r <= 0.0f ? 0x00 : (UINT) (r * 255.0f + 0.5f);
    UINT dwG = g >= 1.0f ? 0xff : g <= 0.0f ? 0x00 : (UINT) (g * 255.0f + 0.5f);
    UINT dwB = b >= 1.0f ? 0xff : b <= 0.0f ? 0x00 : (UINT) (b * 255.0f + 0.5f);
    UINT dwA = a >= 1.0f ? 0xff : a <= 0.0f ? 0x00 : (UINT) (a * 255.0f + 0.5f);

    return (dwA << 24) | (dwR << 16) | (dwG << 8) | (dwB << 0);
}
[/CODE]

The return statement is essentially a 32-bit entity that looks like:

[CODE]
| --byte1=Alpha-- | --byte2=Red-- | --byte3=Green-- | --byte4=Blue-- |
[/CODE]

Does it make sense that, since x86 Intel machines are little-endian, I should think of this as:

[CODE]
| --byte1=Alpha-- | --byte2=Red-- | --byte3=Green-- | --byte4=Blue-- |
LOW ---------------------------------------------------------- HIGH
[/CODE]

The function that I used was given by Luna as:

[CODE]
D3DX10INLINE UINT ARGB2ABGR(UINT argb)
{
    BYTE A = (argb >> 24) & 0xff;
    BYTE R = (argb >> 16) & 0xff;
    BYTE G = (argb >> 8)  & 0xff;
    BYTE B = (argb >> 0)  & 0xff;

    return (A << 24) | (B << 16) | (G << 8) | (R << 0);
}
[/CODE]

Which to me seems to imply:

[CODE]
| --byte1=Alpha-- | --byte2=Blue-- | --byte3=Green-- | --byte4=Red-- |
LOW ---------------------------------------------------------- HIGH
[/CODE]

This works, but obviously my understanding of the endianness is incorrect, because the format code I am using is DXGI_FORMAT_R8G8B8A8_UNORM. That means the expected byte order is Red, Green, Blue, Alpha, which makes sense if you read the opposite of what I have above. To me it looks like the bytes are ordered backwards in memory (ABGR). Maybe I'm misunderstanding the shifting operation. Does (A << 24) | (B << 16) | (G << 8) | (R << 0) create the following sort of thing?

[CODE]
AAAAAAAA000000000000000000000000  (A bits shifted 24 to the left)
OR'd with
00000000BBBBBBBB0000000000000000  (B bits shifted 16 to the left)
OR'd with
0000000000000000GGGGGGGG00000000  (G bits shifted 8 to the left)
OR'd with
000000000000000000000000RRRRRRRR  (R bits shifted 0 to the left)
--------------------------------
AAAAAAAABBBBBBBBGGGGGGGGRRRRRRRR
[/CODE]

Is it just a case of convention where I should start at the far right and call that byte 1, or am I missing something else? Also, does the + 0.5f in (r * 255.0f + 0.5f) and the like cause rounding upwards, or is it doing something else?
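Here's a tiny standalone test I wrote to check my own mental model; it packs the channels the same way as the code above and then prints the bytes in memory order. This is just my own experiment, not from the book:

[CODE]
#include <cstdio>
#include <cstring>

int main()
{
    // Pack the way Luna's ARGB2ABGR result is laid out:
    // A in bits 24-31, B in 16-23, G in 8-15, R in 0-7.
    unsigned int A = 0xAA, B = 0xBB, G = 0xCC, R = 0xDD;
    unsigned int packed = (A << 24) | (B << 16) | (G << 8) | (R << 0);

    // Copy into a byte array to see the actual memory order.
    unsigned char bytes[4];
    std::memcpy(bytes, &packed, 4);

    // On a little-endian x86 machine the lowest-addressed byte holds the least
    // significant bits, so this prints: DD CC BB AA, i.e. R, G, B, A in memory,
    // which is exactly the order DXGI_FORMAT_R8G8B8A8_UNORM expects.
    std::printf("%02X %02X %02X %02X\n", bytes[0], bytes[1], bytes[2], bytes[3]);

    // And the + 0.5f is just rounding to the nearest integer before the cast truncates:
    float r = 0.6f;
    unsigned int dwR = (unsigned int)(r * 255.0f + 0.5f); // 153.5 truncates to 153
    std::printf("%u\n", dwR);
    return 0;
}
[/CODE]

If that output is right, then I think my only remaining confusion is which end to call "byte 1".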
  7. I've been working through Frank D. Luna's book "Introduction to 3D Game Programming with DirectX 10", and one of the problems is to switch from using a D3DXCOLOR color (128 bits) to a UINT color (32 bits). Presumably the format code to use is DXGI_FORMAT_R8G8B8A8_UNORM. In my mind this means you have a variable which, at the byte level, holds the channels in the exact order RGBA. (Is this the correct interpretation? I'm asking because I'm sure I've read that when you want RGBA you really need a code like A#R#G#B#, where the alpha channel is specified first.)

Anyway, I opted (there's probably a better way) to do UINT color = (UINT)WHITE; where WHITE is defined as const D3DXCOLOR WHITE(1.0f, 1.0f, 1.0f, 1.0f); and the cast is defined in the extension to D3DXCOLOR. However, when DXGI_FORMAT_R8G8B8A8_UNORM is used with the UINT color variable, you get the wrong results. Luna attributes this to endianness. Is this because the cast from D3DXCOLOR produces a UINT of the form RGBA, but because Intel x86 is little-endian, at the byte level you really get ABGR? So when this variable actually gets interpreted, the shader sees ABGR instead of RGBA? Shouldn't it just know, when interpreting the bytes, that the higher-order bits are at the smaller address?

And the last question: since the code is specified as DXGI_FORMAT_R8G8B8A8_UNORM, does this mean that R should be at the smallest address and A at the largest? I'm sure there are a ton of misconceptions here, so please feel free to dispel them.
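For what it's worth, here is one way I could pack the channels explicitly so that R ends up in the lowest byte of the UINT (and therefore, on a little-endian machine, at the lowest address). The helper name PackRGBA is mine, and I think this ends up equivalent to casting to UINT and then applying Luna's ARGB2ABGR, but please correct me if I'm wrong:

[CODE]
// Convert a D3DXCOLOR (four floats, assumed to be in 0..1) into the byte order
// DXGI_FORMAT_R8G8B8A8_UNORM expects: R in bits 0-7, G in 8-15, B in 16-23, A in 24-31.
// On little-endian x86 that puts R at the lowest address and A at the highest.
inline UINT PackRGBA(const D3DXCOLOR& c)
{
    UINT r = (UINT)(c.r * 255.0f + 0.5f);
    UINT g = (UINT)(c.g * 255.0f + 0.5f);
    UINT b = (UINT)(c.b * 255.0f + 0.5f);
    UINT a = (UINT)(c.a * 255.0f + 0.5f);
    return (a << 24) | (b << 16) | (g << 8) | (r << 0);
}

// usage: UINT color = PackRGBA(WHITE);
[/CODE]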