Quat

Members
  • Content count

    1028
  • Joined

  • Last visited

Community Reputation

568 Good

About Quat

  • Rank
    Contributor
  1. I might be able to get 5.  Do you have a link for solving for this projective frame?
  2. I have the following problem. Suppose I have a 3D triangle in world coordinates, and I know the corresponding projected image coordinates of its vertices. Is it possible to find the view and projection transforms? The image points are actually found by a feature point detection algorithm, which is why I do not know the view/projection matrices. But I want to project more 3D points, which is why I need to solve for (or approximate) the view/projection matrices.
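     A note on feasibility, plus a minimal sketch: a general combined camera matrix P ~ Projection × View (3×4, 11 degrees of freedom up to scale) needs at least six correspondences in general position, so one triangle's three vertices are not enough on their own; with known intrinsics, three to four points suffice via P3P/PnP. A Direct Linear Transform sketch, assuming Eigen (names illustrative):

        #include <Eigen/Dense>
        #include <vector>

        // Estimate a 3x4 camera matrix P (P ~ Projection * View, up to
        // scale) from n >= 6 world/image correspondences via the DLT.
        Eigen::Matrix<double, 3, 4> EstimateCameraDLT(
            const std::vector<Eigen::Vector3d>& world,
            const std::vector<Eigen::Vector2d>& image)
        {
            const int n = static_cast<int>(world.size());
            Eigen::MatrixXd A(2 * n, 12);
            for (int i = 0; i < n; ++i) {
                Eigen::RowVector4d X(world[i].x(), world[i].y(), world[i].z(), 1.0);
                const double u = image[i].x(), v = image[i].y();
                // Each correspondence contributes two linear equations in
                // the 12 entries of P.
                A.row(2 * i)     << X, Eigen::RowVector4d::Zero(), -u * X;
                A.row(2 * i + 1) << Eigen::RowVector4d::Zero(), X, -v * X;
            }
            // The solution is the right singular vector belonging to the
            // smallest singular value of A.
            Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeFullV);
            Eigen::VectorXd p = svd.matrixV().col(11);
            Eigen::Matrix<double, 3, 4> P;
            P << p(0), p(1), p(2),  p(3),
                 p(4), p(5), p(6),  p(7),
                 p(8), p(9), p(10), p(11);
            return P;
        }

     Decomposing P back into separate view and projection matrices (e.g., RQ decomposition of its left 3×3 block) is the standard follow-up if the individual transforms are needed.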
  3. I have a set A of 3D points, and I have the camera and projection matrices, so I can project them onto the image plane to get points {p1, ..., pn}. Suppose the set A is transformed by a rigid body transform T to a new set B of 3D points. Again I can project these points with the same camera to get {q1, ..., qn}. I am trying to do image alignment, and it looks like the idea is to use least squares to find the 2D alignment transform (http://www.cs.toronto.edu/~urtasun/courses/CV/lecture06.pdf). My question is: can I use knowledge of the 3D rigid body transform T to find this 2D transform faster, or immediately? In other words, given a 3D rigid body transform, can I figure out how it transforms the corresponding projected points for a given camera?
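     One observation, with a minimal sketch (assuming Eigen; names illustrative): the forward direction needs no least squares at all, since each q_i is just the projection of T applied to the corresponding 3D point:

        #include <Eigen/Dense>

        // q = projection of (T * X) under the same camera, perspective
        // divide included.
        Eigen::Vector2d ProjectTransformed(const Eigen::Matrix4d& viewProj,
                                           const Eigen::Matrix4d& T,
                                           const Eigen::Vector4d& X)
        {
            Eigen::Vector4d clip = viewProj * (T * X);
            return Eigen::Vector2d(clip.x() / clip.w(), clip.y() / clip.w());
        }

     Note that the induced 2D map p_i → q_i is generally not a single affine transform, because it depends on each point's depth; it collapses to an exact 2D homography when the points are coplanar. So T gives you the q_i directly, but a single fitted 2D transform remains an approximation unless the scene is (near-)planar or the motion is small.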
  4. Not sure if this should be moved to the math section... Suppose I have a 3D object in some reference position/orientation R. It undergoes a relatively small rigid-body transform M to arrive at a new position/orientation R'. Then R and R' will have different projected images I and I' relative to some camera C. I want to find the alignment transform to align I and I'. I know there are 2D algorithms that estimate this transform by identifying several feature pixels in the images and then fitting a 2D affine transform. Question is: does knowing M help? That is, does knowing the 3D transform help me get a more accurate image alignment, or get it faster? So far, I think it helps somewhat. Given the known feature pixels in image I from the reference position and their 3D points on the model, I can apply M, then project back to I' so that I know the feature pixels for R'. This saves me having to search for matching feature pixels. But is there room for more improvement?
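     A sketch of the prediction step described in the last paragraph (assuming Eigen; names illustrative): push each reference feature's 3D point through M and reproject to get its expected pixel in I':

        #include <Eigen/Dense>
        #include <vector>

        struct Feature {
            Eigen::Vector4d worldPoint;  // homogeneous 3D point on the model
            Eigen::Vector2d pixel;       // its pixel in the reference image I
        };

        std::vector<Eigen::Vector2d> PredictFeaturePixels(
            const std::vector<Feature>& refFeatures,
            const Eigen::Matrix4d& M,         // known rigid-body transform
            const Eigen::Matrix4d& viewProj)  // camera C
        {
            std::vector<Eigen::Vector2d> predicted;
            predicted.reserve(refFeatures.size());
            for (const Feature& f : refFeatures) {
                Eigen::Vector4d clip = viewProj * (M * f.worldPoint);
                predicted.emplace_back(clip.x() / clip.w(), clip.y() / clip.w());
            }
            return predicted;
        }

     If M and the camera are exact, these predictions already are the alignment; restricting any residual 2D search to a small window around each prediction absorbs calibration and detection noise.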
  5. A few comments on the posted shader code:
     //CamPos = float3(ViewMatrix._41, ViewMatrix._42, ViewMatrix._43);
     The camera position is not stored in the 4th row of the view matrix.
     Eye = CamPos - Pos.xyz;
     I think you mean Pos.xyz - CamPos, which would be the vector from the camera origin to the point Pos. However, this doesn't take the orientation of the camera into consideration.
     Pos = mul(Pos, ViewMatrix);
     This does take the orientation of the camera into consideration.
     //Eye = Pos.z;
     Assigning a scalar to a vector?
     Out.Pos = mul(Pos, ProjMatrix);
     Out.DepthV = length(Eye);
     length(Eye) is not the same as just the z-coordinate Pos.z.
  6. I wish they would add a bindless texturing API to d3d11, as well as the "fast geometry shader" feature for cube map rendering. I think they could have done more for d3d11 to reduce draw call overhead without having to drop down to the low-level d3d12 API.
  7. Hello, I am new to unit testing and have the following question. I have a WPF app that uses MVVM, and I am working on unit testing the view model. My UI has a button which the view model abstracts as a command; when it is pressed, it sets the state of a few properties and posts an event. My question is whether I should make one unit test that asserts everything I expect to happen from the button press (pseudocode):
     // Arrange
     btnCommand.Execute(null); // basically the button press handler
     Assert(StateA == X)
     Assert(StateB == Y)
     Assert(StateC == Z)
     Verify event was posted
     or should I separate these out into four separate tests with only one assert per test?
  8. First, I don't have a lot of network background, so sorry if this question shows my noobness. In my particular scenario, there is going to be one service per client (one-to-one). I'm sending game data from the server to the client so that the client can do some processing with the data. In theory the data could change every frame, but in practice it does not, so I'm trying to only send deltas, but I ran into a problem with my approach.
     My approach was to keep a dictionary<ID, data> on the server, so when it came time to send data over the wire, I could look up the last data I sent, check which pieces changed, and send only those. A set of flag bits would also be sent over so the client knew which data values were being updated. The client also keeps the data cached, so it only needs the updated values.
     The problem I ran into is that the server starts up before any clients connect and starts sending the data (to nowhere). This builds the cache, so by the time a client connects, it only receives deltas (but the client never received the full object the first time around because it wasn't connected yet).
     Since the client/service is one-to-one, I could probably modify the server to not start building a cache until a client connects. However, I wondered if missed packets would be a problem (maybe our socket API automatically resends, so I don't need to worry about this situation). I'm wondering what kind of systems games use to efficiently sync client/server data so that only deltas need to be sent.
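     A common shape for this (a minimal sketch; names illustrative): treat the first transmission to each client as a keyframe/full snapshot and only send deltas after that, against state the client is known to have. Over TCP the protocol retransmits lost segments, so "last sent" is a safe baseline; over UDP, engines in the Quake lineage delta against the last acknowledged snapshot instead:

        #include <cstdint>
        #include <unordered_map>

        struct GameData { float hp; float x, y; uint32_t flags; };

        enum FieldBits : uint8_t { kHp = 1, kPos = 2, kFlags = 4 };

        struct ClientState {
            bool hasBaseline = false;                      // got a full snapshot yet?
            std::unordered_map<uint32_t, GameData> known;  // what the client has
        };

        void SendUpdate(ClientState& client, uint32_t id, const GameData& cur)
        {
            if (!client.hasBaseline) {
                // First contact: send the complete object, not a delta.
                /* serialize the full GameData here */
                client.known[id] = cur;
                client.hasBaseline = true;
                return;
            }
            const GameData& prev = client.known[id];
            uint8_t dirty = 0;
            if (cur.hp != prev.hp)                  dirty |= kHp;
            if (cur.x != prev.x || cur.y != prev.y) dirty |= kPos;
            if (cur.flags != prev.flags)            dirty |= kFlags;
            if (dirty) { /* serialize 'dirty' + only the flagged fields */ }
            client.known[id] = cur;  // valid over TCP; over UDP, update on ack
        }

     The key change from the approach in the post is that the cache is per connection and starts at "nothing known", so a late-joining client always receives the full object first.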
  9. I'm implementing a sprite system to draw sprites. My first strategy was as follows:
     1. Use one big dynamic vertex buffer.
     2. Sort sprites by render state/texture.
     3. For each sprite batch:
        a. Fill the dynamic vertex buffer with the next batch using glBufferSubData.
        b. Issue a draw call to draw the batch.
     My thinking was that the driver would discard the old buffer and allocate a new one for the next batch so there would be no stalls. This is how Direct3D works with the Map Discard/WriteOnly flags. Can I expect similar behavior on mobile OpenGL ES 2.0?
     Later, I was thinking that even if the driver does discard the buffers, it still allocates a lot of buffers per frame (not sure if this is a big deal or not). So then I was thinking of the following fill strategy:
     3. For each sprite batch:
        a. Fill a system memory vertex array with the next batch.
        b. push_back a struct BatchData = {VertexStart, VertexCount, Texture*, etc.} so I know what region of the vertex buffer corresponds to each draw call.
     4. Copy all the vertices for all the draw calls to the dynamic vertex buffer using glBufferSubData.
     5. For each batch:
        a. Draw the BatchData using its offset into the vertex buffer.
     To me the advantage of the 2nd approach is that I won't discard a whole buffer when a sprite batch is small (like 1-2 sprites), but it requires a 2nd pass. Any thoughts?
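     On the discard question: GLES 2.0 has no Map/Discard, but re-specifying the store with glBufferData(..., NULL, ...) ("orphaning") is the conventional hint that lets the driver hand back fresh memory instead of stalling; whether a given mobile driver honors it varies by vendor, so it is worth profiling on target hardware. A sketch of the second, single-upload strategy (names illustrative):

        #include <GLES2/gl2.h>
        #include <vector>

        // The {VertexStart, VertexCount, Texture*} record from step 3b.
        struct Batch { GLint vertexStart; GLsizei vertexCount; GLuint texture; };

        // 'vbo' was created once with glGenBuffers; attribute pointers
        // are assumed set up elsewhere.
        void DrawSprites(GLuint vbo, GLsizeiptr capacityBytes,
                         const void* cpuVertices, GLsizeiptr frameBytes,
                         const std::vector<Batch>& batches)
        {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            // Orphan once per frame, then upload the whole frame's sprites
            // in a single glBufferSubData call.
            glBufferData(GL_ARRAY_BUFFER, capacityBytes, NULL, GL_DYNAMIC_DRAW);
            glBufferSubData(GL_ARRAY_BUFFER, 0, frameBytes, cpuVertices);
            for (const Batch& b : batches) {
                glBindTexture(GL_TEXTURE_2D, b.texture);
                glDrawArrays(GL_TRIANGLES, b.vertexStart, b.vertexCount);
            }
        }

     This keeps it to one orphan per frame regardless of batch count, which addresses the many-small-discards worry about the first strategy.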
  10. I'm new to network programming and have to write a tool that runs on Windows and communicates with a Linux box over the network. The Linux box has a TCP/IP server set up using C++ with Boost. My tool on Windows needs to connect. For the Windows side, I am writing the tool in C# and looked at this tutorial: http://www.codeproject.com/Articles/10649/An-Introduction-to-Socket-Programming-in-NET-using
     Basically, the Linux box is going to send packets with "event data" to the Windows client at certain times. What's the best way for the client to wait for incoming data? The tutorial above uses a while loop to send/receive data over the network stream. But is just looping and continuously polling for a packet the right design?
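     One clarification worth making: a blocking read is not polling. A blocking receive parks the thread in the kernel until bytes arrive, with no CPU spinning, so a dedicated receive thread looping on a blocking read is a normal design; in C# that is NetworkStream.Read on a worker thread, or the asynchronous BeginRead/EndRead (or async/await) APIs if you'd rather avoid the extra thread. A POSIX-flavored sketch of the loop (illustrative):

        #include <sys/socket.h>
        #include <unistd.h>
        #include <cstdint>
        #include <vector>

        void ReceiveLoop(int sock)
        {
            std::vector<uint8_t> buf(4096);
            for (;;) {
                ssize_t n = recv(sock, buf.data(), buf.size(), 0);  // blocks, no spin
                if (n == 0) break;   // server closed the connection
                if (n < 0) break;    // error (check errno, e.g. EINTR)
                // TCP is a byte stream: accumulate the 'n' bytes and split
                // them into messages (e.g., length-prefixed) yourself; one
                // recv does not correspond to one "packet" of event data.
            }
            close(sock);
        }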
  11. In the 3rd edition of Real-Time Rendering, the authors base the specular reflectance on roughness and the Fresnel effect:
     ((m + 8)/(8π)) · R_F(α_h) · cos^m(θ_h)
     It seems that for a really large m, when the normal and half vector are close to each other, the above expression evaluates to a number much greater than 1. Won't this amplify the amount of reflected specular light? It's not making sense to me, because then wouldn't the reflected light be greater than the incoming light? How is that possible?
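     The standard resolution, sketched briefly: a BRDF is a density per steradian, not a fraction of total energy, so pointwise values much greater than 1 are legal. Energy conservation only requires the cosine-weighted integral over the hemisphere to stay at or below one, and the (m+8)/(8π) factor is precisely the (approximate) normalization that enforces this:

        \int_{\Omega} \frac{m+8}{8\pi} \, R_F(\alpha_h) \, \cos^m\theta_h \, \cos\theta_i \, d\omega_i \;\le\; 1

     As m grows, the cos^m lobe narrows, so the peak rises while the integral stays bounded; the total reflected light never exceeds the incoming light.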
  12. I think you can for the most part. If you have fixed level sizes (arena map, race track), you can pretty much know at load time what resources you have (number of objects, materials, textures, etc.), so you can size your descriptor heap appropriately. You can then reserve a fixed maximum count to leave room for dynamic objects that will be inserted/removed on the fly. Descriptors don't cost much memory, so it wouldn't be a big deal to over-allocate some extra heap space. For more advanced scenarios, you can reuse heap space that you aren't using anymore. For a level-editor type application, I'd imagine you could grow heaps sort of the way vectors grow.
     I didn't really follow your question. The way I assume you would do it is to allocate the cbuffer memory, then allocate CBVs, living in a heap, that reference subsets of that cbuffer memory. If an object is deleted, it would be easiest to just flag that cbuffer memory region and its CBV as free so they can be reused the next time an object is created.
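     A minimal sketch of the over-allocate-plus-reuse idea above (D3D12; names illustrative): size a shader-visible heap for the level's static resources plus slack for dynamic objects, hand out slot indices, and recycle an index when its object dies:

        #include <d3d12.h>
        #include <vector>

        class DescriptorAllocator {
        public:
            void Init(ID3D12Device* device, UINT capacity)
            {
                D3D12_DESCRIPTOR_HEAP_DESC desc = {};
                desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
                desc.NumDescriptors = capacity;  // static objects + slack
                desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;
                device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&heap_));
                stride_ = device->GetDescriptorHandleIncrementSize(desc.Type);
                for (UINT i = 0; i < capacity; ++i) free_.push_back(i);
            }
            UINT Allocate() { UINT i = free_.back(); free_.pop_back(); return i; }
            // Only call once the GPU has finished all frames that
            // referenced this slot (e.g., defer by frame count).
            void Free(UINT index) { free_.push_back(index); }
            D3D12_CPU_DESCRIPTOR_HANDLE CpuHandle(UINT index) const
            {
                D3D12_CPU_DESCRIPTOR_HANDLE h = heap_->GetCPUDescriptorHandleForHeapStart();
                h.ptr += SIZE_T(index) * stride_;
                return h;
            }
        private:
            ID3D12DescriptorHeap* heap_ = nullptr;
            UINT stride_ = 0;
            std::vector<UINT> free_;
        };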
  13. Do the ray/triangle test in the local space of the mesh.
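     Concretely, a minimal sketch (assuming Eigen and the column-vector convention; names illustrative): transform the ray by the inverse world matrix instead of transforming every triangle:

        #include <Eigen/Dense>

        // Transform a world-space ray (origin o, direction d) into the
        // mesh's local space using the inverse world matrix.
        void WorldRayToLocal(const Eigen::Matrix4d& world,
                             const Eigen::Vector3d& o, const Eigen::Vector3d& d,
                             Eigen::Vector3d& oLocal, Eigen::Vector3d& dLocal)
        {
            const Eigen::Matrix4d invWorld = world.inverse();
            const Eigen::Vector4d oh = invWorld * Eigen::Vector4d(o.x(), o.y(), o.z(), 1.0); // point: w = 1
            const Eigen::Vector4d dh = invWorld * Eigen::Vector4d(d.x(), d.y(), d.z(), 0.0); // direction: w = 0
            oLocal = oh.head<3>();
            dLocal = dh.head<3>();
            // If 'world' has non-uniform scale, dLocal is no longer unit
            // length; leaving it unnormalized makes the local hit t carry
            // straight back to world space.
        }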
  14. This is a good question. Yes, you can put data you would typically put in a constant buffer into a structured buffer, bind an SRV to the structured buffer, and index it in your vertex shader. You would have to profile to see which performs better.
     In the d3d11 days, I assumed constant buffers were distinguished in that they were designed for changing a small number of constants often (per draw call), and so had special optimizations for this usage, whereas a structured buffer would not be changed by the CPU very often and would be accessed more like a texture. I'm not sure whether future hardware will continue to make the distinction or if it is all the same.
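     A sketch of the structured-buffer route on the C++ side (D3D11; names illustrative):

        #include <d3d11.h>
        #include <DirectXMath.h>

        // Illustrative per-instance record; anything you would otherwise
        // put in a cbuffer array works the same way.
        struct InstanceData { DirectX::XMFLOAT4X4 world; };

        HRESULT CreateInstanceSRV(ID3D11Device* device, UINT maxInstances,
                                  ID3D11Buffer** buffer,
                                  ID3D11ShaderResourceView** srv)
        {
            D3D11_BUFFER_DESC bd = {};
            bd.ByteWidth = UINT(sizeof(InstanceData)) * maxInstances;
            bd.Usage = D3D11_USAGE_DYNAMIC;  // CPU-updated per frame
            bd.BindFlags = D3D11_BIND_SHADER_RESOURCE;
            bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
            bd.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
            bd.StructureByteStride = sizeof(InstanceData);
            HRESULT hr = device->CreateBuffer(&bd, nullptr, buffer);
            if (FAILED(hr)) return hr;

            D3D11_SHADER_RESOURCE_VIEW_DESC sd = {};
            sd.Format = DXGI_FORMAT_UNKNOWN;  // required for structured buffers
            sd.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;
            sd.Buffer.FirstElement = 0;
            sd.Buffer.NumElements = maxInstances;
            return device->CreateShaderResourceView(*buffer, &sd, srv);
        }
        // HLSL side: StructuredBuffer<InstanceData> gInstances : register(t0);
        // then index it in the vertex shader, e.g. gInstances[instID].world.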
  15. My old material class used to have a "reflectWeight" to tweak how reflective an object is. Now I am using the Schlick approximation for Fresnel to determine how reflective an object is, and all my materials can be reflective based on the material's "Fresnel 0" value. The problem I am seeing is that for low-reflectance objects like dull wood, at glancing angles (large angles between the normal and reflection vector) the amount of reflection ramps up and makes even dull wood look reflective. This does not seem right. What's the right way to solve this?
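     For reference, the Schlick approximation is

        F(\theta) = F_0 + (1 - F_0)(1 - \cos\theta)^5

     which reaches 1 at grazing incidence for every material, regardless of F_0. That behavior is physically plausible: even dull surfaces become mirror-like at grazing angles (look along a wooden tabletop toward a light). In a full microfacet model the bright grazing response is moderated by the geometry/visibility (masking-shadowing) term rather than by clamping Fresnel, so the usual fix is to pair Schlick Fresnel with a proper G term and a roughness-appropriate distribution, instead of reintroducing an ad hoc reflectWeight.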