
matches81

Members
  • Content count

    627
  • Joined

  • Last visited

Community Reputation

474 Neutral

About matches81

  • Rank
    Advanced Member
  1. Thanks for your fast answer, very helpful links. It helps a lot to know I can stop looking for device caps. Most of our "design" for that part consisted of pretty much copying the device caps structure from D3D9 and renaming it, so I guess I'll finally go over that again and try to find some compromise between the two. I currently don't have any idea what I'd use the "different viewports for different primitives" feature for, but it's definitely good to know. One thing we're looking forward to is getting to play around with geometry, domain and hull shaders and all the new stuff in D3D 10 and 11, so perhaps I'll stumble upon a use for that feature then.
  2. Hello there! A few years ago a friend of mine and I put together a basic 3D engine with a D3D9 renderer. It worked pretty well. Now, after a few years of doing other, unrelated work, we want to give D3D11 a try. Since our engine's design "expects" that we're able to enumerate what the graphics device is capable of (fairly close to what the device caps in D3D9 describe), we'd like to be able to provide that info. Is there something similar in Direct3D11? The only thing I've found so far is the feature level, but that seems fairly unspecific. Another question: I've read a bit about render targets and viewports in Direct3D11. Am I correct that, in D3D11, a render target basically consists of the resource and a corresponding view telling the pipeline where and how to read / write data? As for viewports, it seems that, although the involved methods and structs of course look a bit different, the basic idea is the same as in D3D9 (i.e. it basically ends up being part of the projection matrix and that's it). Is that correct, too? Any help would be appreciated, and thanks for reading.
  3. In the last few days I have been running into a question more and more, and I'm unable to find a satisfying answer for myself, so here I am, presenting this question to you guys: Should I avoid including other headers in a header file whenever possible, or does it make sense to include the headers for structs / classes this header is using anyway? I started wondering about this when I went through all my header files and got rid of namespace inclusions in them in order to avoid namespace confusion for files including them. So, doing that, I asked myself: How about header files? Do I really want to "accidentally" include another header file by including this one? On the other hand: Do I really want to have to include that next header explicitly when it is required to use this class properly anyway? Simple example: I have a class Box defined in Box.h, using a struct Vector3 defined in Vector3.h and a struct Vector4 defined in Vector4.h. Is it a good idea to include both Vector3.h and Vector4.h in Box.h, or is it better to provide forward declarations for both structs? In order to use Box, I'd pretty much have to know Vector3 and Vector4, so I'd have to include Vector3.h and Vector4.h anyway. Also, not including Vector3.h and Vector4.h will mean I'll have to use forward declarations for them. For this simple example that's okay, but with a growing project I'll end up with forward declarations of a lot of stuff eventually. So, I guess this comes down to: Should I include header files for used classes / structs etc. whenever possible (without causing a circular inclusion or similar issues), or should I rely on forward declarations as much as possible? Or is there some sweet spot in the middle?
  4. Once again, a stupid oversight ruins the day. I forgot to change the depth stencil buffer's size accordingly. My initial problem is solved, I guess, so I'm renaming the thread, too. However, I have a new problem. At startup, I'm back to getting the results I want, i.e. something like this: http://s2.imgimg.de/uploads/Good4efea5f5jpg.jpg But, as soon as I resize the window, this happens: http://s2.imgimg.de/uploads/Bad19524750jpg.jpg Sorry, I seem unable to get any kind of image posted :( Anyway: This is the exact same problem I had with my previous implementation that created a backbuffer with the size of the client window. It seems that some portions of the window are just not updated. The portions that aren't updated change when changing the window's size and remain constant between size changes. Any clues? Edit: Again, stupid oversight. I set the Z-value to clear to in the Clear call, but forgot to add the D3DCLEAR_ZBUFFER flag... Feel free to delete this thread entirely, as it just comes down to being blind due to hours of programming. [Edited by - matches81 on June 19, 2009 8:13:38 AM]
  5. Edit: Please see my second post, the problem described here is already solved So... I have this app using multiple target windows to render into using D3D9. Of course, I ran into the usual "how to handle resizing windows" problem and found a post saying that I could just create a back buffer with the desktop's size and use viewports afterwards. Since this meant I got around doing a device reset each and every time a window gets resized, I went that route. But, I am confused: My calls to Clear and Present were changed accordingly and work fine. I tested this by clearing only half of the target windows but presenting the full rectangle, resulting in half the windows being the color I cleared to and the other half getting that nice D3D debug flicker. So, I expect that area to be covered and I'm rather certain that that stuff is set up correctly, i.e. I'm not just presenting areas of the backbuffer that don't contain anything. Additionally, my test scene is definitely okay. Before changing to using viewports it rendered fine, which means that the vertex and index buffers were okay, the camera returned correct view and projection matrices etc... I assume this is the same now, as I haven't changed a thing in that regard. The problem now is: I don't see a thing anymore. The D3D debug runtimes don't give any errors and the only warnings I get are redundant render states, so nothing to worry about for now. Do I have to take the D3DVIEWPORT9 into account for the view and projection transform I use in my shaders in any way? I looked in D3D docs and found no further information about the usage of viewports... [Edited by - matches81 on June 19, 2009 8:34:52 AM]
  6. Setting y to 0 in the plane equation basically changes the plane you're representing. For example, the plane represented by 1/sqrt(3)·A + 1/sqrt(3)·B + 1/sqrt(3)·C + 1 = 0 is a completely different plane from 1/sqrt(2)·A + 0·B + 1/sqrt(2)·C + 1 = 0. Both have a distance of 1 to the origin, but that's it. My guess is that you would have to project your frustum to 2D and use that for your culling purposes, but I also think that this isn't exactly worth it. I'd set the min and max Y values of the AABBs in your quadtree to the min and max values of your terrain and continue to use the 3D test.
  7. Quote: The Error is/are in my program: ...\phong.fx(76): error X3500: asymetric returns from if statements not yet implemented ...\phong.fx(85): ID3DXEffectCompiler::CompileEffect: There was an error compiling expression ID3DXEffectCompiler: Compilation failed I can't do anything with this description. I compile the Shader with the (managed) function Effect.FromFile ( Shaderflags.Debug) My guess would be that the compiler doesn't like it if your "if" branch returns something and your "else" branch does not. Simple test for that would be to put an "else" before your final return statement. I'm just guessing what could be meant by "asymmetrical returns from if statements", though. If that isn't a solution at all or not for your case, I'd just go with Viik's solution.
  8. Quote: Original post by luca-deltodesco: when converting from float to int, it simply removes the fractional part of the number; for positive numbers this acts as taking the floor, for negative numbers this acts as taking the ceil.
  True, sorry for the inaccuracy. Btw, I got it working, although I still use floats. The problem was that I calculated 'restStepX' / 'restStepY' before adjusting the direction's length, leading to bigger values than expected. So, simply swapping these two lines:
      float restStepY = errorX*dir.y;
      dir /= dir.x;
  to
      dir /= dir.x;
      float restStepY = errorX*dir.y;
  (same for the slope > 1 case, of course) fixed the problem.
  9. Thx! Good points. I'd like to use integer arithmetic only for this algorithm, just like the original, but I had one problem: my lines don't have to start at the center of a "pixel", nor do they have to end at the center of one, and using floats was the only quick way I could think of to accommodate that. Of course, I could calculate the offset of the line start from the pixel center once and keep using it at each step while using integers for the rest. That way, I wouldn't accumulate the error. I think I'll give that a try. AFAIK, C++ always floors when converting a float to an integer. However, you're right, I should use explicit casts to make it more obvious.
  10. Hi there! I'm currently implementing a ray-heightmap intersection. The basic idea is: 1. Clamp the ray to the heightmap's AABB. 2. Rasterise the ray's valid segment (inside the AABB) to the grid defined by the heightmap to get a list of possibly intersecting quads. 3. Test the quads. I've implemented it, and it works pretty well so far, with a few exceptions here and there (intersections not found that should be there). One of the issues I've found so far is that the line rasterising algorithm sometimes goes one "pixel" beyond the specified end point. I'm pretty sure there are quite a few people around here who have implemented a line rasteriser, so it would be appreciated if you could look over my code to see if you spot something off. I don't. :( Here you go:

      void Line::RasterizeToGrid(float gridSize, const Ogre::Vector2 &origin, std::vector<SamplePoint> &result)
      {
          // Bresenham modified to plot all points in contact with line, not only one per X coordinate
          // http://lifc.univ-fcomte.fr/~dedu/projects/bresenham/index.html
          using namespace Ogre;
          float invGridSize = 1.f / gridSize;
          Vector2 start, end, dir;
          if(m_P.x > m_Q.x) // start = transformed m_Q, end = transformed m_P, to keep dir.x > 0
          {
              start = (m_Q - origin) * invGridSize;
              end = (m_P - origin) * invGridSize;
          }
          else
          {
              start = (m_P - origin) * invGridSize;
              end = (m_Q - origin) * invGridSize;
          }
          dir = end - start;
          int yStep;
          if(dir.y < 0)
          {
              yStep = -1;
              dir.y = -dir.y;
          }
          else
              yStep = 1;
          // now, dir.x and dir.y are > 0, so only two cases remain: slope <= 1 or slope > 1
          result.push_back(SamplePoint(start.x, start.y, 0));
          if(dir.y <= dir.x) // slope <= 1
          {
              float errorY = start.y - (int)start.y;
              float errorX = start.x - (int)start.x;
              float restStepY = errorX*dir.y;
              dir /= dir.x;
              int currentY = start.y;
              for(int i = start.x; i < end.x-1; i++)
              {
                  errorY += dir.y;
                  if(errorY > 1)
                  {
                      if(errorY - restStepY < 1) // we were below 1 when passing the grid border
                      {
                          result.push_back(SamplePoint(i+1, currentY, 0));
                      }
                      else
                      {
                          result.push_back(SamplePoint(i, currentY+yStep, 0));
                      }
                      currentY += yStep;
                      errorY -= 1.f;
                  }
                  result.push_back(SamplePoint(i+1, currentY, 0));
              }
          }
          else // if slope > 1, swap X and Y
          {
              float errorY = start.y - (int)start.y;
              float errorX = start.x - (int)start.x;
              float restStepX = errorY*dir.x;
              dir /= dir.y;
              int currentX = start.x;
              for(int i = start.y; i < end.y-1; i++)
              {
                  errorX += dir.x;
                  if(errorX > 1)
                  {
                      if(errorX - restStepX < 1)
                      {
                          result.push_back(SamplePoint(currentX, i+1, 0));
                      }
                      else
                      {
                          result.push_back(SamplePoint(currentX+1, i, 0));
                      }
                      currentX++;
                      errorX -= 1.f;
                  }
                  result.push_back(SamplePoint(currentX, i+1, 0));
              }
          }
          SamplePoint endSample = SamplePoint(end.x, end.y, 0);
          if(endSample != result.back())
              result.push_back(endSample);
      }

  The parameter 'origin' specifies the origin of the grid, 'gridSize' should be self-explanatory, and 'result' will contain a list of integer coordinates for the quads that intersect the line. The line has the fields m_P and m_Q, containing the start and end point; those are calculated after intersecting the ray with the heightmap's AABB. As I said, this works rather well mostly, but every once in a while I get a SamplePoint outside of the line; the algorithm seems to go too far every now and then. So, if anybody sees something off here, please tell me. I get the feeling I've spent too much time on this to see clearly.
  11. float vd = ray * n; ... float v0 = (p0 * n) + d; float t = vd / v0; This seems a bit off... the distance of a point at parameter t along the ray from the triangle's plane would be v0 + t*vd. When the ray hits the plane, this should be 0. So: v0 + t*vd = 0 => t*vd = -v0 => t = -v0 / vd Haven't read further after that, hth.
  12. As Viik said, clamping is one of a few ways to address textures. When you use that mode to address a texture, every U or V coordinate greater than 1 will be interpreted as 1, and every coordinate less than 0 will be interpreted as 0, i.e. the coordinates are clamped to [0,1]. That results in the texture's borders being stretched outwards.
  13. I'd probably try to find a balance between the two. In most games you will have a large chunk of geometry that already exists at the beginning of a level (i.e. is created during load time). For that geometry you can probably combine the two: Sort the geometry based on the material it's going to be rendered with, and then place as many "material groups" as you can / want in one buffer. That shouldn't take too long to do and you'll end up with buffers of a sensible size (i.e. few buffer changes) and can still batch all objects using the same material. For geometry created during gameplay you'd probably only sort by materials, because the above might be too slow to do during run-time.
  14. I'd first place a breakpoint in the destructor of ray_lval and see if it's called by accident somewhere; I had that happen to me once for some odd reason. If it is called, you can then use the call stack to find out from where. If your scene is static, you should definitely look into a ray-tri intersection test based on Pluecker coordinates; seems like a nice idea. I also wrote a raytracer as part of my studies, and we used to project the triangles along the axis corresponding to the biggest component of their normal and store that projection. It worked pretty well and was fast enough for our purposes. I could probably check whether I can still find a description of the method we used.
  15. Thx all for your replies. @cameni and Ashaman: You're right, a heightmap of size (2^n+1)² will turn into a terrain with (2^n)² quads. The problem is: I want to calculate the textures based on the terrain, too. Currently, as I said, I simply generate a higher resolution heightmap and evaluate the vertices of the resulting terrain directly. Simple, but it results in a (2^n+1)² texture. If I want to sample the quads of the higher-res terrain for my textures, I'd have to interpolate between the vertices of the terrain, right? I've thought about that again, and to me there's no way around interpolating between vertices if I want the texture resolution to be different from the heightmap resolution (except of course HellRaiZer's probably very feasible suggestion of ignoring the last row and column of vertices). But your hint about the quads combined with HellRaiZer's UV calculation actually was a bit of an eye-opener... I was thinking about mapping the whole 2^n of the textures to the whole 2^n+1 of the heightmap, which would have resulted in pretty ugly mappings. Your tips reminded me that UV coordinates of (0,0) don't actually correspond to the center of the top-left pixel, but to its top-left corner. So, mapping quads to pixels is actually more accurate and simpler than I had originally thought, since I only need to sample the center of each quad (which actually corresponds to the center of the pixel) of my high-res terrain and that's that. Thanks again!