

Community Reputation

198 Neutral

About MirekCerny

  1. DX11

      Quote: "...of an extremely high-res model, from which you generate a normal/displacement map, which you can then apply to a coarse model. The other alternatives would be some form of subdivision scheme (e.g. PN triangles, or subdivs). Maya/XSI allow you to model subdiv characters, and there are libraries available to handle the tessellation (e.g. OpenSubdiv). It's entirely possible, but don't expect it to shorten the time for rigging and modelling, because it won't."

      OK, I guess that splits into two questions:

      1) If there are, say, two characters on the screen, which will result in better visuals:
         - low-res meshes with tessellation and a displacement map, or
         - hand-made high-res meshes, sans tessellation? (At the same frame rate, of course.)
      From the material I've read and seen, I am still not sure whether tessellation is only good for dynamic LOD (and whether, as such, the most 'important' objects are still better left untessellated).

      2) Why do you think it might not shorten the time for rigging and modeling? Without tessellation, you need:
         - one very high-res model to generate normal maps from, and
         - one high-res model to texture, rig, and ultimately display.
      I thought that with tessellation, the second model could be considerably more low-res, and as such easier to create and rig. Am I missing something?
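For reference, the core of the low-res-mesh-plus-displacement approach discussed above is just offsetting each tessellated vertex along its interpolated normal by a height sampled from the displacement map (normally done in the domain shader). A minimal sketch in Python; the function name and values are illustrative, not from any particular engine:

```python
# Displacement applied per tessellated vertex: p' = p + n * h * scale.
# All names and numbers here are illustrative.

def displace_vertex(position, normal, height, scale):
    """Offset a vertex along its (unit) normal by a sampled height."""
    return tuple(p + n * height * scale for p, n in zip(position, normal))

# A vertex on the coarse mesh with its interpolated unit normal:
v = (1.0, 0.0, 0.0)
n = (1.0, 0.0, 0.0)   # unit normal
h = 0.25              # value sampled from the displacement map
print(displace_vertex(v, n, h, scale=1.0))   # -> (1.25, 0.0, 0.0)
```

This is why the coarse mesh can be so simple: all the fine surface detail lives in the height map, and the GPU regenerates it every frame after tessellation.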
  2. DX11

    To be more specific: the (maybe unworkable) idea is to create a very high-res model in a sculpting application, plus a very (as in very, very) coarse base model, and then let tessellation and displacement do the rest. That should, in theory, drastically shorten the time needed for the actual 3D modeling and rigging. Or is this something the currently available real-time tessellation methods are unable to accomplish?
  3. Does anybody have experience with a good working pipeline for DX11-style tessellated characters? I mean everything from the selection of the tessellation method, through the asset creation tools, to a tool for creating and baking the displacement/normal maps. I'd like to include some (in fact, a lot of) tessellation in an engine, but I keep running into dead ends (lack of supporting tools, glitches and artifacts, etc.), so I'd treasure information from somebody who actually got it working. Thanks.
  4. DX11

    Even if they did decide they don't need the anisotropic filtering, it still makes no sense to use it to render the scene when the motion blur is off ;-) Thanks for the paper, I was looking for exactly that kind of information. It is a bit disappointing, though - I was hoping the GS method would allow real object-based motion blur, perhaps using a technique similar to shadow volume generation on the GPU - and now it seems I'm still stuck with the screen-space one. Oh well ;-)
  5. Hello, I decided to implement object-based motion blur the 'DX10' way (with a geometry shader, as opposed to a velocity texture). I read the docs for the DX10 SDK sample; they were clear enough: first you amplify the geometry "on both sides" using the geometry shader, then you blur the texture for the amplified geometry using anisotropic filtering. However, when I took a look at the sample itself, it looked fishy. I checked the source and found out they were not amplifying the geometry AND blurring the texture; they were EITHER amplifying the geometry OR blurring the texture, which doesn't make much sense to me. Does anybody know what is going on here? What is the right way to do this? Or rather, what is the best way to implement real-time object-based motion blur on DX11 hardware?
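The geometry-amplification half of that technique boils down to a per-vertex "stretch": vertices whose normals face along the motion vector keep the current-frame position, while the rest trail behind at the previous-frame position, smearing the silhouette along the velocity. A simplified sketch of just that decision in Python (the function name is illustrative, and this omits the extra silhouette geometry a real GS would emit):

```python
# Per-vertex stretch used by geometry-amplification motion blur.
# Vertices facing the motion lead at the current position; the
# rest trail at the previous-frame position. Illustrative only.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def stretch_vertex(curr, prev, normal):
    velocity = tuple(c - p for c, p in zip(curr, prev))
    return curr if dot(normal, velocity) > 0.0 else prev

# Object moving +x; a front-facing and a back-facing vertex:
front = stretch_vertex((2.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 0.0, 0.0))
back  = stretch_vertex((2.0, 1.0, 0.0), (1.0, 1.0, 0.0), (-1.0, 0.0, 0.0))
print(front, back)   # -> (2.0, 0.0, 0.0) (1.0, 1.0, 0.0)
```

The anisotropic-filtering pass then blurs the shading along the same velocity direction over the stretched surface.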
  6. Thanks for answering. Yes, I do the math in VS. I sent you a PM.
  7. Hi, after trying out several shadow mapping algorithms, I settled on paraboloid/dual-paraboloid shadow mapping. However, I have a problem: all the shadows I get are a bit "curved". This is not much of an issue for figures etc., but it is pretty noticeable for vertical columns casting shadows on walls. I'd like to ask: are the curved shadows a normal feature of paraboloid shadow mapping, or is it just a bug in my code? Thanks.
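One likely culprit worth checking: the paraboloid projection, (x, y) / (1 + z) for a unit direction, is nonlinear, but the rasterizer still interpolates linearly between projected vertices. Long straight edges therefore land in the wrong place on the map unless the casters are tessellated finely, which shows up as exactly this kind of curving. A small Python sketch demonstrating the mismatch (directions are made up):

```python
import math

# Paraboloid projection of a direction onto the map plane:
# normalize d, then take (x, y) / (1 + z). Because this map is
# nonlinear, the midpoint of two projected endpoints differs from
# the projection of the midpoint direction -- the source of
# "curved" shadows when geometry is coarsely tessellated.

def paraboloid_project(d):
    length = math.sqrt(sum(c * c for c in d))
    x, y, z = (c / length for c in d)
    return (x / (1.0 + z), y / (1.0 + z))

a = (1.0, 0.0, 1.0)
b = (0.0, 1.0, 1.0)
mid = tuple((p + q) / 2.0 for p, q in zip(a, b))

pa, pb = paraboloid_project(a), paraboloid_project(b)
linear_mid = tuple((p + q) / 2.0 for p, q in zip(pa, pb))
true_mid = paraboloid_project(mid)
print(linear_mid, true_mid)   # the two midpoints differ
```

If the error comes from this, adding tessellation to the shadow casters (or accepting the artifact for coarse geometry) is the usual remedy; it is an inherent property of the technique, not necessarily a bug.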
  8. Nope. In Cube PSM, the six faces of the cube are rendered NOT from the center of the cube, but from the position of the light, i.e. each one from a different distance. That means there is a different Znear and Zfar for every face. See http://http.download.nvidia.com/developer/GPU_Gems/Sample_Chapters/PSMs_Care_and_Feeding.pdf for details.
  9. Hello, I'm implementing a rendering system that will use quite a lot of point lights in an indoor scene, so I decided to use a Cube PSM (I drew heavily on "PSMs - Care and Feeding" :-). However, I encountered some problems with the algorithm. The biggest one so far is that the original algorithm uses the Zfar/(Zfar-Znear) and -Znear*Zfar/(Zfar-Znear) factors for rendering the shadows from ALL FACES of the cube, assuming they are constant. Provided that you keep the ratio Znear/Zfar constant, the first factor is indeed constant for all faces. But the second factor is not. Now I'm wondering whether it's worth the trouble to somehow find out which face is being rendered from and choose the appropriate Znear, or to store the original post-projective coordinates in the cube map... etc. Is there any easier way out? Does anybody know how the original algorithm was supposed to work? Thanks.
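The mismatch described above can be checked directly: with the ratio Znear/Zfar held fixed, the first factor Zfar/(Zfar-Znear) = 1/(1 - Znear/Zfar) is identical for every face, but the second, -Znear*Zfar/(Zfar-Znear), still scales with the absolute Znear, so one constant pair cannot serve all six faces. A minimal Python check (the depth ranges are made-up examples):

```python
# The two depth factors from the standard D3D projection matrix:
#   A = zf / (zf - zn)
#   B = -zn * zf / (zf - zn)
# With the ratio zn/zf held constant, A = 1/(1 - zn/zf) is fixed,
# while B = -zn/(1 - zn/zf) still grows with zn. Example ranges
# below are illustrative.

def depth_factors(zn, zf):
    return zf / (zf - zn), -zn * zf / (zf - zn)

ratio = 0.01                      # zn/zf held constant per face
for zf in (50.0, 100.0, 200.0):   # each face sees a different range
    zn = ratio * zf
    a, b = depth_factors(zn, zf)
    print(f"zn={zn}: A={a:.6f}  B={b:.6f}")
```

This confirms the question's premise: either pick the per-face Znear before applying the factors, or keep the post-projective depth in the cube map so the factors are never reused across faces.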