
EternalNewbie

Members

  • Content count: 31
  • Joined
  • Last visited

Community Reputation: 207 Neutral

About EternalNewbie

  • Rank: Member
  1. Thanks, I tried changing the kernel size but it doesn't seem to make any difference. Adding a minimum depth difference helps remove the artifacts in some areas, but as I move the camera they simply reappear at different depths.
  2. Hi all, I'm trying to implement very basic SSAO but I get the following artifacts. I'm not using the normals, and I'm using a simple filter with no noise here. I've tried using a noise filter, which helps remove them on the floor except when looking directly at them; however, they seem to persist on the walls and other objects. Has anyone seen this sort of artifact before? Any idea how to get rid of them? [attachment=11715:SSAOArtefacts.jpg]
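     A minimal CPU-style sketch of the sampling loop with the minimum-depth-difference and range checks discussed in the two posts above. The buffer layout, the epsilon, and the range cutoff are illustrative assumptions, not code from this thread:

         // Occlusion estimate for one pixel, with a minimum depth difference
         // (epsilon) so flat surfaces don't self-occlude, and a range cutoff
         // so unrelated distant geometry doesn't occlude. Both constants are
         // assumed values that would need tuning.
         float SampleOcclusion(const float* depthBuf, int width, int height,
                               int x, int y, int kernelRadius)
         {
             const float kMinDepthDiff = 0.0005f; // assumed epsilon
             const float kMaxRange     = 0.05f;   // assumed range cutoff

             float centerDepth = depthBuf[y * width + x];
             float occlusion = 0.0f;
             int samples = 0;

             for (int dy = -kernelRadius; dy <= kernelRadius; ++dy) {
                 for (int dx = -kernelRadius; dx <= kernelRadius; ++dx) {
                     int sx = x + dx, sy = y + dy;
                     if (sx < 0 || sx >= width || sy < 0 || sy >= height)
                         continue;

                     // Positive diff: the sample is in front of this pixel.
                     float diff = centerDepth - depthBuf[sy * width + sx];
                     if (diff > kMinDepthDiff && diff < kMaxRange)
                         occlusion += 1.0f;
                     ++samples;
                 }
             }
             return 1.0f - occlusion / samples; // 1 = open, 0 = fully occluded
         }

     If the artifacts still move around with the camera after this, scaling the epsilon by the centre depth (so the tolerance grows with distance) is a common next tweak.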
  3. Quote: Original post by wavetable: "However, I tried D3D10CreateDeviceAndSwapChain() and it *only* works with a valid DXGI_SWAP_CHAIN_DESC here. Does your code work without it? If it does, what does the DXGI_SWAP_CHAIN_DESC look like: is it in the current mode or in the first enumerated mode (smallest dimensions)? I am just interested." Nah, that's what I thought you were suggesting, as I figured that creating a swap chain requires you to set the format. I was looking for a way to get the format of the user's current display mode. Thanks for trying anyway. Any other suggestions, or do I have to use brute force and guesswork? Raj
  4. I didn't think that would work, as you need to fill in a DXGI_SWAP_CHAIN_DESC to create the swap chain in the first place. But if I create a swap chain without filling in the DXGI_SWAP_CHAIN_DESC or its BufferDesc, D3D10CreateDeviceAndSwapChain will use the current display format? Is that right? Raj
  5. That gives me a list of possible display modes for a particular adapter, but it doesn't give me the current display mode. I could search the list for modes that match the details I get from EnumDisplaySettings, but that seems a rather roundabout way of doing things. Also, to get the format you would have to store all potential formats for each bit depth and search through them. Any alternatives? Thanks, Raj
  6. Hi, I'm moving to D3D10 from D3D9. I know that in D3D9 you can use GetAdapterDisplayMode, but I can't figure out how to do the equivalent in D3D10, specifically how to get the display format. I know I could use EnumDisplaySettings to get the current resolution and bits per pixel, but there is no way to safely convert that to a DXGI format. So is there a proper function for this, or do I need to use some sort of hack? Thanks, Eternal Newbie
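     A rough sketch of the EnumDisplaySettings fallback described in this thread. The bits-per-pixel to DXGI_FORMAT mapping is a guess rather than an official conversion, which is exactly the "hack" caveat from the post:

         #include <windows.h>
         #include <dxgi.h>

         // Derive a plausible DXGI format from the current desktop mode.
         // The bpp-to-format mapping below is an assumption.
         DXGI_FORMAT GuessCurrentDisplayFormat(UINT* outWidth, UINT* outHeight)
         {
             DEVMODE dm = {};
             dm.dmSize = sizeof(dm);
             if (!EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &dm))
                 return DXGI_FORMAT_UNKNOWN;

             if (outWidth)  *outWidth  = dm.dmPelsWidth;
             if (outHeight) *outHeight = dm.dmPelsHeight;

             switch (dm.dmBitsPerPel) {
                 case 32: return DXGI_FORMAT_R8G8B8A8_UNORM; // typical 32 bpp desktop (assumed)
                 case 16: return DXGI_FORMAT_B5G6R5_UNORM;   // assumed 16 bpp mapping
                 default: return DXGI_FORMAT_UNKNOWN;
             }
         }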
  7. I'm trying to render an animated mesh, but I have a bug somewhere. When I render using the retail version of D3D9 I don't see the model. In PIX, the render tab shows nothing, but in the mesh tab you can see the geometry rendered in the viewport. I also tried the debug version of D3D9, with the appropriate flags set, so that I could debug the program; however, in the mesh tab, under PostVS and Viewport, it simply says "Failed to process vertices", so I can't debug the problem that way. The debug output says "Unsupported renderstate D3DRS_INDEXEDVERTEXBLENDENABLE." Note this only occurs with the debug version of D3D9. I have tried this using both my own shader and the fixed-function pipeline, and with a different model in the same format (I'm using my own 3ds Max exporter for both models); that model renders fine. Obviously there are numerous things that could cause invisible geometry, but can anyone suggest why I might be getting the "Failed to process vertices" message and how I could get rid of it, so that I can debug the code? I've just installed NVIDIA's latest PerfSDK and drivers, and I'm using a GeForce 8800 GTS, if that makes any difference. TIA, Eternal Newbie
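     One hedged guess rather than a confirmed fix for the message above: explicitly reset the vertex-blend render states to their documented defaults before drawing, so the debug runtime isn't validating a stale or driver-set value:

         #include <d3d9.h>

         // 'device' is the app's IDirect3DDevice9* (assumed). These are the
         // documented default values for both render states.
         void ResetVertexBlendStates(IDirect3DDevice9* device)
         {
             device->SetRenderState(D3DRS_VERTEXBLEND, D3DVBF_DISABLE);
             device->SetRenderState(D3DRS_INDEXEDVERTEXBLENDENABLE, FALSE);
         }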
  8. Try looking at "Frame Buffer Postprocessing Effects in DOUBLE-S.T.E.A.L" for an explanation of how to apply bloom. The way you perform a blur is by applying a convolution: for each pixel/texel you apply a kernel, which sums the neighbouring pixels, each multiplied by a weight. The simplest convolution is a box blur, which takes all neighbouring pixels and averages them; here the kernel might be a 3x3 where each weight is 1/9. Because the box kernel is separable (the 2D kernel is the outer product of two 1D kernels) and applying one 2D kernel is more expensive than applying two 1D kernels, we can optimise by applying two 1D passes: first a 3x1 kernel horizontally, then a 1x3 kernel vertically.

     float hKernel[3] = { 1.0f/3.0f, 1.0f/3.0f, 1.0f/3.0f };
     float vKernel[3] = { 1.0f/3.0f, 1.0f/3.0f, 1.0f/3.0f };

     // Horizontal blur: Buf -> tmpBuf
     for (int y = 0; y < height; y++) {
         for (int x = 1; x < width - 1; x++) {
             float tmp = hKernel[0] * Buf[(y * width) + x - 1];
             tmp      += hKernel[1] * Buf[(y * width) + x];
             tmp      += hKernel[2] * Buf[(y * width) + x + 1];
             tmpBuf[(y * width) + x] = tmp;
         }
     }

     // Vertical blur: tmpBuf -> outBuf (a separate buffer, so we never
     // read values this pass has already overwritten)
     for (int y = 1; y < height - 1; y++) {
         for (int x = 0; x < width; x++) {
             float tmp = vKernel[0] * tmpBuf[((y - 1) * width) + x];
             tmp      += vKernel[1] * tmpBuf[(y * width) + x];
             tmp      += vKernel[2] * tmpBuf[((y + 1) * width) + x];
             outBuf[(y * width) + x] = tmp;
         }
     }

     That's off the top of my head but it's roughly right. Some details are still missing (the loops above simply skip the border pixels instead of handling the edge condition properly), but with a little research you should be able to fill in the blanks. Also, the above shows a simple box blur; there are other blur techniques (a Gaussian kernel, for instance) that may look better. (Edit) Fixed your link. <3 mittens [Edited by - mittens on July 12, 2007 8:30:27 AM]
  9. If I understand the process correctly, you can perform the test in any space, as long as all the vectors are in the same space. So you check whether dot((Vertex - lightPos), n1) * dot((Vertex - lightPos), n2) < 0, where lightPos will be (0,0,0) if you are in eye space or light space. But it's more efficient if you do the calculation with everything already in one space (e.g. light space, where the light sits at the origin). To create a light-view matrix you use the same technique as for computing a camera view matrix. This is easiest for a spotlight, as you already have a position and a look-at vector. For a directional light, calculating the matrix isn't too difficult; you just need to make sure its position is 'behind' your scene. Point lights are difficult; generally the method I've seen is similar to cube mapping. Alternatively, use the method I suggested above.
  10. OK, I haven't tried shadow volumes myself, but from what I understand you need to know whether a face is front-facing or back-facing. For this you can use the same technique as for backface culling, which can be done by taking the sign of the dot product of the face normal and the direction vector (the view vector for culling, the light direction for shadow volumes). So if n1 and n2, when dotted with the direction vector, have the same sign, the edge between them is not a silhouette edge. This method can be used in any space.
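     A minimal sketch of the sign test from the two posts above. Vec3, the helper functions, and lightPos are illustrative assumptions:

         struct Vec3 { float x, y, z; };

         float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
         Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

         // An edge is a silhouette edge when the two faces sharing it face
         // opposite ways with respect to the light: one dot product is
         // positive, the other negative, so their product is negative.
         bool IsSilhouetteEdge(const Vec3& vertexOnEdge, const Vec3& n1,
                               const Vec3& n2, const Vec3& lightPos)
         {
             // lightPos is (0,0,0) if you work in light space, per the posts above.
             Vec3 toVertex = Sub(vertexOnEdge, lightPos);
             return Dot(toVertex, n1) * Dot(toVertex, n2) < 0.0f;
         }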
  11. How long is a piece of string? The best space to do lighting calculations in depends on exactly what you are doing: which reflection model are you using? Are you doing bump mapping? And so on. Also, there is no single best solution; there is always a trade-off between quality and performance, so it depends on your requirements for both. Probably the main factor will be the frame rate you want or need to hit on a particular hardware spec.
  12. The accurate way of calculating the normal is by SLERPing (spherical linear interpolation), but the calculation is expensive, which is why you'll almost always see it done as a linear interpolation whose result is then renormalised. The formula for SLERP, where θ is the angle between the two normals, is: N(t) = N0 * sin((1 - t)θ)/sin(θ) + N1 * sin(tθ)/sin(θ). You can look it up on Wikipedia or Google if you need more detail.
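     A short sketch of that formula, assuming unit-length inputs; Vec3 and the helper code are illustrative assumptions:

         #include <cmath>

         struct Vec3 { float x, y, z; };

         Vec3 Slerp(const Vec3& n0, const Vec3& n1, float t)
         {
             float cosTheta = n0.x*n1.x + n0.y*n1.y + n0.z*n1.z;
             if (cosTheta >  1.0f) cosTheta =  1.0f; // guard acos against
             if (cosTheta < -1.0f) cosTheta = -1.0f; // float rounding
             float theta    = std::acos(cosTheta);
             float sinTheta = std::sin(theta);

             // Near-parallel normals: sin(theta) ~ 0, so fall back to the
             // cheap lerp-and-renormalise approach mentioned above.
             if (sinTheta < 1e-5f) {
                 Vec3 v = { n0.x + t * (n1.x - n0.x),
                            n0.y + t * (n1.y - n0.y),
                            n0.z + t * (n1.z - n0.z) };
                 float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
                 return { v.x / len, v.y / len, v.z / len };
             }

             float w0 = std::sin((1.0f - t) * theta) / sinTheta;
             float w1 = std::sin(t * theta) / sinTheta;
             return { w0*n0.x + w1*n1.x, w0*n0.y + w1*n1.y, w0*n0.z + w1*n1.z };
         }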
  13. You can perform lighting in almost any 3D space, so the choice depends on which would be the most efficient. As your lights are already in world space, that is your first option; but if you're using a reflection model that needs the view vector (e.g. Phong), then it's probably more efficient to do the calculations in eye space. I haven't used fragment shaders with OpenGL, but if gl_FragCoord gives you projection-space coordinates then you'd have to transform your light and view vectors into projection space.
  14. It would be more efficient to do the transformation in the vertex shader; the attributes would then be interpolated, and you would renormalise the vectors in the pixel shader, at the cost of a little accuracy. Also, if you transformed the L vector to tangent space you would use a TBN matrix calculated per vertex, but as you interpolate across a polygon the interpolated tangent and normal vectors drift away from those used to build the TBN matrix, which adds to the inaccuracy. This suggests that transforming to tangent space is more efficient if you're using Gouraud-style per-vertex lighting (not interpolating the normal), while world space is more efficient and accurate if you do full Phong shading (interpolating the tangent and normal). Is that right?
  15. For a single light using the Lambert reflection model, this would mean transforming either a single normal or a single L vector to world or tangent space respectively, so the only additional computation would be transposing the TBN matrix (per fragment). But if you were using another reflection model this would mean transforming one or more additional vectors (e.g. V, R, H, etc.), and if you're using multiple lights it means (numLights * numVectorsForReflModel) transformations. So wouldn't it generally be more efficient to transform the normal? Or am I missing something?
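     To make the counting argument above concrete, a sketch of the per-vector cost, with Vec3 and the helpers as illustrative assumptions. With an orthonormal T/B/N basis the transpose of the TBN matrix is its inverse, so world-to-tangent is just three dot products per vector:

         struct Vec3 { float x, y, z; };

         float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

         // World -> tangent space via the transposed TBN matrix
         // (assumes T, B, N are orthonormal).
         Vec3 ToTangentSpace(const Vec3& v, const Vec3& T, const Vec3& B, const Vec3& N)
         {
             return { Dot(v, T), Dot(v, B), Dot(v, N) };
         }

     So, per light, tangent-space lighting pays one such transform for each lighting vector (L, plus V, R, or H for fancier models), while world-space lighting pays a single transform of the normal per fragment, which is the trade-off being asked about.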