Mohanddo

Members
  • Content count: 18
  • Joined
  • Last visited

Community Reputation

109 Neutral

About Mohanddo

  • Rank: Member
  1. DX11

    [quote name='Hyunkel']
    There are a lot of things you can improve, though as Krohm already mentioned, I'd only worry about it if performance actually becomes an issue. Here's a more detailed list of things you can do to (greatly) improve your performance:
    • Use a single quad (2 triangles) to render all your tiles. That's really all you need. Your quad should have the size of a single tile and be created at the origin of your world space. Whenever you draw a tile, use a vertex shader to move the quad to the appropriate location by providing a WVP matrix. This reduces your total vertex count and eliminates the need to update your vertex buffer every frame; you can now also make the vertex buffer default or immutable.
    • Use frustum culling. You only need to draw the tiles that are actually visible on screen.
    • Put all your tile images into one big texture (a texture atlas). If you don't want to build one by hand yet, you can easily have your program do it at startup: calculate the texture size needed to hold all of your tiles, render every tile into the new atlas, and keep track of the UV location of each tile. Now you can render all of your tiles from the very same texture; instead of switching textures you switch UV coordinates, which greatly cuts down your state changes.
    • If you want even more performance, do everything in a single draw call using hardware instancing. You'll need to create a second (dynamic) vertex buffer that holds the WVP matrix and UV data for every tile to be rendered. This reduces your draw calls to one; at that point you can easily render over 10k tiles without performance issues.
    [/quote]

    Thanks, I'm going to implement everything you've said soon (the instancing step is sketched below), but first I have to decide what counts as a sprite and what counts as an entity, so I can change texture and buffer states conveniently. I guess that's a design problem. Anyway, many thanks to everyone who helped.
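    A minimal sketch of the hardware-instancing step from Hyunkel's list. The TileInstance struct, buffer names, and helper signatures are illustrative assumptions, not code from this thread; the input layout must additionally mark the per-instance elements as D3D11_INPUT_PER_INSTANCE_DATA.

[code]
#include <d3d11.h>
#include <DirectXMath.h>
#include <cstring>
#include <vector>

// Hypothetical per-instance payload: one entry per visible tile.
struct TileInstance
{
    DirectX::XMFLOAT4X4 wvp;    // per-tile world-view-projection matrix
    DirectX::XMFLOAT4   uvRect; // atlas UV offset (xy) and scale (zw)
};

// Created once: a dynamic buffer big enough for the worst case.
ID3D11Buffer* CreateInstanceBuffer(ID3D11Device* device, UINT maxTiles)
{
    D3D11_BUFFER_DESC bd = {};
    bd.ByteWidth      = sizeof(TileInstance) * maxTiles;
    bd.Usage          = D3D11_USAGE_DYNAMIC;      // rewritten every frame
    bd.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
    bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&bd, nullptr, &buffer);
    return buffer;
}

// Per frame: upload only the visible (frustum-culled) tiles, then draw them
// all in one call. quadVB holds the single 6-vertex tile quad.
void DrawTiles(ID3D11DeviceContext* context,
               ID3D11Buffer* quadVB, UINT quadStride,
               ID3D11Buffer* instanceBuffer,
               const std::vector<TileInstance>& visible)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    context->Map(instanceBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    std::memcpy(mapped.pData, visible.data(), sizeof(TileInstance) * visible.size());
    context->Unmap(instanceBuffer, 0);

    ID3D11Buffer* buffers[2] = { quadVB, instanceBuffer };
    UINT strides[2] = { quadStride, sizeof(TileInstance) };
    UINT offsets[2] = { 0, 0 };
    context->IASetVertexBuffers(0, 2, buffers, strides, offsets);
    context->DrawInstanced(6, (UINT)visible.size(), 0, 0);
}
[/code]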
  2. DX11

    Thanks for the suggestions. I now pre-transform my sprites on load and whenever the map is moved, which gave me a decent FPS boost (~30 fps). Unfortunately I'm not using sprite sheets; I have a separate image for each entity to keep things simple. Later, when I create a map editor, I can add a system that compiles my individual sprites into one large image. Also, what is NN sampling?
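    For reference, NN sampling is nearest-neighbour filtering, which D3D11 calls point filtering: it picks the single closest texel instead of blending neighbours, keeping pixel-art tiles crisp. A minimal sketch, with illustrative names:

[code]
#include <d3d11.h>

// Create a nearest-neighbour ("point") sampler state.
ID3D11SamplerState* CreatePointSampler(ID3D11Device* device)
{
    D3D11_SAMPLER_DESC sd = {};
    sd.Filter   = D3D11_FILTER_MIN_MAG_MIP_POINT; // nearest-neighbour for min/mag/mip
    sd.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
    sd.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
    sd.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
    sd.MaxLOD   = D3D11_FLOAT32_MAX;
    ID3D11SamplerState* sampler = nullptr;
    device->CreateSamplerState(&sd, &sampler);
    return sampler; // bind with context->PSSetSamplers(0, 1, &sampler);
}
[/code]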
  3. Hi. I have been working on 2D tile map loading and rendering, and I'm noticing very bad performance with my simple approach. Basically I have a 2D array that specifies the tiles in the map, using a number to represent each type of tile (water, rock, etc.). Each of these is a sprite of its own, with its own vertex buffer and texture. All tiles are 32x32. This is the render function of the map:

[code]
// Current (slow) approach: one constant-buffer update and one draw call per tile.
XMMATRIX VPMatrix = g_Game->GetViewProjectionMatrix();
ID3D11Buffer* MVPBufferHandle = g_Game->GetMVPBuffer();
ID3D11DeviceContext* D3DContext = g_Game->GetContext();
XMMATRIX world = GetWorldMatrix();

for (int i = 0; i < m_TileList.size(); i++)
{
    for (int j = 0; j < m_TileList[i].size(); j++)
    {
        // Place the tile, combine with the view-projection, transpose for HLSL.
        XMMATRIX offset = XMMatrixMultiply(world, XMMatrixTranslation(j * m_TileSize, i * m_TileSize, 0.0f));
        XMMATRIX mvp = XMMatrixMultiply(offset, VPMatrix);
        mvp = XMMatrixTranspose(mvp);

        // Per-tile GPU traffic: upload the matrix, rebind, and draw.
        D3DContext->UpdateSubresource(MVPBufferHandle, 0, 0, &mvp, 0, 0);
        D3DContext->VSSetConstantBuffers(0, 1, &MVPBufferHandle);
        m_SpriteList[m_TileList[i][j]].Render();
    }
}
[/code]

    Simply put, I get the pointer to my constant buffer to send my final matrix to the shader, multiply the location of each tile by the location of the map and the view-projection matrix, and send it off for rendering. The render function of a sprite just binds the vertex buffer and the texture resource and renders a quad (using a triangle list). As a test I rendered 100 tiles with only 2 types of sprite: without the map I get 500 fps; with the map, ~250 fps. That looks like very bad performance, probably due to my approach, but I have no idea how to batch things together to save draw calls or texture binds :( Can anyone guide me on how to increase the performance? Thanks
  4. OK, I think I have enough info to implement it now. Thanks a lot for your help and patience :)
  5. [quote name='dpadam450']
    The only real things you need to get started are what you already have: position, normal, and diffuse (texture) colour. First, your diffuse has all perfectly lit green grass, so if you have a sunlight you should calculate that first and put it in your diffuse buffer as well. All other lights (lamp posts, vehicles, etc.) are contributing (adding) EXTRA light to the scene. So first you put your original diffuse on the screen (with or without sunlight), then you use additive blending for each extra light; the keyword is add/extra (additive). For each light you draw a physical model of the light's area: for a lamp post it might be a 3D cone, for a light bulb a sphere. Hopefully you already knew that; if not, you're stepping too far ahead. After that you should have basic lighting. With materials like metal you also have a certain specular power, and certain spots of the metal have different amounts of specular. Imagine a piece of metal with rust: the rusty portions have no specular, while the clean ones do because they reflect light back. And when the light hits that surface, it has a specular power: either it is really focused, hard metal, or softer and less focused. So you could (and eventually will) want those properties stored in the colour, normal, or position targets, wherever there's room. Any property/material that affects light from the object's standpoint is saved; the light properties are only used when you draw those 3D models.
    [/quote]

    Yes, I already understand all of this except the sunlighting. Do you mean to calculate Phong directional light just as you would in a forward renderer, and do all the lighting calculations on top of that instead? If so, is there an advantage to this compared to a full-screen quad at the lighting stage? (A sketch of the additive light pass follows below.)

    [quote name='johnchapman']
    1. I would say that an object's texture is its material's diffuse property, i.e. the colour which modulates any reflected, diffuse light. So your grass texture represents the final colour you'd see if there was a 100% white light shining on it.
    [/quote]

    OK, but in the case where the object I am rendering does not have a texture of its own, the material diffuse will be in this buffer, I presume?
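    A minimal C++ sketch of the additive light pass dpadam450 describes; Light, bindLightUniforms(), and drawLightVolume() are hypothetical stand-ins for engine-specific code:

[code]
#include <GL/glew.h> // assumes an extension loader such as GLEW
#include <vector>

struct Light { /* position, colour, radius, ... */ };
void bindLightUniforms(const Light& light); // hypothetical: upload light uniforms
void drawLightVolume(const Light& light);   // hypothetical: sphere for a point light, cone for a spot

// The diffuse (plus optional sunlight) result is already on screen; each
// extra light is then accumulated on top with additive blending.
void RenderLightPass(const std::vector<Light>& lights)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE); // additive: framebuffer += light contribution
    glDepthMask(GL_FALSE);       // read depth, but don't write it

    for (const Light& light : lights)
    {
        bindLightUniforms(light);
        drawLightVolume(light);
    }

    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}
[/code]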
  6. Thanks, I think I'm starting to understand now. Let me see if I have this right:
    1. The albedo texture should be a combination of the material's diffuse property and the object's texture?
    2. No ambient material should be needed, since it is usually the same as the diffuse property; later I can add SSAO.
    3. There will be no emissive or specular colour, but instead a factor that gets multiplied by the light's properties?
    The only question that remains for me now is: if the materials are like this, will the light sources' properties be similar, or do they still retain their full RGBA values?
  7. I see, but then what are the values I store in my g-buffer for lighting, and how can I compute them (specular intensity, power, emissiveness)? If I understand correctly, specular intensity is calculated from the light direction and the viewpoint, so how can I compute it during my first pass? Shouldn't light properties only be available at the lighting stage? As you can probably tell, I am very confused by all this :(
  8. Thanks for the reply. I have only written shaders for simple Phong shading using full RGBA values for the material properties. What equations do I need to compute shading using only those three values from the g-buffer? Are there resources that can teach me this? You also mention that the lighting stage needs the material properties, but how can I pass them to my lighting shader if they are not stored in my g-buffer? Thanks
  9. Thank you very much for your answers.

    [quote name='johnchapman']
    It doesn't look as if there's anything wrong with your g-buffer textures. As regards your questions:
    1) All of the fragment output can be done in the lighting pass: your g-buffer is an input to the lighting stage, and you accumulate 'lit' pixels in the final framebuffer.
    2) You can draw anything you like in the lighting pass, as long as it covers the pixels which need to be lit. You could draw a full screen-aligned quad for every light, but that's pretty inefficient (you'll end up shading a lot of pixels which don't need to be shaded). Drawing the light volumes is the optimal solution: a sphere for point lights and a cone for spotlights is generally the way to go.
    3) You'll need to write material properties to a target in the g-buffer. How you do this depends on what your lighting stage requires and the sort of material properties you need to support. A simple format might be something like R = specular level, G = specular power, B = emissiveness, A = AO factor.
    Here are a few links to some deferred rendering resources (you may have already seen some of them):
    http://developer.download.nvidia.com/assets/gamedev/docs/6800_Leagues_Deferred_Shading.pdf
    http://www.talula.demon.co.uk/DeferredShading.pdf
    http://bat710.univ-lyon1.fr/~jciehl/Public/educ/GAMA/2007/Deferred_Shading_Tutorial_SBGAMES2005.pdf
    http://developer.amd.com/gpu_assets/S2008-Filion-McNaughton-StarCraftII.pdf
    [/quote]

    Good resources, thanks. The only thing I still don't understand is how to pack an RGBA value into one component. My render targets use the GL_RGBA16F format, so how can I pack and unpack values into this format in GLSL? (See the sketch after this post.)

    [quote name='dpadam450']
    The first thing you do, right before you draw the lights, is put the full-screen diffuse on the screen. Then for each light, such as a point light, you draw a physical sphere, but your shader isn't shading the sphere itself; it's just using it to do lighting on the diffuse you already drew. It continually blends with what is on screen, and your screen fills up as more objects are drawn.
    [/quote]

    So in the deferred pass stage the output will be the diffuse colour?
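    On the packing question, a note and sketch not taken from the thread: a 16-bit float carries only about 11 bits of mantissa, so four 8-bit values cannot be packed losslessly into one GL_RGBA16F component. The usual approach is the one in johnchapman's point 3: give each material scalar its own channel of an extra render target. A minimal setup sketch, with illustrative attachment indices and names:

[code]
#include <GL/glew.h> // assumes an extension loader such as GLEW

// Attach a fourth g-buffer target holding per-pixel material scalars:
// R = specular level, G = specular power, B = emissiveness, A = AO factor.
GLuint CreateMaterialTarget(GLuint fbo, int width, int height)
{
    GLuint gMaterial = 0;
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(1, &gMaterial);
    glBindTexture(GL_TEXTURE_2D, gMaterial);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                 GL_RGBA, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT3,
                           GL_TEXTURE_2D, gMaterial, 0);

    // Enable all four targets for the geometry pass.
    const GLenum bufs[4] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                             GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT3 };
    glDrawBuffers(4, bufs);
    return gMaterial;
}
[/code]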
  10. Hi. I am trying to implement deferred shading in my engine and I am slightly confused. I have already set up multiple render targets, and this is what I have so far:
    1. Bind the deferred shader and fill 3 textures with position, normal, and texture colour.
    2. Bind the lighting shader and the FBO textures; the multiple textures are available in my fragment shader and I can output each of them.
    An image of how my textures look when rendering a simple terrain and an axe is attached; is there anything wrong with them? (The top right is just the ambient property.) To actually implement the lighting, I have a few questions that I would be glad if anyone could clarify:
    1. Do I output the scene once in the deferred pass and then blend the lighting pass onto it, or is all of the fragment output done in the lighting pass?
    2. In the lighting pass, what do I have to draw? For a point light, do I have to use a cone 'proxy shape', or can I compute shading for each pixel based on the light's area of effect?
    3. Where do materials come in in deferred shading? Do I have to render the material properties (ambient, diffuse, specular, emissive) of every object to a texture, for every pixel?
    In addition, are there any resources on deferred shading that you can recommend? I have read the major ones (Guerrilla Games, NVIDIA, the GameDev image-space lighting articles). Thanks in advance :)
  11. OK, so I have been learning OpenGL with GLSL over the last few weeks, and now that I have come to writing shaders and implementing lighting, textures, etc., I am slightly confused. How are shaders supposed to be used and combined? If I want directional lights, spot lights, and point lights to affect the whole scene, do I write one shader that implements all the lights, or should they be implemented in separate shaders? How do I combine shaders for different effects in one scene? What I am basically asking is: what should constitute a single shader? Any help understanding this would be great. Thanks
  12. I have fixed the problem now. The texture parameters are per texture object and should be specified after binding each texture when loading it. For some reason I thought they followed the state system, like how you would use glEnable(). (A minimal sketch of the fix is below.)
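    A minimal sketch of that fix, using the texture IDs from the post below; the helper name is illustrative:

[code]
#include <GL/glew.h> // assumes an extension loader such as GLEW

// glTexParameteri affects the texture object currently bound, not global
// state, so each texture needs its own parameters set right after binding.
void SetDefaultParams(GLuint texture)
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
}
// At load time, once per texture:
//   SetDefaultParams(m_textureID);
//   SetDefaultParams(m_textureID2);
[/code]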
  13. [quote name='EvilTesla-RG']
    I haven't done a lot with VBOs (working on fixing that), but shouldn't you be sending the uniforms before drawing your first texture?
    [/quote]

    Thanks for your reply. Correct me if I'm wrong, but I am not rendering anything before sending my uniforms, just making OpenGL ready for the render phase (binding the vertex attributes to my buffer data: vertex coordinates, colour, and UV coordinates, and binding the texture ID). I believe the rendering starts from the DrawArrays call. Just to make sure, I tried sending the uniforms before anything else and I still got the same result.
  14. I have been trying to render two different objects with two different textures, using GLSL and VBOs. Everything works except that the first texture is not displaying for some reason. I am totally stumped; I've looked at my code many times and I can't figure it out, so I would appreciate any help. I load my textures from an SDL surface like this:

[code]
// First texture
glGenTextures(1, &m_textureID);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_textureID);

SDL_Surface* tmpsurface;
tmpsurface = IMG_Load("test.jpg");

GLenum bpp = GL_RGB;
if (tmpsurface->format->BytesPerPixel == 4)
{
    bpp = GL_RGBA;
}
glTexImage2D(GL_TEXTURE_2D, 0, bpp, tmpsurface->w, tmpsurface->h, 0, bpp, GL_UNSIGNED_BYTE, tmpsurface->pixels);

// Second texture
glGenTextures(1, &m_textureID2);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_textureID2);

SDL_Surface* tmpsurface2;
tmpsurface2 = IMG_Load("okay.jpg");

bpp = GL_RGB;
if (tmpsurface2->format->BytesPerPixel == 4)
{
    bpp = GL_RGBA;
}
glTexImage2D(GL_TEXTURE_2D, 0, bpp, tmpsurface2->w, tmpsurface2->h, 0, bpp, GL_UNSIGNED_BYTE, tmpsurface2->pixels);

// Filtering/wrap parameters are set once here, while the second texture is still bound
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

SDL_FreeSurface(tmpsurface);
SDL_FreeSurface(tmpsurface2);
[/code]

    Then I render the scene like this:

[code]
float modelviewMatrix[16];
float projectionMatrix[16];

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);

glPushMatrix();
glLoadIdentity();

// First object
glBindBuffer(GL_ARRAY_BUFFER, m_bufferID);
glBindTexture(GL_TEXTURE_2D, m_textureID);
glVertexAttribPointer((GLuint)2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), BUFFER_OFFSET(28));
glVertexAttribPointer((GLuint)1, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), BUFFER_OFFSET(12));
glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), BUFFER_OFFSET(0));

glGetFloatv(GL_MODELVIEW_MATRIX, modelviewMatrix);
glGetFloatv(GL_PROJECTION_MATRIX, projectionMatrix);
m_GLSLProgram->SendUniform("modelview_matrix", modelviewMatrix);
m_GLSLProgram->SendUniform("projection_matrix", projectionMatrix);
m_GLSLProgram->SendUniform("texture0", 0);

glDrawArrays(GL_TRIANGLE_FAN, 0, 6);

// Second object
glBindBuffer(GL_ARRAY_BUFFER, m_bufferID2);
glBindTexture(GL_TEXTURE_2D, m_textureID2);
glVertexAttribPointer((GLuint)2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), BUFFER_OFFSET(28));
glVertexAttribPointer((GLuint)1, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), BUFFER_OFFSET(12));
glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), BUFFER_OFFSET(0));

glDrawArrays(GL_TRIANGLE_STRIP, 0, 5);
glPopMatrix();
[/code]

    The result is that the first draw call is rendered with a plain black texture and the second call is rendered correctly. The weird thing is that if I switch the loading order of the two textures, then the first draw call is textured and the second one gets the black texture. Does anyone have any idea why this might be happening? Thanks
  15. Unity

    [quote name='freeworld']
    I wouldn't think that tiny bit of code would really cause your app to slow down... if you're that worried you can store a pointer to the m_setup and use that instead of using the function call.
    [/quote]

    Thanks for your reply. Correct me if I'm wrong, but the time given also depends on how many times the function is called. And thanks for the pointer idea! I will try it; I think it's better to try pointers and caching at the same time to reduce the calls.

    [quote name='rip-off']
    Are you profiling in Release mode or Debug mode? The only way I could see that function causing trouble is if you call it unnecessarily, for example not caching the return value in a local variable when you invoke it multiple times: *** Source Snippet Removed *** This isn't primarily an optimisation; it makes the code more readable. You can see clearly in the second version that frobnicate() is intended to receive the same second parameter on every invocation, whereas the first example requires one to examine (at least) the implementation of EngineSetup::baz() to determine this. Also, if you move such a short function into the header file, the compiler might be more willing to inline it for you.
    [/quote]

    I was using Debug mode. The original sample time was 5 s, but I only counted the first thread and presumed the others came from other things, like the Visual Studio debugger. Thanks for your suggestion about caching: this is what I meant in number 1 about having a variable inside the method take the values, and using that in the calculations instead of calling the getters each time. This will definitely speed things up, but is there any other way to make these variables more global? The methods are called a lot, and there are numerous other methods and objects that also use these values. I guess it would be a bad idea to make a global variable outside the classes and set it to the values? Maybe I'm going too far into optimization... I don't exactly know what you mean by the last sentence: do you mean the .h file, or by "header" do you mean something else? And what would be the benefit of that? Thanks for your time! (A small sketch of the caching idea is below.)
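    A small self-contained sketch of the caching idea rip-off describes (his original snippet is not preserved in this archive; all names here are illustrative):

[code]
#include <vector>

struct EngineSetup
{
    // Defined in the header, so the compiler can easily inline it.
    int ScreenWidth() const { return m_width; }
    int m_width = 1280;
};

void frobnicate(int item, int width); // stand-in for the real per-item work

void Update(const std::vector<int>& items, const EngineSetup& setup)
{
    // Call the getter once and reuse the local instead of re-evaluating it
    // every iteration; it also documents that the value doesn't change.
    const int width = setup.ScreenWidth();
    for (int item : items)
        frobnicate(item, width);
}
[/code]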