Bokke

Members
  • Content count: 7
  • Joined

  • Last visited

Community Reputation

482 Neutral

About Bokke

  • Rank: Newbie

  1. I have written a small blog post about an initial Tetris game on Android (Java) and I've put the code up on GitHub; maybe it is somewhat helpful for you. Personally, I learned a lot from Beginning Android Games (it isn't free, but good value for money imho).
  2. I'm currently in the process of writing a Tetris game. On a small-scale project like this I'm trying to stick to the KISS principle; nevertheless, I often find that I need to refactor the code because otherwise it becomes unreadable. I also noticed some design flaws that I'll leave in the code (e.g. I didn't treat the field where the blocks fall down as a game object, which in hindsight I should have). It's an experience I'll take with me into my next project, but I won't fix it in the current codebase as it is not a showstopper. One should always consider the tradeoffs and pick the option that moves the project ahead without delaying it needlessly. The time you spend polishing your code is time lost that you could spend learning new concepts. For example, in my case, refactoring to add another game object for the field with falling blocks would be less valuable to me than learning about best practices for sprites, because my Tetris game could use some code that makes it easy to render a rectangle with a specific texture (I'll certainly need that again in this project, and I'll need it in my next project). A rough sketch of what I mean by that helper is shown below this post.
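
     Purely as an illustration of the kind of helper mentioned above, here is a minimal sketch using the plain Android Canvas API. The class and method names are my own and are not taken from the actual Tetris code; if the game renders with OpenGL ES instead of a Canvas, the idea is the same but the calls differ.

         import android.graphics.Bitmap;
         import android.graphics.Canvas;
         import android.graphics.Rect;

         // Hypothetical helper: draws a bitmap stretched into a destination rectangle.
         public class TexturedRect {
             private final Bitmap texture;
             private final Rect src; // source region of the bitmap (here: the full bitmap)
             private final Rect dst; // destination rectangle on screen, in pixels

             public TexturedRect(Bitmap texture, int x, int y, int width, int height) {
                 this.texture = texture;
                 this.src = new Rect(0, 0, texture.getWidth(), texture.getHeight());
                 this.dst = new Rect(x, y, x + width, y + height);
             }

             public void draw(Canvas canvas) {
                 // null Paint: no filtering or blending options, just a plain blit
                 canvas.drawBitmap(texture, src, dst, null);
             }
         }
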
  3. Wouldn't it be possible to log your accelerometer readings together with their timestamps on your Android phone and then use that log as input to your virtual device? The logging side could look something like the sketch below.
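
     A minimal sketch of the logging side, assuming a plain Activity; the log tag and format are arbitrary, and replaying the captured trace into the virtual device is a separate step that is not shown here.

         import android.app.Activity;
         import android.hardware.Sensor;
         import android.hardware.SensorEvent;
         import android.hardware.SensorEventListener;
         import android.hardware.SensorManager;
         import android.os.Bundle;
         import android.util.Log;

         // Logs accelerometer samples with their timestamps to logcat.
         // Pulling that output gives you a trace you could later feed to the emulator.
         public class AccelLoggerActivity extends Activity implements SensorEventListener {
             private SensorManager sensorManager;

             @Override
             protected void onCreate(Bundle savedInstanceState) {
                 super.onCreate(savedInstanceState);
                 sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
             }

             @Override
             protected void onResume() {
                 super.onResume();
                 Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
                 sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_GAME);
             }

             @Override
             protected void onPause() {
                 super.onPause();
                 sensorManager.unregisterListener(this);
             }

             @Override
             public void onSensorChanged(SensorEvent event) {
                 // event.timestamp is in nanoseconds; values[] holds x, y, z in m/s^2
                 Log.d("AccelLog", event.timestamp + ";" + event.values[0] + ";"
                         + event.values[1] + ";" + event.values[2]);
             }

             @Override
             public void onAccuracyChanged(Sensor sensor, int accuracy) {
                 // not needed for a simple logger
             }
         }
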
  4. I started off with those tutorials from developer.android.com as well; now I'm working my way through Beginning Android Games (which is a good read and discusses a lot of topics, at a beginner's level). I did note, though, that you refer to OpenGL as having optimized physics. Please understand that OpenGL is only there for your rendering and will not provide you with any physics. You'll probably encounter OpenGL ES in your Android endeavours, not OpenGL (Nvidia's Logan project, scheduled for the near future, should provide a full-blown OpenGL implementation, but as far as I know there aren't any other OpenGL-capable Android devices out there). If you are looking for a physics engine, then Box2D might be what you need. If you just want to get your game out, you might want to have a look at libGDX as VIkato already proposed. Then you'll be able to hit the ground running and you won't have to develop your engine from the ground up, since you'll also need to consider input management, your game loop, animation, sound... A minimal libGDX skeleton is sketched below this post. Anyway, good luck!
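
     To give an idea of how little boilerplate libGDX asks for, here is a minimal sketch; the framework drives the game loop and calls render() once per frame on every backend (desktop, Android, ...). The class name and the "block.png" asset are placeholders of my own.

         import com.badlogic.gdx.ApplicationAdapter;
         import com.badlogic.gdx.Gdx;
         import com.badlogic.gdx.graphics.GL20;
         import com.badlogic.gdx.graphics.Texture;
         import com.badlogic.gdx.graphics.g2d.SpriteBatch;

         // Minimal libGDX application: create() runs once, render() runs every frame.
         public class MyGame extends ApplicationAdapter {
             private SpriteBatch batch;
             private Texture blockTexture;

             @Override
             public void create() {
                 batch = new SpriteBatch();
                 blockTexture = new Texture("block.png"); // placeholder asset name
             }

             @Override
             public void render() {
                 Gdx.gl.glClearColor(0, 0, 0, 1);
                 Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

                 batch.begin();
                 batch.draw(blockTexture, 100, 100); // draw at pixel position (100, 100)
                 batch.end();
             }

             @Override
             public void dispose() {
                 batch.dispose();
                 blockTexture.dispose();
             }
         }
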
  5. You are absolutely right, the subject has been treated in a rather dodgy way. I remember that the OpenGL Red Book, 7th edition, had the example of a rotating square where a "ghosting" effect was visible when you rendered with only one color buffer. A double-buffered framebuffer resolved this quite nicely. I'll have a look at whether I can improve this article with some pictures/figures to make it somewhat clearer. In the meantime, I'll leave it as an exercise to the reader. I decided not to modify the article; nevertheless, you can see the effect of double-buffering (No Ghosting) versus single-buffering (Ghosting) in the following two YouTube videos I just uploaded: No ghosting / Ghosting.
  6. You are absolutely right, the subject has been treated in a rather dodgy way. I remember that the OpenGL Red Book, 7th edition, had the example of a rotating square where a "ghosting" effect was visible when you rendered with only one color buffer. A double-buffered framebuffer resolved this quite nicely. I'll have a look at whether I can improve this article with some pictures/figures to make it somewhat clearer. In the meantime, I'll leave it as an exercise to the reader :)
  7. I do agree that the blending stage is of high importance in current graphics, and I was in doubt about whether or not to write something about it. In the end, I decided not to. The blending stage can be disabled in OpenGL, it isn't mandatory, and I felt that it would therefore add an additional layer of needless "complexity" to how pixels "come to be" (which was the main goal of this article). The small snippet below this post shows how it is just a piece of toggleable state. I don't think we should compare the shader stages to the blending stage, as for example the geometry shader certainly has its merits!
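
     For reference, a minimal OpenGL ES 2.0 snippet (using Android's GLES20 bindings, to stay in line with the other posts here) showing that blending is optional state you switch on and off. It assumes a current GL context, and the helper class name is my own.

         import android.opengl.GLES20;

         // Blending is optional pipeline state: off by default, enabled per draw call
         // only when you actually need it (e.g. for transparent sprites).
         public final class BlendState {
             private BlendState() {}

             public static void enableAlphaBlending() {
                 GLES20.glEnable(GLES20.GL_BLEND);
                 // classic "source over" alpha blending
                 GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
             }

             public static void disableBlending() {
                 GLES20.glDisable(GLES20.GL_BLEND);
             }
         }
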
  8. Introduction

     This article is mainly intended to give some introductory background information on the graphics pipeline in a triangle-based rendering scheme and how it maps to the different system components. We'll only cover the parts of the pipeline that are relevant to understanding the rendering of a single triangle with OpenGL.

     Graphics Pipeline

     The basic functionality of the graphics pipeline is to transform your 3D scene, given a certain camera position and camera orientation, into a 2D image that represents the 3D scene from this camera's viewpoint. We'll start by giving an overview of this graphics pipeline for a triangle-based rendering scheme in the following paragraph. Subsequent paragraphs will then elaborate on the identified components.

     High-level Graphics Pipeline Overview

     We'll discuss the graphics pipeline from what can be seen in figure 1. This figure shows the application running on the CPU as the starting point for the graphics pipeline. The application is responsible for the creation of the vertices, and it uses a 3D API to instruct the CPU/GPU to draw these vertices to the screen.

     Figure 1: Functional Graphics Pipeline

     We'll typically want to transfer our vertices to the memory of the GPU. As soon as the vertices have arrived on the GPU, they can be used as input to the shader stages of the GPU. The first shader stage is the vertex shader, followed by the fragment shader. The input of the fragment shader is provided by the rasterizer, and the output of the fragment shader is captured in a color buffer which resides in the backbuffer of our double-buffered framebuffer. The contents of the frontbuffer of the double-buffered framebuffer are displayed on the screen. In order to create animation, the front- and backbuffer need to swap roles as soon as a new image has been rendered to the backbuffer.

     Geometry and Primitives

     Typically, our application is the place where we define the geometry that we want to render to the screen. This geometry can be defined by points, lines, triangles, quads, triangle strips... These are so-called geometric primitives, since they can be used to generate the desired geometry. A square, for example, can be composed out of 2 triangles, and a triangle can be composed from 3 points. Let's assume we want to render a triangle: then you can define 3 points in your application, which is exactly what we'll do here. These points will then reside in system memory. The GPU needs access to these points, and this is where the 3D API, such as Direct3D or OpenGL, comes into play. Your application uses the 3D API to transfer the defined vertices from system memory into GPU memory. Also note that the order of the points cannot be random. This will be discussed when we consider primitive assembly.

     Vertices

     In graphics programming, we tend to add some more meaning to a vertex than its mathematical definition. In mathematics you could say that a vertex defines the location of a point in space. In graphics programming, however, we generally add some additional information. Suppose we already know that we would like to render a green point; then this color information can be added. So we'll have a vertex that contains location as well as color information. Figure 2 clarifies this aspect, where you can see a more classical "mathematical" point definition on the left and a "graphics programming" definition on the right.

     Figure 2: Pure "mathematics" view on the left versus a "graphics programming" view on the right

     Shaders - Vertex Shaders

     Shaders can be seen as programs, taking inputs and transforming them into outputs. It is interesting to understand that a given shader is executed multiple times in parallel for independent input values: since the input values are independent and need to be processed in exactly the same way, we can see how the processing can be done in parallel. We can consider the vertices of a triangle as independent inputs to the vertex shaders. Figure 3 tries to clarify this with a "pass-through" vertex shader. A "pass-through" vertex shader takes the shader inputs and passes them to its output without modifying them: the vertices P1, P2 and P3 from the triangle are fetched from memory, and each individual vertex is fed to a vertex shader instance, with the instances running in parallel. The outputs from the vertex shaders are fed into the primitive assembly stage.

     Figure 3: Clarification of shaders

     Primitive Assembly

     The primitive assembly stage breaks our geometry down into the most elementary primitives such as points, lines and triangles. For triangles, it also determines whether the triangle is visible or not, based on the "winding" of the triangle. In OpenGL, an anti-clockwise-wound triangle is considered front-facing by default and will thus be visible. Clockwise-wound triangles are considered back-facing and will thus be culled (removed from rendering).

     Rasterization

     After the visible primitives have been determined by the primitive assembly stage, it is up to the rasterization stage to determine which pixels of the viewport need to be lit: the primitive is broken down into its composing fragments. This can be seen in figure 4: the cells represent the individual pixels, and the pixels marked in grey are the pixels that are covered by the primitive; they indicate the fragments of the triangle.

     Figure 4: Rasterization of a primitive into 58 fragments

     We see how rasterization has divided the primitive into 58 fragments. These fragments are passed on to the fragment shader stage.

     Fragment Shaders

     Each of the 58 fragments generated by the rasterization stage is processed by a fragment shader. The general role of the fragment shader is to calculate the shading function, which is a function that indicates how light interacts with the fragment, resulting in a desired color for the given fragment. A big advantage of these fragments is that they can be treated independently from each other, meaning that the shader programs can run in parallel. After the color has been determined, this color is passed on to the framebuffer.

     Framebuffer

     From figure 1, we already learned that we are using a double-buffered framebuffer, which means that we have 2 buffers, a frontbuffer and a backbuffer. Each of these buffers contains a color buffer. Now, the big difference between the frontbuffer and the backbuffer is that the frontbuffer's contents are actually being shown on the screen, whereas the backbuffer's contents are basically (I'm neglecting the blend stage at this point) being written by the fragment shaders. As soon as all our geometry has been rendered into the backbuffer, the front- and backbuffer can be swapped. This means that the frontbuffer becomes the backbuffer and the backbuffer becomes the frontbuffer. Figure 1 and figure 5 represent these buffer swaps with the red arrows. In figure 1, you can see how color buffer 1 is used as the color buffer for the backbuffer, whereas color buffer 2 is used for the frontbuffer. The situation is reversed in figure 5.

     Figure 5: Functional Graphics Pipeline with swapped front- and backbuffer

     This last paragraph concludes our tour through the graphics pipeline. We now have a basic understanding of how vertices and triangles end up on our screen.

     Further reading

     If you are interested in exploring the graphics pipeline in more detail and reading up on, e.g., other shader stages, the blending stage... then, by all means, feel free to have a look at this. If you want to have an impression of the OpenGL pipeline map, click on the link. This article was based on an article I originally wrote for my blog. A small code sketch that puts the vertex and vertex shader concepts from this article into practice follows below this post.
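
     To make the vertex and vertex shader concepts from the article above concrete, here is a minimal sketch using Android's OpenGL ES 2.0 Java bindings (the article itself is API-agnostic; desktop OpenGL looks very similar). It defines the three vertices of a triangle on the CPU, each with a position and a color, hands them to the GPU and runs them through a pass-through style vertex shader. The class and attribute names are my own, and error checking of shader compilation is omitted for brevity.

         import android.opengl.GLES20;
         import android.opengl.GLSurfaceView;
         import java.nio.ByteBuffer;
         import java.nio.ByteOrder;
         import java.nio.FloatBuffer;
         import javax.microedition.khronos.egl.EGLConfig;
         import javax.microedition.khronos.opengles.GL10;

         // Minimal single-triangle renderer: three vertices defined on the CPU,
         // handed to the GPU, and pushed through a "pass-through" vertex shader.
         public class TriangleRenderer implements GLSurfaceView.Renderer {

             // A "graphics programming" vertex: position (x, y, z) plus a color (r, g, b).
             // Counter-clockwise winding, so the triangle is front-facing by default.
             private static final float[] VERTICES = {
                 //  x      y      z     r  g  b
                 -0.5f, -0.5f, 0.0f,    1, 0, 0,
                  0.5f, -0.5f, 0.0f,    0, 1, 0,
                  0.0f,  0.5f, 0.0f,    0, 0, 1,
             };

             private static final String VERTEX_SHADER =
                 "attribute vec3 aPosition;\n" +
                 "attribute vec3 aColor;\n" +
                 "varying vec3 vColor;\n" +
                 "void main() {\n" +
                 "  vColor = aColor;              // pass the color through unmodified\n" +
                 "  gl_Position = vec4(aPosition, 1.0);\n" +
                 "}\n";

             private static final String FRAGMENT_SHADER =
                 "precision mediump float;\n" +
                 "varying vec3 vColor;\n" +
                 "void main() {\n" +
                 "  gl_FragColor = vec4(vColor, 1.0); // one color per fragment\n" +
                 "}\n";

             private FloatBuffer vertexBuffer;
             private int program;

             @Override
             public void onSurfaceCreated(GL10 unused, EGLConfig config) {
                 // Copy the vertices into a native-order buffer so GL can read them
                 vertexBuffer = ByteBuffer.allocateDirect(VERTICES.length * 4)
                         .order(ByteOrder.nativeOrder())
                         .asFloatBuffer()
                         .put(VERTICES);
                 vertexBuffer.position(0);

                 program = GLES20.glCreateProgram();
                 GLES20.glAttachShader(program, compile(GLES20.GL_VERTEX_SHADER, VERTEX_SHADER));
                 GLES20.glAttachShader(program, compile(GLES20.GL_FRAGMENT_SHADER, FRAGMENT_SHADER));
                 GLES20.glLinkProgram(program);
             }

             @Override
             public void onSurfaceChanged(GL10 unused, int width, int height) {
                 GLES20.glViewport(0, 0, width, height);
             }

             @Override
             public void onDrawFrame(GL10 unused) {
                 GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
                 GLES20.glUseProgram(program);

                 int aPosition = GLES20.glGetAttribLocation(program, "aPosition");
                 int aColor = GLES20.glGetAttribLocation(program, "aColor");
                 int stride = 6 * 4; // 6 floats per vertex, 4 bytes per float

                 vertexBuffer.position(0);
                 GLES20.glEnableVertexAttribArray(aPosition);
                 GLES20.glVertexAttribPointer(aPosition, 3, GLES20.GL_FLOAT, false, stride, vertexBuffer);

                 vertexBuffer.position(3); // color starts after the 3 position floats
                 GLES20.glEnableVertexAttribArray(aColor);
                 GLES20.glVertexAttribPointer(aColor, 3, GLES20.GL_FLOAT, false, stride, vertexBuffer);

                 // Rasterization and the fragment shader fill the triangle's fragments;
                 // the result lands in the backbuffer, which GLSurfaceView swaps for us.
                 GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3);
             }

             private static int compile(int type, String source) {
                 int shader = GLES20.glCreateShader(type);
                 GLES20.glShaderSource(shader, source);
                 GLES20.glCompileShader(shader);
                 return shader;
             }
         }
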
  9. Thanks, I should have read the MSDN docs more closely: the format of the texture is indeed defined as DXGI_FORMAT_R32G32B32A32_FLOAT. I was too focused on the fact that it was a Texture1D and overlooked that each element can itself hold multiple components... As for the float3 vs float4: I only need random x, y and z components for v, so there's no use for a float4 variable.
  10. Could someone help me figure out why a float3 is being returned in the following HLSL line?

          float3 v = gRandomTex.SampleLevel(gTriLinearSam, u, 0);

      Now, gRandomTex, gTriLinearSam and u are defined as follows:

          Texture1D gRandomTex;

          SamplerState gTriLinearSam
          {
              Filter = MIN_MAG_MIP_LINEAR;
              AddressU = WRAP;
              AddressV = WRAP;
          };

          float u = 1.0f; // actually it is not a constant, but for the example I set it to 1.0f
  11. Hello, I propose you have a look at D3DX11CreateShaderResourceViewFromFile. The image file format enum is only documented for D3DX10CreateShaderResourceViewFromFile and not for the D3D11 version, but I expect the same to hold for D3DX11_IMAGE_FILE_FORMAT: D3DX11_IFF_PNG is probably what you are looking for.
  12. I was wondering why Nvidia PhysX doesn't include NxCharacter.dll in "C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common". I wanted to make things a bit easier for myself by adding "C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common" to my PATH environment variable. But since NxCharacter.dll is not in that folder, I ran into a runtime error when I tried to run my program after compilation, because NxCharacter.dll could not be found. What would be the best way to get NxCharacter.dll found by my program?
      -> Should I just copy NxCharacter.dll to "C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common"?
      -> Should I just copy NxCharacter.dll to my debug folder?
      -> Should I just copy all required PhysX DLLs to my debug folder and remove "C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common" from my PATH?
      Maybe as a more general question: how would your application install PhysX? Thanks for any help, I'm in doubt about what would be the nicest way of doing things.