fortia

Members

  • Content count: 71
  • Joined
  • Last visited

Community Reputation

  276 Neutral

About fortia

  • Rank: Member
  1. Although... one final question remains now. If I have a shader with the following input:

        struct VS_IN
        {
            float3 position : POSITION;
            float3 normal   : NORMAL;
        };

     and have two VBs, one with position and normal, the other with position, normal and texcoord. When creating separate input layout objects for these two VBs together with the shader, it is possible to use both of these VBs with this shader. But I have a hard time figuring out how the hardware could possibly figure out the mapping from vertex buffer to shader.

     Mapping for the first case:
        VBPosition -> ShaderPosition
        VBNormal   -> ShaderNormal

     Mapping for the second case (making use of knowledge of the vertex stride?):
        VBPosition -> ShaderPosition
        VBNormal   -> ShaderNormal
        VBTexcoord -> into the void?

     What if texcoord in the VB is squeezed in between position and normal instead? Then there would definitely be no way for the hardware to figure it out on its own, right? But wouldn't an input layout object create the correct mapping here as well?
        VBPosition -> ShaderPosition
        VBTexcoord -> into the void
        VBNormal   -> ShaderNormal

     Now, have I understood things correctly?
  2. Thank you for your answer. It helped clarify things a bit.
  3. Hi, I'm sort of puzzled about the details below and can't seem to find clear answers in the docs.

     I have two shaders, S1 and S2, with DIFFERENT input elements, and two vertex buffers with DIFFERENT vertex layouts, VB1 and VB2. Now, creating an input layout object requires two things:
     - The shader
     - The vertex declaration for the vertices in the vertex buffer

     We assume all combinations:
     - S1+VB1
     - S1+VB2
     - S2+VB1
     - S2+VB2
     yield valid input layout objects when created separately. That is, both vertex buffers satisfy the needs of both shaders. But do I really have to create an input layout object for every permutation of shader+VB that I use, or is there a way to reuse them? The docs say: "You can create a single input-layout object for many shaders, as long as all of the shader-input signatures exactly match." This would suggest that I could create only two layout objects (instead of four):
     - IL1 for S1+VB1 and S1+VB2
     - IL2 for S2+VB1 and S2+VB2
     But since VB1 and VB2 have different vertex layouts, I'm not so sure this would work. Has anyone tried?

     It would be truly disappointing to have to create input layout objects for every permutation of shader+geometry. I do not really organize my resources by layout, so basically this is where I would end up. Is that costly? Anyway, the only thing I can keep track of here is whether objects use the same shaders or the same geometry. I do not have anything that sorts shaders by their input signatures or the like. I don't know how Microsoft thought it possible to manage such a thing :).

     But one thing makes me curious... Does the DirectX runtime automatically prevent duplicates from being created, like it does with, for instance, blend states?
  4. Quote: "There is maybe something wrong with your framework. Have you tried explicitly to set feature level 11? Or to use shader version 5?"

     I'm not quite sure what you mean. For instance, when I try to run the DetailTessellation11 sample from the DXSDK (which uses domain/hull shaders and thus SM5), I can only run it using a reference driver. Is there something I need to do with some config application? Just so I don't misunderstand: when you say it works fine for you, do you mean that you can run D3D11 at all (because I can too, just not with D3D_FEATURE_LEVEL_11_0), or can you actually run features like domain/hull shaders in hardware?

     Quote: "Does the DX11 Heaven benchmark run with DX11?"

     Just tried it, and it didn't really complain about anything. Does it really require FULL DX11 support, or will it silently run on any machine with just visual differences? (The dragon looks nothing like on the screenshots.)
  5. I have an ATI Radeon HD 5870, a card that is supposed to support D3D11, and I am trying to run D3D11 (Feb 2010 SDK) on it, but it seems to revert to a Direct3D 10.1 feature set only (in the samples as well, not only in my code). I was truly disappointed at first, since the only reason I bought the card was to develop D3D11 stuff with a non-reference driver. But then I started to search around the web, and everywhere it says the 5870 should support DX11. So why doesn't it? Is it the driver that is not fully functional yet? I just installed the latest Catalyst 10.2 (including drivers).
  6. Hi! I have a simple game engine framework written in unmanaged C++ and compiled to a DLL. Now I would like to create an editor for it and would really like to use some .NET-specific Windows controls instead of just the older common controls. So I believe I need to write the editor in a .NET language (C++/CLI or C#). But I also need to be able to import the game engine DLL (and some plugin DLLs for it) into the editor, and I don't want to convert the engine to .NET because the code is cross-platform. Is there a way to solve this? Thanks for your replies!
  7. Bezier puzzle

    Hi! I'm having a problem where I would like to determine on which side of a composite Bezier curve (in 2D) a certain point is located. I'm looking for a simple equation (if such exists) that I could use to test whether pixels are "inside" or "outside" of a shape defined by such curves. Below is an image created with a curve in Photoshop that illustrates what I need. The black surface is on the side of the curve that is currently desired. The desired side, however, could just as well have been the other side.

    If you had a simple function f(x) it would be easy to check this, but the Bezier curve being defined by a parameter t as (x(t), y(t)) makes it more complicated, I believe... ?

    Maybe it helps to know that I would like to define the contours of puzzle pieces this way and check the pixels against the curve in a pixel shader when drawing the pieces. Or, if that is too inefficient, I would at least need to perform the check to create precalculated masks. I'll be thankful for replies! /fortia

    EDIT: I just thought of something: curves like this are used to represent fonts, and given the amount of text that can be displayed efficiently at once, there has to be a simple solution. The question remains what that solution might be... [Edited by - fortia on February 22, 2007 3:58:32 PM]
  8. Life is ironic

    How come the more you learn, the more you realize that you are not an expert? I mean... if you think you know how something works in the beginning and then really dive into it, you always end up thinking "how could I say that I knew something about this before I learned all this?". You decide that you know everything about, for example, a programming language and environment after having worked with it for several years. Then you run into a series of problems that cause you to open your books again searching for a solution, and wow(!)... you learn something new. After several such problem-solving tasks you have learned so many new things that you just laugh out loud when you think back on the time when you thought you knew everything. Isn't this ironic? Doesn't the process of learning ever end [rolleyes]? Are humans built to learn all their lives, about everything and everybody? I think so. Why else would the term "science" exist?
  9. The last few days I've been trying to solve some major problems with my old design. One of these is the fact that you were limited to 4 simultaneous collisions per object, and the collision detection shader was always run 4 times the number of objects. Now I have a new solution in which an object can have an arbitrary number of simultaneous collisions and no irrelevant pixels are processed.

     My current best solution still has broad-phase collision detection on the CPU, which requires a read-back of the position texture (128x128 32-bit FP). The alternative would be to test every object against every other one, which is a REAL performance killer!!! I tried to find a way to do broad-phase CD on the GPU, but always ended up with algorithms that were either expensive or impossible to implement on the GPU [totally]. I've not given up on this yet, but I will put it on ice until everything else is done so I don't get stuck...

     This evening I also created a simple scripting system so that I can define all textures and passes in a text file and not have to hardcode them in the program. This will really simplify testing of different configurations, and it will not be such a pain in the *ss to test a configuration anymore...

     Now it's 2 o'clock in the morning and I think I have to go to sleep so I don't kill myself working [smile]. Good night!
  10. Still around lol

    Another thing... You asked with which segment you should start. My advice is to start with the graphics part so you can quickly get something on screen (just like when drawing an interface in a normal app) [smile]. After that you could implement some basic game mechanics, like what happens when you move the mouse (in an FPS you should look around) or press a key. Then a collision detection and physics system would be great so you don't start walking through walls. The next step for an FPS, I think, would be to implement AI with pathfinding etc. Then try to implement shooting mechanics and place objects in your world that affect health, ammunition etc. Things to leave for later that don't affect the game very much are sound and music, as well as menus, other user interface, and configuration screens. After all this is done you can refine everything by adding special effects, animations and more professional game content, as well as optimizing(!!!). A good thing is also to implement some kind of level editor if you're not directly exporting from a modeling program. Hope this helped...
  11. Still around lol

    First: I don't exactly agree with jjd. If you dive into programming without thinking about the technical design of what you're doing, you will be fine in the beginning, but you will always reach a point later when you realize your code is a mess and start having trouble even understanding what you're doing! Don't forget that a game is usually a rather big project, unless it's a simple one like Tetris [smile]. I've been through far too many projects that exploded quickly, and I have always gotten to a point where I had to rewrite my code (yes, rewrite!) because I no longer knew what I was doing. This is what happens when your design and code organization are bad (which they usually get if you don't think about them before starting). Design is time well spent and can spare you a lot of headache!

    Second: There are two ways (that I can think of) to get started developing a game:
    1. Bottom-up development: you start by developing all the engines and their details, and later put everything together into a game.
    2. Top-down development: you start by writing a game prototype focusing on the game as a whole, using only *extremely* basic engine features (in fact, just as much as you need to "play" the game). For example: don't care about animations - let your characters be cubes moving around, etc. You implement the details later.

    I would say the second is the easiest if you want to get started quickly. I say this from my own experience, and because I know it's a common way of working in game studios during a preproduction phase (and not only for the get-started reason). The prototype does not have to form a basis for the finished game - that is, you don't have to build your game by extending the prototype code (although you can probably do a lot of cut-and-paste work). Besides the advantage of getting something on the screen, you will probably also have figured out the best way to design your subsystems and classes for the real game.

    Just remember to keep it as simple as possible, or you will end up with a half-finished game as a prototype - which is not the purpose! The first alternative can get you stuck developing a single engine without getting anywhere with your game [sad]. It's a great way to learn a certain area of development, but it's a trap if you want to create a whole game!!! Don't forget that technology advances and there will always be new things you want to add to make your engine as flashy as possible - and then you are stuck... I know I'm not the best at explaining things, but if you have any questions please feel free to send a private message. Good luck!
  12. GPUPhysics: Case study of ODE

    I have completed a quick case study of an existing physics engine (ODE). This gave me a better overview of how a physics system works. Until now I knew how the details worked, but not how they might work together in an engine. I now have some things to think about regarding how to make my GPU system useful for more than just a dumb rigid body simulation (one of the main requirements for the project is the ability to communicate with other systems, like AI and animation, also running on the GPU*).

    * The project is part of an investigation into using hardware instancing to draw agents with physics, animations and simple AI (group movement and decision trees).
  13. Fortia: Game Architecture

    Exactly what I meant [smile]! My engine served for learning but it would take too long to have it completed to the point that you can actually create a game from it...
  14. Fortia: Game Architecture

    Came across this article (written by Jeff Plummer) by chance yesterday while I was browsing through Ogre tutorials to learn Ogre. It treats a somewhat different architecture for putting together a game from subsystems such as graphics, physics, AI, sound etc. For people like me -- who have at *least* basic knowledge in most major areas of game subsystems/engines but no real experience of putting together a full game -- it is easy enough to use one single engine in an application (like when creating a graphics demo, where you don't use AI, for example). But when you have an application rendering a scene and want to turn it into a game, you get stuck!!! Where are you going to add that extra code? How do you organize it? This is where you should have thought twice in the beginning about the architecture that defines how the different subsystems work together. Jeff describes a very easy-to-implement, easy-to-extend and easy-to-upgrade architecture. I found his idea interesting and started a design and implementation of it today in my spare time (with some major changes compared to his prototype).

    What I want most of all for Fortia right now is to start development of the game itself. So I think I'm going to move over to engines created by other people (Ogre, ODE etc.) for use in Fortia, together with Jeff's architecture. It will speed up the development significantly compared to having to reinvent the wheel [smile] (especially since the team is very small). I learned a lot working on FortiaEngine, and it has already served as a significant portion of my portfolio, but I have to be realistic if I want to get the game done!!! It also helped me understand Ogre and Nebula in just a few days!

    I just want to finish this topic with a tip to all people working on their own game: consider reading the article and moving to finished libraries/engines! For me this took years to understand, and I'm sure I'm not the only one! Like a big group among game programmers, I have had the ambition to code most things myself! If you're not in this group, you're lucky! [smile] Til' later!