
Assassin

Member

  • Content count: 230
  • Joined
  • Last visited

Community Reputation: 246 Neutral

About Assassin

  • Rank: Member
  1. Alpha Blending: To Pre or Not To Pre

    Another useful article on the topic of pre-multiplied alpha, from Shawn Hargreaves: http://blogs.msdn.com/b/shawnhar/archive/2009/11/06/premultiplied-alpha.aspx
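    As a quick illustration of what changes at the blend-state level, here is a sketch using D3D9-style render states (the XNA BlendState equivalents are analogous); pDevice is a placeholder device pointer:

        // Conventional (straight) alpha:  dst = src.rgb * src.a + dst.rgb * (1 - src.a)
        pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
        pDevice->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
        pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

        // Pre-multiplied alpha:  dst = src.rgb + dst.rgb * (1 - src.a)
        // (the src.a multiply is already baked into the texture's RGB at content build time)
        pDevice->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
        pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);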
  2. Cube Fortress

    Most of my effort since the last milestone has been focused on "support items". I'm not ready for another milestone release yet, but I wanted to share some progress. The support items fall mainly into two categories: static item drops and AI helpers. The AI helpers I currently have are a stationary sentry turret, a crawler stun-bot, and a tunneling/boring bot. Item drops are along the lines of ammo packs and health packs similar to the Battlefield games, and there's also a teleporter station which I just got working. All of it is very much lacking in polish and full of programmer art, but the gameplay functionality is the essential part for now.

    Sentry Turret: http://andyc.org/gamedev/CubeWorld/CubeWorld_41.jpg
    Crawler Bot: http://www.youtube.com/watch?v=zAL2nG7hPaA
    Boring Bot: http://www.youtube.com/watch?v=8tX2eHBaVQ4
    Teleport Station: http://www.youtube.com/watch?v=tMwHVFWbROI

    One aspect I need to figure out is the economy of items: should everything be available from the beginning of the game, should certain things unlock based on game progression (like Halo Reach does with its attack/defend game type), or should the player "buy" the unlock (kind of like Counter-Strike)? I'm leaning towards the immediately-available option, since then I don't have to worry about managing currency or other assets for players, but that option removes the sense of investment players might feel in their "build". I'm currently thinking of going with a build-your-soldier system instead of class-based constraints. This means you can select any primary weapon (rifle, shotgun, etc.), any explosive device (grenades, rockets, C4, etc.), any support item (ammo box, sentry turret, etc.), and so on. Each of those categories is fully open, and you can pick anything which is unlocked. The categories aren't totally locked down yet, so I might collapse explosives into support and primary.
  3. Cube Fortress

    I've been working on an indie game since April 2011. It's primarily inspired by the visual simplicity and flexibility of Minecraft, and the dynamic multiplayer gameplay of Tribes. The game is currently in a playable alpha state with a complete game mode, and I'm working on making the gameplay deeper and more interesting. I will also be adding some single-player and co-op PvE gameplay to complement the competitive MP gameplay that's currently there. The game is written in C# with XNA and runs on PC (D3D10 video hardware required) and Xbox 360; I plan to release it on XBLIG before the end of the year, and on PC as quickly after that as possible. You can find more detailed information in the links below, including screenshots, gameplay videos, and an "M3" build which demonstrates the current state of the game.

    Main website: http://cubefortress.com/
    Forum: http://forum.cubefortress.com/index.php
    Developer blog: http://devblog.andyc.org/
  4. I've been investigating BRDF models for game lighting and have taken a look at a number of models, most of which are discussed in the GDNet Book link below, but I'm not entirely satisfied with what's available. My goals are entirely focused on real-time rendering on modern GPUs using a deferred lighting system. The game I'm currently working on isn't exactly visually realistic, but I'd like to get an edge up on the competition if possible...

    GDNet Book BRDF material: http://wiki.gamedev.net/index.php/D3DBook:%28Lighting%29_Summary
    My current game (various assorted lighting models depicted):
    http://andyc.org/gamedev/CubeWorld/CubeWorld_34.jpg
    http://andyc.org/gamedev/CubeWorld/CubeWorld_35.jpg
    http://andyc.org/gamedev/CubeWorld/CubeWorld_36b.jpg
    http://andyc.org/gamedev/CubeWorld/CubeWorld_38.jpg

    One thing I find interesting is that the Schlick paper is almost totally ignored, even though it appears to offer a very general model with intuitive control factors and supports anisotropy and multi-layered surfaces with an appropriate Fresnel-blended contribution; it seems the only value people have found in the paper is the fast approximation for the Fresnel term. The controls it exposes are available independently for each layer: normal-incidence reflectance (i.e. albedo), roughness, and isotropy. Roughness lies in a sensible 0-1 range, where 0 is pure mirror specular and 1 is pure Lambertian diffuse (or whatever sub-BRDF you happen to choose), and the same is true for the isotropy factor (which can simply be set to 1 or ignored if you only want isotropic reflectance). Metallic surfaces would be modeled with a single layer, while dielectric materials like plastic or paint would use two layers, specular and diffuse, which lets you specify colored or colorless and shiny or rough reflections for each layer (though you must consider that the Fresnel term requires the "top" layer to eventually dominate). The paper also presents approximations to the geometric self-occlusion term and to the Beckmann specular distribution, which is closer to Beckmann than the typical Blinn-Phong standard is at low powers. I haven't implemented this model yet, but I'd like to try it out.

    I have implemented Oren-Nayar, but as a diffuse-only equation it's a bit useless for generalized deferred shading, so I blended it with Blinn-Phong. It produces some interesting results on variable-roughness surfaces, but I'm not convinced that my hacky BRDF-blending job is accurate. The Oren-Nayar paper is well defended with empirical evidence for its accuracy in reproducing rough surfaces like clay and sand, and it generalizes to Lambertian when no roughness is present. I'd like to try incorporating it as the diffuse term in a Schlick super-BRDF instead of my hacky blending, and see if it's tangibly different from the standard Schlick or other models. The control variables in such a model would be: diffuse roughness, diffuse albedo, specular albedo, specular roughness, and possibly specular isotropy.
    I find the value of the Ashikhmin-Shirley paper rather low, because it offers very little supporting evidence for its rendering model (either mathematical, through hemisphere integrals and such, or empirical, matching the reflectance of measured physical BRDFs), and also because it claims that no previous paper has offered its flexibility and accuracy, while citing the very Schlick paper which does exactly that and more. The diffuse term is a little weird: it appears they've attempted a non-Lambertian diffuse term without exposing any controls over it, so it's just a view-dependent diffuse term which they claim accounts for the Fresnel blend in the specular term. Perhaps I simply don't understand it well enough, but the paper doesn't make much effort to explain itself clearly.

    I'm also interested in other reflectance models and lighting effects... spherical harmonics seem interesting for an ambient diffuse contribution based on light probes; I actually tried a halfway hack of this by simply sampling a skybox cubemap at the lowest-detail mip. I also experimented with SSAO but found the screen-space noise pattern a bit distracting (maybe I wasn't doing it right), and settled for a static geometry-based AO approximation.

    As a slight aside, the HLSL for Ashikhmin-Shirley in the above link appears to be incorrect, factoring in the Rs term twice when computing the Fresnel blending for the Ps term:

        float3 Ps = Rs * (Ps_num / Ps_den);   // the leading "Rs *" is the redundant factor
        Ps *= ( Rs + (1.0f - Rs) * pow( 1.0f - HdotL, 5.0f ) );

    Schlick paper: http://www.ics.uci.edu/~arvo/EECS204/papers/Schlick94.pdf
    Ashikhmin-Shirley paper: http://www.cs.utah.edu/~michael/brdfs/jgtbrdf.pdf
    Oren-Nayar paper: http://www1.cs.columbia.edu/CAVE/publications/pdfs/Oren_SIGGRAPH94.pdf
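    For anyone skimming, the fast Fresnel approximation from the Schlick paper (the one piece everybody does seem to use) is simple enough to sketch. This is plain C++ for readability, and the function name and parameters are just illustrative; the HLSL version is a direct transliteration:

        // Schlick's approximation to Fresnel reflectance:
        //   F(cosTheta) = F0 + (1 - F0) * (1 - cosTheta)^5
        // F0 is the reflectance at normal incidence; cosTheta is the cosine of the
        // angle between the half-vector and the light (or view) direction, in [0, 1].
        float FresnelSchlick(float F0, float cosTheta)
        {
            float x = 1.0f - cosTheta;
            return F0 + (1.0f - F0) * x * x * x * x * x;
        }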
  5. Quote: Original post by Daaark
        I've been using XNA for years now, and I never once used a GameComponent. They are optional built-in functionality for those that want to use them, and they don't limit or dictate anything about your design. All programs update their state and then present a representation of that state to their user. Update/Draw is pretty much universal, unless you're coding a GUI/window type program. Drawing with custom shaders is the same as anywhere else: assign your shader and send your vertices to be drawn, same as you would do in D3D. The model class is just there as a helper for people who want to get models on the screen. It doesn't dictate or limit your design in any way. JWalsh's tutorial site has good examples of using custom shaders too. He's got a thread somewhere in this section, look for it. -edit- here: http://gamedevelopedia.com/

    Thanks for the tips. Of course all 3D programs update and draw; I was mainly finding myself limited by the built-in GameComponent.Update and Draw methods, which I was coerced into using when following the "how to get started" parts of the XNA documentation. Clearly you don't have much exposure to those, as you've never used them; I'll take that as implicit advice against their use.

    Considering that every effect needs to consume a set of matrices, including view and projection, among other values that may be environmental rather than specific to the object being rendered, what design do you use to supply them - fetching from a global store, feeding them into the Draw parameters, or something different? Manipulating the view and projection matrices is essential for shadow mapping, and I want to manage them elegantly when re-drawing objects into different render targets within the same frame.

    The ShadowMapping sample on the XNA site has some useful examples, including a content processor that appears to replace the effect on a Model with a custom one. That sample still uses static scene management, simply calling Draw on a couple of named variables in a particular order. I suppose that can be abstracted to a collection of objects, but at that point it becomes less clear whether any object in the collection is using a specific effect, should be drawn with a certain technique name, or which variables it will need to consume.

    Admittedly, XNA is the only venue for writing indie games that will run on an Xbox, but I was hoping it might provide better basic functionality than raw D3D. JWalsh's tutorials currently appear to be focused on reproducing the low-level instruction of NeHe while ignoring almost all of the XNA infrastructure and helper classes, and they don't really discuss any scene management at all - they're just rendering a triangle and a box with a static view matrix. The implementation of custom shaders is a useful demonstration, though.
  6. I've been working on a small project, and so far I've been satisfied to use the built-in game loop calling Update & Draw on my objects, which are simply entered into the Game.Components list. Now that I've got the gameplay working, I want to add some graphical upgrades like shadows and custom shaders, and the built-in methods are already annoying me with their simplistic signature (only passing in the GameTime). I'm curious how other people have addressed these issues:

    - Do you mostly ignore the XNA game loop infrastructure and manage things yourself in a scene/level class?
    - How do you apply custom effects to models that normally use a BasicEffect (.X files, for example), while retaining useful data like texture references?
    - Do you handle multiple rendering passes for different purposes using techniques with standardized names, or is there some other insight I could borrow?
    - Using the FX system, do you make more use of one technique with multiple internal passes, or multiple techniques with one pass each?

    I've written 3D engine code in the past, but didn't venture into the realm of shader management. My current plan is to introduce a SceneManager which will start life as a basic list, modify my game object class hierarchy to subclass a new SceneObject class instead of GameComponent, and use enums to define standard rendering techniques like "NoLighting", "CreateShadowMap", "UseShadowMap", etc. I'm not sure how I'd like to handle multiple shadow maps, dynamic environment maps, and other things requiring dynamic render-to-texture - any tips would be appreciated.
  7. D3D10 Shadow Map

    Yes, but the LookAt function does the subtraction to get a direction vector, so you have to add your direction to your position so the LookAt function can subtract it again. You could write a wrapper function that does the direction addition so you can specify a direction directly in your own code; something like the sketch below.
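    For example, a minimal wrapper along these lines (using D3DX math; the name and parameter layout are just illustrative):

        // Builds a view matrix from an eye position and a direction, by re-creating
        // the "at" point that the stock LookAt function expects.
        D3DXMATRIX* MatrixLookDirLH(D3DXMATRIX* pOut, const D3DXVECTOR3* pEye,
                                    const D3DXVECTOR3* pDir, const D3DXVECTOR3* pUp)
        {
            D3DXVECTOR3 at = *pEye + *pDir;
            return D3DXMatrixLookAtLH(pOut, pEye, &at, pUp);
        }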
  8. D3D10 Shadow Map

    You should add your direction vector to the eye vector to get the "at" position to feed into the D3DX LookAt function.
  9. You could try using D3DDEVTYPE_NULLREF in your CreateDevice call. This will effectively prevent you from doing any actual D3D rendering or useful work, but it will give you a valid device object. You are correct that when the secure desktop (login screen) is displayed, no adapters are available for display duties. What are you trying to achieve?
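    Something along these lines (a sketch only - pD3D and hWnd are assumed to exist already, and error handling is omitted):

        // Present parameters still have to be minimally filled in, even though
        // nothing will actually be rendered with a NULLREF device.
        D3DPRESENT_PARAMETERS pp;
        ZeroMemory(&pp, sizeof(pp));
        pp.Windowed         = TRUE;
        pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
        pp.BackBufferFormat = D3DFMT_UNKNOWN;

        IDirect3DDevice9* pDevice = NULL;
        HRESULT hr = pD3D->CreateDevice(D3DADAPTER_DEFAULT,
                                        D3DDEVTYPE_NULLREF,   // valid device object, no rendering
                                        hWnd,
                                        D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                                        &pp, &pDevice);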
  10. Graphics bug with direct x

    Draw your skybox AFTER BeginScene, not before.
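    In other words, roughly this ordering (D3D9-style, with placeholder draw calls):

        pDevice->BeginScene();
        DrawSkybox();    // skybox first, inside the scene
        DrawWorld();     // then the rest of the geometry
        pDevice->EndScene();
        pDevice->Present(NULL, NULL, NULL, NULL);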
  11. You want an array of RT views, each of which can have an array of texture slices (from the same texture). Something like:

        ID3D10RenderTargetView* pRTViews[8] = { NULL, NULL, NULL, /* ... */ };
        pDevice->CreateRenderTargetView(rtTex1, NULL, &pRTViews[0]);
        pDevice->CreateRenderTargetView(rtTex2, NULL, &pRTViews[1]);
        pDevice->OMSetRenderTargets(numViews, pRTViews, pDSView);
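    If you want a view over specific slices of a texture array, the view description spells that out; a rough sketch (pDevice, pTexArray, and sliceIndex are placeholders):

        D3D10_RENDER_TARGET_VIEW_DESC rtvDesc;
        ZeroMemory(&rtvDesc, sizeof(rtvDesc));
        rtvDesc.Format        = DXGI_FORMAT_R8G8B8A8_UNORM;   // must match the texture format
        rtvDesc.ViewDimension = D3D10_RTV_DIMENSION_TEXTURE2DARRAY;
        rtvDesc.Texture2DArray.MipSlice        = 0;
        rtvDesc.Texture2DArray.FirstArraySlice = sliceIndex;  // first slice this view targets
        rtvDesc.Texture2DArray.ArraySize       = 1;           // one slice per view here

        ID3D10RenderTargetView* pSliceRTV = NULL;
        HRESULT hr = pDevice->CreateRenderTargetView(pTexArray, &rtvDesc, &pSliceRTV);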
  12. Quote: Original post by patw
        Ahh fantastic, thanks for the quick reply. I am coming at DX11 from DX9/360 and not a DX10 background, so I'm stumbling along.

    The debug layer feedback is usually quite useful in D3D10 & 11; unfortunately it seems a bit lacking in this particular case.

    Quote: Here's a question. We've also found, with the new shader compiler in the Nov2k8 SDK (under DX9 proper on Vista/XP), that it is incorrectly optimizing some shaders, resulting in samplers actually getting optimized out. (In one case, it's optimizing out a light-map and stuffing in a different texture.) The 'change around the code order and hope for the best' approach doesn't seem to be working this time. Where is a good place to report these issues?

    I believe we have a public forum or email alias, but I can't seem to locate it at the moment. In this case, if you'd like to send your shader, along with the options & method you use to compile it, to my email address (removed), I'll send it along to the shader compiler team. I believe the forum here is monitored by other developers on the runtime & compiler teams: http://forums.xna.com/forums/27.aspx

    [Edited by - Assassin on December 9, 2008 4:40:15 PM]
  13. Quote: Original post by patw
        The shader is compiled using the "vs_2_0" target. I can post it as well, but it's not doing anything fancy. I changed the shader itself to use the system-value semantics instead of VS/PS 2/3 semantics (like POSITION), but that made no difference. Thanks for your response.

    I think the D3D11 runtime expects to receive shaders compiled with at least the vs_4_0 target, although the compiler can output shaders for any known D3D target. Unfortunately, it looks like the runtime interprets the vs_2_0 blob as corrupt rather than seeing that it's simply compiled with an inappropriate target. Depending on which feature level you're targeting, you have a choice of shader targets: vs_4_0, vs_4_0_level_9_1, vs_4_0_level_9_3, vs_5_0.
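    For reference, the target string is just the profile argument passed at compile time; here's a sketch assuming the D3DCompile entry point from d3dcompiler.h (the source pointer, length, and entry point name are placeholders):

        ID3DBlob* pCode   = NULL;
        ID3DBlob* pErrors = NULL;

        // Compile against a D3D10+ target ("vs_4_0" here); for the 9_x feature
        // levels use "vs_4_0_level_9_1" or "vs_4_0_level_9_3" rather than vs_2_0/vs_3_0.
        HRESULT hr = D3DCompile(pShaderSource, shaderSourceLength, "shader.hlsl",
                                NULL, NULL,          // no macros, no include handler
                                "VSMain", "vs_4_0",  // entry point and target profile
                                0, 0,                // compile flags
                                &pCode, &pErrors);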
  14. Could you please post some snippets of relevant code (the Compile and Create calls, most importantly) and the shader? I'll see if we can reproduce the problem here.
  15. You cannot create just one universal vidmem/sysmem buffer through the D3D10 API, since the API is focused on preventing unexpected performance loss. You can quite easily create a staging version of a vidmem resource by calling GetDesc, modifying the bind flags, usage, and CPU access flags, and feeding the result into CreateBuffer. You can then copy to/from it using CopyResource.
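    Roughly like this (a sketch only; pDevice and pVidMemBuffer are placeholders, and error handling is omitted):

        // Clone the description of the existing default-usage buffer, then
        // turn it into a CPU-accessible staging copy.
        D3D10_BUFFER_DESC desc;
        pVidMemBuffer->GetDesc(&desc);
        desc.Usage          = D3D10_USAGE_STAGING;
        desc.BindFlags      = 0;   // staging resources can't be bound to the pipeline
        desc.CPUAccessFlags = D3D10_CPU_ACCESS_READ | D3D10_CPU_ACCESS_WRITE;
        desc.MiscFlags      = 0;

        ID3D10Buffer* pStaging = NULL;
        HRESULT hr = pDevice->CreateBuffer(&desc, NULL, &pStaging);

        // Copy on the GPU, then map the staging copy on the CPU.
        pDevice->CopyResource(pStaging, pVidMemBuffer);

        void* pData = NULL;
        pStaging->Map(D3D10_MAP_READ, 0, &pData);
        // ... read the data ...
        pStaging->Unmap();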