Stoic

Member
  • Content Count

    95
  • Joined

  • Last visited

Community Reputation

368 Neutral

About Stoic

  • Rank
    Member
  1. Awesome! Thanks, guys! The min/mag filters were in fact the problem! That's weird - for some reason, I thought OGL would choose good defaults for me. I haven't looked at my texture code in years, and given the complexity of all the extensions, I probably should have known better. @silvermace - Good call on the GLError checking. I should have been more vigilant about that. @Huntsman - I should also have my eyes checked on the path for that Khronos stuff. It was in fact OGL ES. Thanks for the heads up! Rate++ for both you guys :)
  2. Hi All - I'm exploring FBOs, and I have a few questions about FBO completeness. I started by setting up my render targets with renderbuffers that were the same size as my window (1024x768; I'm working on Windows XP).

        glGenFramebuffers( 1, &fbo_id );
        glBindFramebuffer( GL_FRAMEBUFFER, fbo_id );

        glGenRenderbuffers( 1, &depthbuf_id );
        glBindRenderbuffer( GL_RENDERBUFFER, depthbuf_id );
        glRenderbufferStorageMultisample( GL_RENDERBUFFER, 0, GL_DEPTH_COMPONENT, w, h );

        glGenRenderbuffers( 1, &colorbuf_id );
        glBindRenderbuffer( GL_RENDERBUFFER, colorbuf_id );
        glRenderbufferStorageMultisample( GL_RENDERBUFFER, 0, GL_RGBA, w, h );

        glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthbuf_id );
        glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorbuf_id );

        glBindFramebuffer( GL_FRAMEBUFFER, 0 ); // restore rendering to default framebuffer

    This worked fine, no problems. I upped the multisample values and got some MSAA, lost some framerate, just as expected. Then I tried to use textures instead of renderbuffers, and I'm getting FRAMEBUFFER_INCOMPLETE_ATTACHMENT errors.

        glGenFramebuffers( 1, &fbo_id );
        glBindFramebuffer( GL_FRAMEBUFFER, fbo_id );

        glGenTextures( 1, &depthbuf_id );
        glBindTexture( GL_TEXTURE_2D, depthbuf_id );
        glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, w, h, 0, GL_DEPTH_COMPONENT16, GL_UNSIGNED_BYTE, 0 );
        glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthbuf_id, 0 );
        printf("%x\n", glCheckFramebufferStatus( GL_FRAMEBUFFER )); // INCOMPLETE_ATTACHMENT

        glGenTextures( 1, &colorbuf_id );
        glBindTexture( GL_TEXTURE_2D, colorbuf_id );
        glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0 );
        glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorbuf_id, 0 );
        printf("%x\n", glCheckFramebufferStatus( GL_FRAMEBUFFER )); // INCOMPLETE_ATTACHMENT

    I'm running on a rather old (but not ancient) GeForce 8400M GS. I have a recent driver which tells me I support OGL 3.2. It also tells me that I support GL_ARB_framebuffer_object (as opposed to GL_EXT_framebuffer_object). I first thought that it might be a power-of-two problem; my first attempt used TEXTURE_RECTANGLE as the target. My driver advertises that I support GL_ARB_texture_rectangle and GL_ARB_texture_non_power_of_two. I also read at the Khronos website that the supported color formats for "color attachable images" were very limited (like GL_RGBA4), which is just confusing to me (after all, isn't the color renderbuffer in my top example a "color attachable image"?). Also, that doesn't line up with this really old NVidia presentation. So, basically, I'm left thinking that I'm just missing some obvious step, and I'm looking in all the wrong places for the mistake. Does anyone with better eyes than me see a problem? One other quick thing I changed - when I started messing with the FBO code, I removed the depth buffer from my window PixelFormatDescriptor. I figured that at the end of my post-processing I would just blit the output to the main window, so depth wouldn't be necessary. Anyhow, thanks for any help!
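    For anyone who finds this later: as the follow-up in item 1 says, the fix turned out to be the texture min/mag filters. A minimal sketch of a depth-texture setup that reaches completeness might look like this (the GL_NEAREST filters and the GL_DEPTH_COMPONENT/GL_FLOAT upload format are my assumptions here, not the exact final code):

        // Minimal sketch, not the exact final code: a depth texture that satisfies
        // FBO completeness. The key detail (per item 1) is setting the min/mag
        // filters - the default GL_NEAREST_MIPMAP_LINEAR min filter leaves a
        // mipmap-less texture incomplete, which can surface as an FBO error.
        glGenTextures( 1, &depthbuf_id );
        glBindTexture( GL_TEXTURE_2D, depthbuf_id );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
        // The external format must be the unsized GL_DEPTH_COMPONENT (not
        // GL_DEPTH_COMPONENT16), and GL_FLOAT is a safe pixel type for a null upload.
        glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, w, h, 0,
                      GL_DEPTH_COMPONENT, GL_FLOAT, 0 );
        glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthbuf_id, 0 );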
  3. Hi all - Sorry if this is answered somewhere, but I've been Googling around for a while, and I haven't found an explicit answer anywhere. It doesn't help that I'm still wrapping my head around exactly how MSAA works in the hardware. I'm working on adding FBO support to my engine. I know that support for MSAA opens up a pretty big can of worms, especially on Windows, where you need to create a dummy window to get the extension function that sets up the correct backbuffer format for the main window (yuck). I was wondering: when using FBOs, could you set up a non-multisampled window (primary backbuffer), still create MSAA-enabled renderbuffers in external FBOs, and then blit them back to the primary buffer? Then you could do post-processing, etc., on the resolved buffer (less expensively). Would this avoid the need for the weird dummy-window trick? Even if it is possible, would it completely waste the benefits of MSAA? The sketch below is roughly what I have in mind. Thanks for any help!
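    To make the question concrete, here's the flow I'm imagining (a sketch only; fbo_id is a multisampled FBO from a setup like the one in item 2, and w/h are the window dimensions):

        // Sketch of the idea: render into a multisampled FBO, then resolve it
        // into the (non-multisampled) default framebuffer with a blit.
        glBindFramebuffer( GL_FRAMEBUFFER, fbo_id );
        // ... draw the scene ...

        glBindFramebuffer( GL_READ_FRAMEBUFFER, fbo_id );
        glBindFramebuffer( GL_DRAW_FRAMEBUFFER, 0 ); // default framebuffer
        glBlitFramebuffer( 0, 0, w, h, 0, 0, w, h,
                           GL_COLOR_BUFFER_BIT, GL_NEAREST );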
  4. Ok, I think I found the problem. It had to do with my use of indices. In my rendering code, I was assuming unsigned short indices, and the screen meshes were trying to use unsigned bytes (smacks forehead). I'll follow up when I'm positive that I've found the problem. I'm a little disappointed that glGetError() didn't help me catch this, but I guess it can't make guarantees about size checking and stuff like that... So, if anyone is reading this, do they have any debugging tips in general for dealing with the dreaded "it just isn't showing up" graphics bug?
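    For the record, the mismatch looks something like this (a simplified sketch; the real code wraps this in the mesh class):

        // Sketch of the bug: the index buffer holds unsigned bytes, but the draw
        // call tells GL to read unsigned shorts, so GL walks off into garbage
        // indices without raising any GL error.
        unsigned char indices[] = { 0, 1, 2, 2, 3, 0 };
        glBufferData( GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW );

        // Wrong: type doesn't match what's actually in the buffer.
        glDrawElements( GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0 );
        // Right: tell GL the real index type.
        glDrawElements( GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, 0 );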
  5. Hi all - Ok, so this is difficult to explain adequately, I think. I recently/finally changed over the mesh system in my engine to use VBOs. (It's been a long time coming.) For most things, it's working great: 90% of my meshes render exactly the same way as they did before, all shaders bound appropriately, etc., plus a nice little performance boost. However, my screen quad meshes all broke. I'm not sure exactly what happened, but since the switch to VBOs, my screen quad meshes simply don't render. The screen quads themselves are classes that hold instances of the actual meshes, which of course now use the VBOs. Here's the thing - none of the shaders have changed, none of the numbers/code for positions or UVs have changed; the only thing that has changed for the screen quads is the need to Map and Unmap around filling the buffer data (roughly the pattern sketched below). However, the screen quad meshes simply don't render (or at least, they're not visible). I've been hunting for the actual bug for nearly 2 days now, and I'm at my wits' end. glGetError hasn't told me anything, stepping through the code in the debugger shows that all the variables are legal and valid, and there's nothing to show that anything is broken, except that I can't get the post-VBO meshes to draw screen quads. The meshes use a pass-through vertex shader (I use clip-space coords for the verts) and, for now, just a pixel shader that outputs red, but I'm still not seeing anything. I'm testing everything in an old app that used the old system, and I didn't change any of the application rendering code, so it's not depth or alpha test changes, or anything like that. There's a bunch of code in the overall system, so I've abstained from posting it here. I'm just wondering if anyone has run into similar problems, or could offer up any good debugging techniques for when the mesh simply "isn't showing up", despite no compiler complaints, no debugging errors, etc.
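    For context, the fill pattern the screen quads now go through is roughly this (a sketch; Vertex, verts, and vertCount are placeholders, and my real code goes through a mesh abstraction):

        // Sketch of the Map/Unmap fill path. One classic failure mode here: if
        // glUnmapBuffer returns GL_FALSE, the buffer contents are undefined and
        // nothing draws - worth checking the return value.
        glBindBuffer( GL_ARRAY_BUFFER, vbo_id );
        glBufferData( GL_ARRAY_BUFFER, vertCount * sizeof(Vertex), 0, GL_STATIC_DRAW );
        void* dst = glMapBuffer( GL_ARRAY_BUFFER, GL_WRITE_ONLY );
        memcpy( dst, verts, vertCount * sizeof(Vertex) );
        if ( glUnmapBuffer( GL_ARRAY_BUFFER ) == GL_FALSE )
            printf( "buffer contents lost, refill needed\n" );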
  6. Hi all - I'm trying to do something which seems fairly simple, but is probably wrought with peril. I'm working on a TweakVar serialization system, and I'm starting with some templated code to print a value to a buffer. There's a little craziness because I'm really doing all the work through a templated wrapper class. The idea is that people can provide their own specializations if they want to serialize new primitive types. This is supposed to stay a little simple - it's only for tweakvars, not for, say, the entire gamestate, which supports far more complex types and such...

        // pre-declaration
        template <class T> void PrintVal( const T* dat, char* buf, unsigned int bufSize );

        // trimmed code for explanation
        class ITweakVar
        {
            virtual void Print( char* buf, unsigned int bufSize ) = 0;
        };

        template <class T>
        class TweakVarImpl : public ITweakVar
        {
            T* pRuntimeVal; // set in a global ctor at startup
            void Print( char* buf, unsigned int bufSize )
            {
                PrintVal<T>( pRuntimeVal, buf, bufSize );
            }
        };

        // specializations
        template <>
        void PrintVal<float>( const float* dat, char* buf, unsigned int bufSize )
        {
            unsigned int len = sprintf( buf, "%f", *dat );
            assert( len < bufSize - 1 );
        }

        template <>
        void PrintVal<int>( const int* dat, char* buf, unsigned int bufSize )
        {
            unsigned int len = sprintf( buf, "%d", *dat );
            assert( len < bufSize - 1 );
        }

    The ITweakVar class is some autoregistered/autolisted craziness that is tabulated so all the tweakvars can be dumped to a config file through a single function call (in this case, by calling ITweakVar->Print()). The problem comes when a tweakvar is set to a char* (for use in strings, etc).

        template <>
        void PrintVal<char*>( const char** dat, char* buf, unsigned int bufSize )
        {
            unsigned int len = sprintf( buf, "%s", *dat );
            assert( len < bufSize - 1 );
        }

        /* The above doesn't compile, generating:
           "void PrintVal<char*>(const char**, char*, unsigned int) is not a
            specialization of a function template" */

    Hmm, so function specializations of pointer types seem to be problematic. I'm considering the possibility of just using overloads instead of specializations, but I kinda prefer the template syntax - it just seems clearer, and seems less likely to generate unintended incorrect type conversions. Thanks for any help!
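    In case it helps anyone else who hits this: I believe the root of the error is the const placement. For T = char*, the parameter const T* means "pointer to const T", which is char* const*, not const char**. So the specialization should be accepted when spelled that way - a minimal sketch:

        // Sketch of the fix: for T = char*, "const T*" reads as "pointer to
        // const (char*)", i.e. char* const*, so the specialization's first
        // parameter has to be spelled that way to match the primary template.
        template <>
        void PrintVal<char*>( char* const* dat, char* buf, unsigned int bufSize )
        {
            unsigned int len = sprintf( buf, "%s", *dat );
            assert( len < bufSize - 1 );
        }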
  7. Stoic

    md2 model loading

    Hi - It's been a while since I've looked at this stuff in my own system, but I seem to recall that MD2 stores its UV coordinates in image space with the origin in the upper left (pixel start), rather than the lower left (which OGL uses). The code to fix this would look something like:

        foreach (UV in Mesh) { UV.v = 1 - UV.v; }

    I can't be sure this is your problem, but it's a common one. Good luck.
  8. Hi all - I've been writing shaders for quite some time, but I'm still a little bit confused about batching and what the driver does when shader parameters change and shaders are (re)bound, etc. (I'm using Cg as my shader solution, on PC.) Let me give two relatively simple examples: a simple texture change on static meshes, and different instances of animating characters in a vertex shader. The simplest case is that I have two boxes with the same lighting effects and environment, but different textures (like a basic "static mesh effect"). Let's say that I have my static meshes sorted and stored in a single list or array (and I know they're all opaque, etc). What is the appropriate way to handle rendering?

        BindStaticMeshShader()
        foreach (static mesh)
        {
            SetTextureParam( this mesh's texture )
            SubmitMesh
        }

    Or do I have to do this?

        foreach (static mesh)
        {
            SetTextureParam( this mesh's texture )
            BindStaticMeshShader()
            SubmitMesh
        }

    It seems to be basically the same problem for handling animation in the vertex shader. Let's say that I'm doing a morph-target-style vertex blend in a vertex shader. Say I have a "blob" entity in my game that I'm instancing, but each blob is at a different point in the animation (so 't' values would be different for each instance of the entity). For skinning, it would be an even bigger deal, since you have that huge array of bone vectors to pass in. Can I just bind the shader a single time, then use the Cg runtime to update the values of the parameters in between submissions? Or do I have to set the parameters, then rebind the shader? I guess I'm confused by not understanding how and where compiled shaders are stored and treated by the driver. What actually happens when you change a parameter? Does it patch it into a compiled version of the shader and upload it to the graphics card, or does something else happen? Can someone try to explain this to me? Thanks for any help!
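    To make the first option concrete, here is roughly what I'm asking about in real Cg runtime calls (a sketch; profile, program, meshes, glTexId, and DrawMesh are placeholders from my engine, while the cg* functions are the actual API):

        // Sketch of option 1 with the Cg runtime: bind the program once, then
        // update only the texture parameter between draws.
        cgGLEnableProfile( profile );
        cgGLBindProgram( program );
        CGparameter texParam = cgGetNamedParameter( program, "diffuseMap" );
        for ( size_t i = 0; i < meshes.size(); ++i )
        {
            cgGLSetTextureParameter( texParam, meshes[i]->glTexId );
            cgGLEnableTextureParameter( texParam );
            DrawMesh( meshes[i] );
        }
        cgGLDisableProfile( profile );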
  9. Hi all - I'm doing some quick research and upgrades to my old game engine, and I want to change everything over to a more modern OpenGL approach. I'm using Cg as my shader solution, and I'd like to (finally) switch over to using Framebuffer Objects to manage render targets. It all seems fairly straightforward, with one minor exception: sampling the depth buffer like a texture in a pixel shader. Is there a standard way to do this? I see that the depth buffer is usually referred to as a "renderbuffer" instead of a "texture," which means it doesn't have a GL texture object ID and can't be used as a texture. OpenGL has become pretty huge over the years, and I know there are plenty of ways to force the issue, such as copying the depth buffer into a texture (seems slow), storing depth out directly from the shader in a free channel (low dynamic range, have to change lots of shaders), or using MRT (slow, and I'd potentially have to change shaders). I guess I'm a little confused. I've written some shaders where somebody else handed me the depth buffer, but that was on a console, so I'm not sure if it's possible on a standard Windows PC. It would be really nice to have, since so many post-processing effects count on having access to depth information. Anyhow, thanks for any help!
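    One direction that looks promising (hedging, since I haven't verified it on my hardware yet): attach a depth *texture* to the FBO instead of a depth renderbuffer. Then it has an ordinary texture ID and can be bound in the post-process pass. A rough sketch, with illustrative names and sizes:

        // Rough sketch: use a depth texture as the FBO's depth attachment so it
        // can be sampled later like any other texture.
        GLuint depthTex;
        glGenTextures( 1, &depthTex );
        glBindTexture( GL_TEXTURE_2D, depthTex );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
        glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0,
                      GL_DEPTH_COMPONENT, GL_FLOAT, 0 );
        glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                GL_TEXTURE_2D, depthTex, 0 );

        // ... render the scene into the FBO, then in the post-process pass:
        glBindTexture( GL_TEXTURE_2D, depthTex ); // sample it like a normal texture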
  10. Stoic

    Audio in Direct X 9

    Quote: Thanks for the suggestion I'll look into FMOD. However I notice alot of things being mentioned here DirectSound, XACT, FMOD and I'm curious as to the differences and advantages.

    A brief summary (and not super accurate):

    Direct Sound: Microsoft's low-level API for communicating with sound cards and drivers. It plays the same role for sound cards that OpenGL and Direct3D play for graphics cards. You have lots of control, but, as with any low-level programming, you have to know a lot and write a lot of code to get even simple stuff working. There are some subtle disadvantages, such as the fact that it doesn't have native support for MP3, etc. However, as described below, most other middleware and tools are written on top of it.

    XACT: Microsoft's toolset for authoring and scripting sound for games. It has native runtime bindings (which run on top of Direct Sound) which you include in your project, and then you use the XACT tool to author sound and script it for your game. The whole process can be rather unintuitive for a new programmer, since the final purpose of the whole thing is for sound designers (rather than programmers) to be able to author and script sounds in professional environments (like at Bungie). In short, it's much higher level than Direct Sound, to the point where it's a completely different process for getting sound into your game. But it is still built on top of Direct Sound.

    FMOD: FMOD is a cross-platform sound API which has been around for a long time (somewhere in the neighborhood of 10 years). On Windows, it is also built on top of Direct Sound, but on, say, PS3, it's built on top of Sony's internal libraries, with the same interface. As far as programming goes, it's still fairly low level, but it eliminates some of the pain of buffer memory management and adds useful features such as MP3 support. Direct Sound is largely about managing memory buffers full of sound sample data and sending them to and from the sound card drivers. FMOD does this stuff for you, giving you functions like "int LoadSound('fname')" and "void PlaySound(int handle)". (Update: FMOD Ex is a little more object oriented, but it's the same basic idea.)

    There are some other sound middleware options, like SDL_mixer (which won't work for you, because you aren't using SDL - it's OpenGL based) and OpenAL, which is functional but not as popular as the above options. It all boils down to preference, but for most beginning programmers, I seriously recommend FMOD. It's easy to use, fast, and powerful, especially when you just want to get a background music loop playing and have some sound effects in your game. I'm not the best programmer in the world, and I was able to get FMOD running in my game in less than 2 hours. One of my friends, who's hardly a programmer at all, was able to get it running in less than 3 (my solution was better engineered, though :) ).

    The big things you will probably need to learn as a complete beginner in sound programming are the relationships between sounds and channels, and streaming. FMOD does a pretty good job of smoothing a lot of this stuff over, in my opinion. You can't ignore any of it, but it has good default options you can just use if all you want is to get some sounds playing in your game. In Direct Sound, you'd have to be more aware of this kind of thing, and you'd have to write more code to manage it. In XACT, it would probably still be mostly automanaged, but then getting a sound to play would be more complicated as a whole than simply having an MP3 set aside and calling LoadSound()/PlaySound(). Uh, anyhow... hope all that helps! :)
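    To give a flavor of how little code FMOD needs, a minimal FMOD Ex sketch looks something like this (written from memory, so treat it as illustrative rather than exact):

        // Minimal FMOD Ex sketch: create the system, load an MP3, play it.
        #include <fmod.hpp>

        int main()
        {
            FMOD::System* sys = 0;
            FMOD::System_Create( &sys );
            sys->init( 32, FMOD_INIT_NORMAL, 0 ); // 32 channels, default flags

            FMOD::Sound* music = 0;
            sys->createSound( "music.mp3", FMOD_DEFAULT, 0, &music );

            FMOD::Channel* channel = 0;
            sys->playSound( FMOD_CHANNEL_FREE, music, false, &channel );

            // In a real game you'd call sys->update() once per frame.
            while ( true ) { sys->update(); }
            return 0;
        }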
  11. Stoic

    Audio in Direct X 9

    Just to throw my hat in: if you're writing in C++, I would highly recommend FMOD. It's an easy API to program with, it has lots of good features and a pretty responsive community, and it's free for small noncommercial products. And no, I don't work for them. :) I've used FMOD for all my projects, and I've had good luck with it.
  12. @Zhalman - Hmm... that works. I was hoping to avoid needing a macro for every member variable, but there are some advantages to that. I like the idea of being able to just do a mapping of names to types like that, so I'll keep a note. In terms of usefulness, I'll explain below. @Julian90 - That's super awesome... I think that's pretty close to exactly what I need. Filling in the extra information - my templated MemVarInstance classes all derive from a common base class that serves as an interface to the MemVarInstance, plus it allows them all to autolist by type. I have an AutoLister mixin class, which the base class derives from, that adds the MemVarInstance to a list in its constructor. I also have similar macros for registering full-on classes, which the MemVars may wind up accessing later. The ultimate goal for all this is to use it for auto-generating serialization code, network replication, and binding to scripting languages. It looks kind of like this:

        class IMemVar
        {
        public:
            virtual size_t SizeOf() = 0;
            virtual size_t OffsetOf() = 0;
            // this breaks the "pure virtualness", but leads to less templated code
            const char* name;
        };

        // This class creates a Type for the MemVars to AutoList to
        //  - wish I could get rid of the MI, but it isn't really harmful here
        template < class MemberOf >
        class BaseMemVar : public IMemVar, public AutoLister< BaseMemVar< MemberOf > >
        {
        public:
            BaseMemVar() {}
        };

        // This is the same as the "MemVarInstance" thing I was talking about above
        template < class MemberOf, class MemberType >
        class MemVarImpl : public BaseMemVar< MemberOf >
        {
        public:
            MemVarImpl( const char* _name, size_t Offset ) : offset( Offset ) { name = _name; }
            size_t SizeOf() { return sizeof( MemberType ); }
            // offsetof is used as part of the registration macro
            size_t OffsetOf() { return offset; }
            size_t offset;
        };

    I also have some external functions which search through the (auto)lists and can generate pointers, etc., based on string names - hence the serialization, scripting, etc. As an optimization later on, I was planning on generating TypeIDs after startup by iterating through the lists (after sorting, to get around ordering uncertainties between compilers, etc.). The idea is that other people will simply create client classes, then expose them to this system by using the macros in the .cpp file. This will give them all the functionality of the autosystem. I'm also trying to leave some hooks for people to override the autogeneration system (probably through explicit template specialization of their class) to write their own optimized or specialized versions of serialization or whatever. Thanks a bunch for the suggestion! I'll have to try it out after work tonight!
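    For completeness, the AutoLister mixin mentioned above is the usual intrusive-linked-list trick; my version is close to this sketch (simplified - no unregistration, not thread-safe):

        // Sketch of the AutoLister mixin: every instance of a T-derived class
        // links itself into a per-type static list at construction time.
        template < class T >
        class AutoLister
        {
        public:
            AutoLister() { m_next = s_head; s_head = static_cast<T*>( this ); }
            T* Next() const { return m_next; }
            static T* Head() { return s_head; }
        private:
            T* m_next;
            static T* s_head;
        };

        template < class T > T* AutoLister<T>::s_head = 0;

    Walking the list is then just for ( T* p = T::Head(); p; p = p->Next() ), which is what the external search functions do.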
  13. Hi all - Warning, silly template metaprogramming to follow... I'm doing some crazy macro-driven custom RTTI stuff, and I'm trying to extract the type of a class's data member from the class name and the member name. I want to use the extracted type to create a global instance of a templated class, so it all has to happen at compile time. My ultimate goal is to do something like this:

        #define RTTI_MEMVAR_REG( MemberOfClassName, memberName ) \
            namespace { MemVarInstance< MemberOfClassName, MemberType > VarInst_##memberName( #memberName ); }

        // Then it could be used like this...

        // TestClass.h
        class TestClass
        {
        public:
            float f; // first problem is that this has to be public
        };

        // in TestClass.cpp
        RTTI_MEMVAR_REG( TestClass, f );

    It's proving a lot more difficult than I expected. I had really high hopes for this approach:

        template <class T> class ExtractMemberType;

        template < class MemberOf, class MemberType >
        class ExtractMemberType< MemberType MemberOf::* >
        {
            typedef MemberType result;
        };

    The above classes compile, but unfortunately, that doesn't seem to solve my problem.

        int main()
        {
            printf( "%u\n", sizeof( ExtractMemberType< &TestClass::f >::result ) );
            // Won't compile - "expected Type as template argument to ExtractMemberType"
        }

    The problem I'm running into is that I can't declare the type of the member data pointer without knowing the type of the member, which I'm having the darnedest time coming up with (at least in a fashion that could be used as a template parameter). Here's another idea, but I don't know how to get it into the template parameter I need:

        &( ((ClassName*)(NULL))->MemberName )
        // This expression should have the correct type, but I can't use it as a
        // template parameter.

    I could pass the expression into a (runtime) function, but unfortunately that doesn't help me create the variable I need. Hmm... maybe a static local variable in the function instead of the global... Anyone have any other ideas? Thanks in advance for any help!
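    For reference, one standard pre-C++11 workaround in the spirit of what item 12 above describes (this is my reconstruction, not Julian90's actual code): let a function template deduce the member type from a pointer-to-member argument, so the macro never has to name the type at all. MakeMemVar is an illustrative name; MemVarImpl/BaseMemVar are from the snippet in item 12.

        // Sketch: the function template deduces MemberOf and MemberType, so the
        // macro only needs the class name and member name. The returned object
        // is intentionally leaked - it lives for the life of the program.
        #include <cstddef> // offsetof

        template < class MemberOf, class MemberType >
        BaseMemVar< MemberOf >* MakeMemVar( MemberType MemberOf::* /*unused*/,
                                            const char* name, size_t offset )
        {
            return new MemVarImpl< MemberOf, MemberType >( name, offset );
        }

        #define RTTI_MEMVAR_REG( ClassName, memberName ) \
            namespace { void* VarInst_##memberName = \
                MakeMemVar( &ClassName::memberName, #memberName, \
                            offsetof( ClassName, memberName ) ); }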
  14. Stoic

    A game loop idea.. Stack..

    Just to add my 2 cents - I've spent a lot of time thinking about this over my career, and like most design decisions, I've come to the conclusion that there isn't really one "right" answer. Like everything else, it depends on what your goals are for the game you're working on. What Antheus said is totally true: decentralized models of execution for handling your game's computation are really useful for parallel architectures. But they come at some high costs.

    On my first game, I used some function pointer mechanisms to control logic flow, and it started to get really hairy. Debugging and tracing program flow became far more difficult, and we ended up regretting the decision in the end. For my second game, we went in the exact opposite direction and just coded everything linearly. Of course, this was really inflexible, and by the time we were adding triggers and more complex effects, the design didn't hold up very well.

    Three more projects later, I'm using a hybrid approach. I have a simple game loop function plus a special "ProcessManager" class which supports a more flexible way of registering new things to happen each frame. That way, I can control the order in which the big, nasty subsystems get updated, and my little one-off effects can be dynamically added to the update path with very little hassle, if I'm willing to pay a small performance cost for them. It's working pretty well for us so far.

    Basically, the ProcessManager just maintains a list of "BaseProcess" pointers, where BaseProcess has a virtual Update() function and an "ImDone" boolean which the ProcessManager checks every frame to kill processes off (see the sketch below). Child classes of BaseProcess just need to set ImDone to true to have the ProcessManager clean them up. Any subsystem that wants to register new processes can take the ProcessManager in as a parameter and add processes to it. Then the hardcoded update loop can decide when the ProcessManager actually gets updated, thereby controlling when all the processes get updated.

    The solution isn't rock solid (handling parameters to the BaseProcess Update() function is a bit of a dirty hack), but with some care, we're getting most of the benefits of both worlds - easy traceability on the big, complex subsystems, and dynamism on small, simple event-type processes. A huge benefit is integration with our level tool: designers can place level-specific processes using our level-building tool, and our ProcessManager can create them when the level is loaded, but we still get to code most of the "techy" parts of the game in a straight procedural fashion. Uh... sorry if that got long-winded. Hope it helps, though! :)
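    A stripped-down sketch of the BaseProcess/ProcessManager pair described above (simplified; the real version also has to deal with parameters to Update(), which is where the "dirty hack" lives):

        #include <list>

        class BaseProcess
        {
        public:
            BaseProcess() : ImDone( false ) {}
            virtual ~BaseProcess() {}
            virtual void Update( float dt ) = 0;
            bool ImDone; // child classes set this to true to get cleaned up
        };

        class ProcessManager
        {
        public:
            void Add( BaseProcess* p ) { m_procs.push_back( p ); }

            // Called from the hardcoded game loop, which controls when all the
            // registered processes run relative to the big subsystems.
            void Update( float dt )
            {
                for ( std::list<BaseProcess*>::iterator it = m_procs.begin();
                      it != m_procs.end(); )
                {
                    (*it)->Update( dt );
                    if ( (*it)->ImDone ) { delete *it; it = m_procs.erase( it ); }
                    else ++it;
                }
            }
        private:
            std::list<BaseProcess*> m_procs;
        };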
  15. Stoic

    Comparing Quaternions

    @SiCrane - yeah, as emeyex said, the abs was meant to account for the fact that q and -q represent the same orientation. I probably should have stated that in my original post... I'm not sure, however, whether quats that are "close" to -q are also "close" to q. I would assume they are, since if q' is close to -q, then -q' is close to q... Your method of rotating a vector by the two quats would definitely work, although it would be more expensive. I would also assume that my function doesn't increase linearly as the rotations get more similar (while SiCrane's would)...
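    For reference, the comparison being discussed is essentially this (a sketch; the "close enough" threshold is up to the caller, and the Quat struct is illustrative):

        #include <cmath>

        struct Quat { float x, y, z, w; };

        // Abs-of-dot comparison: q and -q encode the same orientation, so take
        // |q1 . q2|. For unit quats this equals cos(theta/2), where theta is the
        // relative rotation angle - close to 1 means "nearly the same", but note
        // it is not linear in the angle (as mentioned above).
        float QuatSimilarity( const Quat& a, const Quat& b )
        {
            float d = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
            return std::fabs( d ); // assumes both quats are unit length
        }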