


Community Reputation

134 Neutral

About KiroNeem

  1. Crash on reference obj method call OSX x64

    The configuration you suggested does solve the issue; below is the new registration.

[code]
ret = engine->RegisterObjectType("Vector", sizeof(GtVector), asOBJ_VALUE | asOBJ_APP_CLASS_CA | asOBJ_APP_CLASS_ALLFLOATS); assert( ret >= 0 );
[/code]

    Looking at my vector class, I did have a default constructor and assignment operator, but no explicit copy constructor or destructor. I believe part of my problem was confusion about what I was passing in. I had assumed that the types being passed in for the constructor/destructor/etc. dealt with what I was going to register on the class with the engine, not the actual class itself. I now understand the separation of these two concepts. I appreciate the help, thank you very much!
  2. It has been my default assumption that I may have done something wrong. After a few days of looking into the issue I came to an impasse that might need info from people who are more familiar. The issue is fairly simple: I have a registered reference type, and in the script I am calling a method on it. In C++ the code asserts because the object pointer is not correct. Below is the information I've gathered about the crash.

System: Mac OS X 10.8.2
IDE: Xcode 4
Architecture: 64-bit
Compiler: Apple LLVM 4.2 (default); also occurs with LLVM GCC 4.2
Config: Appears in both Debug and Release builds
AngelScript: 2.26.1

- The issue appears to be related to functions which return a specific registered value type (Vector)
- Other method calls on the same object work fine
- Multiple object types suffer from this crash
- Occurs with both direct method calls and method wrapper functions
- The pointer to the object stored in AngelScript is correct
- I have also tracked the object pointer up to the assembly code in x64_CallFunction, and it appears to be fine until then
- The same codebase works fine on Windows with VS2010; I'm currently porting the code to OS X
- I've attempted to re-create the issue with a new value type class and new methods, but those work fine

For reference, an example configuration of the classes in question (not the full config). Again, this code works fine on the Windows side, so I think the issue is deeper.
[code]
//This is the suspect value type
ret = engine->RegisterObjectType("Vector", sizeof(GtVector), asOBJ_VALUE | asOBJ_APP_CLASS_CDAK); assert( ret >= 0 );

//Example of object method that fails
ret = engine->RegisterObjectType("Entity", 0, asOBJ_REF); assert( ret >= 0 );
ret = engine->RegisterObjectMethod("Entity", "Vector getOrigin()", asMETHOD(Entity, getOrigin), asCALL_THISCALL); assert( ret >= 0 );

//Another example of object method that fails
ret = engine->RegisterObjectType("Level", 0, asOBJ_REF | asOBJ_NOCOUNT); assert( ret >= 0 );
ret = engine->RegisterObjectMethod("Level", "Vector getBlockOrigin(const Vector& in)", asFUNCTION(_Level_getBlockOrigin), asCALL_CDECL_OBJFIRST); assert( ret >= 0 );
[/code]

Example script code that will break:

[code]
Entity@ entity = CreateEntity("test"); //Works
entity.setOrigin(Vector(1,1,1));       //Works
Vector vector = entity.getOrigin();    //Fails: Entity::getOrigin() is called, but the obj pointer is not right
[/code]

At this point I believe the issue may involve my compile configuration and/or as_callfunc_x64_gcc.cpp. I'm not well versed in assembler, however, so I'm kind of stuck on what to do. Any information anyone can provide is welcome. If there is more information I can give about my issue, please let me know.
  3. Serverside movement processing

    I agree with the method Sirisian describes, and have used it several times in my own games. If you are looking at tile-based movement, 10 updates a second will be more than enough for your needs; this update rate will also work fine for more action-based games if you add an interpolation method on the client to smooth the movements.
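To illustrate the client-side smoothing mentioned above, here is a minimal sketch of interpolating between the two most recent server snapshots. The `Vec2` type and function names are illustrative, not from any particular engine:

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// Linear interpolation between two values; t is in [0, 1].
inline float lerp(float a, float b, float t) {
    return a + (b - a) * t;
}

// Blend between the previous and next server snapshot positions.
inline Vec2 interpolate(const Vec2& prev, const Vec2& next, float t) {
    return { lerp(prev.x, next.x, t), lerp(prev.y, next.y, t) };
}
```

With 10 updates a second, snapshots arrive every 100 ms; each render frame computes `t = (now - prevSnapshotTime) / 100.0f` and draws the blended position instead of snapping to the latest update.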
  4. I'm using a custom scene graph based off [url="http://www.geometrictools.com/"]Wild Magic[/url]. My scene graph was intended to be used with a fixed-function graphics pipeline, but my requirements have evolved; now I want a shader-based implementation. It has been a big transition to move away from the OpenGL fixed-function pipeline: it has changed the scene graph a lot and I've had to learn a lot of new techniques. My main concern with the scene graph is supporting pseudo-composite shaders, so that I'm not having to write a custom shader for every variation that I want. Obviously OpenGL does not support this directly, but I've read a few articles suggesting that you can achieve similar functionality by using a skeleton master shader. The technique turns parts of one main shader on and off depending on what other shaders are added to it at compile time. I.E.

Main Shader
[code]
void main(void)
{
    vec4 color;
#ifdef LIGHTING
    color = lightingFunction();
#else
    color = vec4(1.0, 1.0, 1.0, 1.0);
#endif
    // etc...
}
[/code]

Module Shader
[code]
#define LIGHTING
vec4 lightingFunction()
{
    // Some lighting functionality...
}
[/code]

I know the above technique works; I've used it before and it provides that modular functionality that helps reduce the number of unique shaders you need to write. The issue is that my scene graph doesn't really know or care about this technique, and I want to change it around to better facilitate this. Below is how I suggest I will do it. Each scene node has a list of "shader" decorators; these decorators are propagated down to the geometry leaf nodes during a render-state update pass. This means each leaf node knows all of the shaders attached to it in the hierarchy. Using this list of shaders, each geometry piece can then compile its own program that defines exactly how it will render. I know this technique will work, and it solves some unique issues....
Up Side
- Allows the creation of modular/composite-like shaders inside of the scene graph
- Allows me to easily add and manage a shader in a node rather than on all individual leaves

Down Side
- Possible slowdown from creating many programs
- Possibly many programs created which are completely identical
- No verification that a particular shader is compatible with the master skeleton shader

At the moment I feel the above solution, although it has some downsides, is going to be the best for what I want to do. But I want to know what people think of it, and if they have solved some of these issues in a different way I would love to hear about it. Unfortunately, there is not a lot of great information about these more advanced topics that doesn't get weighed down with specifics. So tell me what you think, if you do something different, or if you think I'm missing something.
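As a rough illustration of the leaf-node composition step described above, a leaf could concatenate its collected module snippets ahead of the master source before handing the result to the GL compiler. `composeShader` is an illustrative name, not part of the original design:

```cpp
#include <string>
#include <vector>

// Build one composite shader source: every module snippet (its #define
// plus its functions) is prepended to the skeleton "master" shader, so
// the master's #ifdef blocks see the modules' defines at compile time.
std::string composeShader(const std::string& master,
                          const std::vector<std::string>& modules) {
    std::string src;
    for (const std::string& m : modules)
        src += m + "\n";   // e.g. "#define LIGHTING\nvec4 lightingFunction() {...}"
    src += master;         // skeleton shader containing the #ifdef switches
    return src;
}
```

Note that glShaderSource also accepts an array of strings, so in practice the pieces could be passed without concatenation. Using the sorted module list as a cache key would also let identical programs be shared between leaves, which addresses the duplicate-program downside listed above.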
  5. There are many GUI libraries out there, each with their own strengths and weaknesses. I'm looking for people's opinions on available GUIs that can be skinned. This is a major deciding factor for my current project, and I would love to hear other people's experiences. For the project we are going to be using C++ and OpenGL. However, if you know of or have experience with a good skinnable GUI using a different back end, I would still like to hear about it.
  6. OpenGL Fast blit using GDI and PBO

    Quote: Original post by AndyEsser
"Why not use OpenGL to render directly into the Window?"

Good question. I wish that were possible for my situation, as it would make things much simpler. I will be happy to explain.
- The offscreen OpenGL context is being used for rendering different scenes to multiple windows. This is a requirement of the application.
- Some non-OpenGL windows in the application are created with WS_EX_LAYERED, which I was disturbed to find does not interact well with standard OpenGL windows. You get horrible flickering and incorrect layering of windows when the two touch each other.

Quote: "Setup a child window and use OpenGL on that like you would making any other OpenGL application. Check out frame buffer objects, I believe they do pretty much the same thing, just faster."

The child window idea wouldn't work for what I need. The issue is not embedding OpenGL in another window, but transferring the data more efficiently. Frame buffer objects allow you to render OpenGL to an offscreen buffer of memory on the GPU. I actually already use them in the application, but not for the purpose above. At the end of the day I still need to get the data from the GPU into the CPU.

Quote: "Note that you can have OpenGL render to anything with an HWND. This includes Pictureboxes and many other Windows Controls."

Maybe I'm confused by your response, though I'm unsure how that would help. Rendering to a window is not an issue. There are several reasons I'm unable to use the standard WGL implementation.
  7. This seems like the most appropriate place to ask this question, though it involves two different APIs: OpenGL and Windows GDI. I'm attempting to improve the speed of a custom operation. My implementation works, but at my target resolution it's a bit too slow. I'm using an unseen OpenGL context to render my scene, reading back the data with glReadPixels into a DIB, then using that to BitBlt to a window:
- Render scene
- glReadPixels into DIB data
- BitBlt into window

For a 2560x1600 frame, the glReadPixels takes about 33ms and the BitBlt takes 3-4ms. I did some research and learned that a PBO might be able to speed up my operation. I tested the speed, and the actual readback was about 10ms faster, but the problem is that the PBO puts the data into its own memory location. I can copy the data from there to the DIB, but then I lose the speedup I just gained. I would copy the PBO data directly to the window, but GDI does not seem to provide a good function to do so. So here are some specific questions for anyone who might know.
- Is there a way to have a PBO map into my own allocated memory? I assume not.
- Is there a better way to blit the data to the window?
- Is there a way to directly change the bits pointer of a DIB bitmap? If so, this might also solve the issue, as I can intelligently swap them around.
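For readers finding this thread: a common way to hide (rather than eliminate) the readback cost is to double-buffer two PBOs, so that while this frame's glReadPixels streams into one PBO, the previous frame's pixels are mapped and copied out of the other. This is a sketch only; it assumes an active OpenGL 2.1+ context with ARB_pixel_buffer_object and a GL loader, so it is not runnable standalone, and `dibBits` stands in for the pointer returned by CreateDIBSection:

```cpp
#include <cstring>    // memcpy
#include <GL/glew.h>  // or any GL loader providing GL 2.1 / ARB_pixel_buffer_object

GLuint pbo[2];
int frame = 0;

void initPbos(int w, int h) {
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, w * h * 4, 0, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void readbackFrame(int w, int h, void* dibBits) {
    int cur = frame % 2, prev = (frame + 1) % 2;

    // Start this frame's transfer; with a bound pack PBO the data pointer
    // is an offset (0), and glReadPixels returns without waiting.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[cur]);
    glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, 0);

    // Map last frame's PBO, whose transfer should have completed by now.
    // (The very first frame maps undefined data; real code would skip it.)
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[prev]);
    void* src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (src) {
        memcpy(dibBits, src, w * h * 4);  // the unavoidable CPU-side copy
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    ++frame;
}
```

To my knowledge you cannot point a PBO at your own allocation, and a DIB section's bits pointer is fixed when CreateDIBSection is called, so the memcpy stays; the win is that it now overlaps with the GPU transfer of the next frame. GL_BGRA is used because it matches the byte order of a 32-bit DIB.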
  8. Quote: Original post by Washu
Quote: Original post by Hodgman
Quote: Original post by Driv3MeFar
Quote: Original post by KiroNeem "I would assume then that the STL vector is doing runtime checking of the template type to see if it should call the constructors/destructors."
"What do you mean? It will always call an object's constructor when constructing it, and always call the destructor when destroying it. Why would you want an object that could be destroyed without a call to its destructor?"
"Primitive types don't have destructors, so the following is invalid, isn't it? *** Source Snippet Removed *** As for KiroNeem's question - if the STL is checking whether the type has a destructor or not, the check will definitely be done at compile time, not runtime."
"*** Source Snippet Removed *** Is valid. :) Check your local standard for more interesting revelations about C++."

You're right! That is magical and quite awesome. Thanks for bringing this up; it really glues together all of the pieces I had questions about. - Kiro www.wingsout.com
  9. Okay, that makes sense. I've had to use placement new before, so I'm familiar with the functionality; very cool stuff. Though I was unsure if there really was any alternative method the STL vector could have been using. Thanks for the replies. ^^ I would assume then that the STL vector is doing runtime checking of the template type to see if it should call the constructors/destructors. - Kiro www.wingsout.com
  10. It's known that when removing an object from an STL vector it will correctly call its destructor, and when the vector is deleted it will also call the destructors of the remaining objects. However, I'm a little confused about what it is doing to produce this last result. For example, say we had a vector with a size of 6 and enough memory allocated for 12 objects. When the vector "delete"s that memory, what is preventing the last 6 memory slots from trying to call the destructor for the class type? I would imagine it is doing some memory trick, but I have yet to find anything that supports this. - Kiro www.wingsout.com
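The resolution the thread arrives at (raw memory plus placement new and explicit destructor calls) can be demonstrated in a few lines. This is an illustrative sketch of the idea, not the actual implementation of any particular STL:

```cpp
#include <cassert>
#include <cstdlib>  // malloc/free
#include <new>      // placement new

// Mimics what a vector does with spare capacity: raw memory for 12
// elements, but only construct (and therefore only destroy) 6 of them.
struct Tracked {
    static int alive;
    Tracked()  { ++alive; }
    ~Tracked() { --alive; }
};
int Tracked::alive = 0;

int demo() {
    const int capacity = 12, size = 6;
    void* raw = std::malloc(sizeof(Tracked) * capacity); // no constructors run
    Tracked* elems = static_cast<Tracked*>(raw);
    for (int i = 0; i < size; ++i) new (elems + i) Tracked(); // placement new
    for (int i = 0; i < size; ++i) elems[i].~Tracked(); // explicit destructor call
    std::free(raw); // frees the memory only; it never "calls" destructors itself
    return Tracked::alive; // 0: exactly the constructed objects were destroyed
}
```

Because construction and destruction are done explicitly per element, the unused 6 slots are never touched; deallocating the raw block is a separate operation from destroying objects.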
  11. Unity Building a level builder

    "My first thought is that it looks more like a game builder than a level editor. It looks like most of your game logic will be written in the level editor, which, depending on how you divide up the functionality between editor and game, could make it a bit confusing, and possibly too tightly coupled to the specific game to be very reusable. On the other hand, since it is a shmup, this style of editor seems like it could make it very easy to extend the game and easily add a lot of levels."

I agree that this type of design is geared more towards shmups, and will most likely only be useful in that genre. Though within those bounds I see a builder like this being reusable across multiple games, since most of the generic concepts are there.

"Also, it seems a little odd not to define and use sounds in the editor, since everything else is defined and built there."

Well, I want the capability of defining what sounds exist and allowing keyframes to trigger them. I'm not sure what else there is beyond that which needs to be integrated related to audio. Is that what you were mainly referring to? - Kiro
  12. After looking at the different forum sections this seems like the most relevant; however, if you feel this would be better placed elsewhere, please feel free to let me know. I'm at the point with my own project where I would like to build complex levels which define how my game plays. The game is a standard vertical shmup, so a lot of what I need to deal with is where enemies are and how they move. The major problem is that I'm unaware of any good generic applications that solve this. So like a true nerd I decided that I would need to build my own. I'm not trying to build this as a generic tool, but mainly an internal tool that I may use for my own games. However, I'm a big advocate of releasing source code, so I will most likely let people dig through and use it without restrictions. Right now I'm in the beginning stages and am making a list of everything I would "like" to see in the editor. I've come up with a bunch of ideas, though I'm not sure if I'm missing anything or thinking about things the wrong way. So below are my initial design notes, both as something to share with the community and to get feedback from people.

Sequences
The editor and the game will be broken up into what I'm calling "Sequences". Each sequence is a data structure which defines what happens in the game when the sequence is run over time. A timeline with keyframes will be editable, telling the game what happens at the beginning, the end, and anywhere in between. I will also want more game-specific logic, for both in-game decisions and different end-of-sequence conditions. Examples of logic that might be useful:
- Reaching a time limit
- Death of all enemies on screen
- Reaching a certain score
- On player death
- On certain enemies' deaths
- Etc...

Sequences are the mid-level building block of the game.
Once a few sequences are built, you can link them together with something like a level file to define how the game is played. When the game begins you start at the first sequence; when that sequence ends you move on to the next one, until the entire level is completed.

Keyframes
Each sequence has a timeline with keyframes; when a certain keyframe is hit on the timeline, something useful happens in the game. Below are some examples of keyframes:
- Play an animation
- Spawn an enemy
- End a sequence
- Modify something on the player
- Play a sound or switch to a different music track
- Change the background

Entities
Each thing which exists in the game is an entity of some sort: a player, an enemy, a bullet, a static piece of the level, etc. In the editor you will be able to define entities and modify how they react in the game. You can create generic entities that can be used several times, or ones that are specific to a certain sequence. Entities have most of the same functionality that sequences do, such as a timeline and logic which can be applied to them. Below are some examples of keyframes that might be used:
- Fire a bullet
- Play an animation
- Play a sound
- Commit suicide

Path/Spline Editing
I have found that in shmups it is very easy to define how things work with splines. They can be used for how an enemy moves, what direction it is shooting, how bullets move through the air, etc. Therefore the editor will need the ability to edit splines and save them with the data they relate to. For my current needs I have found the Catmull-Rom spline to be computationally cheap, easy to program, and easy to modify on screen.

Scripting
Both sequences and enemies will be scriptable with a bit of logic to make their actions more dynamic and sometimes easier to reuse. The scripting language I'm using is similar in structure to BulletML; however, it has been written in C++.
Also, the concept of how it works differs in that it runs off of an input/output design. To better explain, I will give an example. I might want an enemy to move on a spline, so I will have a spline action which can be added to the enemy. That spline action does not modify the enemy's position directly; instead it just calculates the current position in time, without even knowing the enemy exists. I then add an output to that action telling it to copy the result to a particular enemy's position. After each update the spline action runs all of its output commands, which modifies the enemy's position. The purpose of this design is reuse. There could be millions of things I want a spline to control, but without this design I would need to code a specific case for each one. With the input/output design I can easily link the action to the variable without worry. Now I can use the spline for controlling many things: an enemy's rotation, the rate of fire for an enemy, the speed of a bullet, etc... Inputs work in a similar way: an input is run every update before the logic, and generally modifies something inside of the action from an outside source. An example of this might be having a firing action and inputting the angle of the enemy, or maybe inputting the angle directly from the enemy to the player.

Playback
Part of editing anything is making a change and seeing how it looks, so it will be important to have a playback function. This will allow you to either jump into the actual game, or just do a pseudo-playback of what would happen at any point in the timeline of a sequence. When you are done you can go right back to editing and making tweaks as needed.

Serialization
Once you have all of this wonderful data, you will need to save it to disk to be loaded into the game or worked on later. This is a simple step, but a needed one. I already have an XML reader and writer, so this will most likely be the format that I write to.
I cringe every time I think about the size and speed overhead compared to a binary format. However, for these kinds of top-level data structures it proves to be the easiest to use and reuse, and the speed is not much of a concern.

GUI
The editor will need some way for the person to interact with the data, and this is where a GUI comes in. I have a personal OpenGL GUI that I have been working with, and it has proven to work well for its utilitarian purposes, so at the moment this is not a concern.

Graphics
Things will be drawn to the screen with OpenGL. I've chosen this and worked with it for several years now, so I have the most experience with it, and it will allow me to compile my application on multiple platforms.

Sounds
At this point I don't see a need for sound in the editor, unless it were to be a feature of the playback system. I already have several code structures hooked up with OpenAL and the OGG format, so if it does need to be added it will be trivial.

Final notes
So that is my current plan for what I would "like" to see in the editor. Going into this, my mentality is to build something that works for my purposes and nothing more. As time goes by I'm sure I will expand the project and it will grow into a more generic tool, though to move along I must be more focused on my game and less on making a tool. If you have anything to say (ideas, design concerns, etc.) then please feel free to post back. I'm very open to hearing what people think, since I'm sure I'm not the only one who has had to think about this and/or make one of these before. - Kiro
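The Catmull-Rom evaluation mentioned under Path/Spline Editing is compact enough to show. This is the standard one-dimensional form (run once per coordinate), with names chosen for illustration:

```cpp
#include <cassert>

// Evaluate a Catmull-Rom segment between p1 and p2 at t in [0, 1],
// with p0 and p3 as the neighbouring control points. The curve passes
// through p1 at t = 0 and through p2 at t = 1.
float catmullRom(float p0, float p1, float p2, float p3, float t) {
    float t2 = t * t, t3 = t2 * t;
    return 0.5f * ((2.0f * p1) +
                   (-p0 + p2) * t +
                   (2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3) * t2 +
                   (-p0 + 3.0f * p1 - 3.0f * p2 + p3) * t3);
}
```

Because the curve interpolates its interior control points, dragging a point in the editor moves the curve through that exact position, which is what makes it easy to modify on screen.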
  13. Quote: Original post by snowfell
"Well, I am glad I am not the only one. But guys, what are the bare-bones things that keep OpenGL running? Like if a very basic OpenGL app was split down into pieces, what would they be? How do you split up everything into window creation, rendering, timing, and everything else? Let's just say I am mostly confused on what is needed to run an OpenGL app like this."

There are many different ways to break up these aspects of an OpenGL application. Below is some pseudo code that can serve as an example of one way it can be done for a simple application.

[code]
int main(int argc, char* argv[])
{
    //Init GLUT
    //Open a GLUT window
    //Register your event callbacks with GLUT
    //Init anything specific to your game
    //Init the game timer with GLUT
    //Enter into the GLUT loop
    return 0;
}

void timerHandle(void)
{
    //Do the per-frame game logic: update enemies, check game conditions, etc.
    //Draw the game
    //Register the timer again
}

void keyHandle(char key)
{
    //Check what key this is, then react in the needed way
}

void closeWindowHandle(void)
{
    //Clean up anything specific to your game
    //Exit the application
    exit(0);
}
[/code]

When the application starts you set up all the things you need: a window, your key events, the game timer, etc. Then, once you enter the GLUT main loop, your handler functions will be called when events happen, and you react to them as needed. - Kiro
  14. When I originally got into OpenGL it took me a while, but what really got me through was the amount of online resources that talk about how to use it. Below are a few resources I used to get me going.
http://nehe.gamedev.net/
http://www.lighthouse3d.com
The forums on this website. :P
Since you're just getting into everything, GLUT is the best start for getting your environment set up; it's simple in syntax and there are plenty of examples online about how to use it. I hope this helps. - Kiro
  15. [SOLVED] VBOs and glDrawElements

    I tried your code on my machine and it works perfectly. Your problem must be caused by a different factor, such as a color that cannot be seen, or your projection setup clipping it from view. - Kiro