About Kwizatz

  1. Hi!

    Just wanted to make a quick announcement: I am working on my own open source game engine at Github. It is still rather crude and not yet functional, but I'd really like to get all kinds of feedback, and, if possible, get the follower count up :smile:

    Thank you for your interest!


  2. No, that just means that the author of assimp didn't take the necessary measures to keep the generated static library's symbols from being marked as __declspec(dllexport). Skipping the extra preprocessor magic causes no immediate side effects... until you try to link an executable against two DLLs that both independently linked against the symbol-exporting static library. Always err on the side of laziness or ignorance before cleverness or evil-doing ^_^
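    The missing guard looks roughly like this; a minimal sketch, where the MYLIB_* macro names are hypothetical (assimp's actual macros differ):

```cpp
// Minimal sketch of the standard export-macro guard; the MYLIB_* names are
// hypothetical, not assimp's actual macros. A static-library build leaves
// the macro empty, so nothing is marked dllexport; only the DLL build
// itself exports, and DLL consumers import.
#if defined(_WIN32) && defined(MYLIB_BUILD_DLL)
#  define MYLIB_API __declspec(dllexport)  // building the DLL itself
#elif defined(_WIN32) && defined(MYLIB_USE_DLL)
#  define MYLIB_API __declspec(dllimport)  // linking against the DLL
#else
#  define MYLIB_API                        // static library: plain linkage
#endif

MYLIB_API int mylib_answer() { return 42; }
```

    Compiled with no defines at all (the static-library case), the macro expands to nothing and the symbol gets plain linkage, which is exactly what avoids the double-export collision described above.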
  3. Found this post while doing something similar for a scenegraph editor; hopefully it's not too late to add more to it.

    Actually, that does not seem to be entirely accurate. I think removeRows should remove (delete) scene nodes, so that you can use it for that specific purpose, say in a context menu or when pressing the delete key while a node is selected. What I found is that you should enable dragDropOverwriteMode on the view so that it does not call remove on the original indices; DragDrop vs. InternalMove seems to make no difference for me.

    Either way, the design seems to be missing something. Models deal almost exclusively with QModelIndex objects, with row, column, and internalPointer being their most important aspects, but when drag and drop comes into the picture you have to deal with MIME types, with no alternate option of just having a "drop" method taking an index or an index list. Furthermore, the information encoded in the default MIME data is row, column, and a map of what the data method returns for each role; there is no internalPointer, which is most likely what you really need in this situation.

    Here is a partial solution that overrides QTreeView::dropEvent in order to access the selected items, but I'd rather not create a new class that inherits from QTreeView just for that, so I am looking into options. So far, I guess the least intrusive option would be overriding QAbstractItemModel::itemData to include a UserRole with the item's internalPointer, so I don't have to override QAbstractItemModel::mimeData, which would be a different mess.

    Either way, you definitely need to override QAbstractItemModel::dropMimeData: the default implementation calls insertRows and tries to fill the inserted rows with setData/setItemData, which is probably not what you or I want, since my insertRows implementation calls new Node, and we don't want a new node, we want the already existing one moved. A call to moveRows would be much more appropriate, but you need to rebuild the source indexes from the MIME data.
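    A sketch of that last step, assuming a hypothetical SceneModel subclass and a hypothetical DecodeSourceIndexes helper that reads back the internalPointer stored under Qt::UserRole (this is not a drop-in implementation):

```cpp
// Hypothetical sketch: bypass the default insertRows + setData path and
// move the already-existing nodes instead.
bool SceneModel::dropMimeData(const QMimeData *data, Qt::DropAction action,
                              int row, int column, const QModelIndex &parent)
{
    if (action != Qt::MoveAction)
        return QAbstractItemModel::dropMimeData(data, action, row, column, parent);

    // DecodeSourceIndexes is a hypothetical helper that rebuilds the source
    // indexes from the internalPointer smuggled through itemData()'s UserRole.
    for (const QModelIndex &source : DecodeSourceIndexes(data))
        moveRow(source.parent(), source.row(), parent, row);
    return true;
}
```

    The key point is returning true without ever letting the base class call insertRows, so no spurious new Node is ever created.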
  4. CMake or Custom?

    I like CMake. It's not perfect, but I wouldn't want to keep three or more build systems in sync for multi-platform development. Sure, you have to change compiler flags from Linux to Windows and a lot more, but at least you do it in just one place. The fact that a lot of open source libraries use it also makes it easier to add them to your own project and build them yourself, in case you like that kind of thing.

    But if you're targeting Windows with VS and nothing more, I see no problem in just committing your solution and project files to your version control system. Custom build systems may take time for newcomers to your project to learn and understand; chances are they already know CMake.
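    The "just one place" argument can be sketched with a minimal CMakeLists.txt (project and file names here are made up) where the per-compiler flag differences live in a single if/else:

```cmake
# Minimal sketch; names are illustrative.
cmake_minimum_required(VERSION 3.16)
project(MyGame CXX)

add_executable(mygame src/main.cpp)

# The platform/compiler differences live in one place:
if(MSVC)
  target_compile_options(mygame PRIVATE /W4)
else()
  target_compile_options(mygame PRIVATE -Wall -Wextra)
endif()
```

    The same file then generates VS solutions on Windows and Makefiles or Ninja files on Linux, which is exactly what keeps the build descriptions from drifting apart.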
  5. [quote name="jnurminen" timestamp="1396975255"]1.) The lack of solid documentation is mainly because it is a pattern, an idea abstracted away from its original use case. But keep in mind that ECS can be used to solve several problems in games / game engines. It may be a solution to the question "How can designers create new entities without the help of programmers? / How can designers easily modify the properties/attributes of certain entities?", or it might be a solution to the problem "How do you create re-usable game logic and avoid OOP inheritance problems?"

    The first question is more visible in Unity 3D, which exposes entity construction to designers with a simple UI. You can easily create new types of entities just by adding a few components. With the built-in components you can get far, but it gets more complex when you want to create custom components. If you create such a solution yourself, you need to pay attention to how you can easily create new components with the editor. One solution might be defining metadata to describe component attributes (keys, names, descriptions, types, min & max values, default values, etc.), or you can go with a fully DB-backed solution. Once this is "solved", you need to figure out whether you want to simulate the game inside your editor; if yes, you need to figure out how the editor can run the custom system you just created. You could compile a new engine (dynamic lib) for the editor, or you could decide that the editor is just a DB/entity browser that communicates with your game binary via TCP/IP. You might also get a request to create a prefab system to aid level designers, and a ton of other things...

    The second question is a different beast. The first question was more about how to create and pass data to the engine's subsystems, and it normally follows the subsystems' features. But on the logic/behaviour component side it is all about logic and, of course, a ton of variables to fine-tune gameplay. Some prefer doing this with a scripting language, so you have a ScriptComponent or similar on the engine side, exposed via the editor UI. The ScriptComponent points to an external script asset which holds the actual logic, and usually has some extra attributes for the engine. Inside this script asset you then have your logic, and depending on your choices you can have another ECS implemented in the scripting language, dedicated to making logic scripting easier. If you are planning to go without a scripting language, then the ECS behaviour system is more vital to you: it helps you create logic in languages like C++. It is also more vital since you most probably want to expose all possible variables to the editor UI to make them accessible to designers. Depending on what you are doing, you will most likely need a message/event system between different systems, systems will depend on each other more heavily, etc. But this is the game implementation side of things, so it can be messy, and it certainly will be; still, with ECS in place you have some chance of creating re-usable logic components, and debugging should be easier.

    2.) I would say that it isn't a naive approach to duplicate e.g. position, orientation, and scale across rendering and physics components. Those are two different "worlds", so it is pretty much how it should go anyway. What you really need is a message/event "system" that can be used to sync data between different components. It can be something sophisticated, or you can design your engine's update loop in a way that passes updated, cache-friendly contiguous data blocks from system to system. Duplication also helps you make your systems concurrent: each task/job can run pretty much in parallel without any problems if you keep dependencies between systems low.[/quote]

    1. I would say it is closer to a paradigm than a pattern. You can usually describe a pattern with a class diagram solidifying the abstract idea; with ECS, you have to mix and match concepts depending on your needs/tastes. I don't disagree with you, I just feel that what you mentioned is a sign of the idea's novelty; as it matures, a more specific approach should surface.

    2. I feel that if you have to keep data in sync between two different components/systems, you are in part defeating the purpose of keeping systems separated and independent from one another; you might as well go back to the old ways, it would be less complicated. Also, I don't see how rendering position and physics position belong to different worlds. OK, the physics position may be the center of gravity while the rendering position may be the mesh origin, but if you keep the physics position as an offset from the rendering position, or the other way around, you don't have to sync them at all.
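    The offset idea in point 2 can be sketched as a toy illustration (all names are made up, this is not engine code): the physics position is authoritative, and rendering derives its position on demand, so there is nothing to synchronize.

```cpp
// Toy sketch: physics owns the authoritative position (e.g. the center of
// gravity); rendering stores only a constant offset (e.g. to the mesh
// origin) and derives its position when needed. No sync step exists.
struct Vec3 { float x, y, z; };

struct PhysicsComponent { Vec3 position; };           // authoritative
struct RenderComponent  { Vec3 offsetFromPhysics; };  // fixed offset, never synced

Vec3 RenderPosition(const PhysicsComponent &p, const RenderComponent &r)
{
    return { p.position.x + r.offsetFromPhysics.x,
             p.position.y + r.offsetFromPhysics.y,
             p.position.z + r.offsetFromPhysics.z };
}
```

    When the physics system moves the body, the render position is automatically correct on the next read, which is the whole point of storing an offset instead of a second copy.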
  6. Why YOU should embed a web server in your game engine

    An alternative to Boost that I was considering is lighttpd, written in C and using the BSD license. I had also briefly looked at libmicrohttpd (LGPL) and Mongoose (GPL), but the licenses dissuaded me. Your relation to various licenses might differ.

    I am in the same boat; the problem is that lighttpd is not suitable for embedding, or the developers don't care about that, so you're on your own (reference). Different kind of embedding... I should have known.

    libmicrohttpd and thttpd would be my options, myself leaning towards thttpd because of its BSD license variant, though libmicrohttpd is LGPL 2.1, so there are no problems with proprietary code as long as it is used as a shared library. The problem with both of those is that there is no Windows port; if you use Cygwin, you get GPLed, so you'd pretty much have to create the build scripts to build them with MSVC or MinGW yourself. Not that it would be impossible or too complex, but it is definitely extra work you weren't expecting to have.
  7. My gripes with ECS have to do specifically with the lack of solid documentation. All the documents and blog posts I've read so far are too abstract, and some leave important concepts merely mentioned, as if they were obvious to everyone. It would be nice to have an article/tutorial going over the concepts with actual code intertwined. I have started coding a proof-of-concept "ECS Framework" over at Github, so perhaps I may take on that endeavor myself in the future.

    One thing I realized is flimsy about ECS is the idea of fewer cache misses due to keeping components contiguous in memory. As phil_t mentioned, this IMO is only valid if you have one system per component and systems are the ones managing their components' memory, but it doesn't take long to realize that both the rendering system and the physics system will require access to the position component, so who owns that component? The physics system may also require access to other components, such as a velocity component, so you'll be referencing at least two different arrays of components for one system. The naive approach of keeping position in multiple components, say a rendering component and a physics component, just complicates things, as now you have to keep all of them in sync, losing one of ECS's features: independence between systems.

    This, I think, requires more thought. I could see how interleaved component arrays (keeping heterogeneous components for a single entity together) could avoid cache misses, but then you'd have to iterate systems over entities rather than components over systems... which I don't really think is such a bad idea, unless you do want to go with the original concept of an entity being no class or struct at all, but rather just a primary key in a SQL SELECT statement.
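    One way to sidestep the ownership question raised above is to let a central store own the contiguous arrays while systems merely iterate over them; a toy sketch with made-up names, assuming the simplest possible entity-as-index layout:

```cpp
#include <vector>
#include <cstddef>

// Toy sketch: components live in contiguous arrays owned by a central
// store; the entity is just an index. Both physics and rendering can walk
// the same position array, so neither system has to "own" it.
struct Position { float x, y, z; };
struct Velocity { float x, y, z; };

struct ComponentStore {
    std::vector<Position> positions;   // index == entity id, for simplicity
    std::vector<Velocity> velocities;
};

// The physics "system" touches two arrays: positions and velocities.
void PhysicsStep(ComponentStore &store, float dt)
{
    for (std::size_t i = 0; i < store.positions.size(); ++i) {
        store.positions[i].x += store.velocities[i].x * dt;
        store.positions[i].y += store.velocities[i].y * dt;
        store.positions[i].z += store.velocities[i].z * dt;
    }
}
```

    Note that this layout still walks two arrays per system, which is exactly the "at least 2 different arrays" concern from the post; it just removes the ambiguity of which system owns the data.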
  8. I am sort of late to this discussion, but I am looking into how to implement the pattern myself.

    From what I've gathered, I would say neither of those are entities; they all should be part of a map resource, referenced by a map component that is updated by a map system.

    To elaborate: your map may be an XML document with cell or tile elements, each with position (relative to the origin), id, and texture (itself a reference to an image, shader, and/or material) attributes. You write some code to convert the XML into a runtime resource object, which is then referenced by a map component. The component is the "instantiation" of your resource, and it contains information specific to that instance of the resource, for example position, in case your map may coexist with multiple maps snapped together.

    Later on, in your game loop, you may have a map system which updates any variables in your map component; a rendering system may render it later, or a collision system may query the component, which itself would query the resource for collision information, etc.
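    The resource/component split described above might look like this (a minimal sketch, all names made up): the resource is the shared data parsed from the XML, and the component is one placed instance of it.

```cpp
#include <memory>
#include <string>
#include <vector>

// Toy sketch of the split: MapResource is the immutable data built from the
// XML document; MapComponent is one instance of it with per-instance state.
struct Tile {
    float x, y;           // position relative to the map origin
    int id;
    std::string texture;  // reference to an image/shader/material
};

struct MapResource {                     // built once by the XML loader
    std::vector<Tile> tiles;
};

struct MapComponent {                    // per-instance data
    std::shared_ptr<const MapResource> resource;
    float worldX = 0.0f, worldY = 0.0f;  // where this instance is snapped
};
```

    Two snapped-together map instances then share one resource but carry their own world positions, which is the distinction between resource and component the post is making.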
  9. I see, immediate mode. There is no reason why my GUI won't work with that, but since it is no longer in the core profile, I decided to drop it. You're right, though: the number of vertex calls for a single quad is so small compared to a full mesh that immediate mode shouldn't have much of an impact in this situation.

    I hadn't thought much about that. It is nice, and you gave me a reason to expose shaders to the user; it's all pimping of the UI, as you said. What I meant was that you never see any shaders for the basic operations, for example doing the bulk of the drawing in the fragment shader rather than just effects, but I guess it's not really practical, and the way to do it is still the same as it was before the programmable shader pipeline.

    Thanks for your help! :)
  10. Immediate OGL calls? What exactly do you mean? Did you mean, as I said before, rendering lines with GL_LINES and rects with GL_QUADS via DrawArrays/DrawElements?

    I did that once, but it was not consistent: depending on the card (NVIDIA/ATI/Intel), a line would end a pixel short or a pixel too long, an outlined rect would not exactly match a filled rect, etc. Even with the 0.375f pixel offset trick, I wouldn't get pixel-perfect matches and would have to compensate one way or another.

    I am not seeing any issues with my current approach either, so I am not really looking for alternatives right now, but it's the kind of thing you don't see talked about much. I've seen all kinds of shaders, for example, but not one specific to GUI rendering, so I was just wondering if there was some sort of de facto way to do it that I didn't know about.
  11. Well, I do support alpha blending; in fact, I have a software implementation in the library to blend into the client buffer. I can't recall if this was one of the reasons to drop it.

    I do think one of the main reasons for the drop is that with glDrawPixels I have to make the call every single frame, even if no widget changes are recorded. With a texture, if no changes are recorded, then no changes to the texture are required: there is no call to glTexSubImage, and you can just render the overlay quad with the same texture as on the previous frame.
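    The dirty-flag idea behind this can be sketched without any GL calls (the names here are made up; in the real code, the upload would be the glTexSubImage call):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy sketch: the GUI draws into a client-side buffer; the texture upload
// (glTexSubImage2D in real code) is issued only when something changed.
class Overlay {
public:
    explicit Overlay(std::size_t pixelCount) : buffer_(pixelCount, 0) {}

    void SetPixel(std::size_t i, std::uint32_t rgba)
    {
        buffer_[i] = rgba;
        dirty_ = true;                  // record that a widget changed
    }

    // Returns true if an upload was needed (and clears the flag).
    bool UploadIfDirty()
    {
        if (!dirty_) return false;      // reuse last frame's texture
        // ... glTexSubImage2D(...) would go here ...
        dirty_ = false;
        return true;
    }

private:
    std::vector<std::uint32_t> buffer_;
    bool dirty_ = false;
};
```

    Frames where no widget changed skip the upload entirely, which is the saving over glDrawPixels described above.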
  12. Yes, I was thinking about that: maybe if I made all changes to the client buffer and fired the transfer at the end of a frame, doing a deferred render of the overlay on the next frame. In other words: fill the buffer and start the transfer, do all other rendering operations, and then render the overlay. The overlay would always be a frame behind, though.

    Anyway, it seems that the way I am doing it is the way to go. I tried using OpenGL primitives before (for example, GL_LINES to draw a line, GL_QUADS to draw rectangles, etc.), but that was never consistent between different graphics cards. I also tried glRasterPos and glDrawPixels, but I read somewhere that doing that was far from optimal...