About AgentC


  1. AgentC

    What are you working on?

     So many excellent-looking projects here! Here's mine: day job - programmer at a medium-small North Finland game studio, mostly HTML5 and Unity projects these days. Current active hobby project - Steel Ranger, a Turrican / Metroid inspired sci-fi action-adventure for the Commodore 64. It's now at the point of designing the game world layout, with an aim of about 500 screens total. The basic game engine, player movement, player weapons and world tilesets are already done. I also maintain a mostly technical and sparsely updated development diary for it.
  2. Since the topic is so complex, I'll just put down some very high-level notes rather than even trying to formulate a complete answer to the actual question.

     1) For cache-friendliness, try to ensure each operation's required memory is in a contiguous memory region, preferably accessed linearly from start to end - for example when updating a particle system or an animated skeleton. Avoid "jumping" from one heap object to another through pointers. A well-structured entity-component system is possibly a good match here, at least if the systems don't need to access each other's data a lot.

     2) A typical approach nowadays is to structure the execution of a frame into a task graph, where the tasks are executed on any available cores by worker threads. Data from each task or operation flows to the next: for example, physics simulation & animation together produce the final positions of game objects, which go to culling, from which a visible object list goes to render command generation, which is finally submitted to the graphics API. While the culling / render stage for the current frame is running, you can already be calculating the logic for the next frame.

     For both points 1 & 2 it's probably best to keep scripting for low-volume, high-level operations (e.g. asking pathfinding to start moving an object toward position x) or configuration (stats of game objects, the list of render passes and postprocess effects for the actual rendering code). For such infrequent use even a single-threaded script VM could be fine.

     Here's Naughty Dog talking about their engine's multithreading & memory allocation: http://www.gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine

     The old Bitsquid development blog entries may also be interesting reading: http://bitsquid.blogspot.com/
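     Point 1 can be sketched in a few lines. This is a minimal, hypothetical structure-of-arrays particle update (the names ParticleSystem / Update are illustrative, not from any specific engine): each array is contiguous and walked linearly, so the prefetcher sees a predictable stream instead of pointer-chasing through heap objects.

```cpp
#include <cstddef>
#include <vector>

// Structure-of-arrays layout: each attribute lives in its own contiguous
// array, all indexed by the same particle index.
struct ParticleSystem {
    std::vector<float> posX, posY;   // positions
    std::vector<float> velX, velY;   // velocities, same index as positions

    void Update(float dt) {
        // One linear pass over the arrays; no per-particle heap objects.
        const std::size_t n = posX.size();
        for (std::size_t i = 0; i < n; ++i) {
            posX[i] += velX[i] * dt;
            posY[i] += velY[i] * dt;
        }
    }
};
```

     The same layout also parallelizes trivially for point 2: a worker thread can update an index range [begin, end) without touching any other thread's data.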
  3. To take window creation and management as an example, SDL implements it separately (using the relevant OS-level APIs) for each OS / platform it supports. The same goes for e.g. audio, joystick input etc.
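     One common way to organize this - a hypothetical sketch in the spirit of SDL's internals, not its actual code - is a backend table of function pointers selected at startup, so one public entry point dispatches to whichever platform implementation was compiled in:

```cpp
// A "video backend" bundles the per-platform implementations behind one
// uniform interface; real backends would call Win32 / X11 / Cocoa APIs
// behind #ifdef guards. Names here are illustrative only.
struct VideoBackend {
    const char* name;
    bool (*CreateWindowImpl)(int width, int height);
};

// Stub backend standing in for a real platform implementation.
static bool CreateWindowStub(int w, int h) { return w > 0 && h > 0; }

static VideoBackend g_backends[] = {
    { "stub", CreateWindowStub },
};
static VideoBackend* g_active = &g_backends[0];  // chosen at init time

// The single public entry point the application sees.
bool CreateWindowPortable(int w, int h) {
    return g_active->CreateWindowImpl(w, h);
}
```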
  4. Consider also debugger support. If you program game logic directly in native C / C++, you can debug using your compiler's tools. With a scripting system, you'll have to implement debugging yourself if you want the same level of access. The script VM may help with this by providing hooks for breakpoints, single-stepping etc.

     Once you have script debugging in place, though, you can go above and beyond what gdb or Visual Studio could provide by tying it into your game engine systems (e.g. editing game object properties while the game is running, or selectively suspending and restarting specific scripts).
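     The "hooks" idea can be shown with a toy stack VM (entirely hypothetical; ToyVM and its opcodes are made up for illustration): the VM calls back before each instruction, and an external debugger implements breakpoints or single-stepping in the callback without the VM knowing any details.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

enum Op { PUSH, ADD, HALT };
struct Instr { Op op; int arg; };

struct ToyVM {
    std::vector<int> stack;
    // Called with the instruction pointer before each instruction;
    // returning false pauses execution (e.g. a breakpoint was hit).
    std::function<bool(std::size_t)> preInstrHook;

    void Run(const std::vector<Instr>& code) {
        for (std::size_t ip = 0; ip < code.size(); ++ip) {
            if (preInstrHook && !preInstrHook(ip))
                return;                       // debugger asked us to stop here
            const Instr& in = code[ip];
            if (in.op == PUSH) stack.push_back(in.arg);
            else if (in.op == ADD) {
                int b = stack.back(); stack.pop_back();
                stack.back() += b;            // pop two, push sum
            }
            else if (in.op == HALT) return;
        }
    }
};
```

     A real VM like Lua exposes the same shape of mechanism (per-instruction / per-line hooks) for exactly this purpose.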
  5. hypester nails a lot of points I was thinking of as well. Though the popular YA series still do include physical combat, so I wouldn't see a reason to exclude it entirely. The production cost of choices and branching storylines vs. the expected sales could be a big hurdle, until there's some major evolutionary step in storytelling technology.

     If a game is heavily focused on interpersonal relationships, and these tie into the game mechanics, they could become somewhat cheapened once players manage to dig up the formulas used to guide the story / relationships (compare to the Mass Effect 2 Suicide Mission flowchart). Though that's just my gut feeling.
  6. It's possible it's not documented strongly enough that it's a helper only. And it's also extensively used by all the included samples :)

     But anyway, there's the Engine class that does the actual work (and which is simply instantiated by Application; you could also do this yourself). Even that isn't strictly mandatory - if you know what you're doing you can instantiate only the subsystems you need, but then we're deep in "advanced use" territory.
  7. Just a minor Urho-related correction: the Application class is an optional helper, not the "core of the engine". The most useful thing it does, for simple applications that don't need more sophisticated control, is to abstract away how the platform wants the main loop to be run. This differs on iOS and Emscripten ("run one frame when requested" instead of "run in a loop forever until exited"), but you're not forced to use it.
  8. AgentC

    Game Engine Modules/Projects

     Strictly speaking you can also live dangerously and take the approach that the engine DLL and the game simply need to be compiled with the same C++ compiler, and therefore work through the C++ ABI directly instead of making a proper C API (the C4 engine, and some graphics libraries like Ogre, work this way).

     But I agree that dynamic libraries are a complication and the OP likely doesn't need them right now.
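     For reference, the "proper C API" alternative usually looks something like this sketch (all names hypothetical): the engine exposes opaque handles plus extern "C" functions, so the game side never sees C++ types and can be built with a different compiler, or in plain C. Shown here in one file for brevity; in a real DLL build the definitions would live in the engine and be exported.

```cpp
// --- Public header the game sees: opaque handle + C-linkage functions ---
struct Engine;  // opaque: the game never sees the layout

extern "C" {
    Engine* Engine_Create();
    void    Engine_Destroy(Engine* e);
    void    Engine_RunFrame(Engine* e);
    int     Engine_FrameCount(const Engine* e);
}

// --- Engine-side implementation (normally inside the DLL) ---
struct Engine { int frames = 0; };

extern "C" Engine* Engine_Create()                    { return new Engine(); }
extern "C" void    Engine_Destroy(Engine* e)          { delete e; }
extern "C" void    Engine_RunFrame(Engine* e)         { ++e->frames; }
extern "C" int     Engine_FrameCount(const Engine* e) { return e->frames; }
```

     The cost is verbosity: every class you want to expose needs a flat C wrapper, which is exactly the work the "same compiler for both sides" approach skips.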
  9. AgentC

    Modern OpenGL Transforms, HOW?

     AAA engines don't necessarily use the most modern strategies out there, just something that works well enough for the target platform(s) and the game they're running.

     Typically the largest amount of geometry comes from static world geometry, which can be batched together (offline, not at runtime) into bigger chunks to reduce draw calls.

     Something like trees and foliage would just be rendered instanced. When far away, sprite impostors can be used for even lower performance impact.

     One particle effect with all its particles is usually one draw call, and its vertex buffer is typically completely rewritten each frame while it's visible and updating. Most of the math can be done in the vertex shader so that the CPU-side calculations for the update stay light.

     On modern desktops you can get quite high with the draw call count (a few thousand should be no problem), but at high resolutions and on less beefy GPUs the shading would easily become the bottleneck instead.

     One thing to consider is that AAA games on Windows usually use D3D, which can have better optimized drivers. If you have bad luck with OpenGL drivers (Intel?) and an older card, the driver may be doing work on the CPU which should really be done on the GPU. Using VBOs and a good driver, over a thousand draw calls should not be a problem on either D3D or OpenGL.
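     The offline static batching step boils down to baking each instance's transform into its vertices and appending everything into one shared vertex buffer, so many placed meshes become a single draw call. A minimal sketch (2D translation only for brevity; BatchStatic is a made-up name, not any engine's API):

```cpp
#include <vector>

struct Vertex { float x, y; };

// Bake one instance transform (here just a translation) per placement into
// the vertices and concatenate everything into one buffer.
std::vector<Vertex> BatchStatic(const std::vector<Vertex>& mesh,
                                const std::vector<Vertex>& instanceOffsets) {
    std::vector<Vertex> batched;
    batched.reserve(mesh.size() * instanceOffsets.size());
    for (const Vertex& off : instanceOffsets)
        for (const Vertex& v : mesh)
            batched.push_back({ v.x + off.x, v.y + off.y });  // transform baked in
    return batched;
}
```

     The trade-off is memory (vertices are duplicated per instance) against draw call count, which is why this is done for static geometry only; dynamic objects keep their own transforms.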
  10. That clears it up.

     You can get quite nice results by assuming a very low version like GL 1.1 and checking for extensions, with the downside that the more extensions you use, the more complex and ugly your code can become from the supported / not-supported codepaths, and you'll be using functionality which is deprecated from the point of view of newer GL (3.0+) versions, so you may learn bad habits.

     Old GPUs (except, historically, Nvidia) can come with poor GL drivers, so you may encounter bugs where the driver tells you that some extension is supported, then you go ahead and use it, and crash. Though if you mostly just render textured triangles to the backbuffer, maybe using (simple) shaders, there aren't many extensions needed and not much can go wrong.

     The Second Life open source client code (LGPL) can be quite illuminating in how it only requires a low OpenGL base version and then checks for supported features to enable e.g. VBOs (with vertex arrays as the fallback) and GLSL shaders. There are also quite a few driver bug workarounds in there. Or at least it used to be like that; I haven't checked in a few years to see if they rewrote everything to be more modern :)
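     On old GL versions the extension list comes back from glGetString(GL_EXTENSIONS) as one space-separated string, and a classic bug is matching substrings instead of whole tokens ("GL_ARB_vertex" would "match" inside "GL_ARB_vertex_buffer_object"). A sketch of a correct token check, pure string handling so it needs no GL context:

```cpp
#include <cstddef>
#include <cstring>

// Exact-token search in a space-separated extension string, as returned by
// glGetString(GL_EXTENSIONS) on pre-3.0 OpenGL.
bool HasExtension(const char* extensions, const char* name) {
    const std::size_t len = std::strlen(name);
    const char* p = extensions;
    while ((p = std::strstr(p, name)) != nullptr) {
        // A hit only counts if it is bounded by start/space and space/end.
        const bool startOk = (p == extensions) || (p[-1] == ' ');
        const bool endOk = (p[len] == ' ') || (p[len] == '\0');
        if (startOk && endOk)
            return true;
        p += len;  // partial match: keep searching past it
    }
    return false;
}
```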
  11. Is the program actually required to run on machines with very old GPUs and drivers that only support GL 1.x, no shaders etc., or is it just that you want to supply a legacy OpenGL-style API for users of the program?
  12. Will echo what many have already said.

     The engine runtime is the fun part, and also mostly easy, unless you go for challenging state-of-the-art tech or try to maximize performance: creating the scene management, rendering and lighting, physics integration, possible multithreading etc. This can still take a lot of time (easily a man-year) depending on your expertise and how many features you're going to add.

     After you've got the runtime done, the rest is making the system usable for actual game creation. Up to this point you probably haven't needed to make any concrete decisions on how the game projects made with the engine are structured, how assets are imported and cooked into a build, how the game logic or rules are inserted and how they interact with the runtime, how the world data is represented for processes like precalculated lighting or navigation data creation, and how to make all the workflows usable for the creators. Now you're going to have to make a lot of decisions which influence what kind of games you can make with the system, and how usable it turns out in the end.

     It helps if you can handle 3D modelling yourself, so you can continuously test from a content creator's point of view. In reality, working on the runtime and the tools / workflow will very likely intertwine; I just separated them to illustrate the difference.

     You can also decide to limit yourself to creating a coder-oriented runtime library (compare e.g. to Cocos2D or Ogre) rather than a full-blown game engine (like Unity). It will still be a worthwhile learning experience, but probably not something that's directly useful as a game creation tool. Getting to the full-blown stage will certainly take man-years.
  13. The typical approach is to just sample the animation at the time position to which it has advanced, according to the time step between the previous frame and the current one. If this leads to skipped keyframes, then so be it. Your idea to preserve the "dominant" movement of an animation even in low-FPS conditions is noble, but I don't know any engines that actually go to such trouble. At low FPS the gameplay feel will be poor anyway, so usually the engineering effort goes into ensuring that the FPS never goes unplayably low in the first place.

     Of course, if you know that you will never render faster than e.g. 30 FPS, it would be a waste of memory to store animations with a higher keyframe frequency than that, in which case you could just re-export the animations with a suitable frequency.
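     The typical approach above, sketched for a single scalar track (illustrative names, not any engine's API): sample at the accumulated time, linearly interpolating between the two surrounding keyframes, so any keyframes inside a large time step are simply skipped over.

```cpp
#include <cstddef>
#include <vector>

struct Keyframe { float time; float value; };

// Sample a track (keyframes sorted by time) at time t, clamping at the ends
// and lerping between the bracketing keyframes in the middle.
float Sample(const std::vector<Keyframe>& track, float t) {
    if (t <= track.front().time) return track.front().value;
    if (t >= track.back().time)  return track.back().value;
    for (std::size_t i = 1; i < track.size(); ++i) {
        if (t < track[i].time) {
            const Keyframe& a = track[i - 1];
            const Keyframe& b = track[i];
            float f = (t - a.time) / (b.time - a.time);
            return a.value + (b.value - a.value) * f;  // lerp
        }
    }
    return track.back().value;
}
```

     Note that with a dt covering e.g. t = 0 to t = 2, a keyframe at t = 1 contributes nothing; that is exactly the skipping behavior described above.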
  14. The 68k has a very clean instruction set: you have a number of data registers, which all work the same, plus address registers. There are nice instructions for math, including integer multiplication and division.

     If your eventual goal is the Megadrive, and as you have previous C/C++ experience, it doesn't seem like a stretch to go directly for the 68k.

     However, there may be some difficulty in setting up a development toolchain so you can compile and run Megadrive programs, and you would also be learning the hardware features at the same time (e.g. which addresses you need to poke to get something to show up on the screen). There are reverse-engineered / leaked resources for this, but not as abundant as for retro computers. When an 8/16-bit console boots up and starts executing your program, it typically starts from almost nothing; a computer, on the other hand, typically has the screen already displaying some sensible data (like text), and has ROM operating system routines to help you.

     Therefore, for the quickest, hassle-free introduction to the retro/asm programming mindset with minimal setup and immediately visible effects, I'd recommend the C64 as well. For example, with the VICE emulator you can break into the built-in debugger/monitor and write & run simple asm programs directly; no toolchain setup needed. The C64 CPU's instruction set is extremely limited though: you have three primary registers (A, X, Y) which are all used differently, and you can forget about more complex operations like multiplication - they don't exist and must be written manually using shift-and-add arithmetic.

     If you don't feel overwhelmed by the prospect of learning the hardware and having to set its state up from scratch, you'll waste less time going directly for your target platform, though.
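     To illustrate the shift-and-add multiplication you end up hand-writing on the 6502, here it is in C for clarity; each loop round mirrors the shift (ASL/LSR) plus conditional add (ADC) sequence a 6502 routine would use.

```cpp
#include <cstdint>

// 8-bit x 8-bit -> 16-bit multiply using only shifts and adds, the way
// you'd write it on a CPU with no multiply instruction.
uint16_t MulShiftAdd(uint8_t a, uint8_t b) {
    uint16_t result = 0;
    uint16_t shifted = a;        // multiplicand, doubled each round (ASL)
    while (b != 0) {
        if (b & 1)               // low multiplier bit set?
            result += shifted;   //   then add the current multiplicand (ADC)
        shifted <<= 1;
        b >>= 1;                 // consume one multiplier bit (LSR)
    }
    return result;
}
```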
  15. AgentC

    Story First? Story Last?

     I can speak from the perspective of writing recent (i.e. post-commercial-era) story-driven sidescrolling platformer / shooter games on the Commodore 64. These are for the most part solo efforts.

     It's not necessary to have the fully detailed story first (i.e. down to each line of dialogue), and code can be started as soon as you know roughly what kind of gameplay mechanics there are going to be. However, I would advocate having the basic progression of the story down before the game world design and artwork production start, to make sure the locations flow logically and no artwork is wasted. Personally, the world artwork production is the most grueling part for me, so it has a heightened importance to get right; this may not represent a universal truth.

     Certainly, there is going to be (or should be) back-and-forth interplay between the story and the mechanics during development; otherwise you may miss many wonderful creative opportunities.