
#5257971 Game Engine Creation: What are the biggest challenges faced?

Posted by AgentC on 19 October 2015 - 02:23 PM

I'll echo what many have already said.


The engine runtime is the fun part, and also mostly easy, unless you go for challenging state-of-the-art tech or try to maximize performance. This covers scene management, rendering and lighting, physics integration, possibly multithreading, and so on. It can still take a lot of time (easily a man-year) depending on your expertise and how many features you're going to add.


After you've got the runtime done, the rest is making the system usable for actual game creation. Up to this point you probably haven't needed to make any concrete decisions about how game projects made with the engine are structured, how assets are imported and cooked into a build, how game logic or rules are inserted and how they interact with the runtime, how the world data is represented for processes like precalculated lighting or navigation data generation, or how to make all these workflows usable for the creators. Now you're going to have to make a lot of decisions, which influence what kind of games you can make with the system and how usable it turns out in the end.


It helps if you can handle 3D modelling yourself, so you can continuously test from a content creator's point of view. In reality, work on the runtime and on the tools / workflow will very likely intertwine; I just separated them here to illustrate the difference.


You can also decide to limit yourself to just creating a coder-oriented runtime library (compare e.g. to Cocos2D or Ogre), rather than a full-blown game engine (like Unity). It will still be a worthwhile learning experience, but probably not something that's directly useful as a game creation tool. Getting to the full-blown stage will certainly take man-years.

#5256690 How to handle skipped animation frames (skeletal animation)

Posted by AgentC on 11 October 2015 - 10:33 AM

The typical approach is to just sample the animation at the time position to which it has advanced, according to the time step between the previous frame and the current one. If this leads to skipped keyframes, then so be it. Your idea of preserving the "dominant" movement of an animation even in low-FPS conditions is noble, but I don't know of any engines that actually go to such trouble. At low FPS the gameplay feel will be poor anyway, so the engineering effort usually goes into ensuring that the FPS never drops unplayably low in the first place.
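As a sketch of that sampling, assuming a simple per-track keyframe list (the types here are illustrative, not from any particular engine): you advance a time position each frame and interpolate between the two keyframes surrounding it; any keyframes that fall between two consecutive sample positions are simply skipped.

```cpp
#include <vector>

// Illustrative keyframe track; one channel of a bone transform.
struct Keyframe
{
    float time;  // seconds from animation start
    float value;
};

// Sample the track at 'time'. Keyframes between the previous sample
// position and this one are skipped over without special handling.
float SampleTrack(const std::vector<Keyframe>& keys, float time)
{
    if (keys.empty()) return 0.0f;
    if (time <= keys.front().time) return keys.front().value;
    if (time >= keys.back().time) return keys.back().value;

    size_t i = 1;
    while (keys[i].time < time) ++i; // first keyframe at or after 'time'

    const Keyframe& a = keys[i - 1];
    const Keyframe& b = keys[i];
    float t = (time - a.time) / (b.time - a.time);
    return a.value + (b.value - a.value) * t; // linear interpolation
}
```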


Of course, if you know that you will never render faster than e.g. 30 FPS, it's a waste of memory to store animations with a higher keyframe frequency than that; in that case you could just re-export the animations at a suitable frequency.

#5255959 What's the best system on which to learn ASM?

Posted by AgentC on 07 October 2015 - 01:40 AM

The 68k has a very clean instruction set: you have a number of data registers, which all work the same, plus address registers. There are nice instructions for math, including integer multiplication and division.


If your eventual goal is the Megadrive, and given that you have previous C/C++ experience, it doesn't seem like a stretch to go directly for the 68k.


However, there may be some difficulty in setting up a development toolchain so you can compile and run Megadrive programs, and you would also be learning the hardware features at the same time (e.g. which addresses you need to poke to get something to show up on the screen). There are reverse-engineered / leaked resources for this, but they're not as abundant as for retro computers. When an 8/16-bit console boots up and starts executing your program, it typically starts from almost nothing; a computer, on the other hand, typically boots with the screen already displaying some sensible data (like text) and has ROM operating system routines to help you.


Therefore, for the quickest, most hassle-free introduction to the retro/asm programming mindset, with minimal setup and immediately visible effects, I'd recommend the C64 as well. For example, with the VICE emulator you can break into the built-in debugger/monitor and write & run simple asm programs directly; no toolchain setup needed. The C64's CPU instruction set is extremely limited, though: you have three primary registers (A, X, Y), which are all used differently, and you can forget about more complex operations like multiplication - they don't exist and must be written manually using bit-shifting arithmetic.
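To give an idea of what writing multiplication manually involves, here's the classic shift-and-add algorithm sketched in C++; a 6502 routine would implement the same loop out of shift, rotate and add-with-carry instructions:

```cpp
#include <cstdint>
#include <cstdio>

// Shift-and-add multiplication: the same algorithm a 6502 routine would
// build out of ASL/LSR (shifts) and ADC (add with carry).
uint16_t Multiply8x8(uint8_t a, uint8_t b)
{
    uint16_t result = 0;
    uint16_t shifted = a;
    while (b)
    {
        if (b & 1)       // lowest multiplier bit set: add the shifted multiplicand
            result += shifted;
        shifted <<= 1;   // multiplicand * 2
        b >>= 1;         // next multiplier bit
    }
    return result;
}

int main()
{
    printf("%u\n", Multiply8x8(25, 10)); // prints 250
    return 0;
}
```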


If you don't feel overwhelmed by the prospect of learning the hardware and having to set up its state from scratch, though, you'll waste less time by going directly for your target platform.

#5250567 Extending Unity editor and new UI

Posted by AgentC on 04 September 2015 - 08:15 AM

If you want to use the GameObject / Component based new UI for tool functionality, then the tools need to actually run inside your Unity application. In that case you can look into any tutorials for the new Unity UI, but you're limited to what the Unity runtime API allows you to do.


However, if you want to extend the editor itself and create new windows or inspectors for it, you still need to use the old-style UI functions, as per Unity's documentation: http://docs.unity3d.com/Manual/editor-EditorWindows.html

#5247988 Reducing CPU usage in Enet thread

Posted by AgentC on 21 August 2015 - 01:14 AM

When I used to use Enet, I did just as hplus0603 suggested: run & update Enet in the main thread as part of the rest of the frame loop.


If I was also rendering, the rendering would naturally limit the framerate & CPU usage, especially if vsync was used.


In the case of a headless server, the main loop can sleep when it doesn't have simulation work to do, to avoid spinning at 100% CPU. Getting the sleep accurate may be a problem depending on the OS.
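A minimal sketch of such a loop, assuming stock ENet: enet_host_service() itself takes a timeout in milliseconds, so the server can block inside ENet instead of calling a separate sleep function (RunSimulationStep() here is a hypothetical placeholder):

```cpp
#include <enet/enet.h>

void RunSimulationStep(); // hypothetical: advance the game simulation

// Headless server loop: enet_host_service() blocks for up to the given
// timeout, so the thread sleeps inside ENet instead of spinning at 100%.
void RunServer(ENetHost* host, volatile bool& quit)
{
    while (!quit)
    {
        ENetEvent event;
        // Wait up to 15 ms for traffic; returns immediately if an event is queued
        if (enet_host_service(host, &event, 15) > 0)
        {
            switch (event.type)
            {
            case ENET_EVENT_TYPE_CONNECT:    /* handle new peer */ break;
            case ENET_EVENT_TYPE_RECEIVE:
                /* handle event.packet contents */
                enet_packet_destroy(event.packet);
                break;
            case ENET_EVENT_TYPE_DISCONNECT: /* handle disconnect */ break;
            default: break;
            }
        }
        RunSimulationStep();
    }
}
```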

#5240042 Unity or C++?

Posted by AgentC on 13 July 2015 - 04:06 AM

Note that in the context of Unity, C++ is used for native plugins that extend Unity's functionality, for example a custom movie recording feature. Using it for gameplay purposes will be cumbersome, as you'll have to write all the data marshalling (for example the positions of scene objects) between C++ and Mono yourself; Unity doesn't come with built-in C++ scene access.
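For illustration, a native plugin boils down to exported C functions that the C# side calls through P/Invoke ([DllImport]); the function and its parameters here are hypothetical:

```cpp
// Hypothetical plugin entry point. The C# side would declare it with
// [DllImport("MyPlugin")] and marshal a float array of object positions
// into it on every call - this is the marshalling you write yourself.
#if defined(_WIN32)
#define PLUGIN_EXPORT extern "C" __declspec(dllexport)
#else
#define PLUGIN_EXPORT extern "C"
#endif

PLUGIN_EXPORT void ProcessPositions(float* positions, int count)
{
    // 'positions' holds count * 3 floats (x, y, z per object).
    for (int i = 0; i < count * 3; ++i)
        positions[i] *= 2.0f; // placeholder processing
}
```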

#5235453 Is OpenGL enough or should I also support DirectX?

Posted by AgentC on 18 June 2015 - 06:12 AM

If you're fine with fairly old rendering techniques (no constant buffers, just shader uniforms, no VAOs), I wouldn't call OpenGL 2.0 or 2.1 bad. It has a fairly straightforward mapping to OpenGL ES 2.0, meaning your rendering code won't differ much between desktop & mobile, and because it's older, it's better guaranteed to work with old drivers. You'll just need to be strict about obeying even the "stupid" parts of the spec: for example, if you use multiple render targets with an FBO, make sure they all have the same color format.
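As a sketch of that particular rule, using the EXT framebuffer object entry points available on GL 2.x (the sizes and formats here are just examples):

```cpp
#include <GL/glew.h>

// Attach two render targets to an FBO; both use the same color format
// (GL_RGBA8), since some drivers reject FBOs with mismatched attachments.
bool CreateMRTFramebuffer(GLuint& fbo, GLuint colorTex[2])
{
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glGenTextures(2, colorTex);
    for (int i = 0; i < 2; ++i)
    {
        glBindTexture(GL_TEXTURE_2D, colorTex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 768, 0, GL_RGBA,
                     GL_UNSIGNED_BYTE, 0);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT + i,
                                  GL_TEXTURE_2D, colorTex[i], 0);
    }
    return glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) ==
           GL_FRAMEBUFFER_COMPLETE_EXT;
}
```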

#5235439 Is OpenGL enough or should I also support DirectX?

Posted by AgentC on 18 June 2015 - 05:01 AM

On Windows, do you want to support machines that probably never had a GPU driver update installed, and whose users don't even know how to update drivers? If the answer is yes, you pretty much need to default to DirectX on Windows.

#5235262 rendering system design

Posted by AgentC on 17 June 2015 - 04:28 AM

Or, if the materials need different shaders, build instance queues/buckets on the CPU before rendering, so that the queue key is the combination of the mesh pointer and the material pointer.


e.g. queue 1 contains instances of mesh A with material A; these will be rendered with one instanced draw call

queue 2 contains instances of mesh A with material B; another draw call for them



I'd only put a "default material" pointer in the actual mesh resource, and allow the mesh scene nodes (instances) to override it.
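A minimal sketch of such bucketing, with illustrative type names (not from any particular engine):

```cpp
#include <map>
#include <utility>
#include <vector>

struct Mesh;
struct Material;
struct InstanceData { float transform[16]; }; // per-instance world matrix

// Bucket instances by (mesh, material): each bucket becomes one instanced draw call.
using BucketKey = std::pair<const Mesh*, const Material*>;
std::map<BucketKey, std::vector<InstanceData>> buckets;

void QueueInstance(const Mesh* mesh, const Material* material, const InstanceData& data)
{
    buckets[{mesh, material}].push_back(data);
}

void RenderAll()
{
    for (const auto& [key, instances] : buckets)
    {
        // Bind key.first's vertex/index buffers and key.second's shader &
        // textures, upload 'instances' to an instance buffer, then issue one
        // instanced draw call, e.g. glDrawElementsInstanced(..., instances.size()).
    }
    buckets.clear(); // rebuild every frame
}
```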

#5233191 glGenBuffers is null in GLEW?

Posted by AgentC on 06 June 2015 - 12:38 PM

On Windows, SDL doesn't fail even if it doesn't get the requested OpenGL version (in this case 2.1), but instead initializes the highest version it can. The "GDI Generic" renderer name indicates that the computer doesn't have an up-to-date Nvidia graphics driver installed, so it's falling back to the default Windows OpenGL driver, which supports only OpenGL 1.1.
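A quick way to verify what you actually got after context creation is to print the GL strings; a minimal sketch:

```cpp
#include <cstdio>
#include <GL/glew.h>

// Print what the created context actually is. A GL_RENDERER of
// "GDI Generic" means the Windows software fallback (OpenGL 1.1) is in use.
void PrintGLInfo()
{
    printf("GL_VENDOR:   %s\n", glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", glGetString(GL_VERSION));
}
```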

#5220737 Game Engines without "Editors"?

Posted by AgentC on 01 April 2015 - 11:01 AM

Urho3D actually gained D3D11, OpenGL 3.2 and WebGL (Emscripten) rendering recently; the site documentation was just lagging behind. However, it doesn't yet expose some of the new APIs' exclusive features like tessellation or stream-out, so if you require those, you're indeed wise to choose another engine.

#5219557 Link shader code to program

Posted by AgentC on 27 March 2015 - 04:00 AM

That's inside humor from my workplace, where we used to talk about "boolean farms": classes that accumulate, over the course of development, a large number of booleans controlling their state in obscure ways. If each combination of booleans indicates a specific state, a state enum could be more appropriate. As for "manager" classes, it's somewhat a matter of taste, but usually "manager" tells very little about what the class is actually doing. For example, if it's loading or caching resources, then Loader or Cache in the class name would be more descriptive.
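As a tiny illustration (hypothetical names):

```cpp
// A "boolean farm": the legal combinations are anyone's guess.
bool loading_, loaded_, failed_, reloading_;

// A state enum: each state is explicit and mutually exclusive.
enum class ResourceState { Unloaded, Loading, Loaded, Failed, Reloading };
ResourceState state_;
```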

#5219436 Link shader code to program

Posted by AgentC on 26 March 2015 - 02:16 PM

Maybe this will explain it better; read particularly the beginning and the section "inbuilt compilation defines": http://urho3d.github.io/documentation/1.32/_shaders.html


This is from the engine I've been working on for a couple of years. Using this approach you'd author, for example, a diffuse shader or a diffuse-normalmapped shader as a data file on disk; the engine loads it, then proceeds to compile it, possibly several times with different compile defines, as needed. The engine tells it the lighting conditions it will be used in and what kind of geometry (for example static / skinned / instanced) it will be fed. The engine typically maintains an in-memory structure, like a hash map, of the permutations (e.g. "shadowed directional light + skinning") it has already compiled for that particular shader.


Naturally, compilation defines are not the whole story; in this kind of system you also need a convention for uniforms, for example that the view-projection matrix is always called "ViewProjMatrix", so that the engine knows to look for that uniform and set it from the current camera when rendering.
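A minimal sketch of such a permutation cache, with illustrative names (not Urho3D's actual API):

```cpp
#include <string>
#include <unordered_map>

typedef unsigned ProgramHandle; // placeholder for a compiled GPU program

class Shader
{
public:
    // 'defines' is e.g. "DIRLIGHT SHADOW SKINNED"; each word is turned into
    // a #define prepended to the source before compilation.
    ProgramHandle GetPermutation(const std::string& defines)
    {
        auto it = permutations_.find(defines);
        if (it != permutations_.end())
            return it->second; // already compiled: reuse

        ProgramHandle program = CompileWithDefines(source_, defines);
        permutations_[defines] = program;
        return program;
    }

private:
    ProgramHandle CompileWithDefines(const std::string& source,
                                     const std::string& defines);

    std::string source_; // shader source loaded from the data file
    std::unordered_map<std::string, ProgramHandle> permutations_;
};
```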


Note: I don't insist this is the best way to do things, just the one I'm used to.


#5218437 Link shader code to program

Posted by AgentC on 23 March 2015 - 06:22 AM

Is this an acceptable approach for big engines as well? It seems a bit "unprofessional" to guess that there will always be N attributes and X samplers in the shader program, and that all those pointers are set in a model loading system which isn't directly connected to the shader code.


Some possible approaches would be:


- The engine just takes in model and shader data and doesn't care which attribute or sampler index is which; the user of the engine is responsible for loading data that makes sense. In this case the model data format could have a vertex declaration table, where you specify the format & semantic of each vertex element, and these get bound to the attributes in order (see the sketch after this list).


- The engine specifies a convention, for example "attribute index 0 is always position, and it's always a float vector3" or "texture unit 0 is always the diffuse map". In this case the model data doesn't need to contain a full vertex declaration; it's enough to have e.g. a bitmask indicating "has positions" or "has normals".
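A sketch of the vertex declaration table from the first approach, with illustrative names:

```cpp
#include <cstdint>
#include <vector>

// Each element in the table describes one vertex attribute; elements are
// bound to shader attribute locations 0, 1, 2... in table order.
enum class ElementType : uint8_t { Float2, Float3, Float4, UByte4 };
enum class ElementSemantic : uint8_t { Position, Normal, TexCoord, Color };

struct VertexElement
{
    ElementType type;         // data format of this element
    ElementSemantic semantic; // what the element means
};

struct VertexDeclaration
{
    std::vector<VertexElement> elements;
};

// Example: a static mesh with position, normal and one UV set.
VertexDeclaration MakeStaticMeshDeclaration()
{
    return { { { ElementType::Float3, ElementSemantic::Position },
               { ElementType::Float3, ElementSemantic::Normal },
               { ElementType::Float2, ElementSemantic::TexCoord } } };
}
```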

#5216844 How to limit FPS (without vsync)?

Posted by AgentC on 16 March 2015 - 08:22 AM

Graphics APIs do a thing called "render-ahead": depending on the CPU time taken to submit the draw call data to the API and do the other per-frame processing, the CPU may run ahead of the GPU, meaning that after you submit the draw calls for frame x, the GPU is only beginning to draw frame x - 2, for example. This shows up as worse input lag, because the visible results of input, like camera movement, reach the screen delayed. Vsync makes the situation worse: the maximum number of frames to buffer is fixed (typically 3), but with vsync enabled there's more time between frames.


There are ways to combat render-ahead: on D3D9 (and I'd guess OpenGL as well) you can manually issue a GPU query and wait on it, effectively keeping the CPU->GPU pipe flushed. This will worsen performance, though. On D3D11 you can use the API call IDXGIDevice1::SetMaximumFrameLatency().
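For the D3D11 route, a minimal sketch ('device' is assumed to be an already-created ID3D11Device):

```cpp
#include <d3d11.h>
#include <dxgi.h>

// Limit render-ahead to a single queued frame via IDXGIDevice1.
void LimitFrameLatency(ID3D11Device* device)
{
    IDXGIDevice1* dxgiDevice = nullptr;
    if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice1),
                                         reinterpret_cast<void**>(&dxgiDevice))))
    {
        dxgiDevice->SetMaximumFrameLatency(1); // at most 1 frame buffered ahead
        dxgiDevice->Release();
    }
}
```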