1. How would a rendering engine work or be designed? Would it have a set of classes that manage meshes and decide how they are rendered through a customized abstraction layer, and what would be some good practices for creating an abstraction layer?
First off, OpenGL and DirectX are merely sets of APIs that interface with the graphics pipeline. So a 3D engine basically begins by wrapping basic constructs of these APIs and applying layer upon layer of abstraction. For example, OGRE3D offers ways to create vertex and index buffers regardless of the rendering API being used, since it abstracts the DirectX and OpenGL APIs from the user. It then offers higher-level classes to perform various common tasks, such as a ManualObject that wraps creating these vertex/index buffers and pushing them to the GPU. All a user of a ManualObject needs to do is feed the class the vertices and indices.
3. How would you handle input precisely? What I mean is: how do you specifically program the input to work properly? Would you use booleans to determine which key is pressed?
First off, input is generally platform- and language-specific. Depending on the target platform and language, you'll basically have a wrapper that listens for input signals. You then need to turn those signals into some "state", which for keyboards is typically an array where each entry is either 1 (pressed) or 0 (released). For analog input such as a mouse, you'll need a bit more structure to how you store the state, but inevitably it's similar.
The most important aspect of input is that however you handle state, you merely capture it and dispatch it at predefined points in the game loop, rather than dispatching it immediately when it happens. This ensures that state remains valid throughout a single frame, rather than having some parts of your game behave as if a key wasn't pressed while other parts of the same frame see the key as pressed.
Keyboard and mouse events are captured whenever the platform OS dispatches them, and the input system caches them. At the top of the frame, those cached events are dispatched to two important systems in the following order: first the GUI system, then the action system.
This allows input that could be affecting the GUI (such as typing into a textbox) to get first dibs on saying the input was handled. If it was handled, input doesn't get dispatched to the other layers. If the input isn't handled, it gets dispatched to the action system where it can turn something like the 'W' key into a MoveForwardAction. Our input system also maintains an internal state table of these events at the top of the frame so that systems that would rather poll for input state (e.g. is key 'W' pressed or not) can do so and doesn't have to concern themselves with actions or events.
4. How are physics applied to a mesh? Is there something called a rigid dynamic body, which is basically the same shape as the mesh, covers collision detection, and determines which part of the mesh collides with other objects?
I personally prefer to simply leverage an existing physics library and hook into its simulation step. Generally speaking, most physics implementations require that you first determine whether your object is a rigid body or a soft body. Then you assign a shape to the physics object (box/capsule/sphere/mesh/custom). Then, when the simulation is ticked, you can query it to determine which objects collided and which ones moved, and update your own scene objects accordingly.
5. How is it all combined into game logic? How would you combine 3D graphics, input, sound, and physics together to create a playable actor?
6. How does a game loop work? Say you have a game loop and you call some events in it; would you have to update the game loop every time?
There are usually two approaches to gluing these things together and it depends usually on the game's complexity.
For a simple game, a GameObject hierarchy that relies on inheritance will work just fine. You have some GameObject class that you begin to split into things like Player, NPC, Enemy, etc., and go from there. But as your game's complexity grows, you will start to see the pitfalls of this approach.
For more complex games, it's better to favor composition over inheritance hierarchies. You begin to decompose your game objects into bits and pieces of functionality, then construct your game objects as containers of those pieces. If you read up on Entity/Component or component-based systems, you'll start to get an idea of how powerful composition can be in a GameObject system compared to the traditional approach above.
Lastly, a program by its very nature is a 'loop' of sorts, regardless of whether it carries out its set of operations a single time or repeats itself until some trigger signals that execution must end.
In a game, this loop basically dictates an order of operations: the operations that initialize the game; the operations that are repeated over and over, such as 1) gather input, 2) update logic, 3) render to the back buffer, 4) swap buffers, and 5) perform post-frame operations; and lastly the operations that perform cleanup. Hitting the escape key is captured during step 1, and some system sets the game loop's stop variable. Steps 2-5 still happen for that frame, and when the top of the loop checks the stop variable, it exits the loop.
Hope all that helps.
Edited by crancran, 20 January 2014 - 11:28 AM.