Crossbones+ - Reputation: 1529
Posted 30 April 2014 - 05:47 AM
Members - Reputation: 1385
Posted 30 April 2014 - 10:46 AM
Games all vary, and so will their construction. Using a 3D application is fine, as is using a custom in-house tool.
As with modelling, the collision side of things will be handled differently by different folks in different situations, but the end goal is the same: accurate collisions, as fast as possible. You basically want to quickly establish which objects the player MIGHT be colliding with, then, based on that short list, do more detailed testing, i.e. per polygon or primitive if appropriate.
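To sketch that two-phase idea (cheap broad-phase shortlist first, detailed tests only on the survivors), here's a minimal Python example; the `Aabb` and `broad_phase` names are purely illustrative, not from any particular engine:

```python
# Illustrative broad-phase sketch: cheap axis-aligned bounding-box (AABB)
# overlap tests produce the short list of MIGHT-be-colliding objects.
# The expensive per-polygon narrow phase would only run on that short list.

class Aabb:
    def __init__(self, min_x, min_y, max_x, max_y):
        self.min_x, self.min_y = min_x, min_y
        self.max_x, self.max_y = max_x, max_y

    def overlaps(self, other):
        # Boxes overlap only if they overlap on every axis.
        return (self.min_x <= other.max_x and self.max_x >= other.min_x and
                self.min_y <= other.max_y and self.max_y >= other.min_y)

def broad_phase(player_box, world_objects):
    """Return the short list of objects the player MIGHT be colliding with."""
    return [name for name, box in world_objects if player_box.overlaps(box)]
```

After this, you'd hand only the shortlisted objects to a per-polygon or per-primitive test, which is where the real cost lives.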
GDNet+ - Reputation: 10739
Posted 30 April 2014 - 12:16 PM
A "map" in a typical FPS app is actually composed of several different element types. That is, there are static immoveable objects (walls, ground, etc.), moveable objects (boxes, rocks, etc.), characters (animated meshes), attachments (weapons, helmets, clothing, etc.), and so on. Each of those may be modeled in a separate program. There may be an in-house "terrain" modeler designed specifically to create data that is efficient for a particular app to use. Blender or Max or Maya may be used for character models, weapons, attachments, animation sequences, etc. Those models are likely converted to a format that can be efficiently imported/loaded into the game app. Models are normally decorated with textures, which may be created in yet another program.
There are several collision/physics libraries available (Bullet, PhysX, ODE, etc.), each of which requires handling objects differently, often depending on whether the object is immoveable, moveable, simulated with a plane or sphere or cylinder or box, etc. Those physics objects are commonly separate from the model to be rendered and have to be created in yet another fashion compatible with the particular game engine's implementation.
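As a rough illustration of that separation (detailed render mesh paired with a simplified physics proxy, immoveable vs. moveable), here's a hedged Python sketch; none of these class names or fields come from Bullet, PhysX, or ODE, they just show how the two halves of a game object are typically kept apart:

```python
# Illustrative sketch: the renderable model and its physics proxy are
# separate objects paired by the game. All names here are made up for
# this example, not taken from any real physics library.
from dataclasses import dataclass

@dataclass
class RenderModel:
    mesh_file: str        # detailed mesh, used only for drawing

@dataclass
class PhysicsBody:
    shape: str            # simplified proxy: "box", "sphere", "plane", ...
    is_static: bool       # immoveable (walls, ground) vs moveable (crates)

@dataclass
class GameObject:
    visual: RenderModel
    physics: PhysicsBody

# A moveable crate: high-poly mesh for rendering, plain box for physics.
crate = GameObject(RenderModel("crate_highpoly.obj"),
                   PhysicsBody(shape="box", is_static=True == False))
# Immoveable ground: terrain mesh for rendering, infinite plane for physics.
ground = GameObject(RenderModel("terrain.obj"),
                    PhysicsBody(shape="plane", is_static=True))
```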
Please don't PM me with questions. Post them in the forums for everyone's benefit, and I can embarrass myself publicly.
You don't forget how to play when you grow old; you grow old when you forget how to play.
Members - Reputation: 2556
Posted 30 April 2014 - 12:33 PM
I wanted to develop my map, so before developing it I wanted to ask: should I keep every part of the map separate and then render them, or should I render everything at once?
Definitely check out that link I provided. It has a lot of useful information for designing maps.
One thing AAA companies do is bake all texture maps into a 3D model. This includes color/diffuse textures, specular textures, normal textures, ambient occlusion textures, and whatever else you want.
The 3D model, as noted by someone, can be created in your choice of 3D modeling software. I personally use Blender3D, Sketchup, Sculptris, and Wings3D.
Also, a lot of AAA companies use repeated geometry (re-used 3D models) to fill out the map. You could look up the term "modular design" for more information on how it is done.
A lot of people hone their level design skills by modding games. The first game development site I joined was http://www.mapcore.org/
The game engine you use should have collision detection in it. A lot of the time, people don't test collisions against the actual 3D models, but against dummy objects, which are usually basic geometric shapes like a cube or sphere. Testing collisions against a 3D model that has a bunch of faces could result in severe lag, since the computer is calculating a collision against each face on the model. So the 3D model is instead given a bounding box, and collisions are checked against that.
Only test collisions against things that MUST have collision. You don't need to test collisions against scenery objects (objects that are just there for looks, but have no actual functionality). For example, you could test collisions against the ground, but not against every rock and blade of grass on the ground.
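For example, the bounding-volume idea can be reduced to a single distance check: one sphere test replaces thousands of per-face tests, and scenery objects simply get no collision shape at all. A minimal Python sketch (the names are illustrative, not from any engine):

```python
# Illustrative bounding-sphere test: one distance comparison stands in
# for per-triangle collision tests against the detailed 3D model.
import math

def sphere_hit(center, radius, point):
    """True if the point is inside (or on) the bounding sphere."""
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    dz = point[2] - center[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= radius
```

Scenery like grass and pebbles would just never be registered with the collision system in the first place, so they cost nothing.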
If you want to check it out, you can download the engine I use "Maratis3D" and check out this project here:
I use collision detection to simulate the 5 senses.
Edited by Tutorial Doctor, 30 April 2014 - 12:39 PM.
They call me the Tutorial Doctor.
Crossbones+ - Reputation: 4972
Posted 01 May 2014 - 12:15 AM
The replies here are so generic that they confuse even me.
So, let's start easy.
Back when computers had KiBs of RAM, yes, levels were often encoded into the program. This practice is rarely justified nowadays, as the need to squeeze out those savings is no longer there, though there might still be the occasional reason to do it.
FPS games have usually had some kind of editor; basically every major game had at least one game-specific editor. Doom had them. Quake had them. Interestingly, some people have managed to produce Doom or even Quake maps by hand, doing the computations manually, but they would still have to go through a data-load phase (not in code anymore).
Unreal is perhaps the only major exception so far, as it has always been dominated by its own editor, UnrealEd.
Generally, those editors let you work with simplified meshes (cubes, wedges, pyramids) that are manipulated through CSG operations. The resulting meshes are solid by default.
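The CSG operations themselves (union, intersection, subtraction) can be sketched compactly using signed distance functions, where a negative value means "inside the solid". Brush-based editors perform these booleans on polygon meshes instead, but the underlying logic is the same; this Python example is just that logic, with made-up function names:

```python
# Illustrative CSG booleans via signed distance functions (SDFs):
# a shape is a function returning a negative value inside the solid,
# positive outside. Real brush editors do this on mesh geometry.

def sphere_sdf(center, radius):
    """SDF of a sphere: distance to surface, negative inside."""
    def d(x, y, z):
        return ((x - center[0]) ** 2 + (y - center[1]) ** 2
                + (z - center[2]) ** 2) ** 0.5 - radius
    return d

def csg_union(a, b):
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))        # in either

def csg_intersect(a, b):
    return lambda x, y, z: max(a(x, y, z), b(x, y, z))        # in both

def csg_subtract(a, b):
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))       # in a, not b
```

Subtracting one brush from another is how editors carve doorways and rooms out of solid space.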
This approach is inadequate on its own for modern games (say, those of the last 10 years); it can still get a lot of work done, but it's no longer sufficient.
Nowadays the level might still be "ironed out" in the editor, but the contribution from other DCC tools is bigger and bigger. Those external programs get integrated into the workflow through filters or importers, which usually offer fairly fine-grained control. Keep in mind that using a DCC tool (such as Blender) for level creation is likely to be less time-efficient than using a purpose-built editor, but it's still far better than cracking your head writing one.