
GuyWithBeard

Member Since 04 May 2006

#5302288 Game Actors Or Input Components?

Posted by GuyWithBeard on 24 July 2016 - 04:54 AM

I would advise against having an input component that reads straight off the input hardware. It makes it unnecessarily difficult to switch the object that is controlled by the player. For example, let's say you have a character that walks over to a car, gets in, and drives off. At some point you want to switch the controlled object to the car, possibly removing the character object at the same time. While this is certainly possible with an input component, you have to be very careful: the car and the character might both exist for a frame or two, and both might read a key press that causes them to do something only a single object should ever do at any point in time.

Another example is having the capability to control an enemy character during development. Being able to switch to another character by clicking on them is priceless when testing certain gameplay features.

The solution to this is to have an additional layer between the input hardware and the objects, called a controller. In my engine each controller can have zero or one object, called its "avatar", which is the object the controller controls at the moment. I believe Unreal calls this "possessing". Each human player that joins a game gets their own controller (which can also handle things like key-remapping etc. that game objects really should not care about).

Communication between the controller and the object is done through messages. This means that you can make a common movement interface for both your player character and enemies and simply switch the avatar object of the controller. The movement will work the same for both. If the player character can jump but the enemies can't, they just won't react to the jump message.
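
To make this concrete, here is a minimal sketch of the idea. All names are made up for illustration, and a real implementation would use proper message objects rather than raw strings:

#include <string>

// Anything that can be "possessed". It handles the messages it
// understands and silently ignores the rest.
class Controllable
{
public:
    virtual ~Controllable() {}
    virtual void handleMessage(const std::string& msg) = 0;
};

class Character : public Controllable
{
public:
    void handleMessage(const std::string& msg) override
    {
        if (msg == "move_forward") { /* walk */ }
        else if (msg == "jump") { /* jump */ }
    }
};

class Car : public Controllable
{
public:
    void handleMessage(const std::string& msg) override
    {
        if (msg == "move_forward") { /* accelerate */ }
        // No "jump" case: a car simply does not react to that message.
    }
};

// The layer between the input hardware and the game objects.
class PlayerController
{
public:
    void possess(Controllable* avatar) { mAvatar = avatar; }

    // Called by the input layer after key-remapping has been applied.
    void send(const std::string& msg)
    {
        if (mAvatar) mAvatar->handleMessage(msg);
    }

private:
    Controllable* mAvatar = nullptr;
};

Switching from the character to the car is then a single possess() call, and there is never a frame where both objects read the hardware directly.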

Regarding AI characters, I am not really sure having their behavior be driven through a controller is necessary. In networked games it makes sense, but for single player games I usually have a component that drives their internal AI logic. If the object becomes the avatar of a player controller, the AI component simply switches off.


#5297574 Problems with "stickiness"

Posted by GuyWithBeard on 22 June 2016 - 06:42 AM

Can you not have high-poly objects for rendering and lower-poly physics spheres if the physics geometry is what is driving your "stickiness"...?




#5296729 Merging groups of entities into convex hulls

Posted by GuyWithBeard on 15 June 2016 - 03:12 PM

I made a little progress.

 

In my system horizontal merging is a lot more important than vertical because the visibility system is essentially 2D. Every object has an upper and a lower bound so they can be included/excluded in the visibility calculations as the observer moves up and down, but the construction of the line-of-sight geometry is all done in the X-Z-plane (Y is up in my engine).

 

So I came up with this algorithm (dunno if this is an established way of doing things, but it is working fine for me; a code sketch of the core area test follows the list):

  • Get a list of objects that we consider for merging, called the "main list" below
  • Pick the first object in the list, called "reference box", or "ref box" below
  • Check if the shape of the object is a box; if not, we cannot merge it, so just submit it as-is
  • If the shape is a box, we will try to merge it with other boxes
  • Get a new list, called the "merge list" containing all objects except the ref box
  • Iterate over the objects in the merge list and process those that are boxes, sorted by distance to the ref box (closest first)
  • For each box, check if the upper and lower bounds are the same as the ref box's; if not, we cannot merge
  • If the bounds match the ref box, calculate 2D convex hulls for both shapes in the X-Z-plane, then calculate the areas of the convex hulls
  • Calculate the convex hull of the combined shape, then calculate the area of that convex hull
  • Check if the sum of the original areas of the two convex hulls roughly equals the new combined area
  • If the areas are (roughly) equal, we can merge the boxes together
  • Continue with the rest of the objects in the merge list, attempting to merge each with the ever-growing ref box
  • Remove all processed objects from the main list
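
Here is the code sketch mentioned above, covering the core area test. It is self-contained, but all the names are mine; Vec2 holds X and Z since Y is up in my engine, and the epsilon is whatever "roughly equal" means for your units:

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { float x, z; };

// Positive if the points o -> a -> b make a counter-clockwise turn.
static float cross(const Vec2& o, const Vec2& a, const Vec2& b)
{
    return (a.x - o.x) * (b.z - o.z) - (a.z - o.z) * (b.x - o.x);
}

// 2D convex hull in the X-Z-plane (Andrew's monotone chain).
std::vector<Vec2> convexHull2D(std::vector<Vec2> pts)
{
    if (pts.size() < 3) return pts;
    std::sort(pts.begin(), pts.end(), [](const Vec2& a, const Vec2& b)
        { return a.x < b.x || (a.x == b.x && a.z < b.z); });
    std::vector<Vec2> hull(2 * pts.size());
    size_t k = 0;
    for (size_t i = 0; i < pts.size(); i++) // lower hull
    {
        while (k >= 2 && cross(hull[k - 2], hull[k - 1], pts[i]) <= 0.0f) k--;
        hull[k++] = pts[i];
    }
    for (size_t i = pts.size() - 1, t = k + 1; i > 0; i--) // upper hull
    {
        while (k >= t && cross(hull[k - 2], hull[k - 1], pts[i - 1]) <= 0.0f) k--;
        hull[k++] = pts[i - 1];
    }
    hull.resize(k - 1);
    return hull;
}

// Area of a polygon (shoelace formula).
float polygonArea(const std::vector<Vec2>& poly)
{
    float area = 0.0f;
    for (size_t i = 0; i < poly.size(); i++)
    {
        const Vec2& a = poly[i];
        const Vec2& b = poly[(i + 1) % poly.size()];
        area += a.x * b.z - b.x * a.z;
    }
    return std::fabs(area) * 0.5f;
}

// True if the two footprints can be merged: the hull of the combined
// points must not cover (much) more area than the two hulls did separately.
bool canMerge(std::vector<Vec2> a, const std::vector<Vec2>& b,
              float epsilon = 0.01f)
{
    const float separate = polygonArea(convexHull2D(a)) + polygonArea(convexHull2D(b));
    a.insert(a.end(), b.begin(), b.end());
    const float merged = polygonArea(convexHull2D(a));
    return std::fabs(merged - separate) <= epsilon;
}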

Using this technique I managed to nicely merge most of the geometry in my test level. Check out the picture below:

 

[Image: merging.png, the test level with the merged boxes drawn in color]

 

The colored boxes represent box objects. The colors are picked from a short list, so the same color sometimes appears on adjacent objects. From the image you can see that the algorithm works with both world-space axis-aligned objects and rotated ones. You can also see (on the wall of boxes in the lower left corner) that the algorithm does not merge objects vertically.

 

In the test level above the number of objects decreased from 266 to 45.

 

Fun!




#5294979 Frustum-based Interest Management?

Posted by GuyWithBeard on 04 June 2016 - 10:43 AM

Bungie's Halo networking model uses this (among many other things). Check out this Halo Reach talk:

 

http://www.gdcvault.com/play/1014345/I-Shot-You-First-Networking




#5290580 Basic level editor

Posted by GuyWithBeard on 07 May 2016 - 03:21 PM

Maybe Tiled? http://www.mapeditor.org/




#5288106 Question about adding bullets to arraylists

Posted by GuyWithBeard on 22 April 2016 - 02:40 AM

Also, it's not really good forum etiquette to remove your questions and replace them with "[solved]". That way no one else can benefit from the question and answer later on. When your issue is solved, reply to the thread letting other people know what the problem was. Then try to learn from the experience.




#5280933 How do you generate a Navigation Grid for 3D environments?

Posted by GuyWithBeard on 12 March 2016 - 02:37 PM

Well they are hardly squares since we are talking about a 3D environment here. Anyway, the voxels produced by Recast should be exactly what you need.

 

If you cannot figure out where the generation happens then you haven't looked very thoroughly at the code (in the file that I suggested earlier).

 

https://github.com/recastnavigation/recastnavigation/blob/master/RecastDemo/Source/Sample_SoloMesh.cpp

 

The complete navmesh generation is in Sample_SoloMesh::handleBuild(), but you don't necessarily need to do all steps 1-7. Step 2 is concerned with creating the voxel height field. Search for duDebugDrawHeightfieldSolid() to see how the demo app draws the voxels.
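
For reference, the voxelization part (step 2) boils down to roughly the following. This is condensed from the sample; the cell sizes are example values and all error handling is omitted:

#include <cstring>

#include "Recast.h"

// verts/nverts and tris/ntris are your input triangle soup,
// bmin/bmax its bounding box.
rcHeightfield* buildVoxelMold(rcContext& ctx,
                              const float* verts, int nverts,
                              const int* tris, int ntris,
                              const float* bmin, const float* bmax)
{
    rcConfig cfg;
    memset(&cfg, 0, sizeof(cfg));
    cfg.cs = 0.3f;                   // cell (voxel) size in world units
    cfg.ch = 0.2f;                   // cell height
    cfg.walkableSlopeAngle = 45.0f;
    cfg.walkableClimb = 4;           // in voxels
    rcVcopy(cfg.bmin, bmin);
    rcVcopy(cfg.bmax, bmax);
    rcCalcGridSize(cfg.bmin, cfg.bmax, cfg.cs, &cfg.width, &cfg.height);

    rcHeightfield* solid = rcAllocHeightfield();
    rcCreateHeightfield(&ctx, *solid, cfg.width, cfg.height,
                        cfg.bmin, cfg.bmax, cfg.cs, cfg.ch);

    // Flag the walkable triangles, then rasterize them into the
    // heightfield. This is where the voxels get created.
    unsigned char* triareas = new unsigned char[ntris];
    memset(triareas, 0, ntris * sizeof(unsigned char));
    rcMarkWalkableTriangles(&ctx, cfg.walkableSlopeAngle,
                            verts, nverts, tris, ntris, triareas);
    rcRasterizeTriangles(&ctx, verts, nverts, tris, triareas, ntris,
                         *solid, cfg.walkableClimb);
    delete[] triareas;

    return solid;
}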




#5280919 How do you generate a Navigation Grid for 3D environments?

Posted by GuyWithBeard on 12 March 2016 - 12:44 PM

Not exactly sure what you mean by squares, but Recast does indeed generate a voxel mold out of the geometry as an intermediate step when generating a nav mesh. I haven't used the voxel mold directly but I don't see why that wouldn't be possible. Try out the Recast demo with some of your geometry to see if the results look like something you could use. The demo app can draw the voxels for you.

 

What exactly don't you understand about the generation phase? The samples that come with Recast are very well documented. Check out Sample_SoloMesh for example.




#5275706 What are you guys using to simulate packet loss/latency while testing?

Posted by GuyWithBeard on 15 February 2016 - 01:33 AM

This tool works fairly well:

 

https://www.softperfect.com/products/connectionemulator/

 

It does not allow you to mess with the loopback interface though, so you basically have to have two boxes (or two physical network interfaces). Some people build this sort of thing right into their network layer. That way you don't need any separate software at all.
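
If you go the build-it-into-your-network-layer route, the core of it is small. Here is a minimal sketch (all names made up): outgoing packets pass through a queue that drops some of them at random and delays the rest:

#include <chrono>
#include <cstdint>
#include <deque>
#include <random>
#include <vector>

using Clock = std::chrono::steady_clock;

class PacketConditioner
{
public:
    PacketConditioner(float lossChance, int delayMs)
        : mLossChance(lossChance), mDelay(delayMs), mRng(std::random_device{}()) {}

    // Call this instead of sending directly.
    void enqueue(std::vector<uint8_t> packet)
    {
        if (std::uniform_real_distribution<float>(0.0f, 1.0f)(mRng) < mLossChance)
            return; // simulated packet loss: silently drop it
        mQueue.push_back({ Clock::now() + mDelay, std::move(packet) });
    }

    // Call this every frame; sends the packets whose delay is up.
    template <typename SendFn>
    void flush(SendFn send)
    {
        const Clock::time_point now = Clock::now();
        while (!mQueue.empty() && mQueue.front().deliverAt <= now)
        {
            send(mQueue.front().data);
            mQueue.pop_front();
        }
    }

private:
    struct Delayed
    {
        Clock::time_point deliverAt;
        std::vector<uint8_t> data;
    };

    float mLossChance;
    std::chrono::milliseconds mDelay;
    std::mt19937 mRng;
    std::deque<Delayed> mQueue;
};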

 

EDIT: I almost forgot. There is this thing as well: https://github.com/jagt/clumsy I have not used it though.




#5273448 Good tutorial's for 2D voxel engine development with unity5

Posted by GuyWithBeard on 31 January 2016 - 04:52 AM

2D voxel? Wouldn't that just be a pixel?




#5273275 Drawin Mesh (without model file)

Posted by GuyWithBeard on 29 January 2016 - 04:08 PM

You can certainly do this. The rendering API has no idea where the data originally came from (a model file, a network stream, or even data defined at runtime).

 

You just need a way to define each vertex (a vertex buffer), a way to define how the vertices are ordered into triangles (an index buffer), and to fill those with the data you want. For texturing, just make sure your vertices have UV coordinates and set the texture you want to use. The exact details of this depend on the engine/framework you are using and on the shaders used.
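
For example, assuming D3D11 (any API has an equivalent), a mesh defined completely at runtime needs nothing more than this (error handling omitted):

#include <cstdint>
#include <d3d11.h>

struct Vertex { float pos[3]; float uv[2]; };

// A quad defined at runtime, no model file anywhere in sight.
static const Vertex quadVerts[] = {
    { { -1.0f, 0.0f, -1.0f }, { 0.0f, 0.0f } },
    { {  1.0f, 0.0f, -1.0f }, { 1.0f, 0.0f } },
    { {  1.0f, 0.0f,  1.0f }, { 1.0f, 1.0f } },
    { { -1.0f, 0.0f,  1.0f }, { 0.0f, 1.0f } },
};
static const uint16_t quadIndices[] = { 0, 1, 2, 0, 2, 3 };

ID3D11Buffer* createBuffer(ID3D11Device* device, const void* data,
                           UINT byteWidth, UINT bindFlags)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = byteWidth;
    desc.Usage = D3D11_USAGE_IMMUTABLE;
    desc.BindFlags = bindFlags; // vertex or index buffer

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = data; // the runtime-defined data goes in here

    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, &init, &buffer);
    return buffer;
}

// Usage:
// vb = createBuffer(device, quadVerts, sizeof(quadVerts), D3D11_BIND_VERTEX_BUFFER);
// ib = createBuffer(device, quadIndices, sizeof(quadIndices), D3D11_BIND_INDEX_BUFFER);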

 

In theory you could even generate the data completely on the GPU using e.g. the geometry shader, but let's not go there. My point is just that the CPU does not even need to know about your meshes.




#5273270 Game Command Console

Posted by GuyWithBeard on 29 January 2016 - 03:50 PM

I have a similar system set up. I use fastdelegates for hooking up executable functions to console commands (you can just as easily use std::function if you want). A very simple system could look like this:

#include <map>
#include <string>

#include "FastDelegate.h"

class Console
{
public:
    typedef fastdelegate::FastDelegate0<void> CommandCallback;

    void registerCommand(const std::string& commandName, const CommandCallback& cbck)
    {
        // Add to map here...
        mRegisteredCommands[commandName] = cbck;
    }

private:
    std::map<std::string, CommandCallback> mRegisteredCommands;
};

After parsing a command you just look up the correct callback in the mRegisteredCommands map and execute it. You can even support passing arguments to the callbacks. If you use fastdelegates you can save all mementos to the same map regardless of the number and types of arguments. You just need a way to recreate the proper callback before doing the actual call.

 

Another way (and the way I am doing it) is to have all registered functions be FastDelegate0<void>s, i.e. parameterless functions, but to allow (and indeed require) the functions that are registered with arguments to pop them off a temporary storage stack that is filled by the console during the command parsing phase.
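
A sketch of that approach, using std::function instead of fastdelegates to keep it self-contained; the console pushes the parsed arguments in reverse so the command pops them in order:

#include <functional>
#include <map>
#include <sstream>
#include <stack>
#include <string>
#include <vector>

class Console
{
public:
    void registerCommand(const std::string& name, std::function<void()> fn)
    {
        mCommands[name] = std::move(fn);
    }

    // Registered functions call this to fetch their arguments.
    std::string popArg()
    {
        std::string arg = mArgs.top();
        mArgs.pop();
        return arg;
    }

    void execute(const std::string& line)
    {
        std::istringstream ss(line);
        std::string name;
        ss >> name;

        // Push the arguments in reverse so the first one is popped first.
        std::vector<std::string> args;
        std::string arg;
        while (ss >> arg) args.push_back(arg);
        for (auto it = args.rbegin(); it != args.rend(); ++it) mArgs.push(*it);

        auto found = mCommands.find(name);
        if (found != mCommands.end()) found->second();
    }

private:
    std::map<std::string, std::function<void()>> mCommands;
    std::stack<std::string> mArgs;
};

// Usage, e.g. a command that takes one float argument:
// console.registerCommand("set_speed",
//     [&console] { float speed = std::stof(console.popArg()); /* use it */ });
// console.execute("set_speed 2.5");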




#5271039 3D Editor Transform Gizmos(Handles)

Posted by GuyWithBeard on 14 January 2016 - 07:16 AM

I do something similar to Daixiwen. If you want the gizmo to remain the same size onscreen with a perspective camera you can use the following code to set the scale:

const float gizmoSize = 0.2f;
float scale = gizmoSize * ((camPos - gizmoPos).length() / tanf(cam->getFovY() / 2.0f));

You probably want to clamp the scale between some values that make sense for you. The gizmoSize depends on how large the gizmo mesh is, how large you want it to be etc.




#5269562 good Game Development resources

Posted by GuyWithBeard on 06 January 2016 - 12:12 AM

Since you want to create your own 3D engine I am going to recommend Frank Luna's Introduction to 3D Game Programming with DirectX 11. Awesome book:

 

http://www.amazon.com/Introduction-3D-Game-Programming-DirectX/dp/1936420228/ref=sr_1_2?ie=UTF8&qid=1452060625&sr=8-2

 

It assumes you have a fairly good understanding of C++ but nothing too advanced. It also teaches you a fair bit of 3D math that is the same regardless of API, language or platform.




#5269561 Which development kit to use for my new project?

Posted by GuyWithBeard on 06 January 2016 - 12:06 AM


> You should never use the term “dev kit” to refer to anything but specialized hardware and related development tools

 

Well, Wikipedia says the following about SDKs: "A software development kit (SDK or "devkit") is typically a set of software development tools that allows the creation of applications for a certain software package, software framework, hardware platform, computer system, video game console, operating system, or similar development platform." I don't agree that devkit necessarily means hardware.

 

Anyway, that's beside the point. Like Nypyren said, it sounds like you are better off using an API such as WinForms (or WPF) for what you have in mind. The database connection is easy enough with an ADO.NET connector, and you can still hook into the GPU if you want to use the hardware for bitmap manipulation with something like SharpDX. As for rendering, in my level editor I am drawing with DirectX into a WinForms panel and it works like a charm. The panel can then obviously be used like any other WinForms control.
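
If you ever want to do the rendering part from native code instead, the only real trick is that a WinForms panel is just a window handle (Panel.Handle). A rough sketch of pointing a D3D11 swap chain at it:

#include <d3d11.h>

bool createDeviceForPanel(HWND panelHandle, int width, int height,
                          ID3D11Device** device, ID3D11DeviceContext** context,
                          IDXGISwapChain** swapChain)
{
    DXGI_SWAP_CHAIN_DESC sd = {};
    sd.BufferCount = 1;
    sd.BufferDesc.Width = width;
    sd.BufferDesc.Height = height;
    sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    sd.OutputWindow = panelHandle;   // the panel's HWND
    sd.SampleDesc.Count = 1;
    sd.Windowed = TRUE;

    return SUCCEEDED(D3D11CreateDeviceAndSwapChain(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        nullptr, 0, D3D11_SDK_VERSION,
        &sd, swapChain, device, nullptr, context));
}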

 

Hope that helps.





