

Member Since 12 Mar 2005
Online Last Active Today, 03:52 PM

Posts I've Made

In Topic: how to handle Immensely large multiplayer worlds?

21 October 2016 - 02:55 PM

Like I said, everyone misses the point. IT'S MULTIPLAYER, not single player, so even if one player cannot see something, the server still needs to keep track of it.


The things that exist on the server are different than the things that exist on the client.


There is nothing preventing you from using fixed point math in one representation --- as discussed, with the world stored as a 64-bit fixed point integer --- while also having a floating point representation used by your client game engine.


The most common practice I read about for overcoming floating point precision in video games with large worlds is dividing the world into cells, loading cells based on the player's position, and shifting the origin of objects relative to the player.


Note that this is extremely common because it is very effective.


Floating point works well for most games, assuming a meter scale, out to about 4 km from the center. That's +/- 4 km in x and in y, or about 64 square kilometers, or about 25 square miles.  That's an enormous area.
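To see where that 4 km figure comes from, here is a small sketch that measures the spacing between adjacent 32-bit floats (the "ulp") at various distances from the origin. Python's own floats are 64-bit, so this uses `struct` to round-trip through the 32-bit format; `float32_ulp` is a helper name made up for this example.

```python
import struct

def float32_ulp(x):
    """Distance to the next representable 32-bit float above x (x >= 0)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    nxt = struct.unpack('<f', struct.pack('<I', bits + 1))[0]
    return nxt - x

# Precision degrades as magnitude grows: sub-millimeter out to a few km,
# centimeter-scale jitter by 100 km.
for meters in (1.0, 100.0, 4000.0, 100000.0):
    print("%9.0f m -> precision ~%.6f mm" % (meters, float32_ulp(meters) * 1000))
```

At 4 km the representable step is still about a quarter of a millimeter, which is why a +/- 4 km play space around the origin holds up fine at meter scale.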


Occasionally, perhaps after you've traveled a kilometer or two (a mile or so), you shift the world so you're at a new center location.
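That re-centering step can be sketched as follows, assuming the authoritative world positions are kept as 64-bit integer millimeters (in the spirit of the fixed point representation mentioned earlier) and the float positions handed to the renderer are always relative to a movable origin. The names here are illustrative, not from any particular engine.

```python
WORLD_SCALE = 1000            # world positions stored as integer millimeters
RECENTER_DISTANCE = 1_000.0   # shift the origin after roughly a kilometer

class FloatingOrigin:
    def __init__(self):
        self.origin_mm = (0, 0)   # authoritative 64-bit integer position

    def to_render_space(self, world_mm):
        """Float position of a world-space point relative to the current origin."""
        return ((world_mm[0] - self.origin_mm[0]) / WORLD_SCALE,
                (world_mm[1] - self.origin_mm[1]) / WORLD_SCALE)

    def maybe_recenter(self, player_mm):
        """Once the player drifts far enough, move the origin to the player."""
        x, y = self.to_render_space(player_mm)
        if (x * x + y * y) ** 0.5 > RECENTER_DISTANCE:
            self.origin_mm = player_mm
```

The server keeps working in the integer representation throughout; only the client-side float coordinates get rebased, so render-space values stay small and precise.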



Even for large worlds it is rare for RPG-style games to cover more distance than that in a single region.   WoW is currently on the order of 60 square miles of actual content, and most of the older areas are vacant ghost towns. Skyrim and Dragon Age Inquisition are both on the order of 20 square miles of content. GTA 5 covered about 100 square miles, but the vast majority of it was fake/duplicate buildings. It was probably around 2-3 square miles of actually modeled content plus about 10 miles of modeled roadway segments.

In Topic: TDD and predefined models

21 October 2016 - 02:44 PM

It works however you end up making it work for you.


The philosophy of test driven is embodied in "Red-Green-Refactor".


That is:  Modify the test suite so it covers what you want to do; the tests go red, they fail.   Then modify the code so it implements the new behavior; the tests go green, they pass.  Finally, clean up the code so it looks pretty and is easy to work with, and verify that all tests still pass; they should stay green.



This is different from testing afterward:  Modify the code so it does what you expect, tests go red. Look at the tests that failed, make them go green.


There are several risks with testing afterward.

The key risk is that the tests can be modified to not test what you actually expect them to test: they go green because they don't test anything and always pass. Basically your tests go green but are testing the wrong behavior.

The next big risk is that the code doesn't fit nicely into a test harness. When you're developing the tests and the code at the same time, you can make sure there are mock objects or injection points or abstractions, and that all of them work. If you develop the tests later, it may require rewriting stuff you already built, which is an additional development cost.

Yet another big risk is that you are making changes that weren't covered by a test; by never watching a test change from fail to pass, you never verify that the functionality is actually tested.
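The "injection point" idea can be shown with a toy example: a function that reads the wall clock directly is awkward to unit test, while one designed alongside its test gets a seam where a fake can be supplied. The function names here are made up for illustration.

```python
import time

def is_expired_untestable(deadline):
    # Hard to unit test: depends directly on the real wall clock.
    return time.time() > deadline

def is_expired(deadline, now=time.time):
    # Designed with testing in mind: the clock is an injectable seam,
    # so a test can supply a deterministic fake instead of waiting.
    return now() > deadline

# A test injects a fake clock and gets a repeatable answer:
assert is_expired(100.0, now=lambda: 150.0) is True
assert is_expired(100.0, now=lambda: 50.0) is False
```

Retrofitting that seam onto `is_expired_untestable` after the fact means changing code that already shipped; writing the test first forces the seam to exist from the start.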

In Topic: TDD and predefined models

21 October 2016 - 09:20 AM

Regarding how tests make things permanent:



You may write a unit test for code.  Say it gives an input of 3, 5, and 9, and the output is "M".  That becomes the permanent behavior. The test will always pass as long as 3, 5, and 9 produce "M".
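That pinning behavior looks something like this in practice; `classify` is a hypothetical function standing in for real production code, and the test only cares that (3, 5, 9) keeps producing "M".

```python
import unittest

def classify(a, b, c):
    # Hypothetical function under test; stands in for real production code.
    return "M" if (a, b, c) == (3, 5, 9) else "?"

class TestClassify(unittest.TestCase):
    def test_three_five_nine_gives_m(self):
        # Pins the behavior: stays green as long as (3, 5, 9) -> "M".
        self.assertEqual(classify(3, 5, 9), "M")

# Run just this case without an external test runner:
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestClassify))
```

Thousands of these small pins, accumulated over time, are what make an accidental behavior change show up immediately.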


As your test suite grows, you have thousands of tests for thousands of different pieces of code.  If you change something and the behavior is modified, the tests fail.  If your change means 3, 5, and 9 now produce "X", it will show up in the test.



Once you have a comprehensive suite of tests, every functional change you make to the existing software should change the test results.  No matter what functionality you are modifying, it should be covered by something in the tests.


Hence, tests help make code permanent.  



Note also that they don't necessarily make code correct, only permanent.  The correct result for 3, 5, and 9 might have been "Q", but if the test says it is "M" then that is the result being tested.  Tests should be designed to help minimize defects, but sometimes bugs exist both in the main code and the test code.




TDD with an existing suite of tests means you modify the test first so the test fails, then modify the code to make the tests pass.  In this example you may modify the test so 3, 5, and 9 should result in "Q", and the test immediately fails.  Then modify the main code to the new version, which should make the test pass.  Finally you can clean up anything that needs cleanup, which should not change any test results, verified by rerunning the tests and having them still pass.




This is why automated tests for experimental code, or for systems whose design is in flux, are usually a bad idea.  Any time your system changes, you need to change both the test code and the main code.


However, for engine code, core code, shared code, anything where the system needs to stay permanent, that is where automated tests are amazing.

In Topic: TDD and predefined models

20 October 2016 - 05:38 PM

You create the test that fails first. This ensures that your test actually tests the thing. Without that rule, people write tests that don't actually test the thing they expect; such a test passes the first time they run it because the test is wrong, and it passes forever after because it tests the wrong thing.

The "minimum" amounts given in tutorials about how to use TDD tend to be oversimplified. The goal I aim for personally is a 3-5 minute window. If it is functionality I can write in 3-5 minutes, and it is a single piece of unit testable code, that makes a good test. If it isn't unit testable for various reasons --- maybe it should be an integration test or acceptance test or something else --- then don't write it as a unit test.

Just keep in mind the purpose of testing and the goals of TDD.

Tests make code permanent. You test for the things you want to remain unchanged over time. TDD works best when you know what you want that permanent behavior to be. Unit tests should only test individual 'unit' behavior, generally a transformation of input to output, or input to event, or similar.

If you are writing code that has no permanent behavior, perhaps you are building an experimental system to try a game mechanic, then don't write automated tests for it. That doesn't make sense for TDD.

If you are writing code that has no long life, perhaps throwaway or one-off code, then don't write automated tests for it. That doesn't make sense for TDD.

If you are writing code that is longer than a unit, something that takes multiple calls or multiple events, or works with data from disk or from the network or from a database or from a user, then testing it is not a unit test. It is one of many other types of tests. While it may work for TDD if you are building a suite of acceptance tests, it generally doesn't work well for the Red-Green-Refactor style of TDD.

In Topic: Next Step

19 October 2016 - 09:49 PM

Tic tac toe is great because it doesn't require graphics. Just print out three lines of text, then accept some input:

1 2 3
4 5 6
7 8 9

(input 5)

1 2 o
4 x 6
7 8 9

(input 1)

x 2 o
4 x 6
7 8 o


The easy random-move AI can be coded up in a matter of minutes by an experienced developer.
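A minimal sketch of that loop, matching the board display above (win checking is left out; the function names are just for illustration):

```python
import random

board = list("123456789")   # each cell shows its number until claimed

def show(b):
    print(" ".join(b[0:3]))
    print(" ".join(b[3:6]))
    print(" ".join(b[6:9]))

def ai_move(b):
    # The "easy" AI: pick any open cell at random.
    open_cells = [i for i, c in enumerate(b) if c not in "xo"]
    if open_cells:
        b[random.choice(open_cells)] = "o"

def player_move(b, cell):
    # cell is 1-9, as numbered on the displayed board.
    if b[cell - 1] not in "xo":
        b[cell - 1] = "x"
        ai_move(b)

player_move(board, 5)
show(board)
```

From there it's a small step to add a win check after each move and loop on input until the board fills up.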