

Member Since 01 Oct 2003
Offline Last Active Yesterday, 09:38 AM

Topics I've Started

Changing Username

26 May 2013 - 01:42 AM

Forgive me if this is in the wrong forum, but I didn't see a "Website Questions" forum.


How do I change my username?  I don't mean my display name, I mean the name I have to type in to actually log in.  I changed my display name, and that change seems to have worked, but I can't use it to log in - I still need to enter my old username.


I created my account many years ago, before I first began using the internet handle that has become my standard.  I've changed my display name to reflect this, but I'd like to change my username as well, as it's much easier to remember when logging in to websites and forums.



Looking for Specific Tron-like 3D PC Game

26 May 2013 - 01:33 AM


I'm looking for a specific game that I played many years ago, but I can't remember its name.  I'm asking here because I think I first found it on these forums or in the old Image of the Day feature (and if not, I'm sure enough of you have played games that one of you might recognize it! :).  Unfortunately, the search criteria are vague enough that I can't seem to narrow down my Google searches to find it.


-It was a 3D, first-person PC game.


-It took place inside a virtual, cyber world, whose atmosphere (and graphic theme) was VERY similar to the cyber world of the Tron universe - bright glowing colors, basic polygonal structure.  (This is what I mean by "Tron-like" - not that it was a light-cycle game or an adaptation of the original arcade game).


-The goal was to sneak around and achieve certain objectives.  You walked around the cyber world/city, into buildings, and there were computer terminals you could access (and possibly hack?).  I believe some doors were locked, and part of the objective was finding keys or accessing these locked areas.


-I do remember a major part of the game was trying to figure out how to play the game - learning about the world and how it worked was part of the point of the game, I think.


-Outside, in the city streets, I think some Tron-like tanks or robot sentinels were roving around, which had to be avoided.


-It was a downloadable freeware game.  Note that it WASN'T a Tron game (official or otherwise), just in a setting clearly inspired by Tron.  So not Tron 2.0.


-It came out many years ago (well before Tron: Legacy).  I think I played it sometime between 2004 and 2009 (likely in the early half of that time range, rather than later).


Does any of this ring a bell for anyone?  If so, I'd greatly appreciate it if you could point me in the right direction.  Looking for "game inspired by Tron" that isn't actually a Tron game or a lightcycle game has yielded nothing for me, as you can imagine.



Cleaning Up - Stalling When Deleting Huge Arrays

11 January 2012 - 02:40 PM


I have written some numerical simulation code in C++, which runs in a Win32 console. Nothing fancy.

My code deals with huge amounts of randomly-generated numerical data, but even so, it runs pretty fast and smoothly.

However, I have found that - once the simulation is complete - the program appears to stall at the end, when deleting all of the program objects and allocated memory. So the simulation runs fine, except at the end, during cleanup.

My simulation, at the beginning, essentially allocates a huge array of objects (each object representing an agent in my agent-based simulation). This allocation occurs all at once, though each individual object will have to allocate memory as well (see below). I am currently dealing with 10,000 agents (so allocating a dynamic array of 10,000 instances of this class).

Each object, upon initialization, allocates something on the order of 40 kB of memory. Each object also has 2 STL deques, which increase in size during run-time, but peak at around 2 kB of memory.

So all told, the program initially allocates 400 MB of memory, and grows to a peak of about 500 MB during run-time.

Again, the program runs fine. But when cleaning up these 500 MB of memory, the program slows down - and the further the cleanup gets, the slower it runs.

I investigated the cleanup behavior by stepping through with the debugger. I said there are 10,000 agents. When deleting the array, the destructors of the agents are called in reverse order. So the 10,000th agent's destructor is called first (where the 40-50 kB of memory that agent had allocated is released). Then the 9,999th agent's destructor is called. And so forth.

The total time it takes to delete the first 1000 agents (so Agents 9000-9999) is at most a second. But the next 1000 agents (Agents 8000-8999) take something like twice as long. The next 1000 after that take twice as long as the previous thousand. And so forth.

So by the time I'm trying to clean up Agents 500-1000, I'm waiting 30 seconds. I can't even get down to the last agent, because the cleanup time per agent ramps up so high - i.e., as I said, the program appears to stall. While it's running this cleanup code, of course, much of the allocated RAM remains allocated, and the CPU usage remains as high as ever.
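For concreteness, here's a minimal sketch of the allocation and teardown pattern (the `Agent` members are hypothetical stand-ins, not my actual class, but the sizes match what I described):

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// Stand-in for my real agent class -- members are hypothetical, but the
// allocation pattern matches: ~40 kB up front plus two growing deques.
struct Agent {
    std::vector<double> state;    // 5000 doubles * 8 bytes = 40 kB
    std::deque<double> historyA;  // grows at run-time, peaks near 2 kB
    std::deque<double> historyB;
    Agent() : state(5000) {}
};

// Rough initial footprint: 10,000 agents * 40 kB = 400 MB.
std::size_t initialFootprintBytes(std::size_t numAgents) {
    return numAgents * 5000 * sizeof(double);
}

void runAndTearDown(std::size_t numAgents) {
    Agent* agents = new Agent[numAgents];  // one big allocation at startup
    // ... simulation runs; the deques grow toward the ~500 MB peak ...
    delete[] agents;  // destructors run in reverse order (last agent first);
                      // this is the phase that slows down progressively
}
```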

Though I don't think it would cause this issue, I have checked my code for memory leaks, dangling pointers, and so forth, and my code looks fine.

Can anyone provide any insight into this behavior?

Though the "stall" doesn't last forever - and is quite manageable with far fewer agents - such a workaround is undesirable, as my program really needs to be able to handle something like 10,000 agents. Additionally, the code may eventually be run on a grid computing kind of service with multiple instances (testing different parameters in the simulation), so a program that appears to stall - but still eats up CPU time and RAM - won't be accepted.

Thank you.

.NET Framework in Windows SDK

15 December 2011 - 06:16 PM


A cooked hard drive in my laptop necessitated a replacement, so I'm re-installing Microsoft Visual Studio 2010 (for C++) and the Windows and DirectX SDKs.

I noticed that the Windows 7 SDK automatically includes .NET Framework libraries, documentation, code snippets, etc., but I can choose not to install those. Skipping them saves me 2 GB of hard drive space - a small but not insignificant fraction of my total drive. More importantly, I'm on a slow connection, so keeping them would add quite a bit to my download and installation time.

Is it necessary to include the .NET Framework components, or can I forgo them? I don't recall ever having to install them before.

My development is primarily Windows programs (basic stuff, so I need the Win32 API) for Windows XP and Windows 7, as well as Windows game development - specifically including DirectX 9 graphics, DirectInput, DirectSound, and WinSock.


Handling Client Input in the Authoritative Server Model

16 November 2011 - 04:29 PM


I'm trying to develop a fast-paced multiplayer action game, and have reached some conceptual difficulties that I would like to discuss, regarding how the server handles input from client players. For my networking model, I am using the "dumb client/authoritative server" model, primarily focusing on the techniques discussed by the Valve Source multiplayer articles found on-line (and linked to in the forum FAQ). Note that I would like to be able to have players use "listen servers" to set up and host games (on the internet or locally) rather than using dedicated servers. At this time, my difficulties can be split into two general questions.

My previous experience has been with single-player games, in which the main game loop repeats over and over. Each loop, the game takes player input, performs "game physics" calculations (based on the world state, AI decisions, and player input), and then renders the graphics. I find that the physics calculations tend to comprise the bulk of the total computational work. Hence, doubling the number of "game physics" calculations can easily have the effect of cutting the frame rate in half.

To be clear, "game physics" consists mostly of collision-detection sweep tests (for moving objects and projectiles) and line-intersection tests (for instantaneous bullets). The details of my algorithms are beyond the scope of this post, but it helps to have a basic understanding of what happens: Each object figures out which other game objects may *potentially* collide with it during the next frame (usually using spatial partitioning, among other techniques), and sweep tests are performed. All objects are moved forward, using their velocities, up to the time of the first collision. Then any objects involved in that first collision are re-tested (using sweep tests) against objects they may still *potentially* collide with, as the collision may have altered velocities, necessitating new sweep tests. All objects are then moved forward to the time of the second collision, the necessary objects are re-tested for collisions, and so forth, until everything has had a chance to move forward through the duration of the frame.
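To make that loop concrete, here's a stripped-down 1-D sketch (equal masses, swap-velocity resolution; all names are hypothetical, and my real version works on sweeps in full 3D):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// 1-D sketch of the loop above: advance everything to the earliest
// collision, resolve it, re-sweep, and repeat until the frame is consumed.
struct Object    { double x, v, r; };        // position, velocity, radius
struct Collision { double t; int a, b; };    // t is a fraction of the frame

// Sweep test over all pairs (a real version would use spatial partitioning
// for the broad phase). Assumes index order matches left-to-right order.
Collision findEarliestCollision(const std::vector<Object>& objs,
                                double t0, double dt) {
    Collision best{1.0, -1, -1};
    for (std::size_t i = 0; i < objs.size(); ++i)
        for (std::size_t j = i + 1; j < objs.size(); ++j) {
            double gap     = (objs[j].x - objs[j].r) - (objs[i].x + objs[i].r);
            double closing = (objs[i].v - objs[j].v) * dt;  // per frame
            if (closing <= 0.0) continue;                   // separating
            double t = t0 + gap / closing;
            if (t >= t0 && t < best.t) best = {t, (int)i, (int)j};
        }
    return best;
}

void stepFrame(std::vector<Object>& objs, double dt) {
    double t = 0.0;
    while (t < 1.0) {
        Collision c = findEarliestCollision(objs, t, dt);
        double tNext = (c.a >= 0) ? c.t : 1.0;
        for (Object& o : objs)                   // move up to the next event
            o.x += o.v * dt * (tNext - t);
        if (c.a >= 0)                            // resolution may change
            std::swap(objs[c.a].v, objs[c.b].v); // velocities, forcing a
        t = tNext;                               // re-sweep on the next pass
    }
}
```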

Let's move on to multiplayer games specifically:

A) It's my understanding that input is handled in the following way:

1.) Client keyboard captures the most recent frame's input and sends it to the server.
2.) Server receives client input (which refers to a frame in the past now, due to latency).
3.) Server calculates how old the received client input is, and rewinds its view of the world by that amount of time. Using the newly received input (and any other input it had previously received), it performs "game physics" calculations up to the current server time (potentially many frames forward), to update the positions of all objects to the most accurate possible, based on what it knows of client inputs.
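In other words, something like the following sketch (all names are hypothetical - this is just my understanding of step 3, not code from the Valve articles):

```cpp
#include <deque>

// Hypothetical types: the server keeps a short history of world snapshots
// so it can rewind when a late input arrives.
struct Input    { long timeMs; unsigned keys; };
struct Snapshot { long timeMs; /* all object positions, velocities, ... */ };

// How many fixed ticks the server must re-simulate for an input this old.
int ticksToReplay(long serverMs, long inputMs, long tickMs) {
    long age = serverMs - inputMs;
    return age <= 0 ? 0 : (int)((age + tickMs - 1) / tickMs);  // round up
}

// Step 3: rewind to the snapshot at or before the input's timestamp,
// then re-run "game physics" tick by tick up to the current server time.
void onClientInput(std::deque<Snapshot>& history, const Input& in,
                   long serverMs, long tickMs) {
    while (history.size() > 1 && history.back().timeMs > in.timeMs)
        history.pop_back();                       // rewind
    for (int i = 0; i < ticksToReplay(serverMs, in.timeMs, tickMs); ++i) {
        // simulateTick(history, in);  // hypothetical physics step; pushes
        //                             // an updated snapshot onto history
    }
}
```

With 50 ms ticks and input that's 150 ms old, that's 3 ticks' worth of re-simulation per received input - which is exactly the cost I'm worried about below.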

Now the problem is that, in any given frame, the server generally is going to have to run "game physics" for a length of time equal to several "frames-worth." If latency is consistent for each client, then the client with the largest latency will determine how long a duration of "game physics" will have to be simulated for, each frame.

The Valve article suggests using interpolation, with intervals of 50 ms, to keep things looking smooth on the client-side. This suggests a minimum frame rate of 20 FPS. If we can expect a round-trip latency of up to 300 ms, then client input will be received 150 ms after the fact. This means, each frame, when the server receives client input that is 150 ms old, it will have to rewind its view of the world back to the point it was at 3 frames ago, and then recalculate "game physics" up to the current server time. While we can limit such recalculations to tests between objects that might *potentially* collide with this player with newly-received input (and not repeat such tests for other players who could not possibly have interacted with them anyway), that's still a lot of recalculating. Especially when you have a ripple effect, when another player (who we have received input for) is affected by the new tests - this player must then recalculate all of HIS collisions up to the current server time as well (and any new collisions result in more players having to then recalculate, etc.).

What's worrisome to me is that it will then have to recalculate "game physics" for 3 frames' worth of action - and be done in the time that one of my single-player games would have used to calculate only 1 frame's worth of action. I fear a race condition here - that the server will spend so much time calculating collision tests, that it will not achieve an acceptable framerate.

How should I handle this situation? Is my understanding of the "authoritative server" algorithms incorrect? Or is it just understood that such servers are generally run on machines that are FAR more powerful than consumers' PCs, to avoid the above race condition, making the use of a listening server generally unfeasible?

B) How do I handle the case of a client whose frame rate is greater than the server's frame rate, when it comes to input? I think it's understood that a client shouldn't limit its frame rate to that of the server's. Instead, interpolation should be used to create a smoothly moving world on the client's screen, at as fast a frame rate as the client's machine allows.

But in this case, the client will be generating multiple frames' worth of inputs for each "frame" that's run on the server (50 ms, above). How will the server handle multiple inputs per frame from multiple clients - each with potentially different lengths of time for which that input was valid (i.e., Player A held "down" for the first 15 ms, "down" for the second 15 ms, and "up" for the next 15 ms, while Player B held "down" for the first 25 ms and then "right" for the next 25 ms)? The more inputs it receives, the longer it will take to calculate "game physics," contributing to the race condition above.

Is it acceptable (or even widely practiced) to cap input-capturing on the client to the server's frame rate, while allowing the client to render at as fast a frame rate as it can? So a client could be rendering at 60 FPS, but only testing the keyboard every 50 ms (at 20 FPS), since that's what the server is running? I guess I don't see why not, but I would welcome your opinions and thoughts.
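What I have in mind is something like this (a hypothetical sketch, not anything from the Valve articles): render every frame, but only capture-and-send input on the server's 50 ms cadence:

```cpp
// Sample input on the server's fixed tick while rendering as fast as the
// machine allows. All names are hypothetical.
struct InputSampler {
    long tickMs;        // server tick length, e.g. 50 ms
    long nextSampleMs;  // next time input should be captured

    // Called once per render frame; true means "capture and send input now".
    bool shouldSample(long nowMs) {
        if (nowMs < nextSampleMs) return false;
        nextSampleMs += tickMs;  // steady cadence, independent of frame rate
        return true;
    }
};

// In a 60 FPS render loop (~16 ms per frame), this fires every 50 ms:
//   if (sampler.shouldSample(nowMs)) { captureInput(); sendToServer(); }
```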