

Member Since 24 Apr 2011
Offline Last Active Dec 17 2014 08:46 PM

Posts I've Made

In Topic: Three.js for desktop application?

14 December 2014 - 08:11 AM

Another item to consider which works pretty well is Node-Webkit (https://github.com/rogerwang/node-webkit). I have been using it with Three.js as a prototyping tool for some things and it works surprisingly well. With the exception of a couple hitches once in a while when V8 decides to re-optimize things, it's a very capable little platform for simple desktop applications.

In Topic: 3ds Max or Maya?

10 December 2014 - 08:33 AM

Unfortunately there are few 'one package solves everything' items out there, and you need to balance things based on your target. First, for modeling in general, most artists don't work exclusively in either Max or Maya; they tend to use a pipeline of tools. For instance, one of my modelers works as follows: Silo 3D for rough shape modeling, ZBrush for detailing, Max for application of final materials. On the other side, many (most?) of the artists who do animation prefer the tools available in Maya over most other packages. Finally, programmers will tend to prefer writing plugins for Maya over Max since the SDK is so much cleaner, but unfortunately the choice is often forced to be Max because Max has a longer history of supporting games.

If I had to suggest one, though, I'd probably suggest Maya, since it makes life easier on the programmers and most of the places it doesn't work "great" are fixed through use of other tools. I've shipped games which had no custom editors; through use of plugins, our level editing and everything else was part of Maya. Max *can* do similar, but it is generally a much more difficult undertaking. Of course, I'd have to say that Max is the 'prettier' of the two and Maya is quite utilitarian out of the box; primarily this is because Maya is designed to be customized, not so much used as a stock install.

As to Blender, if your artists can use it, it can work quite well. Other packages to consider in the long run (though more niche, of course): Modo, Lightwave, Houdini, Cinema 4D and a whole slew of others.

It's a tough choice and there is no "one best" answer unfortunately.

In Topic: Throttling

28 July 2014 - 07:40 AM

So, I don't completely agree with hplus in regards to the sent data, given that I tend to send some redundant data in UDP, but I do agree that there is missing information in your blog post and the description above. The key missing item is how much data you are averaging per packet. I will make some assumptions about the data but mostly just cover, at a very high level (i.e. with enough details missing to drive a truck through), some of the problems involved with the naive solution you have.


First off, there is little reason to be sending packets at such a high rate. Your reasoning for wanting to get things on the wire as fast as possible is relevant, but when you look at the big picture, hardly viable. The number you need to be considering here is latency, which consists of three specific pieces: delay until the data is put on the wire, network transit time, and actual receiver action time. Assuming that your NIC is completely ready to receive a packet and put it on the wire, your minimal latency is 5-10 ms because the NIC/wifi/whatever needs to form the data into a packet, prepend the headers and then actually transmit the data at the appropriate rate over the wire. Add on top of this the fact that you are sending packets every ~33.33 ms, and you have a potential maximal latency of 40-ish ms from the point you call the send API to when the data actually hits the wire. If the network is busy, or the wifi is congested or weak, you can easily be in the 50+ ms range before a packet even hits the wire. In general, you need more intelligence in your system than simply sending packets at a high fixed rate if you want to reduce latency but still not "cause" errors and dropped packets.
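To put rough numbers on that pre-wire budget, here's a tiny sketch; the 10 ms NIC cost and 33 ms tick are the illustrative figures from the paragraph above, not measured values:

```cpp
#include <cassert>

// Worst-case latency before a packet even leaves the machine, assuming
// sends are batched on a fixed tick: a message generated just after a
// tick waits a full send interval, then still pays the NIC framing and
// transmit cost on top.
int worstCaseSendLatencyMs(int nicMs, int sendIntervalMs)
{
    return sendIntervalMs + nicMs;
}
```

For example, a 33 ms tick plus 10 ms of NIC overhead gives roughly 43 ms before the data hits the wire, which is where the "40-ish ms" figure comes from.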


The next thing to understand is that routers tend to drop UDP before TCP. At a high level this is a technically incorrect description; it's more that the routers will see small, high-rate packets from your client, potentially even having two or three buffered for transit to the next hop, and then larger packets of TCP at a more reasonable rate, and will prefer to drop your little packets in favor of the larger packets from someone else. Given there are easily 10+ hops between a client and a server, the packet lottery is pretty easy to lose under such conditions when the network is even minimally congested. Add in reliable data getting dropped regularly and now your latencies are creeping up into the 200+ ms range, depending on how you manage resend.


How to start fixing all these issues to deal with the random and unexplained nature of networking, while maintaining low latency, is all about intelligent networking systems. Your "experiment" to reduce packet rates is headed in the correct direction, but unfortunately a simple on/off is not the best solution. The direction you need to be headed is more TCP-like in some ways; specifically, you should be balancing latency and throughput as best you can while also (unlike what hplus suggested) using any extra bandwidth required to reduce the likelihood of errors causing hiccups.


I'll start by mentioning how to reduce the effect of errors on your networking. The common reliable case is the fire button or the 'do something' button which must reach the other side. In my networking systems I have an "always send" buffer which represents any critical data such as the fire button. So, if I'm sending a position update several times a second, each packet also contains the information for the fire button until such time as the other side acks that it received it. So, barring a massive network hiccup, even though a couple packets may have been dropped, the fire button message will likely get through as quickly as possible. This is specifically for "critical" data; I wouldn't use this for chat or other things which are not twitch critical. In general, this alone allows you to avoid the worst cases of having "just the wrong packet got dropped" throw off the player's game. Yup, it uses more data than strictly necessary, but for very good reason.


Moving towards the TCP-like stuff, let me clarify a bit. What you really want here is the bit which replaces your "experiment" piece of code with something a bit more robust. In general, you want three things: MTU detection (for games you just want to know "can I send my biggest packet safely"), slow start/restart of packet rates, and a non-buffered variation of the sliding window algorithm. So, the MTU (maximum transmission unit) is pretty simple and kinda like your current throttling detection: send bigger packets until they start consistently failing, then back off till they get through. Somewhere between where they were failing and where they are getting through is the MTU for the route you are transmitting on. You don't need to actually detect the MTU for a game; you just want to know that if everything starts failing, MTU could be the reason and you should back off large packets till they get through.
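A minimal sketch of that back-off idea, with made-up thresholds (halve the size cap after three consecutive losses of max-size packets, never dropping below a conservative floor):

```cpp
#include <cassert>
#include <cstddef>

// Rather than probing the exact path MTU, just clamp the maximum packet
// size whenever big packets start failing consistently. The thresholds
// and floor value here are illustrative, not tuned.
class PacketSizeGovernor
{
public:
    explicit PacketSizeGovernor(size_t maxSize) : cap_(maxSize) {}

    size_t cap() const { return cap_; }

    // Call when a packet of `size` times out without an ack.
    void onLoss(size_t size)
    {
        if (size >= cap_ && ++consecutiveLargeLosses_ >= 3)
        {
            cap_ = cap_ / 2;                 // back off hard
            if (cap_ < kFloor) cap_ = kFloor; // "always safe" minimum
            consecutiveLargeLosses_ = 0;
        }
    }

    // Any ack of a max-size packet resets the failure streak.
    void onAck() { consecutiveLargeLosses_ = 0; }

private:
    static constexpr size_t kFloor = 576;
    size_t cap_;
    int consecutiveLargeLosses_ = 0;
};
```

The point is the same as in the post: you never need the true MTU, only a cap that reacts when "everything starts failing".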


The second bit, slow start/restart, is actually a lot more important than many folks realize. Network snarls happen regularly: things get rerouted, something has a hiccup, or real hardware failures crop up. In regards to UDP, the rerouting can be devastating because your previously detected "safe" values are now all invalid and you need to reset them and start over. A sliding window deals with this normally and is generally going to take care of it, but I wanted to call it out separately because you need to plan for it.
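In sketch form, slow start/restart for the packet *rate* might look like this; the floor, ceiling, and ramp step are assumptions for illustration:

```cpp
#include <cassert>

// After a network snarl invalidates the previously detected "safe"
// values, drop back to a low send rate and ramp up again while packets
// keep getting through. All numbers are illustrative, not tuned.
class SendRateController
{
public:
    int packetsPerSecond() const { return rate_; }

    // A full interval of packets was acked cleanly: ramp the rate up.
    void onCleanInterval()
    {
        if (rate_ < kMax) rate_ += 2;
    }

    // Sustained loss: assume the route changed, restart from the floor.
    void onSustainedLoss() { rate_ = kMin; }

private:
    static constexpr int kMin = 5;
    static constexpr int kMax = 30;
    int rate_ = kMin;
};
```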


The sliding window (see: http://en.wikipedia.org/wiki/Sliding_Window_Protocol) is modified from TCP for UDP requirements. Instead of filling a buffer with future data to be sent, you simply maintain the packets per second and average size of the packets you "think" you will be sending. The purpose of computing the sliding window is so you can build heuristics for send rate and packet sizes in order to "play nice" with the routers between two points and still minimize the latencies involved. Additionally, somewhat like the inverse of the Nagle algorithm, you can introduce "early send" for those critical items in order to avoid the maximal latencies. I.e. if you are sending at 10 packets a second and the last packet goes out just as a "fire" button is received, you can look at sending the next packet early to reduce the critical latency but still stay in the nice flow that the routers expect from you. A little jitter in the timing of packets is completely expected, and routers don't get mad about that too much. And even if some router drops the packet, your next regularly scheduled packet with the duplicated data might get through.
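The "early send" part can be sketched as a small scheduling rule; the 25% pull-forward budget here is a made-up figure standing in for "a little jitter the routers will tolerate":

```cpp
#include <cassert>

// Packets normally go out on a fixed interval, but a pending critical
// event pulls the next send forward by a bounded amount, staying within
// the jitter budget the routers expect. The 25% budget is illustrative.
class SendScheduler
{
public:
    explicit SendScheduler(double intervalMs) : intervalMs_(intervalMs) {}

    // Next send time given the last send and whether critical data waits.
    double nextSendTime(double lastSendMs, bool criticalPending) const
    {
        double factor = criticalPending ? 0.75 : 1.0;
        return lastSendMs + intervalMs_ * factor;
    }

private:
    double intervalMs_;
};
```

So at 10 packets a second (100 ms interval), a fire press just after a send goes out roughly 25 ms sooner than it otherwise would, and the duplicated critical data still rides the next regular packet if this one is dropped.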


I could go on for a while here but I figure this is already getting to be a novel sized post.  I'd suggest looking at the sliding window algorithm, why it exists, how it works etc and then consider how you can use that with UDP without the data stream portion.  I've implemented it a number of times and, while far from perfect, it is about the best you can get given the randomness of networking in general.

In Topic: Does anyone know of this game editor the video uploader is using?

24 July 2014 - 06:23 AM

If you are talking about the capture starting around 1:38, that's just 3DS Max with a custom plugin.  Otherwise, the only other stuff I saw was some in game UI work to help editing.

In Topic: Storing position of member variable

14 July 2014 - 06:29 AM

You might want to look at doing this in a more C++-like manner, without the potentially difficult side effects of pointers into protected/private data. The way I went recently was to implement a system like this using std::function. Basically, using accessors, I could bind up access in nice little objects which worked with my serialization. So, for instance, in your given example, I wouldn't expose x, y, z separately; I'd simply bind an accessor to the vector as:


std::function< Vector& ( MyComponent& ) > accessor( std::bind( &MyComponent::Position, std::placeholders::_1 ) );


With that, pop it in a map of named accessors and you can get/set the component data without breaking encapsulation of the class with evil memory hackery. Obviously there is more work involved in cleaning this up to allow multiple types, separate get/set functions and a bunch of other things I wanted supported, but it is considerably better behaved than direct memory access. The primary reason I avoided direct memory access is that I have a number of items which need to serialize but work within a lazy update system; if I bypass the accessor, the data in memory can be in completely invalid states. With a bound member (or internal lambda), everything can be runtime type safe (you're unlikely to get compile-time type safety), works with lazy update systems, and is generally a much more robust and less hacky solution.
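A minimal, self-contained sketch of that named-accessor map; `Vector` and `MyComponent` are stand-ins for the poster's types, and a real system would also handle multiple types and separate get/set:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

struct Vector { float x = 0, y = 0, z = 0; };

class MyComponent
{
public:
    // Accessor goes through the class interface, so a lazy update system
    // could refresh state here before handing out the reference.
    Vector& Position() { return position_; }

private:
    Vector position_;
};

// Each accessor takes the component instance and returns the member by
// reference; serialization code looks them up by name.
using Accessor = std::function<Vector&(MyComponent&)>;

std::map<std::string, Accessor> makeAccessors()
{
    std::map<std::string, Accessor> accessors;
    accessors["position"] =
        std::bind(&MyComponent::Position, std::placeholders::_1);
    return accessors;
}
```

Usage is then just a name lookup, e.g. `makeAccessors()["position"](component).x = 5.0f;`, with no raw pointers into private data.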