
Member Since 03 Mar 2006
Offline Last Active May 31 2012 04:03 PM

#4944022 Question about TCP

Posted by on 28 May 2012 - 08:14 AM

What happens when you send a tcp message (packet) larger than it should be.

TCP is a stream; there is no "too big". It's a pipe: you keep pushing bytes in one end and they come out the other end in the same order, intact.

do packets always get received in FULL

There are no packets. See forum FAQ for details.

Even though the defined maximum tcp/udp packet size is 64kb,

UDP has a maximum datagram size. TCP does not.

There are however send/receive buffers, which are something else.

{read byte by byte here until all data is read?

That is how a TCP stream is read; you just don't read bytes individually, you ask to receive everything that is available.

It is up to you to make sense of these bytes: where one message begins and ends, how much to read, and so on. It's just a stream, like reading from a file.

byte[] data_received = NetworkObject.Receive(); //Blocking call; this is assuming packets are received full and there is no need to read each byte.

This will return the data that has arrived so far. Again, it is up to you to determine whether everything that was sent has arrived, or whether you need to call receive again.
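Since TCP hands you a byte stream with no message boundaries, one common approach (a sketch of my own, not from the post above; all names are made up) is to length-prefix each message and accumulate received bytes in a buffer until a full message is available:

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Accumulates raw bytes from the socket and extracts complete
// length-prefixed messages (4-byte big-endian length + payload).
class MessageBuffer {
public:
    // Call with whatever recv() returned; TCP may deliver any amount.
    void feed(const char* data, size_t len) {
        buf_.insert(buf_.end(), data, data + len);
    }

    // Returns one complete message, or nullopt if more bytes are needed.
    std::optional<std::string> next() {
        if (buf_.size() < 4) return std::nullopt;
        uint32_t n = (uint32_t(uint8_t(buf_[0])) << 24) |
                     (uint32_t(uint8_t(buf_[1])) << 16) |
                     (uint32_t(uint8_t(buf_[2])) << 8) |
                      uint32_t(uint8_t(buf_[3]));
        if (buf_.size() < 4 + n) return std::nullopt;
        std::string msg(buf_.begin() + 4, buf_.begin() + 4 + n);
        buf_.erase(buf_.begin(), buf_.begin() + 4 + n);
        return msg;
    }

private:
    std::vector<char> buf_;
};
```

Note that a single message may arrive split across several receive calls, and one receive may contain several messages; the buffer handles both cases the same way.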


Minor nitpick - it's either ASCII (American Standard Code for Information Interchange) or ANSI (American National Standards Institute).

#4943983 UML IT infrastructure

Posted by on 28 May 2012 - 06:22 AM

UML is pretty mature

UML has several problems:
- impedance mismatch (it works the way managers work, not the way software is naturally written)
- developed, authored and owned by a private for-profit committee; they make money off training and product sales, not off improving productivity
- the overhead added by UML can only be absorbed by large companies, think 500+ person development teams
- it is used for risk aversion, establishing a paper trail, and process certification; it fails completely when software usability is at a premium
- CASE tools are used for business intelligence and business process management, and even there the generic package isn't widely used; better tooling is provided by other vendors. They are not a good fit for actual (code-related) software development. They work best when managers define various processes and workflows, off which the managers under them base their work. If you look at the Eclipse forums, you'll notice there are more posts in the workflow and BI forums than in all other topics combined. CASE is not for developers; it's for managers who don't do technical work.

NoSQL is on the rise

No... Not really. Maybe for 2 person web shops.

There is no database in the enterprise that cannot be handled by SQL (DB2, Oracle or MSSQL; others do not exist), and there is sufficient hardware available today to scale these vertically for any real-world data set.

ETL and ad-hoc analysis work on different data sets, but again, for enterprise usage the problems lie elsewhere, and they are currently catered to at both the high end (again, IBM and Oracle) and the low end (S3-based solutions).

SAP is also in there somewhere. Surprisingly, it tends to work quite well, so it gets little press due to the lack of snafus.

I started working on a Java implementation

Does it work with every existing technology and standard, out of the box? If not, then it's of no use. That's the first thing you will get asked.

UML is both a mess and a "success", because Rational and IBM made sure it works with every dead and living standard and technology stack. The result is a multi-gigabyte monstrosity built on top of Eclipse (and previously VisualAge).

- makes the definition of models (UMLDL) and instances (UMLML) that go with the models possible
- allows developers and users to browse the models (UMLQL) if they have the rights to make the query (UMLCL)

I remember working on something like that.
- there was an abstract model
- after it got complex, these models got auto-generated, so a meta language was developed for describing them
- this meta language got too complex, so a meta-meta language was created (no joke)

Meanwhile, actual users and developers gave up on it and used 5-line text files instead of the 500,000 LOC meta-meta framework. The application, as it was meant to be written, was 500+ MB of source plus 500 MB of generated code at the time I left.

BTW: the thing won an award from one of these CASE groups.

But the most important thing to understand here is that problems inherent here are not technical. It's complex because of the way enterprise world works.

For a modern take on enterprise software, I'd look at salesforce. They are considerably smaller in scope than players above, but they are based from ground up on web-centric approach and are investing a lot into more dynamic/live approach.

#4943678 using "this" within a shared_ptr<>

Posted by on 27 May 2012 - 05:58 AM

Pass the shared_ptr by reference as well. Otherwise, the reference counter is modified each time the function is called, even though ownership does not change.
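To illustrate the difference (names here are made up for the example): taking the shared_ptr by const reference avoids the atomic reference-count increment and decrement on every call, since no ownership is transferred.

```cpp
#include <memory>

struct Widget { int value = 42; };

// Copies the shared_ptr: bumps the refcount on entry, drops it on exit.
int byValue(std::shared_ptr<Widget> w) { return w->value; }

// Takes a const reference: no refcount traffic, ownership stays with the caller.
int byRef(const std::shared_ptr<Widget>& w) { return w->value; }
```

Pass by value only when the function genuinely needs to share or keep ownership (e.g. stores the pointer for later).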

#4943540 Thread pool (basic idea)

Posted by on 26 May 2012 - 01:21 PM

As for the I/O completion port, I already use it for multiplexing network I/O. I also use a fixed number of threads to manage those asynchronous I/O completions, on Windows of course (on Solaris, I'll probably use /dev/poll). I want to use another thread pool mostly to process the requests (database queries, file lookup or transfer, etc.) in order to avoid having the I/O threads block too long when a request takes a while to process.

Considering boost::asio does exactly that, using the same API on all these platforms, including Solaris; has been tested over the years by tens of thousands of people; and is available with no restrictions...

That includes asynchronous file I/O, asynchronous callbacks, strands for interoperability with non-threaded APIs, asynchronous networking, thread pooling, ...

The code will have to work on Solaris 10.

Boost and asio are supported on Solaris.

The thread scheduler in asio went through several rewrites, as did other parts. It really is a hard problem, even when the foremost experts in these platforms and C++ come together.

#4943210 setting up virtual server

Posted by on 25 May 2012 - 06:23 AM

If you're looking for a more streamlined process, look into Vagrant. It uses Oracle's VirtualBox instead (also free).

#4943209 MMORPG Passing Arround IP's

Posted by on 25 May 2012 - 06:19 AM

Voice communications

Skype gave up on that model and now uses centralized servers.

Would I be breaking some sort of unwritten rule by doing this ?

It's a security/privacy issue.

There was a recent article on how to identify users via the Skype protocol and even correlate that with whether they download anything over BitTorrent.

A more practical concern is that most users are behind NAT and cannot connect to each other anyway.

#4941192 How to make my pc host "mygame.com"

Posted by on 18 May 2012 - 09:20 AM

I currently found http://www.000webhost.com/ which costs $5/month with unlimited bandwidth and storage.

That's a web host. It's for serving HTML pages via PHP. Are you sure this is what you need?

Also - unlimited is never unlimited.

I'd be very careful with hosts that do not disclose upfront exactly what the limits are. Otherwise they can easily surprise you with a bill at the end of the month. Or, if they use shared bandwidth, there may be 1000 servers sharing a 10 Mbit link, meaning they'll often be inaccessible or very slow.

It's also common for such "free" hosts to simply terminate your account if you exceed some unwritten quota.

EDIT: I'm also wondering how to prevent DDoS attacks from users (and how hosting services avoid damages, if they can)?

It depends. Some just bill you for the traffic, others disconnect your server, and some don't care; your site just isn't accessible.

There is nothing you can do about it, unless you build your own hosting environment with multiple backbone providers for peering, so that you can route around DDoS traffic and switch to different networks/links as needed. At the lower end, there isn't much you can do; AWS can be used to bring down many such servers for free.

Hosts with uptime guarantees start at some $500/month. DDoS resistant hosting is much more.

#4940045 Monitor (Threading) Locking Problem

Posted by on 14 May 2012 - 06:49 AM

why should I fire an "onAdded" event when the object is always in my ObjectList? Or... should each object which detects that it has to be updated by the driver fire an "Add" or "Calculate" event?

I merely demonstrated how to dispatch callbacks. Instead of doing it from the main loop from inside a lock (each callback may take a long time, acquire locks, and so on), you defer them so they can run later, outside the lock.

The events you have are onUpdate(), executed from inside a lock. Instead, prefer the example above: while inside the lock, put the objects to be notified into a queue, then notify them without holding the lock.

#4939910 Monitor (Threading) Locking Problem

Posted by on 13 May 2012 - 04:37 PM



Does either of these two calls lock internally?

P.S.: I tried other variants with semaphores and mutexes, but none of them solved the deadlock problem.

The proper way to fix this is to light 4 candles.

Deadlocks occur when locks are acquired in different orders. If there are two threads acquiring A->B and B->A, a deadlock can occur. If locks are always acquired in the same order, it won't happen.

The overall design is incredibly fragile as well. It can be made to work right, but requires a lot of infrastructure in place. Event-based callbacks are essentially a no-go for threaded code.

Instead, there's a simple solution:
List<Something> updates;

while (running) {
  List<Callback> callbacks;
  lock (updates) {
    // move everything from updates into objectList
    // add each newly added object to callbacks
  }
  // fire onAdded events for all items in callbacks,
  // outside the lock, so a slow or re-locking callback cannot deadlock
}

void addObject(...) {
  lock (updates) {
    // add the object to updates
  }
}

Such a design is robust and cannot deadlock. Converting it to actual C# code is left as an exercise for the reader.

#4939864 Ideas for how to improve code

Posted by on 13 May 2012 - 01:56 PM

HP drivers I use are 370MB. Radeon drivers are 250MB.

I have 16GB of RAM and 3TB of disk.

Just saying.

For machines where 30 bytes matter, C++ is absolutely the wrong choice, precisely due to the hidden magic and "bloat" compared to C.

#4939271 Relays?

Posted by on 11 May 2012 - 05:49 AM

Terminology is somewhat uncommon and seems to come from this article.

Two-way means that all data is sent to a remote server without understanding context.
Short-circuit means that some data may be sent either to some local server or directly to other peers, such as those on the same LAN.

A short-circuit design would be old Skype, where some users could opt to become supernodes and communication is done either peer-to-peer or through a supernode.

In practice, each authoritative networked node adds considerable complexity, especially in an unreliable or untrusted environment. The added complexity of such systems is typically not worth the trade-offs, considering the strict latency requirements, which cannot be met by ad-hoc hosts.

Another form of short-circuit evaluation is, however, regularly done inside the client itself. Rather than being a true dumb terminal (ssh/telnet), the client hides latency by optimistically executing actions before receiving confirmation (the avatar moves before the server confirms the destination is valid).

This approach avoids the reliability and complexity issues by not introducing an independent node; the prediction runs inside the same simulation loop as the client.
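A minimal sketch of that client-side short-circuit (all names are hypothetical): the client applies each move immediately, remembers it as pending, and rolls back if the server later rejects it.

```cpp
#include <deque>

struct Move { int dx, dy; };

// Client-side predicted avatar position with server reconciliation.
class PredictedAvatar {
public:
    // Apply the move locally right away; keep it until the server decides.
    void applyLocal(const Move& m) {
        x_ += m.dx;
        y_ += m.dy;
        pending_.push_back(m);
    }

    // Server confirmed the oldest pending move: just forget it.
    void serverAccept() {
        if (!pending_.empty()) pending_.pop_front();
    }

    // Server rejected the oldest pending move: undo it and every
    // prediction made after it, since they were based on a bad state.
    void serverReject() {
        for (auto it = pending_.rbegin(); it != pending_.rend(); ++it) {
            x_ -= it->dx;
            y_ -= it->dy;
        }
        pending_.clear();
    }

    int x() const { return x_; }
    int y() const { return y_; }

private:
    int x_ = 0, y_ = 0;
    std::deque<Move> pending_;
};
```

Real games re-apply surviving pending inputs on top of the server's authoritative state rather than clearing them, but the structure is the same: optimistic local execution plus a queue awaiting confirmation.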

In practice, moving parts add complexity to the system exponentially. The more types of servers there are, the more physical boxes, the more different architectures, the worse things will be to manage and the more failure modes will need to be handled. Two parts are mandatory: client and main server. Adding a third proxy, or even peer-to-peer, makes things considerably more complicated.

#4938708 Avoiding cheating in a multiplayer HTML5 game

Posted by on 09 May 2012 - 11:41 AM

Even if you ignore the security issues, I would go for server-side game logic over p2p client logic, to cut down on network traffic and synchronization issues.

Server-side logic doesn't prevent botting.

Keep in mind that there is a bot for SC2 which intercepts GPU calls, interprets the graphics (machine vision), then issues inputs via known controllers. It was, IIRC, tested online.

Humans are a form of bot.

The only solution is to develop gameplay which offers no incentive to bot.

#4938690 Avoiding cheating in a multiplayer HTML5 game

Posted by on 09 May 2012 - 09:51 AM

The general answer I'm getting here seems to be that cheating is impossible to prevent anyway,

Botting != cheating.

Botting on HTML5 cannot be prevented; it's the wrong platform. Cannot be done, as in: Does not compute.

And since we are dealing with a theoretical problem

No theory here: ", to detect whether a command has come via the user clicking or via some cheat mechanism" - cannot be done. It is impossible. HTML5 does not know about input devices; it has no mechanism to distinguish where inputs come from.

It is, however, possible to monitor action patterns on the server and apply heuristics to attempt to determine which actions are direct human input and which aren't. This will be unreliable, it will raise false positives and negatives, and it will not be able to detect certain types of undesirable actions. But that has nothing to do with HTML5.

You can't just switch your entire platform and toolset every time you have a problem.

No, but it helps to choose a platform which is at least capable of solving a problem vs. one that isn't. Just like some software is still written in native code, simply because other platforms aren't capable of solving those problems.

#4938424 Reliable UDP and packet order

Posted by on 08 May 2012 - 10:58 AM

if the ack didn't arrive within 2*RTT.

I'd start with 20*RTT. And if the client cannot continue without receiving a past ack, UDP is the wrong choice anyway; just use TCP. UDP is a good choice only if you can ignore some lost packets without resending them.

Even on the best networks, hiccups happen. And 20x over a low-latency connection of 100ms is only 2 seconds. Spikes of up to 5 seconds happen regularly. Then there's drift and jitter, or maybe shared WiFi where someone downloads something big-ish and stalls the others, or just generally increases latency.
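A rough sketch of how such a resend timeout might be derived (the constants are illustrative, loosely following the classic smoothed-RTT idea; none of this is from the original post):

```cpp
// Exponentially smoothed RTT estimate; the resend timeout is a
// generous multiple of it, so one hiccup doesn't trigger a resend storm.
class RttEstimator {
public:
    // Call with each measured round-trip time, in milliseconds.
    void sample(double rttMs) {
        if (srtt_ < 0) srtt_ = rttMs;  // first sample seeds the estimate
        else srtt_ = (1 - alpha_) * srtt_ + alpha_ * rttMs;
    }

    // Resend timeout: a large multiple of the smoothed RTT (20x, per the
    // advice above), clamped to a floor so tiny RTTs don't cause spam.
    double timeoutMs() const {
        double t = 20.0 * (srtt_ < 0 ? 100.0 : srtt_);
        return t < 200.0 ? 200.0 : t;
    }

private:
    double srtt_ = -1;      // negative = no samples yet
    double alpha_ = 0.125;  // classic smoothing factor
};
```

TCP's own retransmission timeout additionally tracks RTT variance (RFC 6298); the version above is the bare minimum for a hobby protocol.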

IMHO, unless the protocol tolerates latency spikes of up to 1000ms, rather than disconnecting those who exceed them, it will be next to useless in the real world.

It's true that dedicated players want <50ms ping, but for the majority, having a reliable connection is worth much more. Look at Minecraft: people are ecstatic over how nice it looks and how much fun it is, only to learn that for them it runs at 2-3 fps and takes 5 minutes to load.

because it is rather complicated to implement

Yes, it is. And that's even without real-world testing, which will throw even the best designs off.

For hack&slash, evented simulation over TCP works fine on a global scale. Three examples: StarCraft 2, Diablo 3, Realm of the Mad God.

#4938110 toroidal array

Posted by on 07 May 2012 - 11:00 AM

For a 2D array with dimensions NxM:

toroidal[i, j] = regular[i % N, j % M];

It doesn't have a use as such, but some algorithms may benefit from this property. For example:

// toroidal array now contains sequence:

It basically just says that rows and columns wrap around.
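A small sketch of that wrap-around (my own helper names; note that in C-family languages `%` of a negative number is negative, so a helper is needed if indices can go below zero):

```cpp
#include <vector>

// Modulo that always returns a value in [0, n), even for negative i.
int wrap(int i, int n) {
    return ((i % n) + n) % n;
}

// Toroidal read of a flattened N x M array: rows and columns wrap around.
int toroidalAt(const std::vector<int>& a, int N, int M, int i, int j) {
    return a[wrap(i, N) * M + wrap(j, M)];
}
```

With this, index (N, M) maps back to (0, 0) and (-1, -1) maps to the bottom-right element, which is the wrap-around property described above.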