uutee

Asynchronous communication in UI apps


Hello,

This is no specific problem, just some dabbling over a recurring aspect of UI programming that has been on my mind for a long time.

Assume (for generality) that we have a UI with which we can do database queries. These queries are then processed by a background thread. A rather straightforward solution would be to have a "callback" for DB->UI communication (i.e. returning the query result), and a "message queue" for UI->DB communication. The pseudocode would look like this:

UI:
    rsltCallback(rslt) {
        displayRslt(rslt);
    }

DB:
    while (true) {
        msg = nextMsg();
        rslt = process(msg);
        ui.rsltCallback(rslt);
    }

This raises a question about symmetry: why does only the DB have a message loop, while the UI callback displays the result synchronously? How about:

UI:
    rsltCallback(rslt) {
        addRslt(rslt);
    }

    messageLoop() {
        while (true) {
            rslt = nextRslt();
            displayRslt(rslt);
        }
    }

DB:
    while (true) {
        msg = nextMsg();
        rslt = process(msg);
        ui.rsltCallback(rslt);
    }

This latter version could allow much more freedom: not just this one kind of DB result but practically all kinds of messages to the UI could be handled by the UI message loop! But there is one catch: these "message queues" can grow indefinitely. One possible way to overcome this would be to block the thread that tries to add a new message when the queue is "too large", and wait until the queue has shrunk a little. But in cyclic settings, such as UI->DB and DB->UI, this can lead to deadlock, where the UI waits for the DB to finish its pending queries while the DB waits for the UI to display its pending results. (Yet another way to overcome this would be to simply throw messages away when the queues are too large...)

So I guess my question is: what are the true pros and cons of using message queues in asynchronous communication? Where are message queues usually used, and where are they usually avoided? Is there some underlying theory, or pragmatic rules of thumb that experienced developers know of?

Thanks,
-- Mikko
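For concreteness, here is a minimal runnable sketch of the second (symmetric) design in Python. The bounded queue sizes, the None shutdown sentinel, and the process() stub are illustrative choices of mine, not part of the original pseudocode. Note that because put() blocks when a bounded queue is full, this is exactly the kind of cyclic setup where the deadlock described above could occur under load.

```python
import queue
import threading

ui_queue = queue.Queue(maxsize=100)   # DB -> UI results
db_queue = queue.Queue(maxsize=100)   # UI -> DB queries

def process(msg):
    # stand-in for the real database work
    return f"result of {msg}"

def db_loop():
    # the DB thread's message loop
    while True:
        msg = db_queue.get()
        if msg is None:                  # shutdown sentinel
            ui_queue.put(None)
            break
        ui_queue.put(process(msg))       # blocks if the UI queue is full

def ui_loop():
    # the UI thread's message loop; collecting stands in for displayRslt()
    results = []
    while True:
        rslt = ui_queue.get()
        if rslt is None:
            break
        results.append(rslt)
    return results

worker = threading.Thread(target=db_loop)
worker.start()
for i in range(3):
    db_queue.put(f"query {i}")
db_queue.put(None)
results = ui_loop()
worker.join()
print(results)   # → ['result of query 0', 'result of query 1', 'result of query 2']
```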

A message queue is something you would use when you typically receive information faster than you can process it. The reason your first example has the server in a message loop while the client runs synchronously is that the server can potentially receive requests from many locations, and it's expected that it won't be able to handle queries at the same rate it receives them. Meanwhile, the client only receives a response as a result of a query. Thus, if it only sends one query at a time and waits until the response is processed before sending the next query, it doesn't need a message queue.

If you were to switch to a model where the client can have multiple outstanding queries, then a queue of some sort would be needed, assuming the client can't handle responses faster than it could potentially receive them. To keep the queue sizes bounded, never allow the client to have more outstanding queries than it has space for the responses. Throw away any excess queries from the user and inform them. On the server, if the queue is full, return an error message to the client indicating that it couldn't process the query.
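The reject-instead-of-block idea above can be sketched in a few lines of Python. The submit() helper and the tiny queue size are my own illustrative choices; the point is simply that put_nowait() raises queue.Full rather than blocking, so the caller can report an error instead of risking a cyclic wait.

```python
import queue

server_queue = queue.Queue(maxsize=2)   # deliberately tiny for the demo

def submit(query):
    """Try to enqueue a query; report failure instead of blocking when full."""
    try:
        server_queue.put_nowait(query)
        return True
    except queue.Full:
        return False   # caller informs the user instead of waiting

accepted = [submit(q) for q in ("a", "b", "c")]
print(accepted)   # → [True, True, False]
```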

A potential con is the extra copying of the data. If your objective were extremely low latency (sub 50us), you would not want to copy and queue data; you would want to process it based on an asynchronous event (e.g. an interrupt).
In virtually all circumstances, though, message queues are used in this type of communication scenario; they're not unusual even in interrupt handlers.

Using a message queue decouples the communicating entities and allows you to schedule things to happen in the future. Without a queue, everything would always have to happen right now, and consequently it might not all happen in the expected order.

Quote:
Original post by Shannon Barber
A potential con is the extra copying of the data.[...]
Which can be completely avoided if there is shared memory (as there is for two threads in a single Windows process) by queueing pointers to the shared memory (where a single instance of the object is stored). Of course, this brings new complications (you need some way to indicate that an object has been consumed and its memory can be reused, etc.), but if the objects are large enough, it might be worth it.
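One common way to handle the "has it been consumed?" complication is a second queue acting as a free list: the producer takes a pre-allocated buffer from it, and the consumer returns the buffer when done. Here is a sketch of that pattern in Python terms (the pool size and buffer handling are illustrative; in Python, queue items are already passed by reference, so the reuse of pre-allocated buffers is what stands in for the shared-memory pointers discussed above).

```python
import queue

POOL_SIZE = 4
free = queue.Queue()                 # free list: buffers ready for reuse
for _ in range(POOL_SIZE):
    free.put(bytearray(1024))        # pre-allocated once, never copied around

work = queue.Queue()                 # in-flight buffers awaiting consumption

def produce(data):
    buf = free.get()                 # blocks if all buffers are in flight,
    buf[:len(data)] = data           # which also bounds the work queue for free
    work.put((buf, len(data)))

def consume():
    buf, n = work.get()
    out = bytes(buf[:n])             # the consumer's own copy of the payload
    free.put(buf)                    # signal: buffer consumed, memory reusable
    return out

produce(b"hello")
result = consume()
print(result)   # → b'hello'
```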

You could use threads if your program isn't a resource hog. If you don't know about threads, post back and I'll give you an example and a bit more information.

The only real rule I know of goes like this (I just now verbalized this for the first time, so excuse the rough corners):

A. Analyze the needs of each point in the communication pipe (in most of the interesting cases, the pipe is many segments long; a two-ended pipe is a boring trivial case [wink]). Determine the performance and storage requirements and goals at each node. As Zipster described, look for cases where processing takes longer than receiving data chunks. The other big thing to look for is serialization vs. parallelization: does this node need to reply in the same order that it receives requests, or is any order OK?

B. Examine available IPC (and Remote-IPC) methods and pick one that suits your needs for performance. Sockets, named pipes, memory-mapped files, DDE, COM, DCOM, CORBA, Java Beans, Windows messages... find one that matches the technologies you're working with, and opt for one that you're comfortable with whenever possible.

C. Consider available storage mechanisms for tracking data (both requests and responses) at each node. Choose the storage mechanism that makes the most sense; factor in your storage/performance needs as well as ease-of-implementation. Thinking in terms of elementary data structures will usually point you in the right direction (e.g. do I want a stack? deque? tail queue? flat heap?)


This isn't really a hard-and-fast, put-in-your-requirements-get-an-algorithm sort of rule, but it should point you in the right direction. Don't underestimate ease of implementation as a critical decision factor.

I tend to favor a socket-based, FIFO-queue approach when the request fulfilment rate is greater than the request arrival rate (i.e. I can answer questions faster than I can be asked them). In the reverse case, I'll usually head towards a prioritized heap/list (linked lists of requests are great here) and choose a communication method based on the platform and requirements.
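The prioritized-heap alternative mentioned above can be sketched with Python's heapq module. The priorities and request names here are made up for illustration; the sequence counter is a standard trick to keep same-priority requests in arrival order.

```python
import heapq

pending = []    # a flat heap of (priority, seq, request); lower = more urgent
seq = 0         # tie-breaker: preserves arrival order within a priority

def enqueue(priority, request):
    global seq
    heapq.heappush(pending, (priority, seq, request))
    seq += 1

enqueue(5, "prefetch thumbnails")
enqueue(1, "user clicked Save")
enqueue(3, "refresh view")

# drain in priority order rather than FIFO order
order = [heapq.heappop(pending)[2] for _ in range(len(pending))]
print(order)   # → ['user clicked Save', 'refresh view', 'prefetch thumbnails']
```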
