
omino code blog

We need code. Lots of code.
David Van Brink // Sat 2011.07.30 13:04 // {code software architecture}

HTTP “Comet” Realtime Messages

What You Want

For some web applications, you want to send realtime messages between the browser and the server. That is, the browser can send a message to the server at any time (this is typical), and the server can send a message to your session at any time, too (this is not what HTTP was designed for!).

What you want: to send a free immediately-delivered message any time you want.

What you get: they’re not free, and they may be far from immediate.

This note will describe a cargo trucking analogy for HTTP requests, and expand on it for interactive (“Comet”) style use.

Let’s look at a couple of typical browser use models.

You see a link, you click on it, and the page comes up.

==> You send an empty truck to the factory, and it comes back with a load of standardized cargo.

You fill out a small form, press “Submit”, and some page comes up.

==> You send a lightly-loaded truck out. It has some instructions on a clipboard. At the factory, they read your clipboard, load up the truck, and send it home.

You upload a photo using a web-based form. A page comes up saying, “Your photo is up, click HERE to see it.”

==> You send out a loaded truck, it comes back empty with a note saying, “We got your photo.”

These are all one-time events, initiated by you at the browser. The HTTP request/response model works pretty well for this. Let’s look at one more typical use case.

You’re very concerned about, let’s just say, the temperature in Fahrenheit at the Town Hall Weather Station. So every 5 seconds you click the refresh button. Refresh, refresh, refresh. And every 5 seconds, the empty truck rolls out, and returns with more or less the same cargo. Every few minutes, perhaps, the temperature changes and a different cargo comes back.


Interactive Communication

Let’s call the browser “you” and the server “the factory”. Here’s what we have so far:

  • You have an infinite supply of trucks.
  • You can send a truck to the factory any time you want.
  • You can choose what to put in the truck on the outbound trip.
  • The factory chooses what to load into the truck for the return trip.
  • The factory has no trucks unless you send one.

From here out, we’ll mix the metaphors and pretend not to notice.

Consider a chat session between you and a server-based robot. Let’s assume that it’s quite thoughtful and conversational, and doesn’t merely immediately reply to each thing you type. Rather, it might consider your words for a time, or even have something to blurt out on its own.

Here are several possible implementations. Here, the truck is a Mail Truck, but that’s not important.

Single Truck Stays Home

Every time you type a message and press return, the truck goes out with your message, drops it off and immediately comes home. If there were any messages waiting for you, it picks them up.

Good: Acts just like a web request, nothing happens unless you hit return.

Bad: The robot never gets to tell you anything, unless you speak first. You end up typing “hello?” a lot.

Single Truck Waits For Reply

When you press return, the truck heads to the factory with your message. Then, it waits there until the robot has something to say. When it does, it comes back with the robot’s message. Meanwhile, you couldn’t say anything. You didn’t have the truck.

Good: Sometimes the robot can speak immediately, and sometimes you can.

Bad: Sometimes you cannot say anything, because you have no truck, and sometimes the robot can’t, for the same reason.

Single Truck Goes Back And Forth Always All The Time

Let’s say that every 5 seconds, the truck heads out with anything you’ve said in that interval, and comes back immediately with anything the robot has said. Now we are talking!

Good: Looks just like a regular web request. Messages are delivered more or less regularly. Things are looking up.

Bad: The truck is always making the trip, even when there’s no cargo. And the deliveries are never immediate.
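As a sketch, the polling model is just a loop. Here the HTTP round trip is replaced by a toy in-memory factory, so the names (`Factory`, `handle_poll`, `robot_says`) are illustrative, not a real API; in a browser, this loop would be a timer firing a request every few seconds.

```python
import queue

# Toy polling: every interval, an (often empty) truck rolls out and comes
# straight back with whatever the robot has queued up meanwhile.
class Factory:
    def __init__(self):
        self.outbox = queue.Queue()      # what the robot wants to send home

    def robot_says(self, msg):
        self.outbox.put(msg)

    def handle_poll(self):
        replies = []                     # load the truck with everything queued
        while not self.outbox.empty():
            replies.append(self.outbox.get())
        return replies

factory = Factory()
factory.robot_says("the temperature is 72F")

received = []
for tick in range(3):                    # three poll cycles, e.g. 5 s apart
    received += factory.handle_poll()    # most trips come back empty

print(received)  # -> ['the temperature is 72F']
```

Two of the three trips here carry nothing at all, which is exactly the “Bad” above.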



You noticed we introduced a new concept: that of the truck waiting at the factory. That’s allowed! (Up to a point; if it takes too long, you must assume the truck has been lost.)

This idea of the server holding on to the request for a little while is referred to as “Comet”, a play on “Ajax” (both are household cleaners); “Ajax” in turn comes from “Asynchronous JavaScript and XML”.
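A minimal sketch of the truck-waits-at-the-factory idea, using a blocking queue as a stand-in for the held HTTP request (the names here are made up for illustration):

```python
import queue

# Toy long poll ("Comet"): the server parks the request until the robot
# has cargo, or until a timeout (the parking ticket), whichever comes
# first. A real client would re-issue the request as soon as it returns.
robot_outbox = queue.Queue()

def long_poll(timeout):
    try:
        # the truck waits at the factory for cargo...
        return robot_outbox.get(timeout=timeout)
    except queue.Empty:
        # ...or times out and comes home empty; the client should re-poll
        return None

robot_outbox.put("blurted out by the robot")
msg = long_poll(timeout=0.1)    # cargo was waiting: returns immediately
empty = long_poll(timeout=0.1)  # nothing to say: truck comes home empty
print(msg, empty)               # -> blurted out by the robot None
```

The timeout is the crucial knob: too short and you are back to polling, too long and intermediaries may give the truck up for lost.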

Anyway, now we’re getting somewhere. A few more items:

  • Every time the truck rolls, it has a cost. Sending a truck home empty is wasteful.
  • Leaving trucks at the factory for a time has some cost.
  • It’s legal for a browser to open a TCP connection, send a request and get the response, and then close the TCP connection. Alternatively, it can keep the TCP connection open and use it again. Requests can be pipelined, and responses will arrive in order. Oops, trucks, right…
  • Sometimes you destroy a truck as soon as it returns, and sometimes you keep it around. Which is more expensive depends on how long you’re keeping it parked.

Multitruck Solutions

Forget about the chatty robot. You get the idea by now.

Expanding Fleet of Trucks

You start by sending a truck to the factory, just in case. Then if you need to send something, you send another truck to the factory. It stays there. Sending a truck home empty, you see, is wasteful. If trucks are free, we can leave them for use at the factory as needed.

When the factory needs to send something home, it’s got at least one truck ready. If it ever runs down to none, we’ll send it an empty one again.

Good: We’ve minimized the road-time of empty trucks.

Bad: The factory has limited parking, and we’re actually not the only customer. Trucks may get old and rusty sitting at the factory disused. (Or rather, they’ll get towed, slapped with a Timeout, and we may have to send out a fresh one.)
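The bookkeeping of the expanding fleet can be sketched with a deque standing in for the pool of requests parked at the server; `client_send` and `server_emit` are invented names, not a real HTTP API:

```python
from collections import deque

# Toy expanding fleet: every client send leaves a request parked at the
# server, and the server answers on a parked request only when it
# actually has cargo. This models the bookkeeping, not real HTTP.
parked = deque()                 # trucks waiting at the factory

def client_send(payload=None):
    # drop off any cargo, and leave the truck parked at the factory
    parked.append("truck")

def server_emit(msg):
    if not parked:
        return None              # no truck: the robot's message must wait
    parked.popleft()             # one parked truck carries the reply home
    return msg

client_send()                    # the "just in case" empty truck
client_send("hi robot")          # a real message parks a second truck
reply = server_emit("hi human")  # uses up one parked truck
print(reply, len(parked))        # -> hi human 1
```

In a real client you would also replenish the pool whenever it runs down to zero, as described above.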

Fleet of Two Trucks, Variation One

We keep one empty truck at home, and one at the factory. We can send our truck over at any time, and it gets sent home immediately, empty. If the factory has something to send, it has a truck and uses it. We immediately send it back to the factory, empty.

Good: Now, we’ve both got trucks, except for very brief times right after we’ve sent something.

Bad: The factory-based truck runs a big risk of parking tickets, while the home-based truck doesn’t. Also, half the truck trips are empty, after all, alas!

Fleet of Two Trucks, Variation Two

We start by sending an empty truck to the factory. If either we, or the factory, need to send something, we use the truck that’s there. Whenever a truck arrives, we send out the other one. (If we both sent at the same time, then, hooray, we just use the truck that rolls in.)

If a parking ticket is imminent at the factory, we send it home empty, and the other truck arrives in its place.

Good: We can both send any time. If either of us sends, we reset the parking-ticket timer.

Bad: On average, half the truck trips are empty.
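The ping-pong state machine can be sketched symmetrically: each side normally holds one truck, a side with no truck queues its message until one arrives, and whoever ends up holding both trucks immediately sends one back (loaded if anything is queued, else empty). This is a sketch of the bookkeeping only; no real HTTP is involved, and the class and method names are invented:

```python
from collections import deque

# Toy fleet-of-two ping-pong: one truck parked at each end.
class Endpoint:
    def __init__(self, name):
        self.name = name
        self.trucks = 0
        self.queued = deque()    # cargo waiting for a truck
        self.received = []
        self.peer = None

    def send(self, msg=None):
        if self.trucks == 0:
            if msg is not None:
                self.queued.append(msg)  # no truck: wait for one to arrive
            return
        self.trucks -= 1
        self.peer.arrive(msg)

    def arrive(self, msg):
        self.trucks += 1
        if msg is not None:
            self.received.append(msg)
        if self.queued:
            self.send(self.queued.popleft())  # a message was waiting: go now
        elif self.trucks == 2:
            self.send(None)      # both trucks here: send one back empty

browser, factory = Endpoint("browser"), Endpoint("factory")
browser.peer, factory.peer = factory, browser

browser.trucks = 2               # both trucks start at home...
browser.send(None)               # ...so seed the factory with an empty one

browser.send("hi robot")
factory.send("hi human")
print(factory.received)          # -> ['hi robot']
print(browser.received)          # -> ['hi human']
```

Note that each loaded trip triggers an empty return trip, which is the “half the truck trips are empty” cost above.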


To recap, the algorithms described were:

  • Single Truck Stays Home
  • Single Truck Waits For Reply
  • Polling: Single Truck Goes Back And Forth Always All The Time
  • Many Requests: Expanding Fleet
  • A Slow Request and a Fast Request: Fleet of Two Trucks, Variation One
  • Ping-pong Requests: Fleet of Two Trucks, Variation Two

The first two are just broken. They don’t let communication readily occur.

The other four are all viable, with different kinds of costs.

Polling is nice and easy to understand. It puts a fixed upper bound on the latency of your messages, and an average latency of half that. But it sends a lot of empty trucks; lowering the rate of trucks increases the latency.
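The polling-latency claim is easy to check numerically: with a poll interval T, a message arriving at a uniformly random moment rides out on the next poll, so it waits at most T and T/2 on average.

```python
import random

# Quick check of the polling-latency claim for a 5-second poll interval.
T = 5.0
random.seed(1)
waits = []
for _ in range(100_000):
    arrival = random.uniform(0, T)   # time since the last poll
    waits.append(T - arrival)        # message rides out on the next poll

print(max(waits) <= T)                            # -> True
print(abs(sum(waits) / len(waits) - T / 2) < 0.05)  # -> True (mean ~ 2.5 s)
```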

Using Many Requests keeps a lot of open connections, but minimizes empty trucks. It also contradicts the HTTP 1.1 RFC, section 8.1.4, which says you SHOULD NOT have more than two trucks out to any one factory.

A Slow Request and a Fast Request is ok. (If you’re accustomed to web queries, it may “feel” nice because the home-based truck seems to associate requests and responses, but this is fallacious. If we’re really passing asynchronous messages, then the message protocol defines the request/response associations, not the HTTP protocol.)

Ping Pong Requests seems to be the nicest of the bunch. It’s a slight improvement over the Slow and Fast Request method, in that the server may have more chances to avoid a timeout on the waiting request. Its symmetry is perhaps slightly appealing as well.

Caveats and Conclusions

Me? I’ve never done any of these. I’m working through it now, and have some prototypes up and running based on a JavaScript client and a Restlet-based Java server. The truck analogy has proven useful in contemplating these algorithms. I’m leaning towards Ping Pong and plain old Polling as the two most viable modes.



Reference: HTTP/1.1 specification, RFC 2616; see section 8 on “Connections”.


(c) 2003-2011 omino.com / contact poly@omino.com