
omino code blog

We need code. Lots of code.
David Van Brink // Thu 2014.04.10 13:17 // {broad generalities code}

Code Connectivity

Thought of the moment.

There are levels of interconnectedness…
Level 1 — basic — compiler knows they are the same. Command-click or F12 finds it…
Level 2 — intermediate — strings match. grep -r can find it…
Level 3 — majikal — you have to know something about some rules for construction…
Level 4 — arcane — you just have to know.

It’s nice to hover around Level 1 or Level 2.
If you’re in Level 3 or Level 4, maintenance is… challenging. But it can be slightly mitigated if you can create a unit test or build step that, somehow, confirms that everything is lining up.
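
Here’s a tiny, made-up Java illustration of the first three levels (the Station class and getTemperature() are invented for the example):

import java.lang.reflect.Method;

public class ConnectivityDemo
{
	public static class Station
	{
		public double getTemperature() { return 72.0; }
	}

	public static void main(String[] args) throws Exception
	{
		Station station = new Station();

		// Level 1: the compiler knows these are the same thing;
		// command-click / F12 on getTemperature() finds it.
		double direct = station.getTemperature();

		// Level 2: the name appears as a literal string; grep -r getTemperature
		// still finds this call site, even though the compiler can't check it.
		Method byName = Station.class.getMethod("getTemperature");

		// Level 3: the name is built by a construction rule ("get" + field),
		// so neither the compiler nor a plain grep connects this to the method.
		String field = "temperature";
		Method byRule = Station.class.getMethod(
			"get" + Character.toUpperCase(field.charAt(0)) + field.substring(1));

		System.out.println(direct + " / " + byName.invoke(station) + " / " + byRule.invoke(station));
	}
}

The Level 3 call still works, but nothing short of running it (or knowing the get-prefix rule) ties that call site to the method.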

oh, i dont know. what do you think?


David Van Brink // Sun 2013.12.29 10:14 // {code ios}

iOS: Multiple Screens Oddity

Presently working on a little iOS app for some theatre-related work. Wanted to display some information on a secondary screen. The iPhone supports this nicely, with an HDMI adapter, a composite video adapter, or AirPlay to an Apple TV.

Apple has an example project which shows this, and works well: https://developer.apple.com/library/ios/samplecode/ExternalDisplay/Listings/ReadMe_txt.html

Gotcha Number One
On my iPhone, the sample app works fine. I can enable and disable AirPlay, or plug and unplug the HDMI adapter, and the app rolls with it. But on iOS Simulator 7.0, using the Hardware:TV Out menu to do that virtually (TV Out disabled/enabled) causes the app to crash. So no testing there!

Gotcha Number Two
What is rarely mentioned, and I’ve only discerned by innuendo, is that your AppDelegate must expose a property for each UIWindow you create. The default generated AppDelegate always includes .window. And if you use a Storyboard Nib, then it’s all magicked for you.

So anyway, you need to have this in your AppDelegate.h:

@property (strong, nonatomic) UIWindow *window;
@property (strong, nonatomic) UIWindow *secondWindow;

The example project calls the second one extWindow.
And the idea is mentioned, in passing, in this Apple note.

An iOS developer offers some tips (but not this one!) at this entry.

So many secrets!

oh, i dont know. what do you think?


David Van Brink // Sun 2012.03.11 09:55 // {Uncategorized}

Site hacked.
A compromised user on tobias.dreamhose.com (which hosts omino) looks for world-writable directories and adds in a PHP shell tool. Then they access that tool from the web, it runs as me, and it puts eval(base64_decode('hack')) on every PHP file.

Removed with:

find . -name '*.php' | xargs sed -i -e 's/eval(base64_decode("[^"]*"));/\/\*hack gone\*\//'
oh, i dont know. what do you think?


David Van Brink // Sat 2011.11.5 10:10 // {code java}

Threads

Aah, threads, so beautiful and so dangerous.

In this note, I’ll jot down a couple of recently-encountered threading hazards, for future reference. Perhaps this post will evolve into a Basics note at some point.

My context for now is Java, but the concepts are somewhat general.

Life Before Threads

I got my start programming on an Autonetics Recomp III. My junior high school teacher, Elliot Myron, had one of these even-then-antiques as a hobby, along with a dozen paper tape punches where we hand-entered octal assembly programs to run. No interrupts! And so, no threads.

Later, I wrote a few Apple II video games. These incorporated 1-bit sound effects and animation and multiple agents acting simultaneously… again, no interrupts, and no threads.

Lately, I occasionally program 30-cent PIC chips to blink light patterns and such. Yeah, you got it, no interrupts, no threads.

So it turns out you can (and probably should) do all sorts of interesting things without lots of threads. Modern programming is rather far removed from the lowly “interrupt” and I’ll refrain from mentioning it again… but interrupts are where threads come from. Mostly.

Clotho

First red-flag: If you find yourself starting a sentence with, “And there’s one thread per…” just stop right there. Do not have one thread per player, or one thread per incoming request, or one thread per anything.
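
For example, instead of one thread per incoming request, a small fixed pool handles any number of them. A minimal sketch; Request and its process() method are invented stand-ins:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RequestPool
{
	public interface Request
	{
		void process();
	}

	// A fixed pool: four worker threads total, no matter how many requests arrive.
	private final ExecutorService workers = Executors.newFixedThreadPool(4);

	public void handleIncoming(final Request request)
	{
		workers.submit(new Runnable()
		{
			public void run()
			{
				request.process();
			}
		});
	}
}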

Lachesis

Second red-flag: When you do your locks within a class, some say, “Lock on the most local object you can.” I found this led to confusion. I found it far safer to do all locks on “this”, just to keep it consistent. By all means, hold them as briefly as possible.

Always locking on the same thing (this) eliminates the possibility of out-of-order lock and unlock.

Consider locking and unlocking inside a loop instead of outside it, although I’ve found cases where that made things slower, as more task switching occurred overall.
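
Here’s the shape I mean, as a sketch (Tally is just an invented example): every lock is taken on this, and held only long enough to touch the shared state.

public class Tally
{
	private long total; // shared state, guarded by "this"

	public void add(long amount)
	{
		synchronized(this) // always the same lock object
		{
			total += amount; // hold it only for the touch
		}
	}

	public void addAll(long[] amounts)
	{
		// Lock inside the loop, not around it, so other callers can
		// interleave. (Sometimes the extra task switching makes this
		// slower overall; measure.)
		for(long amount : amounts)
		{
			synchronized(this)
			{
				total += amount;
			}
		}
	}

	public long total()
	{
		synchronized(this)
		{
			return total;
		}
	}
}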

Atropos

Lastly, here’s the least obvious and possibly most important one. If you’re managing handlers and callbacks… release the lock before calling your client’s callback code. Do your work, set your safe copies and state, and then release the lock and call them.

You see, they’re likely to make calls back into the library, and those calls take more locks. In a messaging system, they’ll send messages on another queue which also has locks. Avoid deadlock by letting their code run free.
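
Concretely, the pattern looks something like this sketch (the Listener interface is invented; the point is the copy-under-lock, call-outside-lock shape):

import java.util.ArrayList;
import java.util.List;

public class Notifier
{
	public interface Listener
	{
		void onEvent(String event);
	}

	private final List<Listener> listeners = new ArrayList<Listener>();

	public void addListener(Listener listener)
	{
		synchronized(this)
		{
			listeners.add(listener);
		}
	}

	public void fire(String event)
	{
		// Do the locked work first: update state, take a safe copy...
		List<Listener> copy;
		synchronized(this)
		{
			copy = new ArrayList<Listener>(listeners);
		}

		// ...then release the lock and let the client code run free.
		// If a listener calls back into this class (or into anything
		// else that takes locks), we can't deadlock on "this".
		for(Listener listener : copy)
			listener.onEvent(event);
	}
}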

Sounds Awful

It is, it is. A really great article about some really, really smart people trying to use threads is here.

The best thing to do is, Don’t use threads. Browser-page JavaScript and server-side node.js both run in a single-thread style. Emulate that as much as possible.

Of course, to emulate that you need to write some thread-hiding layers. Hence, the above advice.
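
One cheap way to fake that single-threaded feel in Java, as a sketch: funnel all the application work through one executor, so the rest of the code never sees a lock.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EventLoop
{
	// Everything the application does runs on this one thread, in order,
	// node.js-style. Other threads only ever post work here.
	private final ExecutorService loop = Executors.newSingleThreadExecutor();

	public void post(Runnable task)
	{
		loop.submit(task);
	}
}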

Carry on!

oh, i dont know. what do you think?


David Van Brink // Thu 2011.09.8 15:28 // {Uncategorized}

Fluff: Architecture

And in the nondisprovable content-free feel-good department, I offer you my deep insights on what makes good architecture.

High Performance
Just kidding! But this is what they cover in the classes entitled “computer architecture”, and is the sort of thing “architects” are supposed to think about. Throughput and simple math to figure out thingums per second, and redundant paths needed, and how many zillabytes the database would take if you do it *that* way.

Maintainable
From down in the trenches, if you get a bug report or a feature request, and you’re able to just dive in, find the right spot to put it, and put it there, and you can test it, while at the same time not leaving a mess, that’s often a symptom of good architecture.

Explicable
If you can draw it on the whiteboard, and explain it, and the picture is reasonably attractive, and you’re not lying: You’ve got good architecture! Hooray.


I believe the last one is the most important. Maintainability falls out of explicability. (Remember — I said it’s got to be a pretty picture *and* you’re not lying!) And if it’s maintainable, you can properly scope which parts, layers, subsystems, or strategies need to be improved, from time to time, to meet performance.

You know, I guess I’m defining “maintainable” code as code where small changes are trivial, and huge changes are just more work, without risk.

oh, i dont know. what do you think?


David Van Brink // Thu 2011.08.18 20:25 // {javascript}

Spec from Code, and State Machines

Spec from Code??

I’ve worked in an organization where the expected development flow was to produce a Microsoft Word document, check it in to revision control, have others comment on it (in revision control), resolve comments, and then, finally, write code.

After coding, you were supposed to go back and revise the document to match the actual outcome. This rarely happened.

A certain amount of design and review, and, alas, yes, even consensus-building can be useful, but, call me old-fashioned, I don’t think uSoft Word in Perforce is ideal for this. (I’ve also worked in groups that used almost no written designs, and a lot of conversation, and produced long-lived excellent products.)

You know where I like to do my editing? No really, guess. That’s right. In Eclipse. (Well, that’s my IDE of choice & I’m sticking to it.)

And you know where I like my specs to come from? Ideally, as a product from the actual, running code.

Specs are specs, but code is reality.

Isn’t that crazy?

Here are two familiar examples of this. First: JavaDoc. Yes, your spec comes out as an HTML file, and you edit it (as doc comments) in the IDE.

A slightly more interesting common example is a command-line options parser. The options parser typically describes something about the allowed values of the switches, which ones are required, and other “metadata”. This information is used for runtime validity testing, and also to produce your --help listing. Spec from code!
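
As a toy version of that idea (hand-rolled, not any particular library), here’s a Java sketch where one option table is the spec: the --help listing falls straight out of it, and validation could read the same table.

public class OptionsSketch
{
	static class Opt
	{
		final String flag;
		final boolean required;
		final String help;

		Opt(String flag, boolean required, String help)
		{
			this.flag = flag;
			this.required = required;
			this.help = help;
		}
	}

	// The one table: this *is* the spec.
	static final Opt[] SPEC = {
		new Opt("--input", true, "path to the source file"),
		new Opt("--output", false, "where to write results (default: stdout)"),
		new Opt("--verbose", false, "chatter while working"),
	};

	/** The --help listing falls straight out of the table. */
	static void printHelp()
	{
		for(Opt o : SPEC)
			System.out.printf("  %-10s %s%s%n", o.flag, o.required ? "(required) " : "", o.help);
	}

	public static void main(String[] args)
	{
		printHelp();
	}
}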

State Machines

State machines are cool. Tricky interactions can sometimes be more clearly phrased as a state machine. Instead of having a jillion little booleans and flags, you have one main state, and maybe some ancillary variables.

You process an “event”, and that may change the “state”, and cause other things to happen, too.

A colleague came across Jake Gordon’s JavaScript Finite State Machine “microframework” recently. Good stuff. Gives a clear way to specify a state machine as a JavaScript data structure, like so:

    var events = [    
        { name: 'warn',  from: ['green'],           to: 'yellow' },
        { name: 'panic', from: ['green', 'yellow'], to: 'red'    },
        { name: 'calm',  from: ['red'],             to: 'yellow' },
        { name: 'clear', from: ['red',   'yellow'], to: 'green'  },
    ];

The framework then gives you callbacks like ongreen() and oncalm() where you can do your application-specific work.

I was looking at it, and thought, Hey! That’s enough information to produce some documentation!

[Figure: the green/yellow/red state machine above, rendered as a graph]

The picture above is from a “DOT Language” graph, and was rendered by GraphViz. The source code for the graph is:

digraph fsm {
   green -> yellow  [label="warn"];
   green -> red  [label="panic"];
   yellow -> red  [label="panic"];
   red -> yellow  [label="calm"];
   red -> green  [label="clear"];
   yellow -> green  [label="clear"];
}

You can see the connection, yes?

And, lastly, here is the JavaScript which turned Jake’s FSM Description structure into a DOT file:

function toDotty(events) 
{
   var result = "";
   result += "digraph fsm {\n";

   for(var i = 0; i < events.length; i++)
   {
      var event = events[i];
      var name = event.name;
      var to = event.to;
      var fromList = event.from;

      if(typeof(fromList) == "string")
         fromList = [fromList];

      for(var j = 0; j < fromList.length; j++)
      {
         var from = fromList[j];
         result += "   " + from + " -> " + to + "  [label=\"" + name + "\"];\n";
      }
   }
   result += "}\n";
   return result;
}

Conclusion

Doing design work up front is important. But at some point, the code acquires a richer life than your predesign spec. After that, in a very real sense, the only spec is the code, itself. If you can force it to document itself, to reveal aspects of itself at appropriate levels of abstraction, you can continue to understand it.

oh, i dont know. what do you think?


David Van Brink // Thu 2011.08.4 21:32 // {java}

Stupid Java Tricks

Syntactic Sugar

You know what I hate? I hate that I’m always writing code, usually for tests, that looks like:


	List<String> stuff = new ArrayList<String>();
	stuff.add("fish");
	stuff.add("cow");
	stuff.add("dog");

So now I just type:


	List<String> stuff = Om.list("fish","cow","dog");

Nothing exotic, but here’s the handy method that allows it, and several others of a related ilk. I particularly like the Om.map() method. Enjoy. I’ll share a few more soon.


package com.omino.roundabout;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * Generic and System utility methods, like printf.
 * @author poly
 */
public class Om 
{
	...

	/**
	 * Syntactic utility for inlining object lists without the tedium of creating a list and adding to it.
	 * @param <T> the implicit type of all the pieces
	 * @param pieces things to put into the list
	 * @return a mutable list. Add more to it or delete some if you like.
	 */
	public static <T> List<T> list(T...pieces)
	{
		List<T> result = new ArrayList<T>();
		result.addAll(Arrays.asList(pieces));
		return result;
	}

	
	/**
	 * Syntactic utility for inlining a map. The map signature is sussed from
	 * the first two items, but you can include as many pairs as you like.
	 * @param <K>
	 * @param <V>
	 * @param key1
	 * @param value1
	 * @param theRestOfTheKeyValuePairs
	 * @return a mutable map.
	 */
	@SuppressWarnings("unchecked")
	public static <K,V> Map<K,V> map(K key1,V value1,Object...theRestOfTheKeyValuePairs)
	{
		Map<K,V> result = new HashMap<K, V>();
		result.put(key1,value1);
		for(int i = 0; i < theRestOfTheKeyValuePairs.length - 1; i += 2)
		{
			K key = (K)theRestOfTheKeyValuePairs[i];
			V value = (V)theRestOfTheKeyValuePairs[i + 1];
			result.put(key,value);
		}
		
		return result;
	}

	/**
	 * Syntactic helper to make a set of objects.
	 * @param <T>
	 * @param items
	 * @return
	 */
	public static <T> Set<T> set(T...items) 
	{
		List<T> list = Om.list(items);
		Set<T> result = new HashSet<T>(list);
		return result;
	}

	...
}
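
And at the use site it reads like this (the keys and values here are just for show):

	Map<String,Integer> prices = Om.map("fish", 3, "cow", 7, "dog", 12);
	Set<String> animals = Om.set("fish", "cow", "dog");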

oh, i dont know. what do you think?


David Van Brink // Sat 2011.07.30 13:04 // {code software architecture}

HTTP “Comet” Realtime Messages

What you Want

For some web applications, you want to send realtime messages between the browser and the server. That is, the browser can send a message to the server at any time (this is typical), and the server can send a message to your session at any time, too (this is not what HTTP was designed for!)

What you want: to send a free immediately-delivered message any time you want.

What you get: they’re not free, and they may be far from immediate.

This note will describe a cargo trucking analogy for HTTP requests, and expand on it for interactive (“Comet”) style use.

Let’s look at a couple of typical browser use models.

You see a link, you click on it, and the page comes up.

==> You send an empty truck to the factory, and it comes back with a load of standardized cargo.

You fill out a small form, press “Submit”, and some page comes up.

==> You send a lightly-loaded truck out. It has some instructions on a clipboard. At the factory, they read your clipboard, load up the truck, and send it home.

You upload a photo using a web-based form. A page comes up saying, “Your photo is up, click HERE to see it.”

==> You send out a loaded truck, it comes back empty with a note saying, “We got your photo.”

These are all one-time events, initiated by you at the browser. The HTTP request/response model works pretty well for this. Let’s look at one more typical use case.

You’re very concerned about, let’s just say, the temperature in Fahrenheit at the Town Hall Weather Station. So every 5 seconds you click the refresh button. Refresh, refresh, refresh. And every 5 seconds, the empty truck rolls out, and returns with more or less the same cargo. Every few minutes, perhaps, the temperature changes and a different cargo comes back.


Interactive Communication

Let’s call the browser “you” and the server “the factory”. Here’s what we have so far:

  • You have an infinite supply of trucks.
  • You can send a truck to the factory any time you want.
  • You can choose what to put in the truck on the outbound trip.
  • The factory chooses what to load into the truck for the return trip.
  • The factory has no trucks unless you send one.

From here out, we’ll mix the metaphors and pretend not to notice.

Consider a chat session between you and a server-based robot. Let’s assume that it’s quite thoughtful and conversational, and doesn’t merely immediately reply to each thing you type. Rather, it might consider your words for a time, or even have something to blurt out on its own.

Here are several possible implementations. Here, the truck is a Mail Truck, but that’s not important.

Single Truck Stays Home

Every time you type a message and press return, the truck goes out with your message, drops it off and immediately comes home. If there were any messages waiting for you, it picks them up.

Good: Acts just like a web request, nothing happens unless you hit return.

Bad: The robot never gets to tell you anything, unless you speak first. You end up typing “hello?” a lot.

Single Truck Waits For Reply

When you press return, the truck heads to the factory with your message. Then, it waits there until the robot has something to say. When it does, it comes back with the robot’s message. Meanwhile, you couldn’t say anything. You didn’t have the truck.

Good: Sometimes the robot can speak immediately, and sometimes you can.

Bad: Sometimes you cannot say anything, because you have no truck, and sometimes the robot can’t, for the same reason.

Single Truck Goes Back And Forth Always All The Time

Let’s say that every 5 seconds, the truck heads out with anything you’ve said in that interval, and comes back immediately with anything the robot has said. Now we are talking!

Good: Looks just like a regular web request. Messages are delivered more or less regularly. Things are looking up.

Bad: The truck is always making the trip, even when there’s no cargo. And the deliveries are never immediate.

Intermission


You noticed we introduced a new concept: That of the truck waiting at the factory. That’s allowed! (Up to a point; if it takes too long you must assume the truck has been lost.)

This idea of the server holding on to the request for a little while is referred to as “Comet”, a play on “Ajax”, which comes from “asynchronous javascript and xml”.

Anyway, now we’re getting somewhere. A few more items:

  • Every time the truck rolls, it has a cost. Sending a truck home empty is wasteful.
  • Leaving trucks at the factory for a time has some cost.
  • It’s legal for a browser to open a TCP connection, send a request and get the response, and then close the TCP connection. Alternatively, it can keep the TCP connection open and use it again. Requests can be pipelined, and responses will arrive in order. Oops, trucks, right…
  • Sometimes you destroy a truck as soon as it returns, and sometimes you keep it around. Which is more expensive depends on how long you’re keeping it parked.

Multitruck Solutions

Forget about the chatty robot. You get the idea by now.

Expanding Fleet of Trucks

You start by sending a truck to the factory, just in case. Then if you need to send something, you send another truck to the factory. It stays there. Sending a truck home empty, you see, is wasteful. If trucks are free, we can leave them for use at the factory as needed.

When the factory needs to send something home, it’s got at least one truck ready. If it ever runs down to none, we’ll send it an empty one again.

Good: We’ve minimized the road-time of empty trucks.

Bad: The factory has limited parking, and we’re actually not the only customer. Trucks may get old and rusty sitting at the factory disused. (Or rather, they’ll get towed, slapped with a Timeout, and we may have to send out a fresh one.)

Fleet of Two Trucks, Variation One

We keep one empty truck at home, and one at the factory. We can send our truck over at any time, and it gets sent home immediately, empty. If the factory has something to send, it has a truck and uses it. We immediately send it back to the factory, empty.

Good: Now, we’ve both got trucks, except for very brief times right after we’ve sent something.

Bad: The factory-based truck runs a big risk of parking tickets, while the home-based truck doesn’t. Also, half the truck trips are empty, after all, alas!
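
In code, Variation One is about the simplest workable shape: a long-poll loop for the factory-based truck, and a quick fire-and-forget request for the home-based one. Here’s a rough Java sketch; the /events and /send endpoints, and the one-message-per-line format, are invented for illustration:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class TwoTruckClient
{
	private final String base;

	public TwoTruckClient(String base)
	{
		this.base = base;
	}

	/** The factory-based truck: park a request, wait, come home, repeat. */
	public void receiveLoop() throws Exception
	{
		while(true)
		{
			HttpURLConnection conn = (HttpURLConnection) new URL(base + "/events").openConnection();
			conn.setReadTimeout(30 * 1000); // the parking-ticket limit
			try
			{
				BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
				for(String line; (line = in.readLine()) != null; )
					System.out.println("from the factory: " + line);
				in.close();
			}
			catch(SocketTimeoutException lost)
			{
				// Truck presumed lost; the loop sends out a fresh one.
			}
		}
	}

	/** The home-based truck: carry one message over, come straight back empty. */
	public void send(String message) throws Exception
	{
		HttpURLConnection conn = (HttpURLConnection) new URL(base + "/send").openConnection();
		conn.setDoOutput(true); // makes it a POST
		OutputStream out = conn.getOutputStream();
		out.write(message.getBytes("UTF-8"));
		out.close();
		conn.getInputStream().close(); // response comes home empty
	}
}

You’d run receiveLoop() somewhere it can block, and the read timeout plays the part of the parking ticket.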

Fleet of Two Trucks, Variation Two

We start by sending an empty truck to the factory. If either we, or the factory, need to send something, we use the truck that’s there. Whenever a truck arrives, we send out the other one. (If we both sent at the same time, then, hooray, we just use the truck that rolls in.)

If a parking ticket is imminent at the factory, we send it home empty, and the other truck arrives in its place.

Good: We can both send any time. If either of us send, we reset the parking-ticket time.

Bad: On average, half the truck trips are empty.

Observations

To recap, the algorithms described were:

  • Single Truck Stays Home
  • Single Truck Waits For Reply
  • Polling: Single Truck Goes Back And Forth Always All The Time
  • Many Requests: Expanding Fleet
  • A Slow Request and a Fast Request: Fleet of Two Trucks, Variation One
  • Ping-pong Requests: Fleet of Two Trucks, Variation Two

The first two are just broken. They don’t let communication readily occur.

The other four are all viable, with different kinds of costs.

Polling is nice and easy to understand. It puts a fixed upper bound on the latency of your messages, and an average latency of half that. But it sends a lot of empty trucks; lowering the rate of trucks increases the latency.

Using Many Requests keeps a lot of open connections, but minimizes empty trucks. It also contradicts the HTTP 1.1 RFC, section 8.1.4, which says you SHOULD NOT have more than two trucks out.

A Slow Request and a Fast Request is ok. (If you’re accustomed to web queries, it may “feel” nice because the home-based truck seems to associate requests and responses, but this is fallacious. If we’re really passing asynchronous messages, then the message protocol defines the request/response associations, not the HTTP protocol.)

Ping Pong Requests seems to be the nicest of the bunch. It’s a slight improvement over the Slow and Fast Request method, in that the server may have more chances to avoid a timeout on the waiting request. Its symmetry is perhaps slightly appealing as well.

Caveats and Conclusions

Me? I’ve never done any of these. I’m working through it now, and have some prototypes up and running based on a JavaScript client and a Restlet-based Java server. But the truck analogy has proven useful in contemplating these algorithms. I’m leaning towards Ping Pong, and plain old Polling, as the two most viable modes.


References

HTTP 1.1 specification RFC 2616, see Chapter 8 on “Connections”.

oh, i dont know. what do you think?


David Van Brink // Thu 2011.07.28 06:51 // {people}

Reentry

Rehi, all.

A couple of months back, I changed jobs. Previous: some ten years for Altera, on embedded drivers, cpu tests, and mostly on Java-based authoring tools. Current: working at Skype in Palo Alto, mostly on Java server-side stuff.

I showed up, and everyone was speaking backwards, with strange customs and habits, and mysterious acronyms. Going on about “artifacts” and “continuous integration” and “maven” and “ivy” and “nexus”.

After not too long I realized the translations were pretty easy:

Artifacts = binaries
Continuous Integration = (You need a word for "keep the build green"??)
Maven = "Yeah... we should have just used Ant."
Ivy = "We have to use it for Nexus"
Nexus = It was the name of his sled.

Anyway, perhaps I’ll check in from time to time with pithy observations about software engineering and testing and such. And occasional technical trivia.

oh, i dont know. what do you think?


David Van Brink // Tue 2009.01.6 08:55 // {levity mac os x}

This, just in


tuck into cheek, do not swallow

oh, i dont know. what do you think?



(c) 2003-2011 omino.com / contact poly@omino.com