Gregorian misery

The Gregorian calendar has been in use since 1582. Among its features is a moderately complicated rule for leap years: if n mod 4 is 0, then n is a leap year. However, if n mod 100 is 0, then n is not a leap year, unless n is a multiple of 400.
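As a quick sketch, the rule translates almost word for word into code (Scala here, purely for illustration):

 // The Gregorian leap year rule, transcribed directly.
 def isLeapYear(n: Int): Boolean =
   if (n % 400 == 0) true        // multiples of 400 are leap years
   else if (n % 100 == 0) false  // other multiples of 100 are not
   else n % 4 == 0               // otherwise, multiples of 4 are

 // isLeapYear(2000) == true, isLeapYear(1900) == false, isLeapYear(2012) == true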

In addition, we live in a world with time zones and regional differences in when countries go on and off daylight saving time, if they have such a system at all. As yet another example of Japanese rationality, Japan does not have a DST system.

Implementing date and time computations correctly can be very hard, and they are invariably a source of hidden bugs that may take a long time to discover. Yesterday, a large number of Sony’s PlayStation 3 game consoles stopped working normally. The problem was later fixed, and there was speculation that the error was due to incorrect leap year handling. If that was indeed the reason, it wouldn’t be the first time.

In a software company where I used to work, there would usually be massive trouble every time some country went on or off daylight saving time, or some other time calculation hit a sensitive spot. I’m fairly sure that the world’s software systems, in government, finance, insurance and health care alike, suffer untold billions in damages every year due to the complexity of the system. Maybe we should simplify it.

I suggest, for starters, having “years” with 365 x 4 + 1 = 1461 days instead of the usual year. This would push the leap year problem ahead to the year 2100, when the next special rule kicks in. By that time, software engineering technology should have improved enough that this is no longer an issue, I hope. If not, we can invent another system by then. Let’s also scrap daylight saving time everywhere. It’s easy to do and the savings would be huge.

Tips for academics who develop software

Academics and practitioners, having rather different goals in life, tend to approach software development in quite different ways. No doubt there are many things each side of the fence can learn from the other, but I think academics in particular could often benefit quite a lot by adopting some of the practices used in industrial development. And not just computer science academics!

A common misconception is that these techniques are only useful for large projects and large teams. I find, though, that they can take away many of the growing pains even in small projects, helping them reach maturity much faster.

Use version control. Classic, but invalid, counter-arguments include “it’s a hassle and too much work to set up” and “there’s only one person working on this project anyway”. Even if it’s only you, you will benefit massively from being able to undo your changes far back in time, and it will let you experiment safely. Plus, setup is no longer an issue with free and easy-to-use services like GitHub and Bitbucket. My tool of choice is now Mercurial, and I used to use SVN. And there are many other good choices.

Use a debugger. If there is a debugger available for your language, and there most certainly is, then you should use it to find nontrivial errors rather than relying on extensive printf-style debugging.

Don’t optimise prematurely, but when you need to, use a profiler. Profilers tell you where a program’s performance bottlenecks are. You can profile things like heap usage (which classes use the most space in Java, for instance) and CPU usage (which functions use the most CPU time). For Java, I’ve found that the NetBeans IDE has a very good built-in profiler. Eclipse also has one, but it didn’t work on the Mac last time I checked. For C/C++, gprof used to be good and probably still is.

Use unit testing wisely. All of the above apply even to very small projects, but I think some projects are too small to need unit tests, at least initially; you be the judge. I find that unit tests give the most benefit when applied to the fragile, complicated parts of a system, where many different things interlock. If you are ambitious you can also write the tests first and the code later: test-driven development.
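For a rough, framework-free illustration, here is what such a test could look like in Scala, using plain assertions. A real project would more likely use JUnit, ScalaTest or similar, and the function under test here is just a stand-in.

 // A tiny stand-alone test exercising a fragile corner (here, leap year rules).
 object LeapYearTest {
   def isLeapYear(n: Int): Boolean =
     if (n % 400 == 0) true
     else if (n % 100 == 0) false
     else n % 4 == 0

   def main(args: Array[String]): Unit = {
     assert(isLeapYear(2000))    // divisible by 400
     assert(!isLeapYear(1900))   // divisible by 100 but not by 400
     assert(isLeapYear(2012))
     assert(!isLeapYear(2010))
     println("all tests passed")
   }
 }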

Use a good IDE if you can. For a language like Java, where you have to type a lot of code to get something done and spread it across lots of files, a good IDE that can generate boilerplate and navigate code quickly can really speed up your work. It’s beneficial for other languages too. But I have no problem with people who use pure vim or emacs; after all, these are practically IDEs in their own right.

I believe that honing your software development skills as an academic can pay off. Also see: Daniel Lemire on why you should open source your projects. (I will get around to doing this eventually, I promise ;-))

Why Scala? The mixing of imperative and functional style

Scala is a little wonderland sprinkled with useful things you can mix and match as you like to improve your coding experience while staying on the Java platform. The Option classes, the structural case matching, the compact declarations, lazy evaluation… the list goes on. But at the heart of it is the decision to mix freely the functional and imperative programming styles.

How does this work in practice?

  • Statements can have side effects, like in Java
  • The last expression evaluated in a function is its return value by default
  • Every expression evaluates to a value, even control flow constructs like if… else, unlike in Java (see the small example below)
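
A small illustrative snippet (my own, not from any particular codebase) showing all three points at once:

 def describe(n: Int): String = {
   println("classifying " + n)                            // a side effect, Java-style
   val sign = if (n < 0) "negative" else "non-negative"   // if/else is an expression with a value
   sign + " number"                                        // the last expression is the return value
 }

 // describe(-3) prints "classifying -3" and returns "negative number"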

The bottom line is that some problems call for a functional programming style, and others for an imperative one. Scala doesn’t force you into a mold; it just gives you what you need to express what you’d like to express. This can lead to very compact code. Here’s a function that recursively finds all files ending in .java, starting from a given directory. The File class here is the standard Java java.io.File!

Remember, the last expression evaluated is the return value.

 import java.io.File

 def findJavaFiles(dir: File): List[File] = {
    val files = dir.listFiles()                            // the directory's immediate children
    val javaFiles = files.filter({_.getName.endsWith(".java")})
    val dirs = files.filter({_.isDirectory})
    javaFiles.toList ++ dirs.flatMap{findJavaFiles(_)}     // recurse into subdirectories
  }

But we can write it even more compactly at the expense of some clarity (the result type still has to be declared explicitly, since the method is recursive):

 def findJavaFiles(dir: File): List[File] = {
    val files = dir.listFiles()
    files.filter(_.getName.endsWith(".java")).toList ++
      files.filter(_.isDirectory).flatMap{findJavaFiles(_)}
  }

Now write this function in Java and see how many lines you end up with.

Standard new Mac setup routine

I just got a new laptop, courtesy of the lab. Naturally, it’s of the fruity kind. One of the first steps: install essential software.

I thought I’d make a list of software I consider absolutely essential on any new computer, and it became longer than I thought.

General use:

NetNewsWire for news reading

Dropbox for file syncing

OmniFocus as a task organizer (the GTD methodology actually works — it has liberated me from reciting a long list of things to do in my head all day long)

CircusPonies Notebook for note taking

iStat Pro for system monitoring

If I want to develop software:

Eclipse

Fink and MacPorts so I can get various Unix tools (I can’t settle for one or the other, since some tools are only available in one of them, but normally Fink is nicer since the packages are precompiled)

Apple’s developer tools

If I want to read and write papers:

TeXShop

Mendeley Desktop

So these are the “absolute essentials”. Of course web apps like Gmail count too, but they require no installation. Anything I’ve missed?

One thing I do not install, but perhaps should, is Apple’s MobileMe. Considering how fruity my environment is, there ought to be some benefit. But between Dropbox, my own DAV server for calendars, and built-in syncing of apps like OmniFocus, I can make things stay in sync anyway, so MobileMe is probably not worth the cost… I think.

Making playtime useful with color filling games

Flood-It, a color filling game. This version was made by Lab Pixies for the iPhone, but many others exist.

There’s a veritable torrent of little games constantly being released for the iPhone. One of the more likable ones is Flood-It, which I’ve been playing recently. The premise is extremely simple: you start off with a grid divided into squares of different, randomized colors. You are given a tool that works a bit like the bucket fill in a picture editor. At each turn, the player chooses a color to fill the grid with, starting from the upper left corner. The monochromatic area slowly grows, and the aim is to fill the entire grid with a single color within a limited number of turns.
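To make the mechanic concrete, here is a rough sketch of a single move; the representation and names are my own invention, not taken from any particular implementation. The grid is an array of color indices, and a move recolors the connected region containing the upper left square.

 // One Flood-It move: recolor the region connected to the top-left corner.
 def floodMove(grid: Array[Array[Int]], newColor: Int): Unit = {
   val oldColor = grid(0)(0)
   def fill(r: Int, c: Int): Unit =
     if (r >= 0 && c >= 0 && r < grid.length && c < grid(r).length &&
         grid(r)(c) == oldColor) {
       grid(r)(c) = newColor                     // recolor this square...
       fill(r + 1, c); fill(r - 1, c)            // ...and spread to its neighbors
       fill(r, c + 1); fill(r, c - 1)
     }
   if (oldColor != newColor) fill(0, 0)          // guard against infinite recursion
 }

 // The game is won when every square has the same color; the challenge is
 // getting there within the allowed number of moves.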

A recent analysis showed that finding an optimal solution to games like Flood-It is an NP-hard problem; deciding whether a given board can be solved in at most n moves is NP-complete. The analysis relies on a reduction from the SCS problem (shortest common superstring) to Flood-It. (It’s important to note that what is NP-complete is deciding whether a particular board can be solved within a given number of moves; simply finishing the game, with no bound on the number of moves, can be done in polynomial time.) For those who need a summary, Communications of the ACM had an excellent review of the state of the P/NP problem in September last year.

For an NP-hard problem H, there exists a polynomial-time reduction of any problem in NP to H, meaning that if we could solve H in polynomial time, we could solve every problem in NP in polynomial time. Many optimization problems in society rely on approximate solutions to difficult problems: routing traffic, assembling DNA sequences from partial subsequences, mathematical theorem proving… On the hypothesis that evolution has turned people into efficient solvers of hard problems (i.e. we have good heuristics in our brains, from birth and from experience), we ought to pay people to play these games on their phones, but map real problems into game instances, so that people are effectively working while they play. We ought to design games that act as front ends for real combinatorial problems.

A computer game, as we understand it, is essentially a very smooth learning curve, and if we only “played” very tricky instances of combinatorial problems, the game would probably present too high a barrier to new players. So maybe the best way to execute this kind of scheme would be for the majority of game instances to represent not real problems but mere training, or verification of already solved problems, with a real problem popping up every once in a while. The player should still get paid, though.

A double benefit would be blurring the line between work time and play time, between what is useful and what is useless; I think this line is often artificially constructed anyway. Has technology ever before given us the possibility to literally turn work into play?

Acknowledgements. I am indebted to Christian Sommer for showing me the complexity analysis of Flood-It.

The Flood-It game, easy difficulty setting, with the player having made some progress.