Programming, the writing of computer code to solve a specific problem, is a young intellectual discipline. It has roots going back to logic and mathematics, but as a human endeavour it is relatively new. It is constrained by hardware, by mathematics, by programming languages, and by what we might call technical culture: APIs, programming interfaces, and concepts already invented by others. It is a vast canvas for thought: in the machine, imagination becomes physical reality.
For mysterious reasons, I became fascinated with computers the moment I saw one as a child in the 1990s (a friend’s Commodore 64). They seemed to be a window into a different world. As a teenager I tried to teach myself how to program. Sometimes I was self-taught, sometimes formally trained, and over the course of my life it feels like I’ve touched (with varying aptitude) everything from hacky PHP and Basic to more rarefied Prolog and Haskell (today I almost exclusively use Scala). Having studied computer science academically and worked in a few fields, today I work in bioinformatics, where we study DNA and genomes – perhaps the “code” of biology and cells.
In my own estimation, my skill and creativity in programming are still increasing as I approach middle age. But over time I have come to value different things and think differently about this craft. It struck me that some of the perspective shifts I have made might apply to human creativity in general.
The biggest single such shift was a new focus on simplicity that I found only recently. In my twenties, I worked on many projects that were complex for the sake of being complex. I was probably (I’m a little ashamed to say) impressed by the difficulty of the things I myself was doing. This style of work was not pointless, as I could still produce meaningful results, and I was constantly learning. It was the free exploration of new frontiers. But it was powered by youthful arrogance: I would not easily have listened to older people telling me at the time to keep things simpler. I wouldn’t have known what they were trying to tell me. I would have insisted on covering all this ground on my own.
At some point, my designs got too big to sustain, and I was held back by my refusal to simplify the things I was making. This was in part a reluctance to part with work that I had put a lot of time into, and in part indecisiveness and a desire to let my project be all things to all people. Slowly I realised that this had to change. I took the plunge and gave one of my projects a much sharper focus. The quality improved dramatically and I could keep up the work for much longer. I wondered why I had not arrived at such a simple truth earlier.
What does such a sharp focus look like? I will try to generalise. In order to find simplicity we need a source of truth, a meaningful signal to measure against. In programming this could be the fundamental limits of algorithms (how quickly can we sort strings, and how much memory do we need to represent them?) or of the domain problem (how quickly can we align DNA against a subject genome?). It could be product requirements (how fun can we make a video game that we are developing?). Once we have a source of truth, we can measure the value of any changes or additions we are making. Generally, adding code increases cognitive load, as the project becomes harder to understand and communicate. When we can measure value, we can investigate the ratio of the value gained to the cognitive price paid for it. It may not be a good return on investment. If not, we should refuse to add that code. Perhaps we make a note of the decision and the reasons behind it.
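To make the "fundamental limits as a source of truth" idea concrete, here is a minimal sketch (my illustration, not from the original post; written in Java for portability, though I would normally reach for Scala). Comparison-based sorting needs on the order of n log n comparisons in general, but equal-length strings over a small alphabet can be sorted in O(n · w) time with LSD radix sort. That known bound is a signal to measure any cleverer proposal against: if an addition cannot beat it, the simpler code should win.

```java
public class LsdRadixSort {
    static final int RADIX = 256; // extended ASCII alphabet

    // Sorts strings that all share the same length w, working from the
    // least significant character to the most significant; a stable
    // counting sort on each position preserves earlier passes.
    static String[] sort(String[] input) {
        if (input.length == 0) return input;
        int w = input[0].length();
        String[] a = input.clone();
        String[] aux = new String[a.length];
        for (int d = w - 1; d >= 0; d--) {
            int[] count = new int[RADIX + 1];
            for (String s : a) count[s.charAt(d) + 1]++;              // frequencies
            for (int r = 0; r < RADIX; r++) count[r + 1] += count[r]; // cumulative offsets
            for (String s : a) aux[count[s.charAt(d)]++] = s;         // stable distribute
            String[] t = a; a = aux; aux = t;                         // swap buffers
        }
        return a;
    }

    public static void main(String[] args) {
        String[] sorted = sort(new String[] {"dab", "cab", "fad", "bad", "ace"});
        System.out.println(String.join(",", sorted)); // prints: ace,bad,cab,dab,fad
    }
}
```

Each pass touches every string once, so w passes cost O(n · w) total, independent of how the strings compare to each other.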
Simplicity often correlates with removing things, though size is not the only measure. What we should care about most is cognitive load: the ease of understanding for a reader (and of course, ease of use for users). Code that is easy to understand is honest, and easy to return to and build on in the future. It is easy to communicate to others, so it can be a good basis for teamwork. But because this often means mercilessly removing things that we put many hours into, the surviving code may be just the tip of an evolutionary iceberg. Maybe in order for five branches to survive in the repository, fifty branches had to be attempted (hopefully leaving some records of what was learned). In order to find the best algorithm for a research problem, maybe ten algorithms had to be tried, and the nine unfortunate ones ripped out for the sake of simplicity. Keeping code of unclear value around (opportunistically) leads to confusion, although it can certainly be kept in some private branch in case it is needed in the future.
In general, a lot of experimental, creative processes that are trying to home in on a goal will have a lot of hidden history, in arts and crafts as well as technology. Certainly in technical and scientific research. A perfect gem can be the result of laborious experimentation that perhaps nobody remembers anymore. How much work went into discovering even the basic designs and appliances that we use in our homes every day? How many melodies did Bach never write down?
Although that may seem tragic, it brings vast benefits and may be the only way to reliably sustain a creative process in certain fields. The world might always remain complex, but we can mostly demand that productions of the mind be simple. (I would make an exception for fields like philosophy and some of the arts, where the objective may sometimes be to undo ingrained simplicity for the sake of increased contact with reality.)
Finally, as I write my first blog post in a couple of years: let’s keep blogging alive! Blogs were once part of a better internet that we can still preserve to some degree, and, I think, a format that will outlast the insane social media frenzy of today. He who has eyes, let him read.