The aesthetics of technology

Different technologies have different kinds of aesthetics, and they affect us in various ways, whether we are particularly fascinated with technology or not.

The easiest technologies to understand on an intuitive-emotional basis seem to be those that involve physical processes. Objects rotating, moving, being lifted and displaced, compressed, crushed. Gases and liquids being sent around in conduits, mediating force and energy. In short, the technology that has its foundation in classical mechanics.

If these are easy to get a feel for, it is probably in part because an understanding of mechanical processes has been useful to us throughout history, and even before the advent of civilisation. An intuitive grasp of things such as momentum, acceleration and gravity has no doubt benefited mankind and its ancestors for a very long time.

It gets trickier when we get to the more recent technologies. Take electricity as an arbitrary watershed. We have no intuitive idea of what electricity is, apart from the fact that we might be afraid of thunder and lightning. Electricity has to be taught through the abstract idea of electrons flowing in conduits, a bit like water in pipes (to name one of the many images in use).

And then there’s analog and digital electronics, integrated circuits, semiconductors and so on, where intuition has long ago been left behind. We are forced to approach these things in a purely abstract domain.

Yet, when our MP3 players, game consoles, mobile phones and computers do things for us, we are left with a sense of wonder. Our minds, always looking for stories and explanations, want to associate the impressive effects produced by these devices with some tangible stimuli. With a steam engine, it’s easy to associate the energy with pressure, heat and motion, all of which are well understood at a low level. With a mobile phone, not so much. A lot of very abstract stories have to be told in order to reach anything that resembles an explanation, and even then they don’t reach the essence of the device, which might lie in the interplay between radio transceivers, sound codec chips, a display with a user interface and the software that drives it, a central CPU, and so on, together with, of course, the network of physical antennas and their connections to other such networks. Is it too much to suppose that the human mind often stops short of the true explanation here? That we associate the effects produced by the device with what we can touch, smell, see and hear?

This is of course the point where many computer geeks worldwide start to feel a certain affection for the materials that make up the machines. Suppose that we are in the 1980s. Green text on a black terminal background. A particular kind of fixed-width font. The clicking of the keyboard. The dull grey plastic used to make the case. All of these things can acquire a lot of meaning that they don’t really have, because the users lack a window (physical and emotional) into the essence of the machine. The ultimate “disconnected machine”, to relate this to my own field, is software.

This brings up questions such as: how far can we as a species proceed with technology that we cannot understand instinctively? How can we teach such technology meaningfully and include it in democratic debate? And how can we put people’s tendency to associate sensory stimuli with meaning and effects to better use, for instance when we design hardware and software interfaces?

One year into the Ph.D. process

I thought I’d write a more personal note for a change.

It’s been just over a year since I started studying for my Ph.D. — formally, I entered the program in April 2009. With at least two years to go, how do things look with some hindsight? What do I think it means to obtain the Ph.D. degree, or, more specifically and usefully, to be a researcher in computer science?

Much of what I’m noticing sounds obvious and natural, like everyday truisms, when put into words, but the idea I have of it goes a little deeper than that. For instance, we are all told over and over throughout our lives, starting in high school, that we have to become good communicators. So it won’t surprise anyone when I say that the process entails becoming a much better communicator than I’ve ever been before. Maybe what’s different is that I am trying to communicate things that haven’t been communicated before, things that I invented, or things that have hitherto been communicated only by a very small number of people. (Most of the communication I did prior to becoming a Ph.D. student may not have been terribly original.) Basically, it comes down to reading and understanding a large number of scientific papers, and understanding them with a particular use in mind, having or acquiring a sense of how they fit into your own work. Then, writing your own papers and communicating, somehow, what you thought, and what you were the first person to think, so that somebody else might read your work the way you read the works of others, and use it similarly. Then, presenting research, discussing it, and understanding what is being presented and discussed by others: similar challenges in speech instead of in writing.

I can’t speak for other fields, but in computer science (I work with programming languages and software engineering), I find that a lot of this has been about building up a certain mental dexterity with formalisms. Understanding the implications of formalisms as you read about them and seek to apply them. Communicating formalisms to others. Some of this is still difficult, in particular the “communicating to others” part, but I think I am making progress in that regard.

Communication, then, where does it take us? One of my mental images of academic knowledge is a big directed acyclic graph in which papers reference other papers. A surprisingly big part of writing a paper is ensuring that your work can be assimilated into this graph easily: placing it well, referencing the right things, making sure that you can be referenced easily. Also: defining the boundaries of your work extremely well. Here’s where it begins, here’s where it ends. We assume precisely this and arrive at precisely that. It really seems that these things can never be made clear enough.

Which leads to another mental image of research: the paper, or unit of work, as a building block. The more solid it is, and the harder and sharper its surfaces and edges, the better the structures that can be built from it (though I think there are other kinds of valuable works too). That’s one direction I think I need to aim for as an aspiring researcher.

Doing generality right

Many software developers, while making a tool to solve a specific problem, heed the siren call of generality. By making a few specific changes, they can turn the tool into a general framework for solving a larger class of problems. And then, with a few more changes, an even larger class of problems, and so on. This often turns into a trap: the end of the line may be an over-generalised tool that isn’t very good at solving any problem, because the specificity that was there in the first place was part of what made it powerful. In this way, constraints can equal freedom.

Sometimes, though, the generalisers get it right. These are often moments of exceptional and lasting innovation. One example of such a system is the fabulously influential (but today, not that widely used) programming language Smalltalk. Designed by the former jazz guitarist and subsequent Turing Award winner Alan Kay and his colleagues at Xerox PARC, Smalltalk was released as one of the first true object-oriented programming languages in 1980. It is probably still ahead of its time. It runs on a virtual machine, it has reflection, everything is an object, and the separation between applications is blurred in favour of one big box of objects. On running Squeak, a popular Smalltalk implementation, with its default system image today, users discover that all the objects on the screen, including the IDE used to develop and debug objects, appear to follow the same rules. No objects seem to have special privileges.
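
To make the uniformity point a little more concrete, here is a small, purely hypothetical sketch in Java rather than Smalltalk (Java comes up again below). It only hints at the idea: in Smalltalk every tool is built from the same objects it manipulates, while Java’s reflection gives a partial echo of that, letting a program inspect its own classes, and even the reflection machinery itself, through the ordinary object model.

    import java.lang.reflect.Method;

    // Hypothetical sketch: a faint Java echo of Smalltalk's "everything is an object".
    // Ordinary objects, their classes, and the class of those classes can all be
    // inspected through the same object model at runtime.
    public class UniformObjects {
        public static void main(String[] args) {
            Object greeting = "hello";                   // an ordinary object
            Class<?> itsClass = greeting.getClass();     // its class is itself an object
            Class<?> classOfClass = itsClass.getClass(); // ...and so is that class's class

            System.out.println(itsClass.getName());      // java.lang.String
            System.out.println(classOfClass.getName());  // java.lang.Class

            // The same reflective questions can be asked of any object,
            // including the reflection machinery itself.
            for (Method m : classOfClass.getMethods()) {
                System.out.println(m.getName());
            }
        }
    }

In Smalltalk the correspondence goes much further, since the browser and debugger are themselves made of the same objects, but the sketch gives the flavour.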

Another such system is an application that used to ship with Mac computers in the distant past, HyperCard. HyperCard enabled ordinary users to create highly customised software using the idea of filing cards in a drawer as the underlying model, blurring the line between end users and developers through its accessibility. I haven’t had the privilege of using it myself, but it seems it was as powerful as it was because it served up a homogeneous and familiar model, in which everything was a card, and yet the cards had considerable scope for modification and special features. Even though, in some ways, this system appears to be a database, the cards didn’t need to have the same format, for instance. (Are we seeing this particular idea being recycled in a more enterprisey form in CouchDB?)
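
Again purely as a hypothetical illustration (this is not HyperCard’s actual data model, just a sketch of the idea in Java), a “card” can be thought of as a bag of named fields, where cards in the same stack need not share a format:

    import java.util.Arrays;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch of the "stack of cards" idea: every record is a card,
    // all cards answer to the same small set of operations, but no two cards
    // need to have the same fields.
    public class CardStack {
        public static void main(String[] args) {
            Map<String, Object> recipe = new LinkedHashMap<>();
            recipe.put("title", "Miso soup");
            recipe.put("ingredients", Arrays.asList("dashi", "miso", "tofu"));

            Map<String, Object> contact = new LinkedHashMap<>();
            contact.put("name", "A. Kay");
            contact.put("phone", "555-0100"); // different fields, same kind of card

            List<Map<String, Object>> stack = Arrays.asList(recipe, contact);
            for (Map<String, Object> card : stack) {
                System.out.println(card.keySet()); // uniform access, heterogeneous content
            }
        }
    }

Document databases such as CouchDB take essentially this kind of schema-less record as their basic unit, which is why the comparison suggests itself.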

There are more examples of successful, highly general design: the Unix file system, TCP/IP sockets and so on. What they have in common is that they are easy to hold as a mental model, since a universal set of rules applies to all objects; they scale well in different directions when put to different purposes; and they give the user a satisfying sense of empowerment, blurring the line between work and play to draw on the user’s natural creativity. Successful general systems are the ones that can be applied in quite varied situations without tearing at the seams.
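
A small, hedged Java sketch of the “universal rules” point, in the spirit of Unix treating so many things as files: a local file and a TCP socket can both be read through the same InputStream abstraction, so code written against the general interface never needs to know which one it was given. (The path and host below are just placeholders for the example.)

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.net.Socket;

    // Hypothetical sketch: one routine, written against the general InputStream
    // interface, works unchanged for a local file and for a network connection.
    public class UniformStreams {
        // Counts the bytes in any stream; the caller decides what backs the stream.
        static long countBytes(InputStream in) throws Exception {
            long n = 0;
            while (in.read() != -1) n++;
            return n;
        }

        public static void main(String[] args) throws Exception {
            // A file (placeholder path)...
            try (InputStream file = new FileInputStream("/etc/hosts")) {
                System.out.println("file bytes: " + countBytes(file));
            }
            // ...and a TCP connection (placeholder host), read through the same interface.
            try (Socket socket = new Socket("example.com", 80)) {
                socket.getOutputStream().write("GET / HTTP/1.0\r\n\r\n".getBytes());
                System.out.println("socket bytes: " + countBytes(socket.getInputStream()));
            }
        }
    }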

While not widely used by industrial programmers today, Smalltalk was incredibly influential. In the early 1980s, Objective-C was created by Brad Cox and Tom Love, directly inspired by what the Smalltalk designers had done. Objective-C subsequently became the language of choice for NeXTStep, and later for Apple’s Mac OS X when Apple bought NeXT. Today it is seeing a big surge in popularity thanks to devices like the iPhone, on which it is also used. In 1995 Java was introduced, owing a great deal of its design to Objective-C, but also introducing features such as a universal virtual machine and garbage collection, which Objective-C didn’t have at the time. In some sense, both Objective-C and Java are blends of the C family of languages and Smalltalk. Tongue in cheek, we might say that evolution in industrial programming these days seems to consist of finding blends that contain less of the C model and more of Smalltalk or functional programming.

Call for research interns

The project I’ve been working on for some time (the last year or so) is starting to acquire a more definite form, and hopefully more information about it will be released in the coming months. Its official name is now Poplar, and the current official overview is available here. Basically, it revolves around protocol-based component composition for Java.

There is now an opportunity to do a paid internship for 2-6 months at Japan’s National Institute of Informatics in this project. If you are a master’s or Ph.D. student who is interested in programming languages and software engineering, and you think the project sounds interesting, please do consider applying. Unfortunately, the internship is only open to students of institutions that have signed MOU agreements with NII. The list of such universities, which includes many institutions from around the world, as well as more formal information, is available here. If you know or want to learn Scala, all the better!

Overloading words in research and programming

In research and academia, one of the fundamental activities is the invention and subsequent examination of new concepts. For concepts, we need names.

One way of making a name is stringing words together until the meaning is sufficiently specific. E.g. “morphism averse co-dependent functor substitutions in virtual machine transmigration systems”. Thus the abstruse academic research paper title is born.

Sciences sometimes give new meanings to existing words. This could be called overloading, following the example of object-oriented programming. E.g. a “group” in mathematics is something different from the everyday use of the term. A “buffer” in chemistry is something different from a software or hardware buffer, even though a fragment of similarity is there. And so on. This overloading of words gives newcomers to the field a handle on what is meant, but full understanding is still impossible without understanding the actual definitions being employed.
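
For readers who don’t program, here is a minimal sketch of what “overloading” means in the sense the analogy borrows: the same name carries several meanings, and context (in code, the argument types) picks the right one.

    // Minimal sketch of overloading in the object-oriented sense: one name,
    // several meanings, disambiguated by context (here, the argument types).
    public class Overloading {
        static String describe(int n)    { return "an integer: " + n; }
        static String describe(String s) { return "a string: " + s; }

        public static void main(String[] args) {
            System.out.println(describe(42));       // uses the int version
            System.out.println(describe("group"));  // uses the String version
        }
    }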

Sometimes new terms can be created using inventors’ names and everyday words. E.g. a “Lie group” or the “Maxwell equations”, or “Curry-Howard correspondence”. This is potentially useful, but perhaps not something you can do freely with your own research without seeming like you’re trying to inflate your ego excessively. (Even though researchers love inflating their egos, nobody wants to admit it.)

There’s a similar problem in software development. When we invent names for functions, classes and variables, the shortage of words becomes very clear. Intuitively, what is an “adapter registry”? An “observer list”? Or an “observer list mediation adapter”? My feeling is that we often end up compounding abstract words because we have no better choice. And here lies a clue to some of the apparent impermeability of difficult source code. We need better ways of making names. We’re inventing ideas faster than our language can stretch.
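
As a purely hypothetical illustration (the class names below are made up for the example), here is the same tiny component under two names: one compounded from abstract words, and one that tries to say what the object is for. The behaviour is identical; only the handle offered to the reader changes.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    // Hypothetical example of the naming problem: identical behaviour, two names.
    public class NamingExample {
        // "What is an observer list mediation adapter?"
        static class ObserverListMediationAdapter<T> {
            private final List<Consumer<T>> observers = new ArrayList<>();
            void register(Consumer<T> o) { observers.add(o); }
            void dispatch(T event) { observers.forEach(o -> o.accept(event)); }
        }

        // The same thing, named for what it is used for.
        static class ChangeNotifier<T> {
            private final List<Consumer<T>> listeners = new ArrayList<>();
            void onChange(Consumer<T> l) { listeners.add(l); }
            void announce(T change) { listeners.forEach(l -> l.accept(change)); }
        }

        public static void main(String[] args) {
            ChangeNotifier<String> notifier = new ChangeNotifier<>();
            notifier.onChange(msg -> System.out.println("changed: " + msg));
            notifier.announce("document saved");
        }
    }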