The limits of responsibility

(The multi-month hiatus here on Monomorphic has been due to my working on my thesis. I am now able to, briefly, return to this and other indulgences.)

Life presupposes taking responsibility. It presupposes investing people, objects and matters around you with your concern.

In particular, democratic society presupposes that we all take, in some sense, full responsibility for society itself, its decision making and its future.

However, he who lacks information about some matter cannot take responsibility for it. And thus we often defer to authorities in practice. Authorities allow us to specialise our understanding, which increases our net ability to understand as a collective, assuming that we have sufficiently well-functioning interpersonal communication.

There are whole categories of problems that are routinely assigned to specific, predefined authorities and experts: legal matters, constitutional matters, whether some person is mentally ill, medical matters, nuclear and chemical hazards, and so on – fields where extensive training is generally required. (However, under the right conditions, these authorities could probably also be called into question by public opinion.) The opposite is those categories of problems that are routinely assigned to “public opinion” and all of its voices and modulating contraptions and devices, its amplifiers, dampeners, filters, switches and routing mechanisms.

Responsibility aside, in order to maximise an individual’s prospects for life, and by extension society’s prospects for life, it seems important that the individual possess just the knowledge that they need in their situation. Adding more knowledge is not always a benefit; some kinds of knowledge can be entirely counterproductive. Nietzsche showed this (“On the use and abuse of history for life”), and we can easily apply the idea of computational complexity to see how having access to more information would make it harder to make decisions.

This is especially true for some kinds of knowledge: knowledge about potential grave dangers, serious threats, monumental changes threatening to take place. Once we have such knowledge we cannot unlearn it, even if it is absolutely clear that we cannot act on it and that we do not have the competence to assess the situation fully. It takes effort and an act of will to fully disregard a threat on the basis of one’s own insufficient competence.

On the other hand, knowledge about opportunities, about resources, and about problems that one is able to, or could become able to, deal with would generally be helpful and not harmful. However, even this could be harmful if the information is so massive as to turn into noise.

Even disregarding these kinds of knowledge, one of the basic assumptions of democracy – that each individual takes full responsibility for society – seems to be an imperative that is designed never to be fulfilled. An imperative designed to be satisfied by patchworks of individual decisions and “public opinion”, and whatever information fate happens to throw in one’s way. Out of a basic, healthy understanding of their own limitations, individuals generally assume that the democratic imperative to know and to take responsibility was never meant to be taken seriously anyway, but they do their best to match their peers in appearing to do so.

It seems to me that the questions we must ask and answer are about the proper extent of responsibility, and the proper extent of knowledge, for each individual. For the individual, taking on no responsibility seems detrimental to life; taking on full responsibility for all problems in the world right now, here today, would also be an impossibility. There would be such a thing as a proper extent of responsibility. One’s initial knowledge and abilities would inform this proper extent of responsibility, and the two might properly expand and shrink together, rather than expand and shrink separately.

In a democratic society, in so far as one wants to have one, we should ask: what is the proper level of responsibility that society should expect from each individual, and what level should the individual expect from himself as an ideal?

More generally, empirical studies of how public opinion functions and how democracies function in practice are needed. It is inappropriate to judge and critique democracies based on their founding ideals when the democratic practice differs sharply from those ideals – as inappropriate as it is to critique and judge economies based on the presumption that classical economic principles apply to economic practice in the large.

Platonism and the dominant decomposition

I’m in Portland, Oregon for the SPLASH conference. There’s a lot of energy and good ideas going around.

I gave a talk about my project, Poplar, at the FREECO workshop. At the same workshop there was a very interesting talk given by Klaus Ostermann, outlining some of the various challenges facing software composition. He linked composition of software components to concepts in classical logic, and informally divided composition into a light side and a dark side. On the light side are ideal concepts such as monotonicity (the more axioms we have, the more we can prove), absence of side effects and a single, canonical decomposition of everything. On the dark side are properties such as side effects, the absence of a single decomposition, knowledge that invalidates previously obtained theorems, and so on.

One of the ideas that resonated the most with me is the tyranny of the dominant decomposition (for instance, a single type hierarchy). Being forced to decompose a system in a single way at all times implies only having a single perspective on it. Is this not Platonism coming back to haunt us in programming languages? (Ostermann did indeed say that he suspects that mathematics and the natural sciences have had too much influence on programming.) What we might need now is an anti-Platonism in programming: we might need subjectivist/perspectivist programming languages. If components can view their peer components in different ways, depending on their domain and their interests (i.e. what kind of stakeholders they are), we might truly obtain flexible, evolvable, organic composition.
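To make this a little more concrete, here is a minimal Java sketch (all names are hypothetical, invented purely for illustration) of what “several perspectives on the same component” can look like even with today’s means, using nothing but interfaces: each client depends only on the view that matters to it, rather than on one dominant hierarchy.

    // Hypothetical names, for illustration only.
    interface Payable {
        double monthlySalary();
    }

    interface Schedulable {
        boolean availableOn(String day);
    }

    // One concrete component, seen differently by different stakeholders.
    class Employee implements Payable, Schedulable {
        private final double salary;

        Employee(double salary) { this.salary = salary; }

        public double monthlySalary() { return salary; }

        public boolean availableOn(String day) { return !"Sunday".equals(day); }
    }

    class Perspectives {
        public static void main(String[] args) {
            Employee e = new Employee(3000.0);

            // The payroll code only ever sees the Payable perspective...
            Payable forPayroll = e;
            System.out.println(forPayroll.monthlySalary());

            // ...while the scheduling code only sees the Schedulable one.
            Schedulable forScheduling = e;
            System.out.println(forScheduling.availableOn("Monday"));
        }
    }

Of course, these views still live in one global namespace and must all be declared up front on Employee itself; a genuinely perspectivist language would presumably let the clients define their own views of a component.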

What makes a good programming language?

New programming languages are released all the time. History is littered with dead ones. There are also many long-time survivors in good shape, as well as geriatric languages on life support.

What makes a programming language attractive and competitive? How can we evaluate its quality? There are many different aspects of this problem.

Ease of reading and writing, or: directness of the mapping between the problem in your head and the model you are creating on the computer. This can be highly domain dependent; for instance, languages such as LaTeX, Matlab and R are designed with specific problems in mind and cater to users from that domain. Their limits show quickly when you try to stretch them beyond their envisioned purpose. Speaking of general programming languages, I think Python deserves to be mentioned as a language that is extremely readable and writable. It has other shortcomings though – see below. Prolog is also highly read- and writable if it suits your problem.

Runtime performance. Arguably this is one of the few reasons to bother with using C++. For the majority of programming projects though, performance is much less of a problem than one might think, especially if one considers how close the performance of many JVM languages gets to that of C++. When programmers think about their overall productivity and effectiveness in developing and maintaining a system, C++ is often not the best choice.

Scalability to large teams. The key property here is: does the language do anything to help me, as a developer, work with code that other people wrote? Ease of maintenance may be strongly correlated with usability in large teams. An anti-pattern here is languages that allow the same problem to be solved in a huge number of ways with highly variable syntax. For instance, Perl and C++ can lead to notoriously unmaintainable code if used carelessly. Some say that Scala also suffers from this problem. Basically, the language helps here if it prevents me from doing things that other developers might not expect, and that I might forget to document or communicate. This is why Gosling famously called Java a blue-collar language; it restricts you enough to make teamwork quite practical. It even restricts the layout of your source file hierarchy. (Now we begin to see that some goals are in conflict with each other.)

Scalability to large systems. This is related to the preceding property, but whereas team scalability seems to be mainly about avoiding the creation of code fragments that surprise people other than their creators, system size scalability seems to be about avoiding the creation of code fragments that surprise other code fragments. Here one needs invariants, good type checking, and static constraints of all kinds. Scripting languages like Perl and Python, lacking static typing completely, are among the worst in this regard, since we cannot even be sure at startup time that the methods we try to invoke on objects exist at all.
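As a small, hypothetical illustration of the difference (the class names are mine): in Java, a call to a method that does not exist is rejected by the compiler before the program ever runs, whereas in Python the corresponding call would only fail if and when that line is actually executed.

    // Hypothetical example: static typing rejects surprises before startup.
    class Account {
        private long balanceInCents = 0;

        void deposit(long cents) { balanceInCents += cents; }

        long balance() { return balanceInCents; }
    }

    class Ledger {
        public static void main(String[] args) {
            Account a = new Account();
            a.deposit(100);
            // a.withdraw(50);  // would not compile: Account has no withdraw method.
            // In Python, the equivalent call would raise AttributeError only at
            // run time, and only if this code path is ever exercised.
            System.out.println(a.balance());
        }
    }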

Scalability over time (maintainability). If there is both system size scalability and team scalability, then the system is also likely to be able to live for a long time without great troubles.

Developer efficiency and rapid prototyping. Depending on the nature of the system being developed, this may depend on several different properties listed above.

Availability of quality tools. Mature runtime environments, such as the JVM, have many more high-quality tools and IDEs available than a language like Ruby does. Mature languages also have more compilers available for more architectures.

These points begin to give us an idea of how we can evaluate programming languages. However, I also believe that making a good language and getting people to use it is largely about luck and factors outside the design itself. Just as there’s a big step between imagining and specifying a utopian society and making that social order an actuality, there’s a big step between designing an ideal programming language and achieving widespread adoption for it. We have seen a way forward though: with generalised runtime environments such as the JVM and the CLR, we may develop and deploy languages that take advantage of a lot of existing infrastructure much more easily than before. And what I hope for is in fact that it becomes even easier to deploy new languages, and that new languages are as interoperable as possible (insofar as it doesn’t constrain their design), so that we could see more competition, more evolution and more risk taking in the PL space.

Pointers in programming languages

It is likely that few features cause as many problems as pointers and references in statement-oriented languages, such as C, C++ and Java. They are powerful, yes, and they allow us to control quite precisely how a program is to represent something. We can use them to conveniently compose objects and data without the redundancy of replicating information massively. In languages like C they are even more powerful than in Java, since just about any part of memory can be viewed as if it were just about anything through the use of pointer arithmetic, which is indeed frightening.

But they also complicate reasoning about programs enormously. Both human reasoning and automated reasoning. Pointers allow any part of the program to have side effects in any other part of the program (if we have a reference to an object that originated there), and they make it very hard to reason about the properties that an object might have at a given point in time (since we generally have no idea who might hold a reference to it – it is amazing that programmers are forced to track this in their heads, more or less). In my effort to design my own language, multiple pointers to the same objects – aliases – have come back from time to time to bite me and block elegant, attractive designs. I believe that this is a very hard problem to design around. Aliased pointers set up communication channels between arbitrary parts of a program.
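Here is a tiny Java example (hypothetical, but the behaviour is entirely standard) of the kind of action at a distance that aliases make possible:

    import java.util.ArrayList;
    import java.util.List;

    class AliasingDemo {
        public static void main(String[] args) {
            List<String> original = new ArrayList<>();
            original.add("a");

            // No copy is made here; 'alias' is just another name for the same object.
            List<String> alias = original;

            // Imagine this mutation happening in some distant part of the program...
            alias.clear();

            // ...yet 'original' is affected, far from where the change was made.
            System.out.println(original.size()); // prints 0
        }
    }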

Nevertheless, attempts have been made, in academia and in research labs, to solve this problem. Fraction-based permissions track how many aliases exist and endow each alias with specific permissions to access the object that is referred to. Ownership analysis forces access to certain objects to go through special, “owning” objects. Unique or “unshared” pointers in some language extensions restrict whether aliases may be created or not. But so far no solution has been both attractive and convenient enough, and none has made it into mainstream languages. (I know that Philipp Haller made a uniqueness plugin for the Scala compiler, but it is not in wide use, I believe.)
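To sketch the flavour of the ownership idea (this is my own simplification, not any particular system from the literature, and the names are hypothetical): the owned data never escapes its owner, so no outside aliases can exist and all mutation is funnelled through one place.

    // A rough, hand-enforced sketch of ownership-style access.
    class Owner {
        // The owned object is private and is never handed out to callers.
        private final StringBuilder owned = new StringBuilder();

        // All access is mediated by the owner.
        void append(String s) { owned.append(s); }

        // Only immutable copies leave the owner.
        String snapshot() { return owned.toString(); }
    }

    class OwnershipDemo {
        public static void main(String[] args) {
            Owner owner = new Owner();
            owner.append("hello");
            // No reference to the StringBuilder can be obtained from outside,
            // so nothing can mutate it behind the owner's back.
            System.out.println(owner.snapshot());
        }
    }

The research systems mentioned above go further and make such disciplines checkable by the compiler, rather than leaving them to convention.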

If we are to attempt further incremental evolution of the C-family languages, aliased pointers are one of the most important issues we can attack in my opinion.

Science and philosophy. Another angle.

This is an attempt at restating part of this old blog post in a simpler way.

Scientists are valuable to society. They help extract new knowledge and theories about the world. To the extent that they are right, they improve our affluence, our physical health, and, possibly, our outlook on life. But scientists can also provide us with tools that support or threaten regimes, with weapons, surveillance equipment, encryption, and unexpected discoveries that bring about social change with unexpected consequences. The mass industrialisation of the western world enabled the rise of the middle class like never before, an event whose full consequences might not yet be understood.

Philosophy is traditionally defined as the study of metaphysics, epistemology, ethics, politics, aesthetics and logic. Here are some reasons why scientists may want to expand their knowledge beyond the scientific realm into the philosophical one.

Metaphysics and epistemology. Scientific method originally grew out of philosophy – what we today call science was originally called natural philosophy. Scientific method is not fixed but continues to evolve, and we must continually revise what we know and how we obtain knowledge, particularly in emerging fields. Karl Popper’s famous assertion that scientific claims need to be falsifiable is only one of many recent viewpoints that have gained momentum. Thinkers such as Bruno Latour have asserted that scientific facts are socially constructed through a complex process.

Ethics and politics. New technologies may enable new kinds of interactions between people as well as new possibilities for the individual in their life. The way that individuals interact with new knowledge and new technologies is determined by innate tendencies and desires, as well as by social processes, conventional morality in the society where one lives, and political decisions. If the scientist understands these processes, they are in a position to guide their new findings into the world in an optimal way.

I omit aesthetics from this list for now since its link to science is not straightforward, and logic since it is now an inherent part of mathematics and thus also science. Where logic goes beyond mathematical/symbolic logic, it is of course also worthwhile to study it.

The scientist who also ventures into philosophy will be able to place their scientific findings within an ethical system and within an overall purpose-directed framework. The scientific process by itself mostly does not permit any consideration of these questions, and thus scientists must either submit to an existing ethical system, whether implicitly or explicitly, or create their own. To put it in very blunt terms: the scientist or engineer without an ethical system is sometimes a tool in the hands of others who do have such a system. Awareness that the choice exists can be crucial.