The coming politicization of mathematics and computer science

Increasingly, ordinary people encrypt their internet communications. Some want to share files. Some are worried about the surveillance, actual or threatened, of internet data that is taking place in many corners of the world; ACTA, Hadopi and data retention laws are a few examples. People may simply wish to keep their data private, even when the data is not objectionable. Others, hopefully not so ordinary, have an acute need to hide from authorities of some form or another, maybe because they actually have criminal intent, or maybe because they are regime critics in repressive countries. Maybe they are submitting data to sites like Wikileaks.

Various technologies have come out of academic experiments, volunteer work and government-sponsored research to assist with encrypted communication. PGP/GnuPG and SSH are classic mainstays. Onion routing, as implemented in the Tor system, is an effective way of concealing the true origin and destination of data being sent around. Darknet systems like the I2P project aim to build a complete infrastructure for an entirely new kind of internet, piggybacking on the old one but with anonymity and encryption as first-class, fundamental features.

I think we are only at the start of a coming era of political conflicts centered around communications technology, and that more and more issues will have to be ironed out in the coming years and decades. The stakes are high: on one hand control and political stability, on the other individual rights and democratic progress. This is not new. One thing that I think is potentially new and interesting, though, is how mathematics and computer science are likely to become increasingly sensitive and political in the coming years.

Today, disciplines like genetics and stem cell research are considered controversial research areas by some people, since they touch on the very foundations of what we think of as life. Weapons research of all kinds is considered controversial for obvious reasons, and the development of a weapon on the scale of nuclear bombs would completely shift the global power structure. Cryptography sits in similar territory: one fundamental building block of communications control is the ability to encrypt and to decrypt, and these abilities are ultimately limited by the frontiers of mathematical research. Innovations such as the Skein hash function directly affect the cryptographic power balance.

Most of the popular varieties of encryption in use today can be overcome, given that the adversary has sufficient computing power and time. In addition, human beings often compromise their keys, trust the wrong certificates, or act in ways that diminish the security that has been gained. Encryption is not absolute unless the fact that something has been encrypted has itself been perfectly hidden. Rather, it is a matter of economics: of making it very cheap to encrypt data, and very expensive for unintended receivers to decrypt it.
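To make this economic asymmetry concrete, here is a back-of-the-envelope sketch. The guessing rate and encryption throughput below are illustrative assumptions, not benchmarks; the key size is that of a standard 128-bit symmetric cipher.

```python
# A back-of-the-envelope sketch of the economics of encryption.
# All throughput figures are assumed, illustrative values.

SECONDS_PER_YEAR = 3600 * 24 * 365

key_bits = 128                     # e.g. a 128-bit symmetric key
keyspace = 2 ** key_bits           # number of possible keys
guesses_per_second = 1e12          # assumed: a very well-funded adversary

# Encrypting is cheap: assume a modest 500 MB/s on commodity hardware.
encrypt_seconds_per_mb = 1 / 500

# Decrypting without the key is expensive: on average,
# half the keyspace must be searched.
brute_force_years = (keyspace / 2) / guesses_per_second / SECONDS_PER_YEAR

print(f"Encrypting 1 MB:           ~{encrypt_seconds_per_mb * 1000:.0f} ms")
print(f"Expected brute-force time: ~{brute_force_years:.1e} years")
```

The exact numbers do not matter; the shape of the gap does. The sender’s cost is measured in milliseconds, the brute-forcing attacker’s in huge multiples of the age of the universe.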

It is not possible to freeze encryption at a certain arbitrary level, or to restrict the use of it. Computers are inherently general purpose, and software designed for one purpose can almost always be used for another. If the situation is driven to its extreme, we might identify two possible outcomes: either general purpose computers are forbidden or restricted, or uncontrolled, strongly encrypted communication becomes the norm. Christopher Kullenberg has touched on this topic in Swedish.

Those who would rather not see a society where strong encryption is commonplace would perhaps still want to have what they see as the desirable effects of computerisation. In their ideal world they would pick and choose what people can do with computers, in effect giving a list of permitted and prohibited uses. But this is not how general purpose computers work. They are programmable, and people can construct software that does what they want. Even if the introduction of non-authorised software is somehow prohibited, and all applications must be checked by some authority, applications can still usually be used for purposes they were not designed for. This generality of purpose simply cannot be removed from computers without making them useless – at least that is how it seems today. It seems that a new fundamental model of computation, one that selectively prohibits certain uses, would be needed in order to make this happen. (In order to make sure that this kind of discovery is not put to use by the “other camp”, those of us who believe in an open society should try to find it first, or somehow establish that it cannot be constructed.)

Mathematics now stands ever more closely connected with political power. Mathematical advances can almost immediately increase or decrease the resistance to information flow (given that somebody incorporates the advances into usable software). The full consequences of this are something we have yet to see.

Utilitarianism and computability

I’ve started watching Michael Sandel’s Harvard lecture series on political philosophy, “Justice”. In this series, Sandel introduces the ideas of major political and moral philosophers, such as Bentham, Locke, and Kant, as well as some libertarian thinkers I hadn’t heard of. I’m only halfway through the series, so I’m sure there are other big names coming up. The accessibility of the lectures belies their substance: what starts out with simple examples and challenges to the audience in the style of the Socratic method often ends up being very engaging and meaty. (Incidentally, it turns out that Michael Sandel has also become fairly famous in Japan, with his lectures having been aired on NHK, Japan’s biggest broadcaster.)

One of the first schools of thought he brings up is utilitarianism, whose central idea appears to be that the value of an action lies in the consequences of that action, and not in anything else, such as the intention behind the action, or the idea that there are certain categories of actions that are definitely good or definitely evil. What causes the greatest happiness for the greatest number of people is good, simple as that. From these definitions a huge amount of difficulty follows immediately. For instance, is short-term happiness as good as long-term happiness? How long-term is long-term enough to be valuable? Is the pleasure of ignorant people as valuable as that of enlightened people? And so on. But let’s leave all this aside and try to bring some notion of computability into the picture.

Assume that we accept that “the greatest happiness for the greatest number of people” is a good maxim, and we seek to achieve this. We must weigh the consequences of actions and choices to maximise this value. But can we always link a consequence to the action, or set of actions, that led to it? Causality in the world is a questionable idea since it is a form of inductive knowledge. Causality in formal systems and in the abstract seems valid, since it is a matter of definition, but causality in the empirical, in the observed, seems to always be a matter of correlation: if I observe first A and then B sufficiently many times, I will infer that A implies B, but I have no way of knowing that there are not also other preconditions of B happening (for instance, a hitherto invisible particle having a certain degree of flux). It seems that I cannot reliably learn what causes what, and then, how can I predict the consequences of my actions? Now, suddenly, we end up with an epistemological question, but let us leave this too aside for the time being. Perhaps epistemological uncertainty is inevitable.
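A toy simulation makes the trap vivid. The world below is purely hypothetical, constructed only to illustrate the point: an observer who sees A followed by B a thousand times will happily infer a law, while a hidden precondition quietly does the real work.

```python
# A toy illustration of the inductive trap: we observe A followed by B
# many times and infer "A implies B", unaware of a hidden precondition
# (the "invisible particle flux") that also had to hold.

def world(a: bool, hidden: bool) -> bool:
    """B occurs only when both A and the hidden condition hold."""
    return a and hidden

# Phase 1: the hidden condition happens to hold during every observation.
trials = [world(a=True, hidden=True) for _ in range(1000)]
print("A was followed by B in all 1000 trials:", all(trials))   # True
print("Inferred rule: A implies B")

# Phase 2: the hidden condition silently stops holding.
print("Predicted consequence of doing A:", True)
print("Actual consequence of doing A:  ", world(a=True, hidden=False))  # False
```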

I still want to do my best to achieve the greatest happiness for the greatest number of people, and I accept that my idea of what actions cause what consequences is probabilistic in nature. I have a set of rules, A1 => B1, A2 => B2, …, An => Bn, which I trust to some extent, and I want to make the best use of them. I have now ended up with a planning problem. I must identify a sequence of actions that maximises that happiness variable. But my brain has limited computational ability, and my plan must be complete by time t in order to be executable. Even for a simple problem description, the state space that planning algorithms must search becomes enormous, and identifying the plan, or a plan, that maximises the value is simply not feasible. Furthermore, billions of humans are planning concurrently, and their plans may interfere with each other. A true computational utilitarian system would treat all human individuals as a single system and find, in unison, the optimal sequence of actions for each one to undertake. This is an absurd notion.
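Some rough arithmetic shows how quickly this blows up. The numbers below (choices per step, planning horizon, evaluation rate) are arbitrary assumptions, chosen only to expose the growth rate.

```python
# A sketch of why exhaustive utilitarian planning is infeasible.
# All parameters are arbitrary illustrative assumptions.

SECONDS_PER_YEAR = 3600 * 24 * 365

choices_per_step = 10    # actions available at each decision point
horizon = 20             # steps we plan ahead
plans = choices_per_step ** horizon
print(f"Candidate plans for one person: {plans:.1e}")      # 1.0e+20

evaluations_per_second = 1e9
years = plans / evaluations_per_second / SECONDS_PER_YEAR
print(f"Time to evaluate them all: ~{years:.0f} years")    # ~3171 years

# Planning for everyone at once multiplies exponents, not factors:
# the joint plan space is choices_per_step ** (horizon * people).
people = 7_000_000_000
print(f"Joint plan space: 10^{horizon * people:,} plans")  # 10^140,000,000,000
```

Faster hardware only divides these numbers by a constant; it is the exponent that kills the project.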

This thought experiment aside, if we are utilitarians, should we enlist the increased computing power that has recently come into being to help manage our lives? Can it be used to augment (presumably it cannot supplant) human intuition for how to make rapid choices from huge amounts of data?

Partitioning idea spaces into containers

Some scattered thoughts on idea flows.

The global idea space is partitioned in various ways. One example is the division between people speaking different languages. English speakers all understand each other, Japanese speakers all understand each other, but there are relatively few people who speak both Japanese and English very well. We can understand this situation in an abstract way as two large containers with a narrow passage connecting them.

Similar partitionings occur whenever there are groups of people that communicate a lot among themselves and less with people in other groups. For instance, there would be a partitioning between people who use the internet frequently and people who use it rarely (to some extent similar to a partitioning between young and old people). This partitioning is in fact orthogonal to the language partitioning, i.e. there is an English internet, a Japanese internet, an English non-internet, etc.

The partitioning of the space into containers has effects on the establishment of authorities and the growth of specialised entities inside the containers. The establishment of authorities is in some ways a Darwinist selection process. There can only be one highest authority on philosophy, on history, on art, on mathematics, etc., that speaks one given language or acts inside a given container. Or for a more banal example: pop charts and TV programs. (Even though, inside the Anglosphere, each country may still have its own pop chart, they influence each other hugely.) If there are two contenders for the position of highest authority on art in a container, either they have to be isolated from each other somehow, or they must interact and resolve their conflict, either by subordination of one to the other, or by a refinement of their roles so that these do not conflict. As for the specialised entities, the larger the container is, the more space there is for highly niched ideas. This is in fact the “long tail” idea. The internet is one of the biggest containers to date, and businesses such as Amazon have (or at least had) as their business model to sell not large numbers of a few popular items, but small numbers of a great many niched items. Such long tails can be nurtured by large containers. (In fact this is a consequence of the subordination/refinement that occurs when authority contenders have a conflict.)
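A quick sketch of that claim, under an assumed Zipf-like popularity distribution (the distribution and the viability threshold are modelling assumptions, not data): if the idea at popularity rank r reaches roughly N/r people in a container of N people, then the number of niches that can sustain themselves grows in direct proportion to the container’s size.

```python
# Long-tail sketch under an assumed Zipf-like distribution:
# the idea at popularity rank r reaches roughly N / r people
# in a container of N people. An idea is "viable" if it reaches
# at least `audience_needed` people (an arbitrary threshold).

def viable_niches(population: int, audience_needed: int) -> int:
    # N / r >= audience_needed  <=>  r <= N / audience_needed
    return population // audience_needed

for population in (10_000, 1_000_000, 100_000_000):
    n = viable_niches(population, audience_needed=100)
    print(f"Container of {population:>11,} people: {n:>9,} viable niches")
```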

We may also augment this picture with a directed graph of the flows between containers. For instance, ideas probably flow into Japan from the Anglosphere more rapidly than they flow in the reverse direction. Ideas flow into Sweden from the Anglosphere and from Japan but flow back out of Sweden relatively rarely. Once an idea has flowed into a space like Sweden or Japan from a larger space like the Anglosphere, though, the smaller space can act like a kind of pressure cooker or reactor that may develop, refine, or process the imported idea and possibly send a more interesting product back. A kind of refraction occurs.
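This picture is easy to make concrete as a weighted directed graph. The sketch below uses entirely made-up weights, chosen only to show the shape of the model; one could then ask which containers are net importers or exporters of ideas.

```python
# A minimal model of idea flows as a weighted directed graph.
# Containers and weights are made-up illustrative values, not data.

flows = {
    ("Anglosphere", "Japan"): 10,
    ("Japan", "Anglosphere"): 3,
    ("Anglosphere", "Sweden"): 8,
    ("Japan", "Sweden"): 4,
    ("Sweden", "Anglosphere"): 1,
}

containers = {c for edge in flows for c in edge}
for c in sorted(containers):
    inflow = sum(w for (src, dst), w in flows.items() if dst == c)
    outflow = sum(w for (src, dst), w in flows.items() if src == c)
    print(f"{c:<12} in={inflow:>2} out={outflow:>2} net={inflow - outflow:+d}")
```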

In the early history of the internet, some people warned that its great danger was that everybody might eventually think the same thoughts, and that we would lose the diversity of ideas. This fear has turned out to be unrealised, I think, at least as long as we still have different languages. But are languages enough? Do we need to do more to create artificial partitionings? What is the optimal degree of partitioning, and can we concretely map the flows and containers with some degree of precision?

Multiplayer protein folding game

You read it here first – Monomorphic predicted this development in February. In a recent Nature article, researchers describe a multiplayer online graphical protein folding game (Foldit), in which players collaborate against the computer to fold a protein correctly as quickly as possible. (Also: NYTimes article.) It turned out that the human players were successful compared to the computers, and the comparison teaches us much about the problem-solving heuristics that humans use. Which will be the next computational task to be turned into an online game?

Rasmus Fleischer’s postdigital manifesto

In his highly timely and readable 2009 book “The Postdigital Manifesto”, Swedish writer and historian Rasmus Fleischer discusses the effects of the digital on our relation to music, and sets out his vision for how we can make music listening more meaningful. Fleischer is a prolific blogger (almost exclusively in Swedish) at Copyriot, and is probably best known for co-founding the Swedish think tank Piratbyrån. As a side project, I am currently in the process of translating this book into English. It will be released in some form when it is done. The original work was released without copyright, so it is quite likely that some kind of PDF will simply be made available for download.

One of the central ideas of the manifesto is that our relation to music is dependent on physical presence and responsibility. Physical presence as opposed to the illusion that distances and places are made irrelevant by the internet and digital communications. Responsibility as opposed to the idea of mindlessly shuffling through a very large or infinite archive of recorded music. One of the ways in which music conveys something is when I choose music to play to somebody else, and I take responsibility for the effects of the music on that person or on a group of people.

Fleischer constructs the idea of a “postdigital situation” and holds it up as a model for how music is to be valued, critiqued, understood, and, essentially, how it is to take place, or come to matter. The postdigital situation is constrained by a physical space where music is being performed and listened to, where responsibility relations exist and evolve, and where bodies are set in motion. The digital world, the internet without boundaries, can be a means of gathering people in such a space and informing it, but it does not replace it. The “postdigital” goes beyond the naive idea of the digital, which ignores places and crowds.

Olle Olsson at SICS has also discussed this book in English. More to come!