Books: Deleuze’s Nietzsche and De Landa’s Nonlinear History

So far in 2012, I’ve finished two very evocative books. One is Deleuze’s Nietzsche and Philosophy. The other is Manuel De Landa’s A Thousand Years of Nonlinear History.

Deleuze’s Nietzsche is the author’s interpretation of Nietzsche’s thought, and perhaps one of the most coherent interpretations of Nietzsche I’ve read. It succeeds in turning Nietzsche’s notoriously unsystematic philosophy into a system with something like well-defined concepts and their interrelationships at its core. The work feels simultaneously fresh and firmly grounded in Nietzsche’s own ideas. This is a book I expect to read again, because I’m quite certain I haven’t understood everything. For example, I don’t yet have a good feel for the difference between Deleuze’s active and reactive forces – I cannot even imagine what an active force is. Reading this work has made me suspect that, up until this point, I’ve thought of every force as essentially reactive.

One caveat: this is a book with a mission, and the mission is to destroy Hegelian philosophy and the dialectic. This is in line with the historical reception of Nietzsche in France, where, if I understand correctly, he was used mainly as an antidote to the then-dominant Hegelian thought.

I’ve previously read De Landa’s Philosophy and Simulation, a book about emergence and about corroborating philosophical theories with computer simulations. A Thousand Years of Nonlinear History is an earlier, but no less interesting, work of his. It tells the simultaneous history of geology, genes and memes (in the form of languages). To fully appreciate this book I think I ought to gain some grounding in the mathematics behind attractors and dynamical systems. Still, there is a lot to be gained even without those insights. The parallels between the three historical fields are interesting, and the essential point is that there is nothing like progress or determinism about the forms that society, language, ideas, life or matter take today. Instead, the state of the world is a nonlinear system of interacting attractors. We are invited to view the world as rich but essentially accidental, and free of distinctions such as organic-inorganic and human-nonhuman.
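As a side note, for readers who, like me, lack a feel for attractors: the logistic map is perhaps the simplest illustration of the idea. The sketch below is my own toy example, not anything from the book; depending on a single parameter r, the same one-line system settles onto a fixed point, a periodic cycle, or a chaotic (strange) attractor.

    // The logistic map x' = r * x * (1 - x): a one-line dynamical system
    // whose long-run behaviour is governed by attractors that depend on r.
    public class LogisticMap {
        static void run(double r) {
            double x = 0.2;
            for (int i = 0; i < 1000; i++) x = r * x * (1 - x);  // let it settle
            System.out.printf("r = %.1f -> x around %.4f%n", r, x);
        }

        public static void main(String[] args) {
            run(2.8);  // fixed-point attractor: x converges to (r-1)/r, ~0.6429
            run(3.2);  // period-2 attractor: x alternates between ~0.5130 and ~0.7995
            run(3.9);  // chaotic (strange) attractor: no settling at all
        }
    }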


Entering bioinformatics

I have now been working in bioinformatics in the Mizuguchi lab at NIBIO in Osaka for about two weeks. The lab environment is stimulating and I feel quite fortunate to be here.

It is interesting to compare computer science and bioinformatics, even with the limited hindsight of this short period. In computer science and electronics, we study systems that have been built from the ground up out of well-known components at various scales. They have been designed with certain high-level functions in mind, and it is always well known how those functions are realised and what makes them tick. In biology, in contrast, we encounter systems designed by nature. These systems, organisms, have certain high-level functions we are aware of. We are also aware of some of the low-level components, such as molecules, atoms and cells (although we may have only a partial understanding of some of these). The problem in biology is to explain what makes a high-level function tick or not tick, and how to steer it, enhance it, or suppress it. The intermediate steps are not always revealed to us, and we must painstakingly tease them out with experiments. As always with empirical science, we can never be sure that we’ve obtained the whole picture.

This difference – the fact that for organisms, but not for computers, we must reconstruct all the design principles and intermediate mechanisms – leads to different styles of teaching and thinking. Biology texts tend to be top-down, focusing on what has been observed and on what it appears to mean. Technology texts can be bottom-up, building a complex design smoothly by adding one layer at a time, starting from the core, confident that nothing is being omitted. The contrast is striking.

A limitation that biology and computer science share is the problem of defining the exact capabilities of an organism or a system. In biology we often do not know them, and I doubt we ever will, considering how complex the genetic code is. In computer science, the capabilities of very simple programs can be completely understood, but understanding a nontrivial program – for example, verifying that it does what is desired and nothing that is undesired – usually requires nontrivial formal methods, if it is possible at all.
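To make that limit concrete, here is a toy illustration of my own (the example is a standard one, not tied to any particular text): the first method below can be exhaustively specified and verified, while for the second – the Collatz iteration – it is a famous open problem whether it even terminates for every positive input.

    public class Capabilities {
        // Trivially analysable: always terminates, and its behaviour is fully
        // captured by the contract "returns the larger of a and b".
        static int max(int a, int b) {
            return a >= b ? a : b;
        }

        // The Collatz iteration: whether this loop terminates for every
        // positive n is an open problem, so even this tiny program's
        // exact capabilities cannot currently be pinned down.
        static long collatzSteps(long n) {
            long steps = 0;
            while (n != 1) {
                n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
                steps++;
            }
            return steps;
        }

        public static void main(String[] args) {
            System.out.println(max(3, 7));         // 7
            System.out.println(collatzSteps(27));  // 111
        }
    }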

My Ph.D. Thesis: “Extending the Java Programming Language for Evolvable Component Integration”

After a very hectic first three months of 2012, the final version of my Ph.D. thesis has been submitted and I’ve gone through the graduation ceremonies. From the 1st of April I will be a postdoctoral associate in bioinformatics at the National Institute of Biomedical Innovation in Osaka, Japan. I will comment further on my Ph.D. experience and my entry into bioinformatics when I can.

Being a Ph.D. student in the Honiden laboratory has been a great experience, and I am very grateful to Professor Honiden and to the other lab members for their support.

My thesis and the associated slides are available. The abstract is as follows.

In the last few decades, software systems have become less and less atomic and are increasingly built according to the component-based software development paradigm: applications and libraries are created by combining existing libraries, components and modules. Object-oriented programming languages have been especially important in enabling this development through their essential feature of encapsulation: the separation of interface and implementation. Another enabling technology has been the explosive spread of the Internet, which facilitates simple and rapid acquisition of software components. As a consequence, now more than ever, different parts of software systems are maintained and developed by different people and organisations, making integration and reintegration of software components a very challenging problem in practice.

One of the most popular and widespread object-oriented programming languages today is the Java language, which through features such as platform independence, dynamic class loading, interfaces, absence of pointer arithmetic, and bytecode verification has simplified component-based development greatly. However, we argue that Java encapsulation, in the form supported by its interfaces, has several shortcomings with respect to the need for integration. API clients depend on the concrete forms of interfaces, which are collections of fields and methods identified by names and type signatures. But these interfaces do not capture essential information about how classes are to be used, such as usage protocols (sequential constraints), the meaning and results of invoking a method, or useful ways for different classes to be used together. Such constraints must be communicated as human-readable documentation, which means that the compiler cannot by itself perform tasks such as integrating components and checking the validity of an integration following an upgrade. In addition, many trivial interface changes, such as those caused by common refactorings, do not entail complex semantic changes, but they may still lead to compilation errors, necessitating a tedious manual upgrade process. These problems stem from the fact that client components depend on the exact syntactic forms of the interfaces they make use of. In short, Java interfaces and integration dependencies are too rigid, and they capture both insufficient and excessive information with respect to the integration concern.

We propose a Java extension, Poplar, which enriches interfaces with a semantic label system describing functional properties of variables, as well as with an effect system. This additional information enables us to describe integration requests declaratively using integration queries. Queries are satisfied by integration solutions, which are fragments of Java code. Such solutions can be found by a variety of search algorithms; we evaluate the use of the well-known partial order planning algorithm with certain heuristics for this purpose. A solution is guaranteed to have at least the useful effects requested by the programmer, and no destructive effects that are not permitted. In this way, we generate integration links (solutions) from descriptions of intent, instead of making programmers write integration code manually. When components are upgraded, the integration links can be verified and accepted as still valid, or regenerated to conform to the new components, if possible. The design of Poplar is such that verification and reintegration can be carried out in a modular fashion. Poplar aims to provide a sound must-analysis for the establishment of labels, and a sound may-analysis for the deletion of labels. We describe the semantics of Poplar informally using examples, and provide a formal specification of Poplar based on Middleweight Java (MJ). We describe an implementation of a Poplar integration checker and generator, called Jardine, which compiles Poplar code to pure Java. We evaluate the practical applicability of Jardine through a case study carried out by refactoring the JFreeChart library. We also discuss the applicability of Poplar to Martin Fowler’s well-known collection of refactorings. Our results show that Poplar is highly applicable to a wide range of refactorings and that the evolution of integrated components becomes considerably simpler.
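To make the brittleness problem from the abstract concrete, here is a minimal sketch of my own (the class names and the rename are invented for illustration; this is not an example from the thesis):

    // Library, version 1.
    class Connection {
        private boolean open = false;
        public void open() { open = true; }
        public void send(String msg) {
            // The sequential constraint "open() before send()" lives only
            // in documentation; the interface itself cannot express it.
            if (!open) throw new IllegalStateException("open() must precede send()");
            System.out.println("sent: " + msg);
        }
    }

    // A client depends on the exact names and signatures above.
    class Client {
        void greet(Connection c) {
            c.open();
            c.send("hello");
        }
    }

    public class IntegrationDemo {
        public static void main(String[] args) {
            new Client().greet(new Connection());
        }
    }

    // If version 2 of the library renames open() to connect() - a trivial,
    // semantics-preserving refactoring - Client stops compiling and must be
    // fixed by hand. As I read the abstract, a Poplar integration query would
    // instead let the client declare its intent (roughly, "give me a
    // Connection on which send() is permitted") and have the integration
    // code regenerated against the new interface.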

Technology and utilitarianism

Technologists and engineers often use the ideas of utilitarianism to evaluate their solutions: if something is cheaper, or faster, or lets people live 3.2 days longer on average – if some number can be optimised – they judge a solution to be better. In short, they use a quantitative form of judgment. This way of thinking is appropriate for judging engineering problems, but it is not the best way of judging design problems.

To a degree it is possible to come up with a new product by simply improving on some numbers from an old one. “Here’s a new hard drive with 1.3x more space.” However, such innovation will always be incremental.

The challenge for technology is to create products and solutions that are justified and evaluated not from a quantitative, utilitarian perspective, but from an entirely different one – perhaps an aesthetic one. This is also the challenge for social innovators and policymakers. Solutions that maximise numbers have value and can enable qualitative change in the long run, but in themselves they never constitute true progress.

To see how far utilitarian thinking has gone, consider how many technology products are justified with sentences along the lines of “it makes more information available”, “it makes X cheaper”, or “it makes you more connected”. In all seriousness, there are situations in which it is not desirable to have more information.

Towards an understanding of will

Will has the potential to be turned into a fundamental concept through which ethics, epistemology, art, life and politics might be understood. How can we define the idea of will?

I’m sure I’ll find a lot of answers to this in the philosophical literature in time (maybe I should read Schopenhauer). But what I came up with myself, as a preliminary definition, is this:

A system can be said to have will if it makes progress towards some goal state in a wide array of circumstances, circumnavigating obstacles (including other systems with will) to some degree.

Here, progress doesn’t need to be an achievement – progress in the form of maintaining some state should also qualify.

This definition depends in turn on definitions of states, progress, circumstances and systems; an intuitive conception of these should suffice for the time being.
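For concreteness, here is a toy sketch of a system that would satisfy the definition (the grid, the obstacles and the sidestep rule are all invented for the example): from a wide range of starting positions it makes progress towards its goal state, routing around the obstacles in its way.

    import java.util.*;

    // A minimal "system with will" in the sense of the definition above:
    // it makes progress towards a goal state, circumnavigating obstacles.
    public class WillDemo {
        static final int SIZE = 8;
        static final Set<Long> obstacles = new HashSet<>();

        static long key(int x, int y) { return (long) x * SIZE + y; }

        public static void main(String[] args) {
            // Circumstances: a few obstacles between start and goal.
            obstacles.add(key(3, 3));
            obstacles.add(key(3, 4));
            obstacles.add(key(4, 3));

            int x = 0, y = 0;          // start state
            final int gx = 6, gy = 6;  // goal state

            for (int step = 0; step < 100 && (x != gx || y != gy); step++) {
                int dx = Integer.signum(gx - x), dy = Integer.signum(gy - y);
                if (!obstacles.contains(key(x + dx, y + dy))) {
                    x += dx; y += dy;  // direct progress towards the goal
                } else if (!obstacles.contains(key(x + dx, y))) {
                    x += dx;           // blocked: sidestep horizontally
                } else if (!obstacles.contains(key(x, y + dy))) {
                    y += dy;           // blocked: sidestep vertically
                } else {
                    y += 1;            // detour: take any legal move
                }
                System.out.println("at (" + x + ", " + y + ")");
            }
        }
    }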

One of my friends suggested that instead of trying to define will as an intrinsic property of something, it should instead just be understood as a human heuristic, a cognitive tool that we use as a lens through which to view the world. These two views are not incompatible, since the question here becomes: what is the minimal set of attributes that something must have for us to view it through the conceptual lens of “will”?