Values 2: Human reason is reactive

Previously I wrote about Nietzsche’s assertion that philosophers must create values, and drew a distinction between scholars, scientists and philosophers. The focus now shifts to the faculty of reason and its contrast with another mode of thinking.

Reason can be understood as man’s ability to think according to precise rules. Logic is one such set of rules: by using axioms and inference rules, we are able to generate vast arrays of valid statements. For instance, we can attempt to prove mathematical truths, or we can work out how to place furniture in a room, or the quickest way of carrying out five different errands in an afternoon.

Two essential functions of reason are finding solutions and validating solutions. In finding solutions, we sometimes apply reason as a search process: we work through a number of combinations until we find one that works, or until we give up. Deduction can reduce the size of the search space, and sometimes it leads to a result without any search being necessary at all. In validating solutions, the proposed solution may come from anywhere, possibly from outside reason itself, and validation, too, is sometimes a search process: we attempt to find contradictions that invalidate the proposed solution, and we do not always find them immediately. This is validation by absence of contradictions, but we can also validate a solution affirmatively by using it in a problem. For instance, we can verify that 7 is the square root of 49 by computing 7*7; it would be useless to verify it by testing that 7*7 differs from each of the values 1, 2, 3, …, 48, 50, 51, 52, and so on without end.
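The square-root example can be made concrete in a few lines of Python, contrasting affirmative validation (a single computation) with search (trying candidates one by one). The function names here are mine, purely for illustration:

```python
def validate_directly(candidate, n):
    """Affirmative validation: one computation settles the question."""
    return candidate * candidate == n

def find_by_search(n):
    """Search: try candidates one by one until something works, or give up."""
    for candidate in range(n + 1):
        if candidate * candidate == n:
            return candidate
    return None  # "giving up": no candidate worked

print(validate_directly(7, 49))  # True, after a single multiplication
print(find_by_search(49))        # 7, after trying candidates 0..7
```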

Reasoning is a slow, tedious process, and it can only consider so many possible solutions in a given amount of time. But it is reliable, and the results of different pieces of reasoning can often be composed to yield a larger, consistent result. Our minds clearly have other ways of functioning as well, with other strengths and weaknesses. In particular, reasoning seems to be an essentially reactive process. It reacts to a given problem with given constraints and rules of inference, but it seems unable to create. Creativity appears always to come from extralogical, extra-rational places: the spontaneous creativity of a child drawing a picture with crayons, of a novelist writing a book, of an orator using a particularly persuasive combination of words that captures a fleeting feeling, or of a commuter taking a different route home from work, out of curiosity. The distinction is not always clear-cut: a decision like choosing the colour of a wallpaper could be made either by reasoning logically from “principles” or from a spur-of-the-moment feeling about what is good. It is clear, though, that the two can interact very productively: a complex mental activity often needs a dialogue between reason and extra-reason, and not just in the sense that extra-reason produces a suggestion that reason validates. This, then, seems to be the danger of excessive reliance on rationality and scientific skepticism: it risks shutting out the essential extralogical factor and reducing decision making to searching, or, from another viewpoint, it risks invalidating the most powerful search heuristic of all.

This distinction suggests a parallel, of sorts, with modern democracy. Democracy at the national level, too, is today a reactive form of decision making. It is true that groups of small or moderate size can sometimes create things collectively, and when they do, it seems that the form of the group enables individuals to take turns influencing the group and being responsible towards it: the individuals make serial contributions that layer on top of each other to form the collective contribution. But voters in a national democracy have no format that allows this process to take place across the entire group, and the scale is too great. Those who create proposals are smaller subgroups or elites, and the voters are reduced to playing one of the roles that reason can play: affirming or rejecting proposals. In fact, not even this, since they are typically not asked to affirm every proposal: they are able to stage a revolution if their discontent becomes tremendously large, and otherwise they only have the ability to voice rejection every four years or so. (The exceptional case in which a very large group can create something collectively is when its members share a common sentiment very well, for instance in the event of a national crisis.)

The seat of creativity is ultimately in the individual, and not in the collective. When democracies create agendas, goals, projects and proposals, they are not acting democratically, but channeling individual elements within.

Values 1: Philosophy, science, and their relationship

This is hopefully the start of a short series of posts in which I attempt to relate the concepts of value and value creation, in particular as they were understood by Friedrich Nietzsche, to the modern world, in some kind of way. Comments of all kinds are encouraged!

In the beginning (understood as ancient Greece), there was philosophy. That is to say, most systematic inquiry into matters worth thinking about was collected under this umbrella term. Ethics, politics, epistemology and metaphysics went side by side with physics, biology and astronomy. As millennia passed, collective human knowledge and scholarly labour grew, and some philosophical disciplines acquired names of their own, cut the umbilical cord, and came to stand on their own feet.

There are many definitions of what a philosopher is; one would be: those who study the academic subject of philosophy at academic institutions. The German philosopher and philologist Friedrich Nietzsche wrote at length about what a philosopher really is; by his definition, a philosopher is someone who creates values. Nietzsche rejected morals and universal truths as laid down by a God or higher authority; instead, he held, they are created by subjective human beings, and by philosophers in particular.

Perhaps [the genuine philosopher] himself must have been critic and sceptic and dogmatist and historian and also poet and collector and traveller and solver of riddles and moralist and seer and “free spirit” and almost everything in order to pass through the whole range of human values and value feelings […] But all these are merely preconditions of his task: this task itself demands something different – it demands that he create values.

(Friedrich Nietzsche, Beyond Good and Evil, s. 211, Walter Kaufmann transl.)

We may understand a scholar to be a person who processes knowledge. Good scholarship entails marshalling what has been written and studied previously, perhaps with a view to settling a question or supporting a perspective. Scientists and philosophers can make use of scholars in their work. To the extent that the scholar does more than merely process knowledge, he or she is something more than a scholar.

In contrast, a scientist, as we understand him or her today, is someone who combines scholarship and primary investigation (in the form of calculation, experimentation, measurement and so on) to create models of nature and the world, and so gain the power to explain. The classical scientific process involves the repeated refinement of hypotheses until one is found that withstands every attempt to prove it wrong.

Today, science, which was formerly known as natural philosophy, has grown enormously large, and to most people probably appears to have much greater value than philosophy. The scientific mindset is widely appreciated and respected throughout the world — perhaps too respected. Scientists learn, as one of their highest virtues, to be skeptical and to reject assertions made without a basis in measurement or theory. Paralysis by skepticism is very much a possibility. To see the danger in this, we have to recognise that a great many valuable things in human history have been created without such a basis – by people who have been something like the ones Nietzsche describes.

The dangers for a philosopher’s development are indeed so manifold today that one may doubt whether this fruit can still ripen at all. The scope and the tower-building of the sciences has grown to be enormous, and with this also the probability that the philosopher grows weary while still learning or allows himself to be detained somewhere to become a “specialist” – so he never attains his proper level, the height for a comprehensive look, for looking around, for looking down. […]

Indeed, the crowd has for a long time misjudged and mistaken the philosopher, whether for a scientific man and ideal scholar or for a religiously elevated, desensualized, “desecularized” enthusiast and sot of god. And if a man is praised today for living “wisely” or “as a philosopher”, it hardly means more than “prudently and apart”.

(Friedrich Nietzsche, Beyond Good and Evil, s. 205, Walter Kaufmann transl.)

In fact, scientists today do not, in my experience, work like the ideal scientist described above. Scientists often use their own judgment and their own values to influence how their science is to be used. Einstein and Oppenheimer had opinions about the use and misuse of the nuclear bomb. Creators of vaccines may have opinions on how they are to be distributed, and may be able to influence this. Sometimes these value statements made by scientists are pure judgments, applications of an ethic the scientists already believe in. Sometimes, however, the situation is so new that the scientists effectively have to create values. To the extent that they do this, these scientists dabble in ethics, morality and philosophy, but this is often overlooked, as is the fact that the scientific method itself was created by philosophy.

Nietzsche calls for philosophers to make use of scientists and artists, and to create values in the service of mankind. He calls for a new recognition of the true role and dignity of philosophy, which need not at all mean a reduction of the value of science, but rather an expansion of the whole system. Philosophy stands naturally above science and scholarship and uses them as its tools. The activity of creating values based on philosophical insight necessarily goes on constantly, and should not be confined to little nooks in the margins of society. The full extent of, and need for, this activity needs to be acknowledged.

Has the situation changed since Nietzsche wrote Beyond Good and Evil in 1886?

Type theory

One of the most interesting things I’ve been studying in the past year has been type theory. I feel that type theory is an area where a lot of separate fields can come together in a good design. In strongly typed languages, language implementation efficiency, syntax and language semantics all leave essential marks in the type system, and conversely, flaws in the type system can impair all of them.

Type theory was invented by Bertrand Russell as a solution to his own well-known paradox, which cast doubt on the foundations of mathematics upon its discovery. The problem posed by the paradox is essentially: given the set of all sets that do not contain themselves, does that set contain itself? It is another interesting problem arising from self-reference. Type theory resolves the issue by classifying the members of sets according to type, and in such a system a set definition like this one is not permissible.
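Stated formally, the paradoxical set and the contradiction it produces are:

```latex
R = \{\, x \mid x \notin x \,\}
\qquad \Longrightarrow \qquad
R \in R \iff R \notin R
```

Either answer to the membership question contradicts itself, which is why the definition must be ruled out rather than answered.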

(The drama surrounding Russell’s paradox and Principia Mathematica can now also be read in the form of a “logicomix” graphic novel by Doxiadis et al!)

Type theory did not end its life as a metamathematical hack that solved a set-theoretical problem; it has since come of age. Type systems are, of course, at the heart of the development of many modern programming languages. Java, C#, ML and Haskell are but a few of those that depend completely on a nontrivial type system. (Even though C and C++ have type systems, they are kept in a crippled state by the fact that the programmer is allowed to ignore their commandments at will.)

What is the benefit of a type system in a programming language? In the words of Benjamin Pierce,

A type system is a syntactic method for automatically checking the absence of certain erroneous behaviors by classifying program phrases according to the kinds of values they compute.

So, generally speaking, a type system helps distinguish correct programs from incorrect programs. The parser also does part of this job, of course, since many programs that cannot compile will not parse. But there are programs that are syntactically correct but at the same time semantically unsound according to the evaluation rules of the language. Consider the following Java statement:

T x = new U();

This is clearly syntactically acceptable, but is it semantically valid? In order for it to be, either U needs to be the same type as T, or a subtype of T. But Java programmers are free to define new subtypes of existing types in source code. In order for the parser to check the correctness of the assignment, it seems that either the grammar would become enormously large, or it would place awkward restrictions on Java syntax. So we leave more refined aspects of correctness to the type checker, which is applied later, post-parsing.
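The same division of labour can be observed in a dynamically checked language such as Python: the parser happily accepts a phrase for which no evaluation rule exists, and the error only surfaces at run time. A small illustration:

```python
import ast

# The parser is perfectly happy with this phrase...
src = '"hello" + 1'
tree = ast.parse(src)

# ...but evaluating it gets stuck: no rule of the language
# says how to add a string and an integer.
try:
    eval(src)
except TypeError as err:
    print("rejected only at run time:", err)
```

A static type checker moves this rejection from run time to compile time, which is precisely the “absence of certain erroneous behaviors” of Pierce’s definition.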

What does a type system look like? As used in programming languages today, a type system consists of a set of typing judgments. The judgments can be stacked together to form the typing derivation of an expression in the language. Every expression normally has a unique typing derivation, and when a type can be derived, we call the expression well typed. As Benjamin Pierce’s formulation suggests, each typing rule makes some assumptions about the kinds of values that constitute the various parts of an expression, and then says something about the kind of value that will be computed by the expression as a whole.

Type systems can look prohibitive at first sight, since they use quite a few algebraic symbols. But in general, the structure of, and interrelationships between, the symbols are the most important things to discern, and focussing on this aspect makes reading them much easier.

In the following mini-type system, typing judgments will have the form \Gamma \vdash t: T, where \Gamma is a typing context, t is a term, and T is a type. Terms are essentially fragments of the source code, and from smaller terms we can construct larger terms using syntactic forms. So for example, if a, b and c are terms, then we might use the if-syntax to construct the larger term

if (a) {b} else {c}

in a Java-like language. The typing context \Gamma is essentially just a map, mapping already-typed terms to the types assigned to them. As we type increasingly large terms, we gradually add more information to the typing context. The symbol \vdash is just notational convention and has no particular meaning. The judgment \Gamma \vdash t: T is read as “term t has type T in typing context \Gamma”.

A fragment of a minimal system might look like the following.

\inferrule
{ \Gamma \vdash t_1 : T \and \text{vartype}(v) = U \and T <: U }
{ \Gamma \vdash v = t_1 : U }
{ \text{T-ASGN} }

\inferrule
{ \Gamma \vdash t_1 : \text{Int} \and \Gamma \vdash t_2 : \text{Int} }
{ \Gamma \vdash t_1~\textsf{intBinop}~t_2 : \text{Int} }
{ \text{T-BINOP} }

\inferrule
{ \Gamma \vdash t_1 : \text{Bool} \and \Gamma \vdash t_2 : T_0 \and \Gamma \vdash t_3 : T_0 }
{ \Gamma \vdash \text{if}~t_1~\text{then}~t_2~\text{else}~t_3 : T_0 }
{ \text{T-IF} }

Three typing rules have been introduced here, T-ASGN for assignments, T-IF for if-statements, and T-BINOP for binary operations on integers, like + and -.
Every rule has the same structure: above the line, assumptions that must hold true for the rule to be applied; below the line, the conclusion we may draw if the assumptions are true.

Above, the following example was given.

T x = new U();

Using T-ASGN, we can type an assignment statement like this one. The assumption T <: U says that T is a subtype of U. (Note that the rule’s metavariables play the opposite roles to the names in the Java example: there the declared type was T and the right-hand side had type U.) We have also assumed a lookup function vartype, which gives the declared type of the variable we assign to. In English, the rule T-ASGN reads: “Assuming that the type of the right-hand side is a subtype of, or the same type as, the declared type of the variable, the assignment evaluates to a value of the declared type of the variable.” Of course, not all programming languages implement assignment in this way.

In the T-BINOP rule, we must of course substitute an actual operation on integers for the special word intBinop for it to be valid. For addition, the rule can be read as “Assuming that the left hand side and the right hand side are both integers, then the result is also an integer”.

Finally, the T-IF rule says “assuming that the truth condition is a boolean, and that the two alternative paths have the same type, then the if-statement evaluates to the same type as that of the two conditional paths”.

The shape of the type system follows the evaluation rules of the language quite closely, so that a well-typed term is always possible to evaluate. We can thus avoid accepting programs that might at some point get stuck with no valid evaluation rule to apply. (Consider what happens in Python if you try to invoke a method on an object, and the object doesn’t have a method with that name.) Nor are we limited to tracking just the kinds of values being computed: a type system can also track various safety properties, such as violations of memory regions, or several properties at once. Type and effect systems are one way of tracking both the values being computed and resource violations.
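The three rules can be sketched as a toy checker in Python. Everything here – the tuple encoding of terms, the type names, the functions – is an illustrative invention for this post, not part of any real library; the typing context \Gamma is elided, since vartype plays its role for the only variables involved:

```python
# A toy checker for T-ASGN, T-BINOP and T-IF. Terms are nested tuples.
vartype = {"x": "Num"}               # declared variable types
SUBTYPES = {("Int", "Num")}          # Int <: Num; reflexivity handled below

def subtype(t, u):
    """T <: U, with every type a subtype of itself."""
    return t == u or (t, u) in SUBTYPES

def typeof(term):
    """Return the type of a term, or raise TypeError if it is ill typed."""
    tag = term[0]
    if tag == "int":                 # integer literal
        return "Int"
    if tag == "bool":                # boolean literal
        return "Bool"
    if tag == "binop":               # T-BINOP: Int op Int : Int
        _, t1, t2 = term
        if typeof(t1) == "Int" and typeof(t2) == "Int":
            return "Int"
        raise TypeError("T-BINOP: both operands must be Int")
    if tag == "if":                  # T-IF: branches must share a type
        _, cond, then_t, else_t = term
        if typeof(cond) != "Bool":
            raise TypeError("T-IF: condition must be Bool")
        t0 = typeof(then_t)
        if typeof(else_t) != t0:
            raise TypeError("T-IF: branches must have the same type")
        return t0
    if tag == "assign":              # T-ASGN: rhs type <: declared type
        _, v, t1 = term
        declared = vartype[v]
        if subtype(typeof(t1), declared):
            return declared
        raise TypeError("T-ASGN: rhs is not a subtype of declared type")
    raise TypeError("unknown term: " + tag)

# x is declared Num, and 1 + 2 has type Int <: Num, so this is well typed:
print(typeof(("assign", "x", ("binop", ("int", 1), ("int", 2)))))  # Num
```

The nested calls to typeof mirror the stacking of judgments into a typing derivation: the type of the whole term is computed from the types of its parts, exactly as the rules prescribe.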

A good source of more information on type systems applied to programming languages would be Benjamin Pierce’s Types and Programming Languages.

The cryptographic-spiritual realm

Internet services and systems such as Google and Amazon usually appear to us as a visual representation of a page, as if taken out of some kind of printed publication. For almost all users, these visual qualities are all that will ever be seen. They are always present and never present, because we cannot point to the place where they really reside.

But of course they reside somewhere. Cables and machines embody the apparition that users interact with, and these cables can be found and cut. The machines can be shut off. Traceroute tells me which path the data is taking. But the thread that binds the body to the spirit is thin, and the two evolve in a largely independent manner.

With cryptographic methods, such as the I2P network, it is possible to hide the exact location of a system, disperse it across the fabric so widely that it cannot be excised without destroying the fabric itself.

The effect is the same as if the system had no physical existence at all. It now exists in a kind of spiritual realm, where it can only be touched with great difficulty.

Permanence and technology

1. Mt. Fuji, 3776 m high. A petrified mass of volcanic discharge, thought to have been first ascended in the year 663.

2. Skyscrapers in Ootemachi, Tokyo and the City, London. Buildings belonging mostly to banks and insurance companies. They appear, on some intuitive level, to have been there forever, though most of these buildings can now be built from the ground up in less than a year. It is hard to fathom how they could ever be destroyed, though the work could be done in a matter of months (?) with the right equipment.

3. What is permanent? Anything that we cannot perceive as changeable, we call permanent. But this is a linguistic and epistemological error. The inability to perceive something has led us to declare its absence.

4. The earth. 5.9736 x 10^24 kg of matter, likely fused into a planet about 4.54 billion years ago. The sun will enter a red giant phase in about 5 billion years and swallow it, or cause it tremendous damage. The sun is also currently the source of all fossilised energy on earth and of the energy used by most life forms on it.

5. A certain class of mathematical proofs consists in converting facts from one basis (family of concepts) to another. Such proofs often have a hamburger-like structure: first the initial facts are rewritten into a larger, more complex formulation that suits both the assumptions and the conclusion, and then the complex formulation is collapsed in such a way that the desired results come out and the original formulation is lost. The “beef” in such a proof often consists in carrying out the correct rewriting process in the middle.

6. Facebook takes off and becomes enormously popular, in part because it facilitates, on a huge scale, something that human beings want to do naturally. Communication and the need to relate to crowds and individuals could be said to be universal among humans.

An incomplete version of the technology lattice, as suggested in this post, with human desires at the top and the resources available in the universe at the bottom.

7. We can imagine technology as a lattice-like system that mediates between the human being, on one hand, and the universe on the other. As a very rough sketch of fundamental human needs, we could list drives like communication, survival/expansion, power/safety and art. (In fact, an attempt to make some of these subordinate to others would constitute an ethical/philosophical system. Here we do not need such a distinction, and the one I have made is arbitrary and incomplete.) When we place our fundamental drives at one end, and the resources and conditions provided by the universe at the other – elements and particles, physical laws and constants – we can begin to guess how new technologies arise and where they can have a place. The universe is a precondition of the earth, which is a precondition of animals and plants, which we currently eat. And food is currently a precondition of our survival. But we can imagine a future in which we are not dependent on the earth for food, having spread to other planets. We can imagine a future in which oil and nuclear power are no longer necessary as energy sources, because something else has taken their place. New possibilities entering the diagram like this add more structure in the middle – more beef – but the motivating top level and the supplying bottom level do not change perceptibly. (Of course, if they did, beyond our perception, they could be made part of an even larger lattice with a new bottom and top configuration.)

8. Technology is a means to the establishment of permanence, and a re-encoding of human desires into reality.

9. New technologies arise constantly. But can this evolutionary process go on forever? Does the lattice converge towards a final state?