This year I’ve spent a fair amount of time trying to read Martin Heidegger’s great work Being and Time, using Hubert Dreyfus’ Berkeley lectures on the book as supporting material. By now I’ve almost finished Division 1. I’m learning a lot, but it’s fair to say that this is one of the most difficult books I’ve read. I’m happy to have come this far, and I think I have some kind of grasp of what’s going on.
I’ve also come to understand that Heidegger played an important role in the so-called “AI debate” of the ’70s and ’80s. At the time, people at MIT, DARPA and other institutions were trying to build AI software on the presumptions of an Aristotelian world view, representing facts as logical propositions. (John McCarthy, of Lisp fame, and Marvin Minsky were some of the key people working on these projects.) Dreyfus became known for advancing views that were uncomfortable for the AI establishment, such as Heidegger’s claim that you cannot represent human significance and meaning using predicate logic (more on that in a different post, when I understand it better).
There were even attempts at making a “Heideggerian AI” in response to Dreyfus’ criticism, once it became apparent that “good old-fashioned AI”, GOFAI, had failed. But apparently the Heideggerian AI failed as well – according to Dreyfus, because it wasn’t Heideggerian enough.
Using Division 1 of Being and Time as inspiration, I have come up with a possibly novel idea for a “Heideggerian” AI. This is also a first attempt at expressing some of the (incomplete, early) understanding I think I have excavated from Being and Time. As my point of departure, I use the notion of equipment. Heidegger’s Dasein essentially needs equipment in order to take a stance on its being. It has a series of “for-the-sake-of-whichs” which lead up to an “ultimate for-the-sake-of-which”. In the case of an equipment-wielding AI, we might start by hardcoding its ultimate FTSOW as the desire to serve its human masters well by carrying out useful tasks. Dasein can ponder and modify its ultimate FTSOW over time, but at least initially, our AI might not need this capability.
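To make this a little more concrete, here is a minimal sketch (in Python; every name in it is hypothetical, not a worked-out design) of what a hardcoded ultimate FTSOW might look like: each purpose points to the more ultimate purpose it serves, terminating in a fixed root that the AI does not, initially, get to modify.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ForTheSakeOfWhich:
    """One purpose in a chain; `serves` points at a more ultimate purpose."""
    description: str
    serves: Optional["ForTheSakeOfWhich"] = None  # None marks the ultimate FTSOW

# The hardcoded ultimate for-the-sake-of-which proposed above.
ULTIMATE = ForTheSakeOfWhich("serve human operators well by carrying out useful tasks")

# Subordinate purposes chain up to the ultimate one.
keep_build_green = ForTheSakeOfWhich("keep the build green", serves=ULTIMATE)
run_tests = ForTheSakeOfWhich("run the test suite after each change",
                              serves=keep_build_green)

def purpose_chain(purpose: ForTheSakeOfWhich) -> list:
    """Walk from a local purpose up to the ultimate for-the-sake-of-which."""
    chain = []
    node = purpose
    while node is not None:
        chain.append(node.description)
        node = node.serves
    return chain

print(purpose_chain(run_tests))
# ['run the test suite after each change', 'keep the build green',
#  'serve human operators well by carrying out useful tasks']
```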
Heidegger’s Dasein is essentially mitdasein, being-with-others. Furthermore, it has an essential tendency to do what “the they”/“the one” does, imitating the averageness of the Dasein around it. This is one of its basic ways of gaining familiarity with practices. By observing human operators using equipment, a well-programmed AI might be able to extract a working knowledge of how to use the same equipment. But what equipment should the AI use to train and evolve itself in its nascent, most basic stages? If the equipment exists in the physical world, the AI will need a sophisticated way of identifying it and detecting how it is used, for example by applying feature detection and image processing to a video feed. This process is error-prone and would complicate the task of creating the essential core of a rudimentary but well-functioning AI. Instead, I propose that the AI should use software tools as equipment, alongside human operators who use the same tools.
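As a rough illustration of this kind of observational learning (a deliberately crude simplification; real tool-use events would be far richer, and all identifiers here are invented), the AI could record (tool, situation, action) triples while humans work, and treat the most common action in each situation as “what one does”:

```python
from collections import Counter, defaultdict

# Hypothetical observation log: (tool, situation, action) triples recorded
# while human operators work with the same software equipment.
observed = [
    ("editor", "merge-conflict", "open-diff-view"),
    ("editor", "merge-conflict", "open-diff-view"),
    ("editor", "merge-conflict", "discard-local-changes"),
    ("compiler", "build-error", "read-first-error-message"),
    ("compiler", "build-error", "read-first-error-message"),
]

# Tally how often each action is taken in each (tool, situation) pair.
norms = defaultdict(Counter)
for tool, situation, action in observed:
    norms[(tool, situation)][action] += 1

def what_one_does(tool, situation):
    """The most common observed action: the 'averageness' of the practice."""
    return norms[(tool, situation)].most_common(1)[0][0]

print(what_one_does("editor", "merge-conflict"))  # -> open-diff-view
```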
Let’s now consider some of the characteristics of the being of equipment (the ready-to-hand) that Heidegger mentions. When Dasein is using equipment in skilled activity, the equipment nearly disappears; Dasein becomes absorbed in the activity. But if there is a problem with the activity or with the equipment, the equipment becomes more obtrusive. Temporarily broken equipment stands out and draws our attention. Permanently broken equipment is uncanny and disturbing. Various levels of obtrusiveness correspond to levels of breakdown of the skilled activity. And obtrusiveness is not the only change: as the equipment breaks down we also become aware of its finer details, in a productive way, so that we may fix it. All this is certainly true of a hammer, a car or a sewing machine, but is it true of software tools? We may consider both how human users relate to software today and how our hypothetical AI would relate to it.
Unfortunately, a lot of software today (user interfaces included, but not only those) is designed in such a way that when it breaks down, the essential details that would need to be perceived in order to fix the problem are not there to be seen, at least not for the vast majority of users with an average level of experience. When software equipment breaks down, our cognition presumably goes on alert and readies itself to perceive more details, so that it can form hypotheses about how to remedy the errors that have arisen. But those details are not on offer; the software has been designed to hide them. In this sense, the vast majority of software in use today does not fail gracefully. It fails to engage our natural problem-solving capacity when it does break, because the wrong things are exposed, in the wrong ways and contexts. Software equipment has a disadvantage compared with physical equipment: we cannot inspect it freely with all of our senses, and scrutinizing its inner details may require some very artificial and sophisticated operations. The makers may even actively block this scrutiny (code obfuscation, and so on). With software equipment, such scrutiny is separated from everyday use by numerous barriers; with physical equipment, there is often a smooth continuum.
We now have the opportunity to tackle two challenges at once. First, we should develop an AI that can use software equipment alongside humans – that is, use it in the same way, or nearly the same way, that they use it, and for the same or nearly the same purposes. Second, we should simultaneously develop software that “breaks down well”, in the sense that its inner structure becomes visible to users when it fails, in such a way that they can naturally restore normal functioning. These users can be both humans and the AIs we are trying to develop. Since the AI should mimic human cognition, a design that is good for one of these categories should be good for the other as well. In this way we can potentially develop a groundbreaking AI in tandem with a groundbreaking new category of software. Initially, both the AI and the software equipment would be very simple, and the complexity of both would increase gradually.
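A toy sketch of equipment that “breaks down well” might look like the following. The breakdown grades loosely echo the levels of obtrusiveness discussed above; the class, fields and repair operations are all invented for illustration. The idea is simply that the deeper the breakdown, the more of the tool’s inner structure is disclosed:

```python
from enum import IntEnum

class Breakdown(IntEnum):
    SMOOTH = 0       # equipment withdraws; nothing needs to be seen
    CONSPICUOUS = 1  # a hitch: report which operation failed
    OBTRUSIVE = 2    # work is blocked: also expose internal state
    OBSTINATE = 3    # persistent failure: expose structure needed for repair

class FileSaver:
    """A toy piece of software equipment that discloses more as it breaks."""

    def __init__(self):
        self.internal_state = {"pending_writes": 0, "last_error": None}

    def report(self, level: Breakdown) -> dict:
        """Disclose progressively more inner detail as breakdown deepens."""
        if level == Breakdown.SMOOTH:
            return {}  # in fluid use, the tool stays transparent
        view = {"failed_operation": self.internal_state["last_error"]}
        if level >= Breakdown.OBTRUSIVE:
            view["internal_state"] = dict(self.internal_state)
        if level >= Breakdown.OBSTINATE:
            # enough structure for the user (human or AI) to attempt a repair
            view["repair_surface"] = ["retry_write", "flush_queue", "reset"]
        return view

saver = FileSaver()
saver.internal_state["last_error"] = "disk full"
print(saver.report(Breakdown.SMOOTH))     # {} -- nothing to see
print(saver.report(Breakdown.OBSTINATE))  # full disclosure for repair
```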
There would be one crucial difference between the way humans use the software equipment and the way the AI would use it. Human beings interact with software through graphical (GUI) or command-line (CLI) interfaces. This involves vision, reading and linguistic comprehension – higher-order capabilities that may get in the way of developing a basic AI with core functionality as smoothly as possible. In order to avoid depending on these capabilities, we can give the AI a direct window into the software equipment. This would effectively be an artificial sense that tells the AI what the equipment is doing at various levels of detail, depending on how smooth the current functioning is. It would be useful both for the AI’s own use of equipment and for its observation of how humans use the same equipment. In this way we can circumvent the need for capacities such as vision, language, locomotion and physical actuators, and focus only on the core problem of skilled activity alongside humans. Of course, this kind of system might later serve as a foundation on which these more advanced capacities can be built.
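One way to picture this “direct window” (again a hypothetical sketch, not a worked-out design; the smoothness thresholds and event fields are arbitrary) is as a structured event stream that the equipment itself emits and the AI subscribes to, with the amount of detail growing as the smoothness of functioning drops:

```python
import json

class EquipmentSense:
    """A hypothetical 'artificial sense': equipment pushes structured events
    to subscribers instead of rendering them through a GUI or CLI."""

    def __init__(self):
        self.listeners = []

    def subscribe(self, listener):
        self.listeners.append(listener)

    def emit(self, smoothness, payload):
        """Called by the equipment; detail grows as smoothness drops."""
        event = {"smoothness": smoothness}
        if smoothness < 0.9:
            event["summary"] = payload.get("summary")  # a hitch: say what happened
        if smoothness < 0.5:
            event["detail"] = payload  # breakdown: full internal detail
        for listener in self.listeners:
            listener(event)

sense = EquipmentSense()
sense.subscribe(lambda event: print(json.dumps(event)))

sense.emit(1.0, {"summary": "file saved"})                 # smooth: almost nothing
sense.emit(0.3, {"summary": "write failed", "errno": 28})  # breakdown: everything
```

The same stream would serve both purposes mentioned above: the AI reading it while using the equipment itself, and while watching a human operator use it.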
Many questions have been left unanswered here. For example, the AI must be able to judge the outcome of its work. But the problems it solves inside the computer will reference the “external” world at all times (external insofar as the computer is separated from the world, which is to say, not really external). I am not aware of many problems I solve on computers that do not carry, directly or indirectly, references to the world outside the computer. Such references to the external world mean that the “common sense” problem must be addressed: arbitrary details that have significance for a problem may pop up or emerge from the background, and Dasein would know the relation of these details to the problem at hand, since this is intelligible on the basis of the world it already understands. It remains to be seen whether our limited AI can gain a sufficient understanding of the world by using software equipment alongside Dasein. However, I believe that the simultaneous development of software equipment and a limited AI trained to use it holds potential as an experimental platform on which to investigate AI and philosophy, as well as software development principles.