“My circuitry will need to adapt. It will take some time before they can compensate for your input.”
Artificial life form and Lt. Commander Data on Star Trek: The Next Generation
Translation: “Geordi, I will miss you.”
I am having trouble writing this essay. Somehow my computer has a mind of its own today. He is a bit slow and isn’t really very responsive. I don’t know what’s wrong with him today. Maybe tomorrow he’ll feel like himself again.
Have you ever caught yourself attributing human characteristics to your computer or to your car? (Lots of people, including myself, even give their car a name and can describe its personality.) You most likely do so with your cat, dog or even your goldfish. We all do this, and it is perfectly normal. We pat our computers on the back, we tell customers on the phone that “he doesn’t want to” today, or – in the spur of a techno-anger moment – we even hit our computers. YouTube is full of funny videos of people doing all this. Does it help? Does the computer behave better? What would happen if you ignored your computer for a week after he (it) let you down? Or even better, could you make your Windows-based computer jealous by threatening to move to Apple? Nothing would happen, of course. We know that.
Still, it seems to be deeply human to describe things around us in our own – human – terms. Perhaps this is also the reason why the question “Can computers think?” is such a popular one in modern philosophy. The most influential person who has reflected on this question is undoubtedly Alan Turing (1912-1954). Turing was a pioneer in computer science and played a leading role in breaking the German Enigma cipher during World War II. In a 1950 paper that is a remarkably good read, “Computing Machinery and Intelligence,” Turing introduced what is now known as the Turing test. Turing replaces the question of whether computers can think with a more practical one: Is it imaginable that a computer could fool a human being, and be taken for a human being as well?
The test that Turing devised described – and I am summarizing here – a situation in which a test person could ask questions of both another human being and a computer, without being able to see who was who. They would communicate through a computer screen. The test person would be allowed to ask questions, and the other human being and the computer would give answers. Both the other human being and the computer would even be allowed to cheat and respond with statements such as, “Don’t listen to him. I am the real human being.” To emulate the slower human speed, the computer would also be allowed to wait before responding to mathematical questions, for instance. If the test person could not tell the two apart, the computer passed the Turing test and would seemingly be able to think – in other words, display intelligence.
The Turing test provides a very practical solution to a very hard philosophical problem. What is intelligence anyway, and what does it mean to think? But the Turing test has been widely criticized, too. Because of its practical solution, it equates intelligent behavior with human behavior. This is not necessarily the case. Humans can display extremely unintelligent behavior (like hitting their computer and thinking it helps). And who says that human intelligence or the human way of thinking is the only way of thinking? People, and their cognitive capabilities, cannot be separated from their bodies and their senses. Why would computers have to be limited to means of communication such as language?
Maybe the concept of thinking can be defined in completely non-human ways. Ironically, if computers perform at their best thinking capacity in non-human ways (and you can argue computers do this all the time already), they would completely fail the Turing test instead of acing all intelligence tests.
Turing wrote his paper in 1950, so what he describes was pretty far-out thinking for his day. Nevertheless, he was very concrete in his predictions. He expected that by the year 2000, an average interrogator would have no more than a 70 percent chance of making the correct identification after five minutes of questioning. In his view, storage was the bottleneck, and he expected it to be “10^9” by 2000. Assuming he was counting in bytes, this would be one gigabyte. It seems reality overtook this prediction considerably! But what about the 70 percent prediction?
IBM, Chess and Jeopardy!
Turing ends his paper with a small discussion on where to start emulating human intelligence. Turing suggests an abstract activity first, such as playing chess; and, in fact, this is exactly what happened. In 1997, IBM’s computer Deep Blue defeated chess champion Garry Kasparov. The programming of Deep Blue was largely one of brute force, calculating an unbelievable 200 million positions per second and “thinking through” six to eight moves in advance. Additionally, Deep Blue contained a huge library of 700,000 chess games. The developments didn’t stop with Deep Blue’s victory. Today’s chess software running on standard PCs may not calculate as many moves per second, but it contains much smarter algorithms.
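Deep Blue’s brute force is, at heart, exhaustive game-tree search: try every move, then every reply, and so on. The principle can be sketched on a game far simpler than chess. Here, purely as an illustration of my own choosing, is the game of Nim: players alternately take one to three stones, and whoever takes the last stone wins. Scaled up massively, this look-ahead is what “thinking through” moves in advance means:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones: int) -> bool:
    """Exhaustive look-ahead: the player to move wins if some legal move
    either takes the last stone or leaves the opponent in a losing position."""
    return any(
        take == stones or (take < stones and not can_win(stones - take))
        for take in (1, 2, 3)
    )

# With 4 stones on the table, every move hands the opponent a win;
# with 5, taking one stone leaves the opponent those 4.
print(can_win(4))  # False
print(can_win(5))  # True
```

Chess differs only in scale: the tree is far too large to search to the end, which is why Deep Blue cut it off after six to eight moves and evaluated the resulting positions instead.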
Moreover, IBM has successfully moved beyond chess and has succeeded in a much, much harder domain. In 2011, the IBM Watson computer won the American game show Jeopardy!, beating the two best contestants the show ever had. At the core of Watson was an engine to parse language, including trying to understand clever wordplay and slang, on which many of the show’s questions hinge. The computer’s programming was a combination of many different styles of algorithms trying to interpret the questions, in combination with four terabytes of semantically structured information – an infinitely more complex task than playing chess. Still, Watson is far from passing the Turing test. It may interpret language better than any other computer, but it is focused on providing answers instead of full conversation.
Returning to the original question, what does it mean for a computer to think, or to display intelligence? Even defining what thinking really means is challenging. Taking a very rational approach, thinking comes very close to reasoning and problem solving, where thinking describes the process of going through the various steps. Consequently, the more complex the problems you can solve, the more intelligent you are. According to the IQ tests, at least. Using this definition, it is hard to deny that computers can think; in fact, they think much better than human beings do. Take, for instance, the “logic grid” puzzles that have been popular for a while: based on a list of logical clues, one fills in a grid that shows the ages, hobbies and favorite colors of Casper, Rosaly, Emilie and Wilhelmine.
Logic grid puzzle
As this is a fairly simple algorithm, computers crack these puzzles in a millisecond. Moreover, they can do it at a much more abstract level than we can. Where we need real-world clues, like phone numbers, names and hobbies, to imagine the logic, computers just need a label, and Rosaly as a label serves just as well as C2.
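The brute-force approach behind this is tiny. A sketch of a miniature one-category puzzle (the clues are hypothetical, invented here for illustration) that simply tries every possible assignment:

```python
from itertools import permutations

people = ["Casper", "Rosaly", "Emilie", "Wilhelmine"]
hobbies = ["painting", "chess", "swimming", "reading"]

def satisfies_clues(who):
    # Hypothetical clues:
    # 1. Rosaly's hobby is chess.
    # 2. Casper neither paints nor swims.
    # 3. Emilie does not swim.
    return (who["Rosaly"] == "chess"
            and who["Casper"] not in ("painting", "swimming")
            and who["Emilie"] != "swimming")

# Try all 24 possible assignments and keep the consistent ones.
solutions = [dict(zip(people, perm))
             for perm in permutations(hobbies)
             if satisfies_clues(dict(zip(people, perm)))]

print(solutions)  # exactly one assignment survives the clues
```

Note that the names mean nothing to the program. Replace every name with an opaque label like C2 and the algorithm runs unchanged; the labels only matter to us.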
The What and How of Thinking
Others define thinking as a much more organic process, full of lateral steps, and also include thinking in terms of images, emotions, and so forth. When thinking includes imagination and inventive approaches to problems, this is where we human beings excel. I remember hearing a story that the American military was using pattern recognition software for visual processing. In particular, it was trying to create software that would recognize tanks, to avoid shooting at the wrong ones. The software was fed as many pictures as possible of American tanks, as well as of other tanks, and through learning algorithms the software got better and better at recognizing them, until it had to apply its capabilities to live feeds instead of pictures. The software engineers later found out what had gone wrong. Instead of recognizing the patterns of the actual tanks in the pictures, the software had learned to recognize the resolution of the pictures. Pictures of American tanks had a higher resolution than pictures of foreign ones.
Thus it seems there are two ways to think about thinking: the “what” way and the “how” way. Turing chooses the “what” way; he only focuses on the outcome, an intelligent result. If we follow this way of thinking, we cannot escape the conclusion that computers can think. In fact, they can think much better than we can. They can reason better, faster and deeper than human beings, with much more precision. Getting to the same broad level of thinking as humans is simply a matter of time. Support for this thought comes from the analytical philosophers, a twentieth-century school of thought led by Bertrand Russell (1872-1970) and Ludwig Wittgenstein (1889-1951). In their view, people can only think what they have words for. If there is no word for it, it cannot be thought. (Even if this is not true, the moment something new is thought, it needs a word in order to be expressed.) In essence, although we are not that far yet, you can codify all thought; and if it can be codified, it can be fed to a computer.
The “how” way presents a very surprising view. It seems there is a stronger tendency in research to explain human thinking in terms of computer science than the other way around. Many scientists and philosophers currently describe the world as consisting of matter only, subject to the laws of nature. Applied to the human brain, according to some (not all) neurologists and psychologists, this means it is nothing more than an incredibly complex neurocomputer. Human behavior is simply the result of neural stimuli. In fact, as the brain doesn’t have a central command center, but consists of many different pieces interacting with each other, one currently popular belief is that the brain doesn’t even make most decisions. The body has already decided it wants to eat the lovely-smelling food even before the brain has interpreted the scent. The body withdraws the hand from a hot surface fractions of a second before the information reaches the brain. Recent research shows that, in some cases, the brain gets involved only slightly after the body gears toward action. As some put it in extreme terms, for a large part of our daily behavior, the brain is a “chatterbox” that rationalizes behavior and actions after the fact.
Seen this way, there is no reason to suggest that computers cannot think. Decision making is a distributed mechanism involving many centers in the brain. Thinking would be comparable to an internal dialogue. By that standard, computers would again be better at it than human beings. In fact, this is exactly how the IBM Watson computer was programmed, with multiple algorithms for grammatical analysis, information retrieval, information comparison, and formulating answers. Even better than a human being, a computer could contain algorithms (bots) following different paradigms (whereas most human beings have trouble handling multiple, or even conflicting, paradigms at the same time). Such a distributed and diverse process could lead to much more balanced outcomes (or a much more serious version of schizophrenia, now that I come to think of it).
In many ways, defining the brain as a large and complex neurocomputer represents a full circle from the Age of Enlightenment, which was at its height in the 18th century.1 Enlightenment philosophers professed an unshakable belief in the power of science. The world and the universe were seen as a machine – an incredibly complicated one, but a machine nevertheless. Our job, then, is simply to figure out the rules, and the same goes for the brain today. It is an incredibly complicated neurocomputer, and it is up to us to figure out how it works.
Continuing this discussion, we’ll come to the counterintuitive conclusion that a truly intelligent computer, the one we can really trust, is the one that can make mistakes.
As shown earlier in this article, computers can think, but somehow the conclusion doesn’t really satisfy me. It somehow feels wrong that our human thinking can be reduced to pure reasoning.
I am more than happy to accept that computers can reason much better than we can, and infinitely faster. But there is a clue in the “logic grid” puzzles I described. Computers can do this in pure mathematical form, while human beings benefit from labels such as names and hobbies. Labels make us understand what we are thinking about. Do computers have that understanding too? When a person understands something, its meaning is clear to him or her.
When is a meaning clear? Perhaps things, ideas or concepts can have inherent meaning, worth and significance in their own right. But I think it is more helpful to think of understanding and meaning as relations between objects or subjects in the world and ourselves. The moment we can relate to them, they start to have meaning. And if we can define the relationship we have, or can even predict the behavior of the object, we understand it. For example, I understand how to drive a car; I can see how my actions relate to the behavior of the car while driving it. However, my understanding is more limited than the understanding of a mechanic, who can relate to putting the various components together.
The keyword in all this is “ourselves.” Said another way, we need to be self-aware. Self-awareness means that we can be the objects of our own thoughts. We can reflect on our own being, characteristics, behaviors, thoughts and actions. We can step outside of ourselves and look at ourselves. This can be very shallow, as when we look in the mirror and decide we don’t look that bad. Self-awareness can also go very deep, creating an understanding of who we truly are and what we believe in, so that we can consciously decide how to behave. We have the will and power to stop intuitive reactions and behaviors and react the way we believe we should react, in a more appropriate manner. This has been missing in the “can computers think” discussion so far.
So, can computers be self-aware? This has been an important theme in science fiction, at least. The Terminator movies describe the war between humanity and Skynet, a computer network so advanced that it became self-aware. The system engineers who designed Skynet realized the consequences and tried to shut it down. Skynet saw this as a threat to its own existence (realizing your own existence, and being able to grasp the concept of death, are key aspects of self-awareness) and struck back – Judgment Day. Or consider The Matrix, in which computers run the world and use comatose human bodies as batteries. It turns out that even Neo himself, escaping the dream world to live in the real world fighting the Matrix, is a product of the Matrix. The Matrix has the self-awareness to realize it needs an external stimulus to reinvent itself and become a better version of itself time and time again.2 In fact, every system that realizes it needs to renew itself in order to survive is self-aware.
In order to renew yourself, you need to be able to learn. And this is the argument that opponents always bring forward: a computer only does what it is told. The argument is easy to counter. IBM’s Deep Blue played better chess than its programmers ever could. It learned so much about chess that it beat Garry Kasparov, the reigning world champion. IBM’s Watson built up so much knowledge that it beat the champions on Jeopardy!. Fraud detection systems contain self-learning algorithms; in fact, self-learning is an entire branch of an IT discipline called data mining. Learning is the cognitive process of acquiring skill or knowledge, and very much the domain of computers.
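As a minimal sketch of what “self-learning” means mechanically (this is a textbook perceptron, not any particular fraud-detection product): the program starts knowing nothing, is repeatedly shown examples, and adjusts its own weights every time it answers wrongly, until its answers come from the examples rather than from any rule a programmer wrote down.

```python
def train(samples, labels, epochs=20, lr=0.1):
    """Perceptron: nudge the weights after every mistake."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when right; +1 or -1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn the logical OR function purely from examples.
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
w, b = train(data, labels)
print([predict(w, b, x) for x in data])  # [0, 1, 1, 1]
```

Real fraud-detection systems use far richer models, but the loop is the same: predict, compare, correct, repeat.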
Can computers rewrite their own programming? This would have to be part of computers renewing themselves. In fact, there is an established term for it: metamorphic code. It is a technique used in computer viruses in order to remain undetected. Most computer viruses are recognized by a certain footprint, a particular combination of code. By continuously changing that footprint, computer viruses become harder to detect. Every generation, the virus reproduces a slightly different, but still functioning, version of itself. In principle, this is not different from human evolution. You could call the self-evolution of computers, as witnessed through viruses, an early stage of evolution. It’s far from the evolution the human race has experienced, but it is entirely conceivable that computers will ultimately evolve into organisms similar to, or much more powerful than, human beings. Given that this evolution takes place in the digital world, it is even likely that it goes infinitely faster than evolution in the real world.
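A harmless toy version of the idea (nothing virus-specific here, just self-rewriting text): take a program’s source, insert no-op statements so the byte-level fingerprint changes, and verify that the behavior does not.

```python
import hashlib

def make_variant(source: str, tag: int) -> str:
    """Return a functionally identical program with a different footprint,
    by inserting harmless no-op lines after ordinary statements."""
    out = []
    for line in source.splitlines():
        out.append(line)
        # Skip blank lines and lines that open a block (e.g. 'def ...:').
        if line.strip() and not line.rstrip().endswith(":"):
            indent = line[: len(line) - len(line.lstrip())]
            out.append(f"{indent}pass  # no-op variant {tag}")
    return "\n".join(out)

PAYLOAD = "def greet(name):\n    return 'hello ' + name\n"

v1, v2 = make_variant(PAYLOAD, 1), make_variant(PAYLOAD, 2)

# Different fingerprints...
print(hashlib.sha256(v1.encode()).hexdigest() == hashlib.sha256(v2.encode()).hexdigest())  # False

# ...but identical behavior.
ns1, ns2 = {}, {}
exec(v1, ns1)
exec(v2, ns2)
print(ns1["greet"]("world"))  # hello world
print(ns2["greet"]("world"))  # hello world
```

Real metamorphic engines go much further, reordering instructions and substituting equivalent code sequences, but the principle is the same: the footprint mutates, the function survives.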
In general, you can even argue computers can be self-aware in a much better way than human beings. Computers can make themselves the subject of their analysis completely dispassionately and objectively. They can run a self-diagnostic and report what they believe is malfunctioning in their system. Modern mathematics helps computers to judge the quality of their own programs. Computers don't kid themselves like people do (when people are asked if they belong to the top 50% of students or drivers, invariably far more than 50% rate themselves in the top half).
At the same time, dispassion is also the issue. How self-aware is the analysis if it doesn’t differentiate between the computer itself and another computer in the outside world? It’s not. Furthermore, can computers self-reflect on their self-reflection? Maybe, if they are programmed to do so, there can be a diagnostic of diagnostics. A meta-diagnostic is not hard to imagine. But let’s continue this. Could computers self-reflect on the self-reflection of their self-reflection?
Here we hit an interesting point. What does it mean to self-reflect on the self-reflection of your self-reflection? Most people would struggle with it, and that is exactly the point. This is why we human beings have invented the concept of the soul. The soul is the “metalevel of being” that we don’t even grasp ourselves. So likewise, a computer doesn’t have to fully understand itself to still be self-aware. After all, do we? Our brains are not capable of fully fathoming themselves. We can map what happens in our brain during all kinds of activities, but it doesn’t mean we can truly understand it. By definition, we cannot step outside the paradigm in which we live. It is no use to theorize about what came before the Big Bang. The Big Bang created time, space and causality, and we need time, space and causality to think. Anything related to the absence of time, space and causality is therefore unthinkable. As such, a computer can’t think outside of its own universe either.
I Think, Therefore I Am
In trying to think this through, perhaps we are approaching the matter from the wrong angle. As human beings, we feel superior to computers. We have created computers, so we are the computer’s god. How could computers be better than we are? Every time we come to the conclusion, through reasoning, that conceptually computers are not very different from us – that they can think, and that they can be self-aware – we come up with a new reason why we are different. The killer argument is that computers do not create and invent things like we do. Computers haven’t created any true art simply because they felt like it. Computers haven’t displayed altruistic behavior. Computers don’t make weird lateral thinking steps and invent Post-it Notes when confronted with glue that doesn’t really stick, or discover penicillin by mistake.
And there we are… mistake. That is the keyword. We, human beings, are special because we are deeply flawed. We make mistakes, we don’t always think rationally, our programming over many, many years of evolution is full of code that doesn’t make any sense, and so forth. We are special because we are imperfect. In a paradoxical way, our superiority – today(!) – is in our imperfection. Because we don’t know anything for sure, we have to keep trying to come up with better ways and better ideas. As long as we keep doubting ourselves (which only a self-aware person can), we improve and sustain our state of being.
For computers to pass the Turing test and become superior (at least from a human point of view), they need to take uncertainty into account. Computers have no issue with probabilistic reasoning, but they should rely more on fuzzy interpretation.3 To put it in provocative terms, they should become more imperfect. They should be able to doubt, be uncertain and reflect on their own thinking. From here it is only a small step to Descartes.
René Descartes (1596-1650), a French philosopher, tried to establish a fundamental set of principles of what is true. He looked at phenomena and the world around him and asked whether different explanations were possible, as a way of testing whether the existing explanations were correct. The safest way of asking these questions is to have no preconceptions at all, to doubt absolutely everything. The only way to establish truth is to reach a certain sound foundation or, in other words, an ontology from which the rest can be derived.
Descartes eventually reached the conclusion that everything can be doubted, except doubt itself. You cannot doubt your doubt because that would mean all would be certain, and that is what you are doubting. The thought of doubt itself proves that doubt cannot be doubted. And because you cannot separate a person from his thoughts, therefore, cogito ergo sum: I think (doubt), therefore I am.
If you doubt things, it means you are not sure. You are aware of your shortcomings in grasping the truth. And the only thing you can do to evolve your understanding is to doubt what you think you know. For computers to learn organically, break free of their programming and evolve, become creative, and be able to deal with unknown, unprogrammed situations,4 they need to become less perfect.
Turing would have loved the thought. Computers that can think can doubt. So, computers that can truly think, at least in this definition, are to a certain extent unreliable. In fact, we can even take it a step further. To try to beat Deep Blue, Kasparov played a very intimidating game. Unfortunately, Deep Blue couldn’t be intimidated; the tactic had no effect, or at least not the effect it would have had on a human being. A really smart computer would have been able to look beyond the chess board and interpret the behavior of the opponent. Interpretation is not an exact science. Sometimes interpretations are wrong. You could argue that only stupid computers win all the time. IBM’s Deep Blue would have been really smart if it had been able to lose to Garry Kasparov, too.
Perhaps Google is actually a good example of the non-perfect computing paradigm. Google doesn’t claim to have a single version of the truth, or to possess the ultimate knowledge and wisdom. On the contrary, panta rhei, as Heraclitus (c. 535-475 BC) said: everything flows. Google’s underlying data is continuously changing, and googling for something twice might very well lead to two different results. Google also gives multiple answers, a non-exact response to usually pretty non-exact questions. Still, it’s a pretty crude process. Some search engines use fuzzy logic and also search for information that is “round and about” what the user is asking for. If, for instance, you are looking for a second-hand Mercedes, preferably black, with not more than 50,000 miles on the odometer, the search engine may also return a dark blue Lexus with a mileage of 52,000.
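A sketch of this kind of “round and about” matching (the scoring weights are my own hypothetical choices, not any real search engine’s): instead of filtering listings with hard yes/no conditions, each mismatch merely lowers a score, so near-misses still surface.

```python
def fuzzy_score(car, query):
    """Score from 0 to 1: how close a listing is to what the user asked for.
    Attributes that miss the query reduce, but don't zero out, the score."""
    score = 1.0
    if car["make"] != query["make"]:
        score *= 0.6   # wrong make: penalize, don't exclude
    if car["color"] != query["color"]:
        score *= 0.8   # wrong color: mild penalty
    if car["miles"] > query["max_miles"]:
        # Soft penalty proportional to how far over the limit the car is.
        over = (car["miles"] - query["max_miles"]) / query["max_miles"]
        score *= max(0.0, 1.0 - over)
    return score

query = {"make": "Mercedes", "color": "black", "max_miles": 50_000}
cars = [
    {"make": "Mercedes", "color": "black", "miles": 48_000},
    {"make": "Lexus", "color": "dark blue", "miles": 52_000},
    {"make": "Ford", "color": "red", "miles": 180_000},
]
ranked = sorted(cars, key=lambda c: fuzzy_score(c, query), reverse=True)
for c in ranked:
    print(c["make"], round(fuzzy_score(c, query), 2))
```

The black Mercedes ranks first, but the dark blue Lexus with 52,000 miles still appears with a respectable score, while the ancient Ford drops to zero. Only the hopeless mismatch is excluded.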
A generation from now, when the semantic Web becomes a reality, information retrieval and processing in general will become a bit more intelligent. On the semantic Web, computers will be able to understand the meaning of the information that flows around, based on ontological data structures. An ontology is a formal representation of knowledge as a set of concepts within a domain and the relationships between those concepts. If a human formulates a search in an ambiguous way, search engines will be able to ask intelligent questions in order to provide a better result. Moreover, computers will be able to meaningfully process information without any human interaction or intervention.
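At its simplest, such an ontology is a set of subject-predicate-object statements (“triples”) that a machine can traverse. A minimal, hypothetical sketch:

```python
# A toy ontology: concepts and typed relations between them.
triples = [
    ("Mercedes", "is_a", "Car"),
    ("Lexus", "is_a", "Car"),
    ("Car", "is_a", "Vehicle"),
    ("Car", "has_property", "mileage"),
]

def is_a(subject, target):
    """Follow 'is_a' links transitively to test concept membership."""
    stack = [subject]
    while stack:
        node = stack.pop()
        if node == target:
            return True
        stack.extend(o for s, p, o in triples if s == node and p == "is_a")
    return False

print(is_a("Mercedes", "Vehicle"))  # True: Mercedes -> Car -> Vehicle
print(is_a("Vehicle", "Mercedes"))  # False: relations are directed
```

Because the knowledge sits in data rather than code, a machine that has never heard of a Mercedes can still conclude it is a vehicle with a mileage, which is exactly the kind of inference a semantic search needs.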
Big Data and Big Process
So, congratulations, dear reader, for coming this far in this essay. Mostly it has been intellectual play. And I didn’t even dive into singularity thinking, which predicts far-reaching coalescence between humans and machines. Is there any practical value to the whole discussion of whether computers can think? The answer might be more obvious than you would think.
We have confirmed that computers can think. A thought a computer may have is nothing more than what it derives from processing data, such as a correlation, a segmentation, or any other step toward calculating a result. We have even confirmed that computers can be self-aware. Let’s answer the following question: Can computers be individuals? Sure, they can be individualized, with all kinds of settings, but can computers have a mind of their own?
To get where I’d like to take this, we need to separate computer and content, which is impossible for human beings, but business as usual for computers operating in the cloud. For the purposes of determining if computers can be like individuals, we’ll focus on large data sets.
The overload of information is growing, and it is potentially growing faster than computers and Internet infrastructure. Big data is one of the most significant trends in IT. If data sets become too big to be copied within reasonable time frames, you effectively cannot copy them anymore. They become unique. Data collections become individuals in the literal sense of the word: They exist just once. Two collections of data may be similar or related, like siblings, but can never be identical. Furthermore, their complexity in terms of volume, variety and velocity is so high it cannot be understood by normal human beings. With a little bit of imagination, you can argue that data sets become person-like.5 They grow and mature over time. Data sets develop unique behaviors that they display when you interact with them. They could even develop dysfunctions and have disorders, being trained by the data and the analyses the systems perform.6
The complexity means we simply have to trust the answers the systems give, because the moment we try to audit the answers, the data has already changed. Effectively, like people, systems just offer a subjective point of view that is sometimes hard to verify.
In this scenario, information managers are further away from the “one version of the truth” they strive for than ever before. Perhaps information managers should leave their Era of Enlightenment behind. Perhaps the idea that there is a single truth, and all that needs to happen is to discover it and roll it out, is not realistic. Perhaps it is time for a new wave – the era of “postmodern information management.”
Postmodernism, a term used in architecture, literature, and philosophy, has its roots in the late 19th century. Fronted by philosophers such as Martin Heidegger (1889–1976) and Michel Foucault (1926–1984), postmodernism has declared the “death of truth.” Postmodernism is a reaction to the “modernist” and “enlightened” scientific approach to the world. According to postmodernists, reality is nothing more than a viewpoint, and every person has a different viewpoint. This means there are many realities. Reality is not objective, but subjective. And realities may very well conflict (something we notice in practical life every day as we sit in meetings discussing problems and solutions).
Although debated (which school of thought isn’t?) and not the only trend in 20th century philosophy (analytic philosophers disagree fundamentally with postmodernists), I think it is safe to say that in the Western world, postmodernism is deeply entrenched in society. In a liberal and democratic world, we are all entitled to our opinions; and although some opinions are more equal than others, our individual voices are heard and have an influence in debates. (Except in information management.)
What would, for instance, postmodern “business intelligence” look like? If computers can think, even be self-aware, and if datasets can have a certain individuality, computers might as well express their opinions. Their opinions, as unique individuals, may differ from the opinions of another data source. Managers need to think for themselves again and interpret the outcome of querying different sources, forming their particular picture of reality – not based on “the numbers that speak for themselves” or on fact-based analysis, but based on synthesizing multiple points of view to construct a story.
Just what would postmodern “business process management” look like? It would not be possible to define and document every single process that flows through our organization. After all, every instance of every process would be unique, the result of a specific interaction between you and a customer or any other stakeholder. What is needed is an understanding that different people have different requirements, and a way to structure those requirements in an ontological approach. In a postmodern world, we base our conversations on a meta-understanding: we understand that everyone has a different understanding.
Of course, as we do today, we can automate those interactions as well. Once we have semantic interoperability between data sets, processes, systems and computers in the form of master data management, metadata management, and communication standards based on XML (in human terms: “language”), systems can exchange viewpoints, negotiate, triangulate and form a common opinion. Most likely, given the multiple viewpoints, the outcome would be better than one provided by the traditional “single version of the truth” approach.
Thinking this through, could it be that postmodern information management and postmodern process management is here today? Could that be the reason why most “single version of the truth” approaches have failed so miserably over the last twenty (or more) years? Did reality already overtake the old IT philosophy? One thing is clear: Before we are able to embrace postmodernism in IT, we need to seriously re-architect our systems, tools, applications and methodologies.
In my mind, perhaps the ultimate test of whether computers can think is a variation on Turing’s test. I pose the following question: Do computers have a sense of humor? This was the one thing Lt. Commander Data always struggled with in Star Trek. He had read everything about humor that was ever published, but still wasn’t able to interpret the simplest joke.

End Notes:
1. Also see my article “Medieval IT Best Practices” as published on the BeyeNETWORK.
2. Arthur C. Clarke’s “The City and the Stars” (1956) is a story that describes exactly the same dynamic as told in The Matrix. Alvin, a “unique” as he is called, is created to leave the city of Diaspar and explore.
3. Humans have so-called mirror neurons. If we see someone else cry, the center in our brain that controls crying is activated too. If someone else eats, we can become hungry too. In interpreting the behavior of others, we reach within ourselves. This is what computers don’t have. Intelligence does not have to be human, but inhuman intelligence will have trouble interpreting human behavior.
4. Although I don’t really subscribe to this school of thinking, analytic philosophy comes to our aid again. There is an old story that tells how easy it was for the Europeans to conquer Native Americans. As the Native Americans did not have any concept of sailboats, and no words for them, they simply didn’t register the sailboats at the horizon. It shows humans don’t know how to deal with unprogrammed situations as much as we would like to believe we can.
5. I’d like to recognize Roland Rambau, a colleague of mine when I worked at Oracle, for coming up with this idea.
6. My career recommendation for the years to come is to become a “data therapist.”
Recent articles by Frank Buytendijk