“Customer service has never been worse than since the advent of CRM systems.”
Recently I bought a train ticket at a machine, using my credit card. The procedure was aborted by the machine: “Unusable Card,” it said. Unusable? I had just bought a sandwich with it, and it was quite usable yesterday too, thank you very much. The machine should have said, “I am terribly sorry for failing in the single task I am actually supposed to do, but I can’t seem to read your card.”
Ironically, when I was sharing my frustration about the machine with someone on the phone, we got disconnected. A metallic voice told me, “Your call has been completed.” Excuse me? My call wasn’t completed at all. It was terminated. That is what the voice should have said.
Examples are all around us. In business, it seems customer service has never been worse than since the advent of CRM (customer relationship management) systems. The moment your case is not covered by the scripts and business rules locked into the system, customer self-service systems built for “straight-through processing” can’t help you anymore. Good luck. Try to find a phone number on the website somewhere. Even if you are lucky enough to speak with a real person on the phone, chances are that call center agent won’t be able to help you either, restricted by the same business rules and systems.
Both examples are typical results of engineering thinking, reasoning from within the system. Everything outside of the rules of the system is seen as an anomaly, a.k.a. the real world. Much attention is being paid to user interfaces and interaction; but if the designers live within the confines of their own systems instead of in the world of the users, they will create silly responses that annoy users.
Are we in control of the technology we use, or are we already under control?
Engineering Thinking and Ambient Computing
I think we are getting used to it. We think in terms of how to work the system and how to beat the system. Users are reduced to operators of the system. Using our smartphones to organize our lives by organizing the systems that support us has become second nature, our new reality. Even worse, the systems around us organize us. Advanced analytics decide which advertisements we see, try to predict which movies and what music we like, and determine which discount coupons we get in the supermarket to entice us to try a new type of salad dressing. Additionally, it is a self-reinforcing loop. Smart systems that determine customer segmentations present people with choices from within a particular segment. Because there is so much choice, chances are we rely on the choices presented, confirming the segmented picture the system has of us and strengthening our – in essence already predetermined – profile.
These types of advanced analytics represent the opposite of engineering thinking, in which we need to understand how a system works. The goal of ambient computing is that you never even see the system. It just functions in the background. Although IT is still dominated by engineering thinking, ambient computing is the “new way to go.” On the infrastructure level, cloud computing is a good example. We really don’t have to care anymore where data and applications reside. We just use them on a device of choice.
Cloud computing is perhaps the easiest part of the equation because it is “only” infrastructure. We don’t really have to see it because it has no user interface. Computers still have significant issues interpreting our behavior and trying to adapt to it. Most users of the latest version of Microsoft Office have become accustomed to the ribbons, where you only see the buttons associated with the type of work you do. That seems to work pretty well, but have you ever tried to make an Apple device do what you want it to do, slightly outside the normal routine? Impossible. User friendliness goes hand in hand with loss of control. Most advanced photocopiers have the same issue. These copiers recognize paper size automatically, so copying the upper half of a letter-sized page sideways is simply not possible anymore. Or think of the autocorrect software in your smartphone that suggests words as shortcuts and the embarrassing results this can lead to.1
Like engineering thinking, ambient computing leads to frustration in working with systems and computers.
So I ask, forgive me the gloomy thoughts: are we in control of our technology, or are we already under its control? Is technology liberating us from chores we don’t like to do, or has IT become the ultimate prison?
Technology and Real Life Have Blended
In any case, we have come to depend on technology in most aspects of our lives. In fact, we feel uncomfortable without it. How many telephone numbers do you still know without checking your phone? How many times do you use a navigation system, even if you are relatively confident you know how to get to your destination? Have you ever had a slight panic attack because you forgot your phone or your computer power cable on a short trip? And I keep wondering what would happen worldwide if Facebook were down for a week…
The real world and the virtual world are interconnected to such an extent that sometimes it is difficult to distinguish where one begins and the other ends. Indeed, friendships would be different without Facebook. People find new meaningful relationships using dating sites. Young adults applying for jobs rightfully claim leadership skills based on their experience with World of Warcraft. Second Life became so big for a time that it had its own economy, and tax services were considering how to deal with that.
This is only the beginning. We already can imagine the next wave of medical technology evolution. Medical technology is currently in the engineering phase with external, visible technology such as hearing aids and internal, but self-sufficient technology such as pacemakers. What happens if medical technology goes ambient and becomes biotechnology? Imagine how technology will affect us if it starts to communicate with our brains.
Descartes (1596-1650) said that science should be a benefit to all and should serve progress. It should take care of menial tasks, making labor easier, and it should be useful in social life too. Substitute science with technology, and you have a very modern definition of what technology should do. Technology is meant to amplify human abilities,2 such as sight, strength, speed and so forth. Examples of this amplification through technology include the Internet (sight), drilling machines (strength) and cars (speed).
The more we use technology, the more we depend on it. This comes with good cause for concern as well. For instance, what about our safety if something goes wrong? You could almost formulate a law here, the law of the constant impact. It would go something like this: the more we rely on technology, and the more reliable technology becomes (which means the chance of technology breaking down decreases), the higher the impact when it does break down. Probability times impact is a constant. As private persons, businesses and society as a whole, we rely on Internet connectivity and trust it will work when we need it. We simply cannot work in those (rare) cases when it is not available. Many of us rely so heavily on our navigation systems that we don’t feel comfortable going somewhere without them. Many countries relied on nuclear energy technology until the nuclear disaster in Japan in 2011, but are now questioning their continued reliance on this type of energy.
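For the quantitatively inclined, the law can be sketched in a few lines of code. The numbers below are purely hypothetical, chosen only to show the shape of the trade-off, not measurements of any real system:

```python
# A toy illustration of the "law of the constant impact":
# failure probability times impact is (roughly) a constant.
RISK_CONSTANT = 100.0  # hypothetical units of "expected disruption"

def impact_of_failure(failure_probability: float) -> float:
    """As a technology gets more reliable, each failure hits harder."""
    return RISK_CONSTANT / failure_probability

# A flaky early system fails often, so we keep manual fallbacks around.
early_impact = impact_of_failure(0.10)     # 1000.0

# A mature, trusted system almost never fails -- we drop the fallbacks,
# and a rare outage is far more disruptive.
mature_impact = impact_of_failure(0.001)   # 100000.0

print(early_impact, mature_impact)
```

A hundredfold gain in reliability buys a hundredfold increase in the pain of the rare outage: the expected disruption stays constant, which is exactly the law stated above.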
Another concern is environmental. It is wonderful we have so much technology at our disposal, but what do we do with the millions and millions of old phones and the billions of old sensors we’ll see appearing in the years to come? How will the environment be affected by the CO2 emissions from producing all the needed technology?
What about privacy? We care so much for our Facebook accounts that we allow Facebook to store and use everything it knows about us. We appreciate the ease of use of our navigation systems, even though the providers can sell aggregated data from these systems to the police to plan speed traps.
To Use or Not to Use – That is the Question
While technology is a benefit for all and does serve progress, there are serious concerns related to technology and its use. The consequentialist will take this relatively light-heartedly. The question of whether the benefits outweigh the concerns lies not in the technology itself, but in its use. Technology itself is amoral by nature. Universalists would be more interested in exploring benefits and concerns prior to using a certain technology. They would feel the need to take a stance up front on what’s right and what’s wrong.
There’s something to be said for both views; it is easy to see both have a point. But both points of view have their challenges too. Take, for instance, the development of nuclear or biological weapons. It would be too easy for the consequentialists to say that whether having weapons like this is good or bad depends on what you do with them. The best thing you can say about weapons of mass destruction is that they would scare off any potential enemy, but is that sufficient justification? Furthermore, you can’t really undo knowledge. Once it is there, it is there and you’re going to have to live with it. All in all, not the strongest of value propositions.
Universalists would point out that these weapons are designed and produced with a true possibility of actual use. And if one party were to use them, this would trigger someone else to use them as well, triggering others in turn. This would likely lead to the end of humanity. This cannot be good; therefore, we should not have weapons of mass destruction.3
But can we stop technological progress? Should we stop technological and scientific research at some point because, for instance, biological warfare and stem cell research are deemed unethical? Is there knowledge we simply shouldn’t have? A good question to ask at face value, but I find the question a bit too simplistic. Technology innovation is seldom a linear path. Many elements of nuclear technology and biotechnology were invented without the idea of warfare in mind. So what’s there to decide up front?
Dilemmas like this may look larger than life, but we can see them on a smaller scale as well. Take, for instance, Apple’s decision to lock iPhones to work only with certain telecom service providers and not to allow Flash technology on the iPad. Or consider Microsoft integrating Internet Explorer so deeply into Windows that other browsers could not work effectively and efficiently on a Windows-based computer. If you consider business and the market to be amoral, there is nothing against decisions like this. It allows Apple to keep full control over both the business model and the technology in use, and to present a consistent image to the market. Consequentialists will point out that the decision as such is not an issue unless Apple made those decisions with the express purpose of harming a competitor, without having the best interest of the customer in mind. Universalists may rightfully point out that this harmful effect is built into the design decision itself, leading to unethical consequences by definition.
Sometimes regulators and governments solve the issue. Microsoft was forced to allow users to choose a browser, and iPhone “jailbreaks” to unlock the phone are not illegal. Sometimes the market takes care of it. One of the competitive differentiators of Android-based tablets is that they support Flash.
Slowly but surely, we’ve reached a few interesting conclusions. The first conclusion is that weighing the benefits and the risks of humanity’s dependence on technology is essentially an ethical debate. In other words, making sure that technology benefits all and serves progress is a guide for “doing the right thing.”
The second conclusion is that both the consequentialist and the universalist approach lead to important dilemmas. Dilemmas often come up as a result of constraints. It is difficult to offer the highest quality and also be the cheapest in the market. It is hard to offer an open technology platform while keeping full control over the user experience. It is hard to find a medical cure that doesn’t have any side effects. These are constraints that force us to make choices when we would prefer to have both at the same time.
How to solve this dilemma? We’ll discuss that in the second article of this series.
1. www.damnyouautocorrect.com is full of embarrassing examples.
2. Also see “Technology, What Have You Done for Me Lately?”
3. This reasoning is an example of Kant’s “categorical imperative,” a rational examination of the intentions behind actions. If you believe that everyone should do what you do, then an action can be called good. If you believe not everyone should do the same, think about your intended action again.
Recent articles by Frank Buytendijk