In the first article of this series, "Technology: Can't Live with It, Can't Live without It," we discussed whether we control technology, or whether technology already controls us. I introduced a number of dilemmas that modern technology presents us with. But technology innovation also comes to the rescue.
Innovation can be defined as the art of lifting constraints. The Sony Walkman lifted a constraint called "size": it made the tape recorder portable. MP3 players, a few steps later in the same line of innovation, eliminated the capacity constraint, so you no longer have to choose which music to bring. MP3 players also eliminated a stability constraint, allowing people to run while listening to music in high quality. In business, service-oriented architectures and model-driven applications eliminate the need to choose between standard functionality and your unique requirements. Internet technology dramatically reduced transaction costs, which allows organizations to outsource activities, improving both quality and price. No choice between the two is needed anymore.
These constraints are all material, physical constraints. It is only logical that technology conquers those first. But as we saw with the example of the MP3 player, eliminating or drastically pushing back one constraint usually only reveals another. Once most practical physical constraints – the question of how to do things – are lifted, a completely new level of dilemmas and constraints appears. If there are no barriers to using technology in terms of time or money, the question moves to whether, or why, we should use certain technologies, or the functionalities and possibilities they offer. Both the consequentialist and the universalist would agree that those are worthwhile and more fundamental questions. They would just disagree on when to answer them. Consequentialists would judge the situation based on whether the outcome is good; universalists would want to determine that up front, based on the intention.
The new set of dilemmas, constraints or barriers to technology innovation is not technical but ethical in nature. What should we do with technology, and what should we not do? Or should we do everything technology allows us to do, as evolution suggests? What knowledge should we have, and what knowledge should we not? Should the freedom of research be restricted because of possible unethical consequences?
I have stated it before: you can't undo knowledge, an issue for the consequentialists, and you can't determine all intention up front, an issue for the universalists. The two instruments used most so far are regulation and transparency. Regulation is a top-down approach: some types of research are (or have been) forbidden. Think, for instance, of stem cell research. Transparency is more of a peer-oriented mechanism: research and research data should be public.1 This is pretty well established in the exact sciences, but not yet at the level where it should be in many of the social sciences.
But rules and procedures, as valuable and needed as they are, can only do so much. The forces of curiosity, innovation, progress and evolution seem to find ways around them. Nobel Prize winner Manfred Eigen suggests that the answer lies not in trying to regulate and restrict knowledge, but in accumulating even more knowledge, harnessing what we already know to get a grip on our future. He essentially proposes to use the forces of progress themselves to control and steer them. In other words, the best way forward is even more forward.
But what additional knowledge do we need? Three areas come to mind: knowledge of the basics, usage feedback and contextual knowledge.
Understanding the Basics
In his 2008 article "Is Google Making Us Stupid?" in The Atlantic, technology writer Nicholas Carr confesses that his skill of deep reading is in danger. Is his difficulty sitting down with an article, essay or book – really getting into the story, or carefully following a train of thought as it is laid out – middle-age mind rot? No, it's the Internet. As the mind continuously reprograms itself, a different style of reading leads to a different style of thinking. And the Web is structured around a much more fragmented style of reading: little nuggets of information, linked together in countless ways and surrounded by advertisements. The Web seems built for distraction, hopping from one small bit to the next, instead of the focus that traditional book reading invites. Our reading strategies have changed from being effective, absorbing new knowledge into our own frame of mind, to being efficient, quickly finding what we are looking for. Is this different reading strategy a bad thing? Socrates, in the writings of Plato, argued that reading leads to a deficiency in rhetoric, compared to debating. In fact, I once heard someone argue that if books had been invented after video games, parents would have worried because books don't allow you to interact and lack the multi-sensory richness of games.
Regardless of whether Web reading is good or bad for our intelligence, it is good to have a choice: to be able to do proper deep reading on a subject, combined with quickly synthesizing information from various sources representing multiple perspectives. Being a child of my time, I would suggest learning deep reading first, before allowing technology to help you jump between sources quickly.
When we rely too heavily on technology and devices, we disengage from the world around us. You may argue that calculators allow us to focus on the logic of what we are trying to achieve, instead of losing ourselves in manual calculations. But where does the understanding of logic come from? Probably from being able to process arithmetic in our brains and with pen and paper as well.2
Professional programmers benefit from having coded in Java before starting with more advanced environments; Java teaches the underlying logic better. Accountants benefit from doing bookkeeping by hand before using financial packages; it allows them to better predict a package's inner workings. Car drivers should learn to navigate without a system before relying on a TomTom or other GPS device.
Having basic skills in areas where we depend on technology allows us to survive in case the technology fails. Granted, there are practical limitations. Unless you are a Boy Scout, few feel the urge to learn how to purify water or practice making fire without matches or a lighter. But in the Western world, confidence in the supply of water is higher than the confidence people have in IT. A second reason basic skills are good is that they help you judge whether systems and technologies deliver the right output – to see the result of a calculation on the calculator, or a destination on the navigation system, and think, "That doesn't feel right." Where does that feeling come from? It is the result of having built a good frame of reference first.
Usage Feedback
Even 25 years ago we had usability laboratories, where people could be observed using systems. This provided tremendously valuable input for the engineers designing the "user experience." In ambient computing, the user experience is either completely transparent (it is simply there, getting its input from invisible sensors) or manifests itself in many different ways, for instance across a range of devices depending on where you are and what you are doing: your tablet, smartphone, car or glasses. But essentially it is still driven by engineering thinking, in which an optimized design decides how the system looks to the user. One-directional. But what does the user look like to the system? There's no telling. Systems should be more open to taking feedback on use, and on unanticipated use.
Many large websites are not truly designed anymore. Rather, screens are generated based on specific user input, and on the templates and content in the web content management system. I think we've all had the experience of being stuck or running around in circles, trying to find a way out. Web servers can record every click, and analytics can help interpret where users give up, but that input is not very rich. It does not show how the user is reacting: the facial expressions, the hitting of the keyboard, the shouting at the system. What if we could make usability laboratories more scalable? Think of using Microsoft Kinect-style technology, where an application can watch us and adapt.3
It could suggest, for instance, what to do next based on the experience with other users. It could suggest the right help topics, or direct a user to a call center or web care team.
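Even today's "not very rich" click logs support a first, crude version of this. A minimal sketch, assuming a hypothetical log structure (session ids mapped to ordered page visits; the page names and the conversion page are invented for illustration), counts on which page sessions ended without converting – the places where users most likely gave up:

```python
from collections import Counter

def last_page_dropoffs(click_log, converted_pages=frozenset({"checkout/complete"})):
    """Count, per page, how many sessions ended there without converting.

    click_log maps a session id to the ordered list of pages visited
    (a hypothetical structure; a real log would be parsed from web
    server records).
    """
    dropoffs = Counter()
    for pages in click_log.values():
        if pages and pages[-1] not in converted_pages:
            dropoffs[pages[-1]] += 1
    return dropoffs

log = {
    "s1": ["home", "product", "cart", "checkout/complete"],
    "s2": ["home", "product", "cart"],
    "s3": ["home", "search", "product"],
    "s4": ["home", "product", "cart"],
}
print(last_page_dropoffs(log).most_common())  # [('cart', 2), ('product', 1)]
```

This tells you *where* users abandon the site, but not *why* – which is exactly the gap that richer, laboratory-style observation would fill.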
System design should also solicit unexpected feedback. Systems routinely offer users recommendations based on their preferences, their customer segment and what other, comparable users have done in the past. Although technically this produces user preference feedback and creates a learning loop, such recommendations only reinforce the picture the system already had, leading to even more rigid recommendations the next time around. Different ways of suggesting the unexpected are needed, based on principles of serendipity: finding something useful you weren't specifically looking for. Systems should be more like a shop in which you can roam around and be inspired by everything that's there. Designers could experiment with the proximity of options and recommendations, creating a virtual form of market basket analysis,4
or systematically ask for feedback on random recommendations. These strategies are suboptimal by nature and counterintuitive to the engineering approach. Perhaps systems (and their designers) shouldn't try to be smarter all the time, trying to guess user preferences correctly.
Outliers are the first sign of change. Systems that do not recognize differences in use over time, or a different context in which they are used, run the risk of disconnecting from reality. Not good for systems we depend on.
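One way to build this deliberate imperfection into a recommender is what machine-learning practitioners call exploration: reserve a few recommendation slots for items the learned profile would never surface, and treat the feedback on those slots as the serendipity signal. A minimal sketch (the function, catalog and item names are all hypothetical):

```python
import random

def recommend(top_picks, catalog, k=5, explore_rate=0.2, rng=random):
    """Fill most of the k slots from the learned profile's top picks, but
    reserve a fraction (explore_rate) for items drawn at random from the
    rest of the catalog. Feedback on those random slots lets the learning
    loop discover preferences outside the existing picture of the user."""
    n_explore = max(1, int(k * explore_rate))
    exploit = list(top_picks[:k - n_explore])
    pool = [item for item in catalog if item not in exploit]
    return exploit + rng.sample(pool, min(n_explore, len(pool)))

catalog = ["A", "B", "C", "D", "E", "F", "G", "H"]
top_picks = ["A", "B", "C", "D", "E"]  # what the profile alone would show
print(recommend(top_picks, catalog, rng=random.Random(7)))
```

The random slots are, by design, worse guesses than the profile's own – a virtual version of letting the shopper roam past shelves they didn't ask for.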
Understanding the Context

Zen and the Art of Motorcycle Maintenance is the world's most widely read book on the philosophy of technology. The core of its message is a description of two extreme views on the use of technology. There are some who see technology, their motorcycle, as something they use. They know how to operate it, but feel they don't need to understand how it works. A technology is simply the sum of its parts; if one part is broken, it needs to be fixed or replaced. That's what highly skilled and trained mechanics are for: they have the experience, do nothing else all day and have all the right tools. The other group sees the beauty of the technology itself. They see a larger picture of how parts interact with each other and are influenced by the context in which they operate. One part may be broken, but the cause may lie in something else that is not working properly. And if you are riding your motorcycle, weather conditions partly determine how smoothly the engine runs. There is no mechanic traveling with you to make tiny adjustments. And if a paperclip works as a tool to fix something, by all means it should be used.
Although the book focuses more on the need to understand technology, and to become one with it, it makes a small point that I think is worth emphasizing. It is not enough to understand a technology as the sum of its parts, and not even enough to understand it as something greater than the sum of its parts. From the way weather conditions affect the performance of the engine, you can induce a general rule: for a technology to be successful, it is particularly important to understand the context in which the technology is used. This idea is supported by the definition of "wisdom," the very object of philosophy. Wisdom is not only understanding the matter at hand, but particularly the context in which it matters.5
For instance, let's take a look at decision support systems that help judges determine the right sentence. For the acceptance of such a system, it is very important that the rules driving a sentence recommendation be transparent and that every recommendation can be traced back. In many cases, the process can be automated, and perhaps technically it is not even required to route the sentence through an actual judge. That would make the process better (more objective), more cost-effective and much faster. Cost, quality and speed, the three pillars of an efficient operation, are all served at the same time. However, for such a system to be successful, it is equally important to understand how people will accept a sentence from a machine. Will it cause people to resist and flood the system with appeals? This would certainly hurt the business case. What additional measures would be needed for people to buy into such a system?
If you build a recommendation engine for YouTube to predict what other video clips we'd like to see, you need to understand how the human mind jumps from one association to another. How else would such a system be able to provide recommendations you wouldn't have thought of yourself, but like anyway?
If you build a business intelligence (BI) system to help analyze complex strategic issues, it is not enough to understand the data structure and the statistical techniques used to reach analytical conclusions. BI systems are already far more efficient than any human brain, but for such a system to be effective, we also need to understand human decision making: how people absorb and process information, weigh different factors, collaborate with others and eventually reach a conclusion. This sounds logical, but most business intelligence system designs do not take it into account at all, focusing exclusively on the technical side of data structure and analysis.
As a last example, let's consider the implementation of a business process management (BPM) system. Most business cases focus on operational excellence. If this means taking repetitive work out of the hands of users, there are no immediate ethical consequences of using the technology. However, if the business case involves administrative professionals having to follow rigid rules and procedures, enslaving them to the system, the business case may be financially sound, yet fail on ethical grounds. Human beings are motivated by factors such as autonomy, mastery and purpose. Most people want to be able to plan and perform their duties as they see fit, making every day a learning experience and seeing their contribution to the organizational goals.
If the goal of the technology we increasingly depend on is to augment human capability, we should have a clear understanding of human capabilities and how they vary from person to person. This is the context we should be looking for.

End Notes:
1. Science has become too complex and interconnected to do alone anyway. James Bond-style setups in deserted places, where scientific teams funded by a rich villain work on devices to destroy the world, are simply not possible.
2. I hate spreadsheets; they invite messy structures and are very error-prone. But if you need to build analytical skills, it doesn't hurt to set up a decent spreadsheet or two before using more advanced statistical tools.
3. Face recognition is already being used on smartphones to replace a password, and I don't see any acceptance problems there. The question, of course, is whether users feel comfortable being watched and analyzed while interacting with their phones, tablets and computers, and how to address those concerns.
4. Market basket analysis tells you which items consumers typically buy together, like bread and butter, or trousers and socks.
5. Also see my series on wisdom.
Recent articles by Frank Buytendijk