Singularity and the Consequences of One-Dimensional Futures for Decision-Making
Hot cheeks, twinkling eyes, and hands flinging about. During the Masterclass, the HR managers were intensely debating the future. Some feared a complete AI take-over, others said we would become cyborgs, and the rest evangelized human empathy as the one weapon against digitalization. Who was right, they asked me. Could madame futurist please referee?
No, no, no. All scenarios were possible, and for some markets and jobs even probable. Everybody was right and everybody was wrong. The future is not singular, it's plural. Good HR and good general management are about exploring all these futures, so that you can find the one positioning or offering that can withstand all the possible futures you can think of. Good decision-making is about shaping the future of your business, not responding to the futures that others try to impose on you.
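The idea of finding a positioning that withstands all the futures you can think of can be made concrete with a toy calculation. The sketch below scores a few strategic options against the three scenarios the HR managers debated, then picks the option with the best worst case (a maximin rule). All scenario names, option names, and payoff numbers are hypothetical illustrations, not real data.

```python
# Robust option selection across plural futures: a minimal maximin sketch.
# Scenarios and payoff scores (0-10) are hypothetical, for illustration only.
scenarios = ["ai_takeover", "cyborg_convergence", "human_empathy_premium"]

# Estimated payoff of each strategic option under each scenario.
options = {
    "automate_everything":    {"ai_takeover": 9, "cyborg_convergence": 6, "human_empathy_premium": 2},
    "hybrid_human_plus_ai":   {"ai_takeover": 6, "cyborg_convergence": 7, "human_empathy_premium": 6},
    "double_down_on_empathy": {"ai_takeover": 2, "cyborg_convergence": 4, "human_empathy_premium": 9},
}

def most_robust(options):
    """Pick the option whose worst-case score across all futures is highest."""
    return max(options, key=lambda name: min(options[name].values()))

print(most_robust(options))  # -> hybrid_human_plus_ai
```

Note how the winner is not the best option in any single future; it is the one that never collapses, which is exactly the point of exploring plural futures instead of betting on one horse.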
Sorry. There is no easy, one-dimensional approach. Foresight is a messy process, and despite AI and quantum computing, it will remain messy for a long time, because it means planning backward from an unknown future rather than extrapolating historical data. It takes work and intelligent discussion. This post is a plea for the sake of the future we make together with every decision we take.
Currently, one-dimensional futures are hip. Working with a one-dimensional future reduces uncertainty and makes managers feel safe. A very popular one is called the Singularity. It rests on interesting assumptions that executives should include in their vision of the future. But it's a myth that the Singularity is THE future.
Please read on if you want to find out what Singularity stands for and how you can use it in your strategic decision-making.
Singularity Is Not the Only Future
An influential group of futurists thinks that human intelligence will merge with artificial intelligence, creating a single "breed" of advanced humans. This train of thought is called "Singularity", and it springs from the hope that technology will help us transcend our biological limitations (like death). You may have heard of one of its influential proponents, Ray Kurzweil. Kurzweil (born February 12, 1948) is an American author, computer scientist, inventor, and futurist who has been working for Google since 2012. Interestingly, Kurzweil based his views on the historical data points of a few tech trends and argued that linear thinkers (who view the future as a slightly improved present) ignore the exponential growth rates of these trends. He's right. But he's also wrong.
The future is not singular, but plural. Executives must explore and develop multiple options during long-term decision-making and not bet on one horse.
“Most long-range forecasts of what is technically feasible in future time periods dramatically underestimate the power of future developments because they are based on what I call the ‘intuitive linear’ view of history rather than the ‘historical exponential’ view.” ― Ray Kurzweil, The Singularity Is Near
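The gap Kurzweil points at between the two views is easy to see with numbers. The toy sketch below compares a linear forecast (add a fixed step per year) with an exponential one (compound the same percentage growth); the starting value, step, and growth rate are invented for illustration, not real trend data.

```python
# Linear vs. exponential extrapolation: why the "intuitive linear" view
# underestimates compounding trends. All numbers are illustrative.

def linear_forecast(current, yearly_gain, years):
    # "Intuitive linear" view: the future is the present plus a constant step.
    return current + yearly_gain * years

def exponential_forecast(current, growth_rate, years):
    # "Historical exponential" view: the same percentage growth compounds.
    return current * (1 + growth_rate) ** years

start = 100
print(linear_forecast(start, 10, 20))                 # -> 300
print(round(exponential_forecast(start, 0.10, 20)))   # -> 673
```

After twenty years the compounding trend is more than double the linear estimate, and the gap keeps widening, which is the core of Kurzweil's critique of linear thinkers. It does not, however, tell you which trends will actually keep compounding; that remains the debatable part.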
Singularity Is As Old As Our Culture
Singularity isn't new. It was coined by Stanislaw Ulam, a Polish-American scientist who is known for his work on nuclear weapons. In his obituary for his colleague John von Neumann in 1958, he commented on the speed of technological change. He said that its acceleration and societal impact seem to point towards an essential singularity. And the term was born.
And even before that, in 1908, the world's first futurist Filippo Tommaso Marinetti wrote a manifesto on technological change. Marinetti was a poet, fascinated by the growing importance of industrialization. In the manifesto, he rejected the past and celebrated technological progress. Marinetti and his friends insisted that literature would not be overtaken by progress; rather, it would absorb progress in its evolution.
The idea of progress as scientific advancement stems from the Enlightenment of the 17th and 18th centuries. The French philosopher Voltaire saw science and reason as the drivers of societal advancement. Before the Enlightenment, other, divine world views ruled. Yet even in those world views, people envisioned futures enabled by technology and science. Take for instance the concept of a robot: one of the earliest known references to a robot-like being is Talos, a bronze soldier from Greek mythology, around 400 BC.
In our time, futurist Ray Kurzweil has taken the concept further and predicts that the Singularity will occur around 2045.
How to Reach Super Intelligence, That's the Question
The concept of singularity isn't new at all, and it's still debatable. The debate mainly revolves around the extent of superintelligence, or rather: how exponentially will artificial intelligence develop? Artificial systems can learn by themselves to recognize cats in photos, but they must be shown hundreds of thousands of examples and still end up much less accurate at spotting cats than a child. Will these advances take five or five hundred years? Both sides, enthusiasts and skeptics alike, use trend analysis to prove their points. It's a matter of opinion, not of fact.
Unequal Distribution of Technology: the Singularity Is Near for the Few
But there are other points to make in this argument. Consider the distribution of science and technology today: their benefits and costs aren't distributed equally within countries, let alone globally. On top of that, technological progress has contributed the most to widening income inequality, meaning that technological advances are reserved for the few, not the many. Hence, the Singularity may widen the gap even further, which makes it a trend for a very small elite.
'Singularity' Is Too Simple or Generic a Concept to Solely Base Decisions On
The impact of artificial intelligence on an individual can vary between absorption of technology, reduction of its impact, rejection of tech altogether, and avoidance of the issue. Personality, personal preferences, experience, and situational traits like culture determine how individual consumers will respond to smart machines and their impact on how we live, learn, work, and play. This is the domain of psychologists, and the discipline that studies artificial intelligence from this angle is called artificial psychology. This is what they say:
As of 2015, the level of artificial intelligence does not approach any threshold where any of the theories or principles of artificial psychology can even be tested, and therefore, artificial psychology remains a largely theoretical discipline.
In organizations (micro level), AI impacts how an organization is structured and what the work processes look like. Some of the effects of AI on organizations include: power shifts due to the change in ownership of knowledge; reassignment of decision-making responsibility; cost reduction and enhanced service; and personnel shifts and downsizing. This isn't a new insight either. To quote MIT:
Worries that rapidly advancing technologies will destroy jobs date back at least to the early 19th century, during the Industrial Revolution in England. In 1821, a few years after the Luddite protests, the British economist David Ricardo fretted about the “substitution of machinery for human labour.” And in 1930, during the height of the worldwide depression, John Maynard Keynes famously warned about “technological unemployment” caused by “our discovery of means of economizing the use of labour.” (Keynes, however, quickly added that “this is only a temporary phase of maladjustment.”)
Let Science Take Over from Fiction
So what about robots taking over our jobs? Scientists tend to agree that self-assembling nanobots that work together are most likely to do the heavy lifting.
Richard Smalley, winner of the 1996 Nobel Prize in Chemistry, told MIT that scientists are just beginning to understand the physics of the very small and to learn how to control behavior in this realm. His careful words stand in stark contrast with those of Eric Drexler, chairman of the Foresight Institute in Palo Alto, CA, who envisions a future with:
...self-replicating nanorobots that mechanically push atoms and molecules together to build a wide array of essential materials. Huge numbers of these nanorobots working together would supply the world’s materials needs at almost no cost, essentially wiping out hunger and ending pollution from conventional factories.
Again: predictions of AI's impact on jobs are a matter of opinion.
A new survey by the Pew Research Center’s Internet Project and Elon University’s Imagining the Internet Center found that, when asked about the impact of artificial intelligence on jobs, nearly 1,900 experts and other respondents were divided over what to expect 11 years from now.
Forty-eight percent said robots would kill more jobs than they create, and 52 percent said technology would create more jobs than it destroys.
Although future narratives like Drexler's help business leaders envisage possible futures, scientific progress isn't that fast. For that reason, entrepreneurs would do better to use stories to inspire creativity, and actual scientific progress to inform strategy.
We Don't Have Enough Data to See Patterns Yet
Also on this level of change, we can't generalize how AI will affect a business. Organizational characteristics like job and process design and corporate culture will influence the particular deployment of AI systems. The same system may be deployed differently in different companies and thus have different effects.
At the transactional level (meso), artificial intelligence will impact how products and services are delivered and what strategic options an organization has. This is the level that many futurists talk about; for instance, AI will transform shops into experience centers in the future of retail.
And then there is the contextual level (macro), which consists of ‘global forces’: economic developments, demographics, politics, technological developments, and social developments. In this area, even less is known. Science develops from observations and questions, which in turn form scientific theories that need to be questioned, and those theories together form a field. At this point, we're only starting to observe.
The research on the effects of AI on all levels has only just begun. In December 2014, Stanford University announced a 100-year study. The study will look into everything from the impact artificial intelligence will have on law and democracy to economics and warfare. It will also focus on individual reactions to intelligent machines.
Strategize About the Future of Your Distinct Markets, Not About THE Future
Through the centuries, people have shown the same array of responses to change. The aggregated effects of all those individual responses differ per time period, region, and culture. In ancient times, the Greeks invented the scientific method and founded many modern scientific disciplines. Then came the Dark Ages for Europe, but other civilizations kept using the scientific method intensively.
Today is no different. Science and technology aren't, and won't be, equally dispersed across markets. Your response to change must depend on your business environment and market. The one thing to remember is that strategy should be dynamic: the business environment will change, and so should your strategy. AI will impact that environment, but when and how is yet to be seen.
Keep the business stable, monitor change, and use flexible growth strategies!