In March 2016, AlphaGo, a computer program developed by Google DeepMind in London, beat Lee Sedol, an 18-time international Go champion, in 4 out of 5 games. In 2011, IBM’s Watson computer beat two former champions over multiple rounds of Jeopardy!
What do these events have in common? They both involve analytical computing systems able to interact with their human environment, whether that means a quiz show host giving pun-filled clues or an opponent in an abstract strategy game. This marks the advent of the third age of computing: the cognitive era, where computers are no longer tied to programmed algorithms that respond to a set of pre-defined questions, but rather are able to formulate ideas and hypotheses by understanding natural language and learning from past experience. In short, they process data more like humans than machines.
This is an enormous leap from the previous two eras: the tabulating era, when computers stored data on mechanically inserted punch cards, and the programming era, when they stored ever larger collections of data and completed far more complex calculations electronically.
Prior to the cognitive era, computer systems were limited by an ‘IF/THEN’ mentality, where they searched for a single right answer. Today, the most advanced computing systems are able to look at both quantitative and qualitative data, and they can present a variety of solutions to a query ranked by probability of success. And thanks to constant improvements in hardware technology and telecoms networks over the last five decades, they can now do this anywhere in the world on a computer the size of three pizza boxes.
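The contrast can be sketched in a few lines of illustrative Python. This is a toy, not any vendor’s actual system: a rule-based lookup returns at most one pre-programmed answer, while a cognitive-style system scores several candidate hypotheses against the available evidence and returns all of them, ranked by probability. The example queries and evidence weights below are invented for illustration.

```python
def rule_based_answer(query, rules):
    """Old 'IF/THEN' style: return the single pre-defined answer, or nothing."""
    return rules.get(query)  # one right answer, or None if the query is unknown

def ranked_hypotheses(evidence_scores):
    """Cognitive style: normalize evidence weights into probabilities and rank them."""
    total = sum(evidence_scores.values())
    ranked = sorted(evidence_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(hypothesis, score / total) for hypothesis, score in ranked]

# The rule-based system only knows what it was explicitly programmed to know:
rules = {"capital of France?": "Paris"}
print(rule_based_answer("capital of France?", rules))  # Paris

# Hypothetical evidence weights for a diagnosis-style query; the system
# surfaces every option with a confidence level rather than one answer:
scores = {"treatment A": 6.0, "treatment B": 3.0, "treatment C": 1.0}
for hypothesis, prob in ranked_hypotheses(scores):
    print(f"{hypothesis}: {prob:.0%}")
```

The key difference is in the return type: the ranked list leaves the final decision to a human partner, which is exactly the division of labor the cognitive model proposes.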
Following the demonstration of Watson’s capabilities on Jeopardy!, IBM began advancing a large number of real-world applications for cognitive computing. These include helping oncologists develop unique treatment strategies for cancer patients, matching patients to clinical trials, creating customer engagement advisors at help desks, and developing a more generic aggregator of data in almost any area of research.
Who’s Using Cognitive Computing?
Aside from the health care and customer services industries, key verticals making use of cognitive computing include insurance, IT and telecoms, and financial services. For example, the ability to analyze qualitative or unstructured data allows for more accurate detection of fraudulent activities and provides a greater understanding of consumer preferences. It is a powerful tool for tackling cybersecurity risks as the number of connected devices balloons with the emergence of the internet of things (IoT). With IT companies estimating that unstructured data, such as text, images, video and sounds, accounts for 80-90% of the entire digital universe, the ability to uncover insights from the data is imperative for a better understanding of consumer and business needs.
IBM, Google, Intel, and the host of other IT companies developing cognitive computing systems see the new technology as one that can transform global industries for the better. However, popular fiction over the last several decades has made many people wary of cognitive systems taking control of or manipulating humans to achieve their own ends. Experts disagree, regarding such scenarios as fiction rather than a realistic depiction of how computing systems will evolve.
Cognitive computing systems are not designed to implement solutions, but rather to respond to queries with a selection of hypotheses and probabilities, allowing their human partners to use their enhanced knowledge to make a decision.
The initial use cases developed by leading technology companies are already being widely trialed and adopted today. Deloitte estimates that by the end of 2016, more than 80 of the world’s 100 largest enterprise software companies by revenues will have integrated cognitive technologies into their products, and the consultancy firm expects this to rise to 95 out of 100 by 2020. Concerns that cognitive computing will severely disrupt the workforce, eliminating the need for jobs focused on the collection and analysis of data, are therefore not unfounded.
As with all industrial advances, the combination of cognitive computing and robotics will drastically change the shape of the global workforce over the next 30 years, potentially resulting in higher unemployment levels. This is worrying for governments, particularly in developed markets where rising dependency ratios are already putting a strain on budgets.
However, the transformation will be gradual, and during this time governments will also benefit from technological developments. Cognitive computing applications will enable governments to improve taxation systems, better understand how to deploy funding to achieve the highest impact on citizen welfare, and allow less skilled professionals to perform higher-value roles. These advances will improve governments’ ability to cope with social problems, including the risk of higher unemployment rates.
A 2013 study by Oxford University found that 47% of U.S. employment is at risk of being automated through advances in AI, robotics and cognitive computing. But that is not to say that adoption of cognitive computing will actually eliminate nearly half of the jobs in the U.S. in the near future. For one, cognitive computing can address the shortage of skilled labor in the field of data analytics, allowing many more companies to effectively leverage the data they already hold but have not yet monetized. Thus, as companies begin integrating cognitive computing into their operations, the initial goal is to gain deeper knowledge and new insights in order to make better decisions and develop new products.
In this sense, the opportunity for job creation around cognitive computing is at least as likely as the risk of job losses. As the adoption of cognitive computing becomes increasingly important for every company’s development and growth, it will become an entire segment of the IT market in its own right, contributing to the growth of IT sector employment. Meanwhile, though the number of jobs focused on data analysis and collection will decline, people will be essential in developing new products and services employing the insights gained from cognitive computing.
Risks of Cognitive Computing
Perhaps a more immediate risk than the transformation of the workforce is that companies will fail to properly implement cognitive computing applications into their workflows. Gartner forecasts that, by 2018, 50% of business ethics violations will occur as a result of improper use of ‘big data’ analytics. These breaches will take many different shapes, from the poor execution of applications (failing to deliver on promised efficiency gains, resulting in wasted time and money) to the improper storage or usage of data, resulting in security or legal breaches that damage the reputations and finances of the companies at fault.
This ties in closely with the issue of individuals’ privacy as companies begin to utilize much more of their data.
According to EMC, in 2014, two-thirds of data was created by individuals, but enterprises were liable or responsible for 85% of the digital universe. In some cases, regulators have already intervened to ensure that enterprises do not exploit advances in ‘big data’ analytics and other technologies. For instance, in 2009, the U.S. implemented legislation preventing health insurance providers from using data on genetic history to assess a potential customer’s risk profile.
However, consumer data will often be at the center of how cognitive computing and other scientific advances can transform health care and other sectors, and ensuring that data is securely stored and used in a way that respects privacy will be an important concern for enterprises and regulators.
Data creation in every field is growing exponentially, and it is impossible for the human brain, or even a team of human brains, to absorb and retain it all. EMC estimates that if one byte of data were equal to one gallon of water, the data created worldwide in 2014 would have filled an average-sized house in just 10 seconds; by 2020, it will take only two seconds. Companies that wish to remain competitive over the long term must adopt cognitive computing in order to be able to leverage this data and improve customer services.
Today, leading tech companies such as Google and Facebook are the ones with access to the largest data sets and the financial muscle necessary to make big advances in the development of cognitive computing applications. Their greater ability to harness new technologies further entrenches their dominance in an advertising-focused internet market.
However, this process is quickly being democratized, in part through the ever falling cost of data storage thanks to cloud computing and initiatives like the IBM Watson Ecosystem, which sponsors cognitive computing start-ups. Free access to all IBM’s software components is enabling small start-ups and businesses to develop new cognitive computing applications to revolutionize business operations in the legal, fitness, housing and education sectors, among many others. IDC forecasts that by 2020, 50% of all business analytics software will include prescriptive analytics built on cognitive computing functionality, while DeepMind founder Demis Hassabis expects the technology to begin appearing in consumer devices such as smartphones within the next several years.
Despite the many risks, like previous leaps into new computing eras, the proliferation of cognitive computing across business and government operations will most likely be hugely beneficial. Governments will be better equipped to offer services, companies will be better able to tailor services to clients’ needs, doctors will be able to design better treatments for patients, and organizations will be better protected against cybersecurity risks.