The fear that people's creations will one day become their masters, or even their executioners, is nothing new. Nonetheless, these concerns have taken on new meaning in light of warnings from an eminent cosmologist, a Silicon Valley entrepreneur and the founder of Microsoft. With supercomputers ever more abundant and robots deployed across battlefields, dismissing such worries as mere science fiction would amount to self-deception. Worrying is natural enough; the point is to worry wisely.
The first step is to understand what computers can do today and what they are likely to be able to do in the future. Rising processing power and the explosion of digital data have fuelled a boom in the capabilities of Artificial Intelligence (AI). Today's AI exhibits intelligence through the brute force of number-crunching and data calculation, with no trace of the autonomy, desires, interests and emotions that are unique to the human mind. Computers still cannot infer, decide or judge: abilities conventionally associated with intelligence.
Even so, AI is already powerful enough to alter human life dramatically. It has the potential to enhance human endeavour by complementing what people do. Take chess, which computers now play better than any person. Yet the best chess players are not machines but 'centaurs': combinations of humans and algorithms. Such collaborations are set to become the norm in almost every field. With the support of AI, doctors will diagnose more accurately from medical images. Speech-recognition algorithms on smartphones will bring the internet to illiterate people in developing countries. Digital assistants will aid academics in their research by suggesting hypotheses.
But not all of AI's effects in the short run will be good. Consider its impact on the security apparatus of the state. The ability to monitor billions of conversations, or to pick out any individual anywhere in the world by voice or facial recognition, poses a grave threat to individual liberty and privacy. In addition to this, many individuals will be on the losing side despite huge collective societal gains. Endless calculation was once the job of drudges, predominantly women; and just as they were replaced by transistors, AI is likely to turf out contingents of white-collar workers from organizations. Yes, training and education will alleviate the problem, and the wealth created with AI's assistance will help generate new jobs. But a large number of workers are bound to suffer dislocation.
It is not surveillance or dislocation, however, that worries people like Elon Musk or Bill Gates, or that inspires the torrent of Hollywood films depicting a dark world ruled by AI. Their concern is far more sweeping and quite apocalyptic: the threat of autonomous machines possessing superhuman cognitive abilities and motives that run counter to the interests of Homo sapiens.
Such machines, however, are still a long way off, and may never surface at all. Even though generations of natural and social scientists have prodded away at the human brain, they are no closer to understanding how to create a similar mind than they were centuries ago. Nor is the business case for a general, if limited, form of intelligence (one with a degree of autonomy and interests of its own) at all clear. A car that can drive better than its owner is a great thing; a car that can make up its own mind about where and when to go is quite scary.
Nonetheless, even though the prospect of Mr. Hawking's 'full' AI world is somewhat distant, it would be wise for us all to plan and devise coping strategies. That is easier than it may seem, not least because humans have long been constructing self-governing entities with superhuman abilities and divergent interests. Government bureaucracies, markets and armies can all perform tasks that unassisted humans cannot. They all need a degree of autonomy to carry out their work, they can all take on a life of their own, and they can all do great damage if they are not set up correctly and governed by overarching laws and regulations.
These comparisons should pacify the fearful; they also provide guidance for societies on how to develop AI safely. Just as armies require civilian oversight, markets require regulation, and bureaucracies require transparency and accountability, AI requires scrutiny. And since the creators of AI systems cannot foresee every possible circumstance, an off-switch is necessary as well. Such constraints can be imposed without compromising progress. Yes, the menace of an eventual 'autonomous non-human intelligence' threatens to overshadow constructive debate on AI's progress. But as with most innovations, the perils come bundled with enormous benefits, and it is vital that we all keep this in mind as the era of AI nears its dawn.
is a graduate of the School of Economics at Quaid-i-Azam University, Islamabad. He specialized in development and political economics, with additional non-credit courses in Environmental Economics and Monetary Policy. He currently works at the CSCR.