By Joe Grist
Blog Content Contributor
With the introduction of personal assistants like Google Home and Amazon Echo, intelligent machines are no longer limited to private research and computing but are instead becoming a part of our everyday lives. This common use of artificial intelligence (AI) raises many questions. Namely, how intelligent should machines be, and if there is no limit to their intelligence, how should they be regulated? What kind of fail-safes will be put in place? Many leaders in the tech field have asked themselves these questions, and as technology progresses, they find these inquiries more and more critical, not only for the safety of individuals but for the preservation of humanity as a whole.
A prime example of oversight, or maybe even intentional ill will, is China's new facial recognition technology. According to Yitu Technologies co-founder Zhu Long, the company's facial recognition algorithm has logged 1.8 billion faces and has already begun catching criminals across China.
“Our machines can very easily recognize you among at least 2 billion people in a matter of seconds, which would have been unbelievable just three years ago,” said Zhu Long. “In 2015, AI had already beaten humans in face-verification tasks. Our algorithm is more accurate than customs officials at telling whether two images show the same person. It can even find a subject among millions of others using a 25- or 30-year-old image. And in the past two years, the performance of machines has increased by 1,000 times.”
Three hundred and twenty million of the 1.8 billion photographs in Yitu's national database have come from visitors who have entered and left the country. Yitu also says that its platform is in service with more than 20 provincial public security departments and more than 150 municipal public security systems across the country. The program, dubbed “Dragonfly Eye,” has already shown what it is capable of: 567 suspected lawbreakers were caught within three months.
But here’s the kicker: China is known as one of the most heavily censored countries in the world and is also notorious for cracking down on dissent, political and otherwise. Therein lies the problem. China Merchants Bank says it foresees ATM customers withdrawing money by showing their faces, Xiaoshu (China’s Airbnb) will begin trialing smart locks that open by scanning customers’ faces, and cashierless stores plan to use facial recognition as a payment system. Given all of this, things could get very difficult for those whom China deems undesirable.
What stops China from targeting not only criminals but also political rivals, activists, or simply proponents of free speech? The Dragonfly Eye could become a human rights nightmare. Illegal arrests and even disappearances would be all but guaranteed. Because merchants and private companies are planning to adopt this facial recognition technology, and because citizens can be so easily identified, the lives of people that China or its government deems undesirable could be severely limited.
And it only gets worse: Imagine the use of AI in military operations. If that doesn’t worry you, it should. Over 100 robotics and artificial intelligence experts, including Elon Musk, are worried for you, and they have begun to push for a ban on autonomous weapons.
“Lethal autonomous weapons threaten to become the third revolution in warfare,” stated Musk and 115 other experts in an open letter released to the public. “Once developed, they will permit armed conflict to be fought at a greater scale than ever, and at timescales faster than humans can comprehend.”
Izumi Nakamitsu, head of the disarmament affairs office at the United Nations (U.N.), released a report in June 2017 stating that regulation hasn’t been able to keep pace with the rapidly advancing technology.
“There are currently no multilateral standards or regulations covering military AI applications,” said Nakamitsu. “Without wanting to sound alarmist, there is a very real danger that without prompt action, the technological innovation will outpace civilian oversight.”
Human Rights Watch (HRW) also believes that autonomous systems “cross a moral threshold” and that without the ability to use human judgment “the humanitarian and security risks would outweigh any possible military benefit.”
“Critics dismissing these concerns depend on speculative arguments about the future of technology, and the false presumption that technological advances can address the many dangers posed by these future weapons,” said HRW.
To put it plainly, we don’t yet understand AI fully enough, and we need to be careful before we begin implementing it in everyday life, much less in military applications. There’s a reason that Cambridge, Oxford, and the Future of Humanity Institute released “The Malicious Use of Artificial Intelligence” this February. Education on the issue is important.
“Once there is awareness, people will be extremely afraid, as they should be,” said Musk. “AI is a fundamental risk to the future of human civilization.”
While I find this quote a bit extreme for my taste, in that it aims to induce a sort of panic, I think it is telling that a man who is developing AI for his own company has chosen to warn us about its dangers instead of trying to sell us false assurances of safety.
AI is going to be an everyday part of our lives, whether we want it to be or not. While I think it will be immensely beneficial in countless ways, I also think we should push for regulation and encourage our governments to develop a better understanding of the technology, and, hopefully, a code of ethics built into each and every machine.
Because if we’re going to live smart, we need to live safe.
Featured photo by Claire Hansen.