Whether you want to talk science fiction or the possibilities of our real-world future, artificial intelligence is a hot topic right now. People of all types are talking about it, are interested in reading about it, and have questions about what is possible, what might be possible, and how AI might impact us humans just trying to live our lives.
Some of the conversations and research are really exciting! There are potential applications in health care that could have a big impact in multiple areas: artificial intelligence could massively help doctors diagnose complex problems, and could improve safety by spotting worrisome potential drug interactions at the point of prescription. Robotics, in combination with artificial intelligence, automation, and our high-bandwidth networks, may allow surgeons with rare and specialized expertise to operate on patients located miles away whom they otherwise might not be able to reach in time.
There are lots of use cases beyond health care where AI and automation may have huge benefits. Jobs that are highly dangerous for people to perform are ideal candidates for automation, and there are many applications for clever robots in emergency response and search and rescue contexts. Apple’s Siri, Google Now, and smart thermostats like Nest are just a few examples of the helpful ways better and more intelligent solutions are coming to the mainstream consumer market. And who doesn’t love a Roomba?
There is a lot of conversation – both within AI research groups and outside them – about the ethics surrounding some of the places AI technology seems to be heading. Philosophers, technologists, humanitarians, politicians… people from all kinds of backgrounds and disciplines are talking about this, with wildly varying degrees of speculation, expertise, and alarmist language.
There are even ethical questions about the ethical questions: Who owns the AI and who should be responsible for making or programming these important ethical decisions? Should it be the creator/manufacturer? A regulatory body? The end-user? Or the machine itself?
When we look at ethical decision making in the context of health care, we have generally agreed that patients should make the decisions about their own health and medical treatment. But when it comes to autonomous cars, we’re more conflicted. In a scenario where a choice has to be made between hitting and killing a bystander or swerving into a barrier and killing the occupant of the vehicle, which life do we prioritize? Does it change if the bystander is elderly? Or if the occupant of the car is a child? Or if there are three bystanders and four occupants in the car? Should the designers and programmers of the vehicle decide, or should the occupant of the car have the final say?
And let’s not even get into the ethics of robots and artificial intelligence in war and combat situations.
There is a growing interest in trying to answer these questions. Some of the groups that are popping up – like the recently announced Leverhulme Centre for the Future of Intelligence, and the Centre for the Study of Existential Risk – are multidisciplinary teams organized to study these questions. Others are think tanks within private companies, which are reportedly creating ethics advisory panels to try to deal with these issues. There are no legal systems in place to regulate these emerging technologies, and there may not be laws in place for some time. For now, we’re entirely dependent upon the privately held companies who own the technology to make these ethical judgements, and we can only hope that if their ethics don’t sync with ours, public outcry will be enough to balance it out.
How Will We Keep Control Of AI?
There are a lot of significant questions about whether or not we will ever get to a form of general artificial intelligence that matches what we humans can do with relative ease. A task like tidying up a house, for example, is something we do every day without thinking much about it, but it presents a huge AI challenge.
But if we set aside, for a moment, the question of plausibility, there are pockets of thinkers who are expressing serious (and highly speculative) concern about whether or not we would be able to genuinely control an artificial intelligence operating at a general human level. There are research groups spending a considerable amount of time thinking about this. Nick Bostrom’s 2014 book “Superintelligence: Paths, Dangers, Strategies” became a best seller, and his TED Talk on the topic has been watched 1.6 million times. There is also a long read at The New Yorker on Nick Bostrom, some of his ideas, and the context of his work that is well worth your time.
Whether you agree with Bostrom’s sentiments and logic or not (and many don’t), there is clearly a growing sentiment that our technology may be moving too far, too fast, and that we aren’t adequately considering whether or not the future we’re creating right now is a future we want. And this concern doesn’t always manifest itself in armageddon-style scenarios. Some of the concerns are far more tangible and immediate.
Are Robots Coming For Our Jobs?
The concerns about robots and artificial intelligence that are most relevant to us right now are the ones surrounding automation, and what it will do to the job market. This was expressed as a real concern by every member of our panel discussion, and it’s especially topical for people in the transportation industry given how close we’re getting to having autonomous vehicles join us on our roads.
The transportation industry employs a lot of people, and if those employees are displaced by autonomous options – which may perform the job better and more safely, though the jury’s still out on that – then where is that large workforce going to go? There is much talk about retraining and the new jobs AI and automation will bring, but let’s be clear: for a 45-year-old who’s driven a long-haul truck all his life and hasn’t been to post-secondary school, the prospects in the white-collar, highly specialized fields AI will require are pretty poor.
Our western culture is supremely ill-equipped to deal with a population whose work is no longer valuable, and that’s a huge problem we need to solve. Ideally, we need to start solving it before those people start getting pink slips. If you need an example of the problem on a much smaller scale, look at how well we take care of people who can’t find work, or who are simply physically or mentally unable to participate in the workforce. We don’t look too kindly on people who aren’t working, we snark about “excuses”, and we talk an awful lot about a “welfare queen” problem we have zero evidence really exists.
What AI Topics Do You Want Us To Talk About?
This won’t be the last conversation about artificial intelligence – not even the last one you hear on Science for the People. There are a lot of complicated, nuanced questions we can’t even begin to consider in a one hour panel show.
What questions do you have about artificial intelligence and automation that you would like to hear about on Science for the People? Let us know in the comments!
Featured photo is of Baxter, an interactive, teachable robot. Photo by Sarah Gerrity, United States Department of Energy. Both comics by XKCD.