Artificial Intelligence: Opportunities, Ethics, Control, and Jobs

Whether you want to talk science fiction or the possibilities of our real-world future, artificial intelligence is a hot topic right now. People of all types are talking about it, are interested in reading about it, and have questions about what is possible, what might be possible, and how AI might impact us humans just trying to live our lives.

Some of the conversations and research are really exciting! There are potential applications in health care that could have a big impact in multiple areas: artificial intelligence could massively help doctors diagnose complex problems and improve safety by spotting worrisome potential drug interactions at the point of prescription. Robotics, in combination with artificial intelligence, automation, and our high-bandwidth networks, may allow surgeons with rare and specialized expertise to operate on patients located miles away whom they otherwise might not be able to reach in time.

There are lots of use cases beyond health care where AI and automation may have huge benefits. Jobs that are highly dangerous for people to perform are ideal candidates for automation, and there are many applications for clever robots in emergency response and search and rescue contexts. Apple’s Siri, Google Now, and smart thermostats like Nest are just a few examples of the helpful ways better and more intelligent solutions are coming to the mainstream consumer market. And who doesn’t love a Roomba?

Ethical Questions

There is a lot of conversation – both within AI research groups and outside them – about the ethics surrounding some of the places AI technology seems to be heading. Philosophers, technologists, humanitarians, politicians… people from all kinds of backgrounds and disciplines are talking about this, with wildly varying degrees of speculation, expertise, and alarmist language.

There are even ethical questions about the ethical questions: Who owns the AI and who should be responsible for making or programming these important ethical decisions? Should it be the creator/manufacturer? A regulatory body? The end-user? Or the machine itself?


When we look at ethical decision making in the context of health care, we seem to have agreed generally that patients should make the decisions about their own health and medical treatment. But when it comes to autonomous cars, we’re more conflicted. In a scenario where a choice has to be made to hit and kill a bystander or swerve out of the way into a barrier and kill the occupant of the vehicle, which life do we prioritize? Does it change if the bystander is elderly? Or if the occupant of the car is a child? Or if there are 3 bystanders and 4 occupants in the car? Should the designers and programmers of the vehicle decide, or should the occupant of the car have the final say?

And let’s not even get into the ethics of robots and artificial intelligence in war and combat situations.

There is a growing interest in trying to answer these questions. Some of the groups that are popping up – like the recently announced Leverhulme Centre for the Future of Intelligence, and the Centre for the Study of Existential Risk – are multidisciplinary teams organized to study these questions. Others are think tanks within private companies, which are allegedly creating ethics advisory panels to try to deal with these issues. There are no legal systems in place to regulate these emerging technologies, and there may not be laws in place for some time: for now, we’re entirely dependent on the privately held companies who own the technology to make these ethical judgements, and we have to hope that, if their ethics don’t sync with ours, public outcry will be enough to balance things out.

How Will We Keep Control Of AI?

There are a lot of significant questions about whether or not we will ever achieve a form of general artificial intelligence that matches what we humans can do with relative ease. A task like tidying up a house, for example, is something we do every day without thinking much about it, but it presents a huge AI challenge.

But if we set aside, for a moment, the question of plausibility, there are pockets of thinkers expressing serious (and highly speculative) concern about whether or not we would be able to genuinely control an artificial intelligence operating at a general human level. There are research groups spending a considerable amount of time thinking about this. Nick Bostrom’s 2014 book “Superintelligence: Paths, Dangers, Strategies” became a bestseller, and his TED Talk on the topic has been watched 1.6 million times. There is a long read at The New Yorker on Nick Bostrom, some of his ideas, and the context of his work that is well worth your time.


Whether you agree with Bostrom’s sentiments and logic or not (and many don’t), there is clearly a growing sentiment that our technology may be moving too far, too fast, and that we aren’t adequately considering whether or not the future we’re creating right now is a future we want. And this concern doesn’t always manifest in Armageddon-style scenarios. Some of the concerns are far more tangible and immediate.

Are Robots Coming For Our Jobs?

The concerns about robots and artificial intelligence that are most relevant to us right now are the ones surrounding automation, and what that will do to the job market. This was expressed as a real concern by every member of our panel discussion, and is topical for people in the transportation industry right now given how close we’re getting to having autonomous vehicles join us on our roads.

The transportation industry employs a lot of people, and if those employees are displaced by autonomous options – which may perform the job better and more safely, though the jury’s still out on that – then where is that large workforce going to go? There is much talk about retraining and the new jobs AI and automation will bring, but let’s be clear: for a 45-year-old who’s driven a long-haul truck all his life and hasn’t been to post-secondary school, the prospects in the white-collar, highly specialized fields AI will require are pretty poor.

Our western culture is supremely ill-equipped to deal with a population whose work is no longer valuable, and that’s a huge problem we need to solve. Ideally, we need to start solving it before those people start getting pink slips. If you need an example of the problem on a much smaller scale, look at how well we take care of people who can’t find work, or who are simply physically or mentally unable to participate in the workforce. We don’t look too kindly on people who aren’t working, snark about “excuses”, and talk an awful lot about the “welfare queen” problem we have zero evidence really exists.

What AI Topics Do You Want Us To Talk About?

This won’t be the last conversation about artificial intelligence – not even the last one you hear on Science for the People. There are a lot of complicated, nuanced questions we can’t even begin to consider in a one hour panel show.

What questions do you have about artificial intelligence and automation that you would like to hear about on Science for the People? Let us know in the comments!

Featured photo is of Baxter, an interactive, teachable robot. Photo by Sarah Gerrity, United States Department of Energy. Both comics by XKCD.

Rachelle Saunders

Rachelle is the producer and one of the hosts of "Science for the People", a syndicated radio show and podcast that broadcasts weekly across North America. It explores the connections between science, pop culture, history, and politics. By day she slings code as a web developer and listens to an astonishing number of podcasts.

Comments

  1. End the stigma around people not in the workplace, give every adult citizen a guaranteed income (minor citizens would have some money go to their parents and some go to a trust, presumably on a sliding scale based on the number of children in a family), and let the robots take the jobs which are unhealthy for humans and which they do better.

    “I must study Politicks and War that my sons may have liberty to study Painting and Poetry Mathematicks and Philosophy. My sons ought to study Mathematicks and Philosophy, Geography, natural History, Naval Architecture, navigation, Commerce and Agriculture, in order to give their Children a right to study Painting, Poetry, Musick, Architecture, Statuary, Tapestry and Porcelaine.”
    – John Adams

  2. In manufacturing, we have seen several waves of this displacement of jobs already – increases in automation, and an increased value placed on waste reduction. I am a manufacturing engineer, and I’ve been in situations where we had to be really careful about how we presented savings – emphasizing hours saved, but not specifically stating “positions eliminated” – especially when I worked in a union environment in the late ’90s. It doesn’t matter how you talk about it, though: the net result is fewer human hours required for the same throughput. These are low-skill jobs, but usually paid above minimum wage.

    The next levels of “opportunity” with similar low skill requirements seem to be service sector jobs, which pay less. This has been a problem since the beginning of automation, though; AI is just another wave of it. AI improvements replace a slightly higher skill set when used to automate jobs done by humans today – a higher skill set than the automation of the ’60s through the ’90s replaced. People talk about “what to do about this workforce”, but I don’t see much action.

  3. We really need to be talking seriously about a universal income.

    No force can stop technology increasing, and as it does, capital will continue to concentrate in the hands of those who don’t necessarily deserve it. We need to accept that human labor is becoming obsolete, that hard work and effort aren’t enough, and build a world where we have a right to a standard of care.
