‘The Good Place’ is a half-hour sitcom about four people who have died and think they’ve made it to ‘the good place’, only to discover that (*spoiler alert*) they have not. In one scene, one of the four - Chidi, a professor of moral philosophy and ethics - explains the trolley problem. This century-plus-old thought experiment requires individuals to choose between two terrible outcomes - kill one, or kill five.
Beneath the light-touch sitcom antics is a very relevant, modern, but arguably timeless question: when choosing between the lesser of two evils, how do we choose? And what should be the consequences of those choices?
Answering such a question requires a framework of principles, morals and ethics. In our last event for 2018, we looked at what frameworks were being applied to emerging technologies, and what the implications are of failing to integrate critical and ethical thinking.
What ethics are
The ethical implications of emerging technologies
The influence of data
The role of government
The role of industry
James Wilson - CEO, Eliiza
Tim Miller - Associate Professor in Computer Science, University of Melbourne
Katherine Bailey - Artificial Intelligence Senior Principal, Accenture
Andrew Ethell - Executive Director, Amalgam Strategic and Board Member, Infrastructure Australia
Think of ethics as the ‘should’ questions, rather than the capability question of ‘can’. They allow you to critically assess options, or actions, before you determine the way forward according to your (individual or collective) morals and values.
Emerging technologies are moving out of low-stakes daily scenarios, like predicting what we should watch or buy, and into high-stakes situations like welfare and criminal justice, where they shape society and communities.
Some level of bias is required - without it, algorithms can’t select or decide. Take the machine learning models that power Netflix predictions: they have to weight some titles over others to recommend anything at all.
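A minimal sketch of that point, in Python: a recommender can only rank titles if it encodes some preference (a ‘bias’) over their features. The catalogue, feature names and weights below are invented for illustration, not drawn from any real system.

```python
# A toy recommender: ranking is impossible without weighted preferences.
# Titles, features and weights are hypothetical.
catalogue = {
    "Drama A":   {"drama": 1.0, "comedy": 0.0},
    "Comedy B":  {"drama": 0.0, "comedy": 1.0},
    "Dramedy C": {"drama": 0.5, "comedy": 0.5},
}

def rank(weights):
    """Rank titles by a preference-weighted score."""
    def score(feats):
        return sum(weights[k] * v for k, v in feats.items())
    return sorted(catalogue, key=lambda t: score(catalogue[t]), reverse=True)

# With no preferences at all, every title scores 0.0 - there is
# nothing to choose between, so the "ranking" is meaningless.
print(rank({"drama": 0.0, "comedy": 0.0}))

# Only a preference (bias) towards drama makes a recommendation possible.
print(rank({"drama": 1.0, "comedy": 0.2}))
# → ['Drama A', 'Dramedy C', 'Comedy B']
```

The ethical question is then not whether the model has bias, but whether that bias is fair, accountable and transparent.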
To assess whether bias is ethical we ask if it is FAT – Fair. Accountable. Transparent.
To enable accountability, there needs to be explainability: end users need to understand how algorithms have been built.
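One simple form explainability can take, sketched below with a linear scoring model: because the score is just a weighted sum, each feature’s contribution can be shown to the end user. The feature names, weights and applicant record are hypothetical.

```python
# A minimal sketch of explainability for a linear scoring model.
# Weights and the applicant's (normalised) values are hypothetical.
weights = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
applicant = {"income": 1.2, "debt": 0.5, "years_employed": 3.0}

# Each feature's contribution to the final score is weight * value.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# The explanation is simply the contributions, ranked by absolute
# impact, so a user can see *why* the model produced this score.
explanation = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
for feature, contrib in explanation:
    print(f"{feature}: {contrib:+.2f}")
print(f"score: {score:+.2f}")
```

Deep models are far harder to decompose this way, which is one reason simpler, inherently interpretable models are often preferred in high-stakes settings.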
Think about the problem you’re solving. Starting with the problem, rather than the technical application, can ensure the solution is accountable.
Emerging technology consultants and developers have a responsibility to inform and guide their clients, ensuring the client understands how the program is developed, what data is used, etc - empowering them in the process.
Our education system needs to change from its current silo model. Creating more inroads between the two dominant areas of study - Humanities and Social Sciences (HASS) and STEM - may be the key to equipping future generations with the critical skills required to navigate, create and invent ethical technology.
Start thinking of technology as core business to get the buy-in you need. Reframing tech according to the problem that you’re trying to solve can help you position it as core business e.g. it’s not a ‘tech problem’, it’s a ‘service issue’.
It’s a common myth that machine learning programs keep learning on their own - many don’t. Models need to be retrained and data updated. Regular maintenance should be considered part of the implementation of any technological solution.
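The point above can be sketched in a few lines of Python: a model fitted once encodes only the data it saw at training time, and it takes an explicit retraining step to pick up new data. The toy model and numbers are illustrative.

```python
class MeanModel:
    """A toy 'model' that predicts the mean of its training data."""

    def fit(self, data):
        self.prediction = sum(data) / len(data)
        return self

    def predict(self):
        return self.prediction

# Train once on last year's data (illustrative numbers).
model = MeanModel().fit([10, 12, 11])
print(model.predict())  # → 11.0

# The world changes, but the deployed model does not learn by itself.
new_data = [20, 22, 21]
print(model.predict())  # still 11.0 - the model has gone stale

# The maintenance step: explicitly retrain on the updated data.
model.fit(new_data)
print(model.predict())  # → 21.0
```

Scheduling that retrain, and refreshing the data that feeds it, is the ongoing maintenance the panel argued should be budgeted into any deployment.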