Listen to archived HMI content

ABC Radio Overnights program with Rod Quinn: AI Ethics

From 16 October 2020

Making sure our artificial intelligence doesn't turn evil is a big task. But whose values should these machines be programmed with? Dr Claire Benn talks to Rod Quinn about how we can make sure we end up with safe, ethical AI.

TypeHuman Podcast: Affordance Thinking and Responsible Technologies with Jenny L Davis

From 21 October 2020

Jenny Davis was interviewed by Nick Byrne from the consulting and media company TypeHuman about issues of ethics in technological design. The interview was driven by ideas from Jenny’s book "How Artifacts Afford: The Power and Politics of Everyday Things". They discussed the political and social values that are built into technologies and how an affordance framework can aid processes of intentional, equitable design.

Policy Forum Podcast: Can policymakers detoxify social media? Bots, trolls, hate speech and sexism in the social media cesspit

From 21 August 2020

There’s little doubt social media can, at times, become very unpleasant. From run-of-the-mill rudeness all the way to hate speech, there is no shortage of social media horror stories from users. Women and people from diverse ethnic and religious backgrounds – especially those in the public eye – are often subject to vile abuse online. But does it have to be this way? Can policymakers and the social media platforms do more to encourage greater civility and ensure people’s safety? And what can governments do to tackle hate speech and coordinated disinformation campaigns? On this episode of Policy Forum Pod, our expert panel – Dr Jenny Davis, Dr Jennifer Hunt, and Yun Jiang – join us to discuss what we can do to make social media platforms safer, more respectful spaces.

AI and Moral Intuition for ABC

From 29 March 2020

Interview with Claire Benn and Seth Lazar 

Artificial intelligence is helping us to make all sorts of decisions these days, and sometimes this can be fun, convenient and a real time-saver. But what about moral decisions, or decisions with moral consequences, in law enforcement, say, or judicial proceedings, or public surveillance? The potential consequences for human rights are troubling. And if we outsource our moral intuition to AI, do we risk becoming morally de-skilled?