The design of ethical algorithms in autonomous machines
May 24, 2018
Self-driving cars, digital companions, domestic robots and even AI-powered sex automatons will share the physical world with us very soon. This will have far-reaching consequences for our societies. If we hope to live safely with autonomous machines, they cannot be a threat to us, and they certainly must comply with ethical requirements. Machine learning techniques are only as good as the data we feed them, yet the expectations for autonomous machines are remarkably high: their decisions should be error-free, explicable, fair, and free of biases and discrimination. But do our societies have universal moral standards that can be codified? Are our ethical rules precise enough for programming? Can the “black boxes” be transparent in their reasoning? Information architects have methods to exploit relationships in unstructured data and gain insights. Can they, alongside other professionals, also help to mitigate the ethical issues that arise during the design of these algorithms?
Sławomir Molenda is a user experience designer and information architect at Sii, one of the biggest IT and engineering service providers in Poland. For more than ten years he has delivered UX expertise to a variety of enterprises and industries, with a focus on creating thoughtful and communicative design. Sławomir holds an MSc in Computer Science and is passionate about cognitive science, philosophy of mind and human-centred artificial intelligence.