The Coronavirus pandemic has intensified two pre-existing trends: our dependence on technology and our suspicion towards it and its purveyors. The question of technology ethics has moved from a luxury indulgence to a central theme.
The AI community had already seized upon this zeitgeist in discussions of transparency, fairness, data de-biasing, and, in particular, explainability. Calls for additional research into these areas have been answered, and corporations have trumpeted their support, prompting accusations of 'ethics-washing'.
Organisations like Zoom have come under attack for poor security and privacy features, and Google were condemned for providing coronavirus testing only in return for access to users' health data. Misinformation led vigilantes in the United Kingdom to vandalise 5G equipment in the belief that it was being used to spread the disease.
The overwhelming majority of people accepted the restriction of their constitutional freedoms during the global lockdowns, and yet seemingly obvious technological routes out of lockdown, such as contact tracing, symptom tracking, and digital immunity certification, were met with scepticism, concern, and low adoption.
In short, our relationship with tech is broken.
We have developed a method to rebuild trust, taking inspiration from the climate change lobby's simple and transparent communication of energy efficiency. As the European Union moves to regulate high-risk applications of AI, we believe an additional, voluntary layer of activity is required to build consumer trust. A rating system will enable consumers to choose between products and services that have been independently verified as meeting high standards of trust and quality of governance.
This talk will be a practical guide to preparing for and implementing Digital Ethics ratings. It should be useful to delegates from multiple disciplines, in particular those with strategic or technical oversight.