Recently, the Dutch philosopher Maxim Februari warned that the rule of law in democratic states is in jeopardy, and that technology, specifically AI, is to blame. He argues that outsourcing law enforcement to technology may seem efficient, but that it also poses a significant threat: rules are then enforced without regard for circumstances.
Algorithms do not make moral judgments, yet their decisions do have moral implications. For example, speed limiters that automatically cap a car's speed on highways may seem a smart solution, but they make it impossible to choose to drive too fast when rushing a child with an allergic reaction to the hospital. Two things are missing here: the discretionary power of enforcers to allow speeding when moral judgment dictates so, and the citizen's ability to make an autonomous choice to break a rule and plead their case in court, if apprehended. These are critical features of democratic states with sound legal systems.
Februari calls on people to help him save the world from AI's unwavering enforcement of regulations. We propose that the designers of AI interfaces become those superheroes: we should rush to the rescue and save our citizens in distress. The most important contribution designers can make is to create interfaces that include opportunities to argue with the AI enforcer of rules.
We call these bridges between AI and people 'algorithmic affordances': means of tangibly influencing the algorithms, for instance by providing the option to disable them or the ability to adjust the parameters that shape the outcome of their calculations. These options should be embedded in the algorithm, but also clearly communicated in the interface through good, clean, and convincing design. Algorithmic affordances increase transparency and control, and enhance the user's experience. They can also protect the rule of law in democratic states. All designers should be familiar with these tools.
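As a purely illustrative sketch (all names are hypothetical, not from any real system), the three affordances named above could look like this in code: an automated speed enforcer that can be disabled, whose parameters can be adjusted, and that can be overridden with a plea, which is recorded so that a human, or a court, can later judge the circumstances rather than the rule silently blocking them.

```python
# Hypothetical sketch of 'algorithmic affordances' for an automated
# speed-enforcement algorithm. Not a real system: it only illustrates
# disable, parameter adjustment, and a recorded override.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class SpeedEnforcer:
    limit_kmh: float = 100.0            # adjustable parameter (affordance 1)
    enabled: bool = True                # option to disable (affordance 2)
    pleas: List[Tuple[float, str]] = field(default_factory=list)

    def set_limit(self, limit_kmh: float) -> None:
        """Affordance: tangibly adjust the algorithm's parameters."""
        self.limit_kmh = limit_kmh

    def disable(self) -> None:
        """Affordance: switch automated enforcement off entirely."""
        self.enabled = False

    def allows(self, speed_kmh: float, plea: Optional[str] = None) -> bool:
        """Decide whether a given speed is permitted.

        A plea does not silently bypass the rule (affordance 3): it is
        logged so the case can later be judged on its circumstances.
        """
        if not self.enabled:
            return True
        if speed_kmh <= self.limit_kmh:
            return True
        if plea is not None:
            self.pleas.append((speed_kmh, plea))
            return True  # speeding allowed, but the case is on record
        return False


enforcer = SpeedEnforcer(limit_kmh=100.0)
print(enforcer.allows(130.0))  # → False: no circumstances given
print(enforcer.allows(130.0, plea="child with allergic reaction, en route to hospital"))  # → True
print(len(enforcer.pleas))     # → 1: the plea is recorded for review
```

The design choice to log the plea instead of simply waiving the rule mirrors the argument above: the citizen keeps the autonomy to break the rule, and the legal system keeps the ability to hear the case afterwards.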