Access ImageNet, currently the most widely used dataset in computer vision, powering applications such as self-driving cars, 14 million images and 21,000 categories strong, and you'll find yourself staring at a curious top-level taxonomy that sorts the world into plants, geological formations, natural objects, sports, artifacts, fungi, persons, animals, and miscellaneous.
Below these, there exists a strange, wondrous caravanserai of categories that distinguish between apples, apple butter, apple dumplings, apple geraniums, apple jelly, apple juice, apple maggots, apple rust, apple trees, apple turnovers, apple carts, and applesauce. But it also distinguishes between human bodies. A human body is a subclass of "body", which is a subclass of "natural object". Pay attention to that "natural". Under "body" we find "person", "juvenile body", "adult body", "male body", and "female body". Under "adult body" we find "adult female body" and "adult male body", together with the implicit assumption that these are the only "natural" bodies.
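To make the shape of that hierarchy concrete, here is a minimal sketch in Python. The child-to-parent mapping below is an illustration built only from the categories quoted above, not the actual ImageNet or WordNet data format; the function simply walks the links upward, the way a hypernym lookup would.

```python
# Illustrative child -> parent mapping of the hierarchy described in the text.
# This dict is a hypothetical stand-in, not the real dataset structure.
HYPERNYMS = {
    "adult female body": "adult body",
    "adult male body": "adult body",
    "adult body": "body",
    "juvenile body": "body",
    "male body": "body",
    "female body": "body",
    "person": "body",
    "body": "natural object",
}

def hypernym_path(category: str) -> list[str]:
    """Walk up the child -> parent links until we reach a top-level category."""
    path = [category]
    while path[-1] in HYPERNYMS:
        path.append(HYPERNYMS[path[-1]])
    return path

print(hypernym_path("adult female body"))
# -> ['adult female body', 'adult body', 'body', 'natural object']
```

The point of the sketch is that every category inherits whatever its ancestors assert: "adult female body" ends at "natural object", and the two children of "adult body" are the taxonomy's entire picture of what an adult body can be.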
Out of this and similar dataset architectures we get self-driving cars, drone delivery, and face recognition. But we also get misdiagnoses, discrimination, and sometimes outright racism. Tech-driven narratives assume this is a computationally solvable problem: either the algorithm can be tweaked, or more data is needed. Information architects know better: this is a wicked problem, it is political, and it requires more than mathematics.
Join me to discuss the structurative, agentive, and evaluative nature of AI and the role that information architecture and experience design should play in making AI better.