
Finding data outside of my own reality bubble

I previously wrote about how our environments, biology and interests shape our understanding of the world. I also discussed how this understanding makes us oblivious to certain data needs for fair and safe AI systems.

This led me to wonder: how does my understanding of the world make me oblivious to certain data needs? And, more importantly, how can I become more conscious of these data needs?

Starting my own experiments

To answer these two questions, I’m starting some experiments of my own. My goal in these experiments is to move outside of my own reality bubble in order to encounter data I am unaware of.

I’m not quite sure yet exactly what data I’ll explore, but let’s figure that out along the way. The important part of the experiments is getting outside of my own reality bubble.

People-centered

Getting outside of my own reality bubble is not something I can do on my own. Within the field of AI, getting in touch with other people’s perspectives – either through colleagues or end-users – is essential. I will follow this tradition in my experiments.

Ultimately, my aim is to learn about other people’s realities. Obviously, I can never truly leave my own reality bubble behind.

Nonetheless, I hope my experiments give a voice to the people whose data would otherwise not have been considered when building AI systems.

My first experiment will be Email Hitchhiking.

How “reality bubbles” blind us to unknowns

Imagine a man’s head and a woman’s head. Now, visualize what that man and woman are wearing. If you saw the man wearing a suit and the woman a bikini, you might be an AI system.

Too narrow a perspective

Of course, assuming men wear suits and women wear bikinis is very stereotypical. But why did the AI system come to this conclusion? According to Karen Hao of the MIT Technology Review, the AI system learned by example from the internet, which is filled with “scantily clad women and other often harmful stereotypes.”

Because the AI developers didn’t have the time, money, knowledge or interest, they failed to see their example data as fundamentally lacking. Not examining their own perspective, they (unintentionally) reaffirmed a world where women don’t really wear suits (and men don’t really wear bikinis).

Reality bubbles

We might say that women commonly wear suits too, and that AI systems should therefore depict women as such. That’s true, but then we still maintain a Western perspective. Why doesn’t the system imagine, for example, the woman wearing a sari (a common women’s garment from the Indian subcontinent)?

In her book The Reality Bubble, Ziya Tong writes that humans all live inside “a psychological [bubble] that shapes our ideas about the everyday world.” In other words, we each experience and understand the world in our own, limited way.

By growing up in particular environments, with a particular body, in a particular society, we get exposed to particular things and ideas. For some, suits and bikinis might come to mind, for others saris are more obvious. It all depends on their “reality bubble”.

Moving beyond our own reality bubbles

Unfortunately, we often fail to go outside of our reality bubbles. And while the internet connects people and ideas like never before, our digital environments often reflect certain worldviews rather than others and echo already familiar things and ideas back to us.

So even in this connected world, it’s still hard to get outside of our own reality bubbles.

Nonetheless, we don’t want AI systems to echo stereotypes or do harm, so we need to find ways to get outside of our bubbles. Unfortunately, that’s easier said than done.

The need for datasets with unknowns

We hear a lot about artificial intelligence (AI) these days, but according to four Facebook researchers, not everyone may reap AI’s benefits. One of the biggest causes of this? The problem of unknowns.

Images from around the world

In the article Does Object Recognition Work for Everyone?, four Facebook researchers describe how they collected images of common household items (soap, spices, etc.) from families around the world. Important to know: the families’ incomes varied widely, ranging from $25 to $20,000 a month.

When the four researchers ran the images through five popular image-recognition systems, they came to an interesting conclusion: the systems performed considerably worse on images from countries with lower household incomes.

The idea that image-recognition systems perform worse for some people is already widely documented. But what’s special about the article is how the researchers collected so many images that the systems failed to recognize (images that were, in other words, unknown to the systems). So how did they do this?

The Dollar Street dataset

The researchers obtained their images from Dollar Street, a non-AI project that aims to counter prejudices based on location and income. By showing images of simple household items from around the world, the project lets people see how others really live. For example, have you ever considered that spices can be kept in glass containers, empty plastic bottles with a corncob, plastic bags, or spice boxes?

So why did the Dollar Street images uncover so many unknowns? Well, image-recognition systems are typically tested (and trained) on images scraped from the internet. Of course, the internet disproportionately contains images from people with cameras, computers and internet access. People with low incomes and from certain geographies are thus underrepresented.

The researchers conclude that the systems’ poor performance on certain images is primarily due to differences in the appearance of the items and their environments. Because many of the objects and environments in the test images are not commonly found on the internet, the AI systems had never really seen them before and thus could not recognize them. They were unknown to the systems.
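
To give a rough sense of what such a probe looks like in practice, here is a small Python sketch that runs a single off-the-shelf image classifier (a pretrained torchvision model of my choosing, not necessarily one of the five systems from the article) over a set of household photos and prints its top prediction next to each household’s income. The metadata file and its columns are hypothetical:

```python
# Rough sketch: probe one off-the-shelf image-recognition model on
# household photos and print its top prediction next to each
# household's income. The metadata file, its columns, and the choice
# of model are assumptions made for illustration.
import csv

import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT       # pretrained ImageNet classifier
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()               # resizing/normalization the model expects
categories = weights.meta["categories"]         # human-readable class names

def top_label(path: str) -> str:
    """Return the model's most confident label for one image."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(image).softmax(dim=1)
    return categories[int(probs.argmax())]

# metadata.csv (hypothetical): path,income,label
with open("metadata.csv") as f:
    for row in csv.DictReader(f):
        predicted = top_label(row["path"])
        print(f"{row['income']:>8}  {row['label']:<12} -> {predicted}")
```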

What we can learn

So what can we learn from the article discussed above? First, we need to acknowledge that unknowns are still an enormous problem in the field of AI. Second, to find these unknowns, we need to validate AI systems with images from outside our common datasets as well as outside our own life worlds.

In other words, we have a responsibility to consider which contexts – such as income and geography – impact the performance of AI systems, and then go out into the wild to collect images at the corners and edges of those contexts.
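
To make that a little more tangible: one way to keep such contexts visible is to attach metadata (an income bracket, a region) to every test image and break the evaluation results down per group, so weak spots don’t disappear into a single overall score. The sketch below illustrates the idea; the field names and groups are made up for illustration:

```python
# Slice evaluation results by context to expose where a model struggles.
# Each record describes one test image: its context and whether the
# model's prediction was correct. The fields are hypothetical.
from collections import defaultdict

results = [
    {"income_bracket": "< $200/month", "region": "South Asia", "correct": False},
    {"income_bracket": "> $5000/month", "region": "Western Europe", "correct": True},
    # ... one record per evaluated image
]

def accuracy_by(records, key):
    """Return accuracy per group, e.g. per income bracket or region."""
    totals, correct = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[key]] += 1
        correct[r[key]] += int(r["correct"])
    return {group: correct[group] / totals[group] for group in totals}

print(accuracy_by(results, "income_bracket"))
print(accuracy_by(results, "region"))
```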

The problem of unknowns in Artificial Intelligence

In 2017, The Guardian reported on how Volvo’s self-driving cars struggled to recognize kangaroos during road tests in Australia. Volvo – which is headquartered in Sweden – initially tested the technology in a local setting. As a result, their cars recognized animals native to Sweden (such as moose). However, once the cars were driving in a new setting like Australia, unforeseen elements started popping up, such as kangaroos.

The problem of unknowns

The above Volvo anecdote illustrates a major problem with current artificial intelligence (AI): the problem of unknowns. Typically, AI systems* learn to recognize things by looking at a lot of examples. For instance, if we want a system to recognize a cat, it needs to be shown thousands if not millions of images of cats. Afterwards the system is able – at least hopefully – to recognize images of cats it has never seen before.
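
To make “learning by example” concrete, here is a minimal Python sketch of how such a system might be trained: a pretrained network fine-tuned on labeled photos. The folder layout, class names and training settings are illustrative assumptions, not how Volvo or any specific team works:

```python
# Minimal sketch of "learning by example": fine-tune a small pretrained
# network to tell cats from everything else. The folder layout below is
# an assumption, e.g. data/train/cat/*.jpg and data/train/not_cat/*.jpg.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # standard ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# The system only "knows" what appears in these example images.
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a small pretrained backbone and replace its final layer
# with a two-class head (cat / not cat).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# Afterwards the model can (hopefully) label cats it has never seen
# before -- but only cats that resemble the examples it was shown.
```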

In the case of Volvo, the AI system probably looked at a lot of images and movement patterns of moose. Then, while driving on actual Swedish roads, the cars were able to recognize and effectively respond to moose. Of course, because kangaroos were initially not on the minds of Volvo’s R&D team based in Sweden, they did not show their system enough examples of kangaroos. Consequently, kangaroos were quite literally unknown to the cars.

The problem of unknowns is certainly not unique to Volvo; it is in fact quite common in the field of AI. For example, Facebook researchers reported that five of the most popular public image-recognition systems failed to recognize common household objects (for example, soap and spices) from non-Western countries. Google Health software that screens people for diabetic retinopathy performed considerably worse because of unknown lighting conditions in screening rooms in Thailand. Similar systems also performed worse with less drastic switches in environment, such as between Seattle and Atlanta. In another case, the passport-renewal photo of a New Zealand man of Asian descent was rejected because the AI system was not familiar enough with the eyes of people of Asian descent.

Tackling the problem of unknowns

Tackling the problem of unknowns in AI is very hard. The world is extremely vast and diverse, so finding all relevant unknowns with limited time and resources requires a lot of expertise. The question therefore becomes: how do AI companies efficiently and thoroughly search for these unknowns? That question is the central theme of this blog.

* When I talk about AI systems on this blog, I mean supervised learning systems that involve at least some labeled data (unless specified otherwise).