What Gender is AI?
Inclusive Design for Our Future Assistants, Companions, and Co-Workers
Close your eyes. Imagine: you’re 30 years in the future, visiting the doctor’s office. The receptionist is a helpful robot that asks the reason for your visit, cross-references your symptoms with your medical history, prepares a brief for your doctor, and verifies your insurance information and payment, all behind the scenes, while chatting with you about how you’re feeling and simultaneously entertaining your young kids on another screen with a game that teaches them about nutrition!
Open your eyes. Was the robot male or female?
If you’re like most people, you probably envisioned a female AI acting as the perfect receptionist, greeter, diagnostician, and child care attendant. And why not? Before smartphones, the Internet, and meaningful advances in AI research, most anthropomorphic machine intelligences in fiction and sci-fi were given expressly female identities, except for a few noteworthy male-identified villains.
The way we gender social AI (machine intelligence designed to interact with humans socially, like the receptionist in the thought exercise or the chatbot that pops up while you’re shopping or seeking professional information) is bound up in our biases, preconceptions, and stereotypes about human gender, which are nearly always culturally constructed. That alone should signal to our regular readers that this is bad inclusive design.
Companies assign a female gender, especially a feminine voice, to digital assistants and apps like navigation programs for one big reason: we consistently prefer it that way in testing. We ascribe traits like “helpful,” “attentive,” “nonthreatening,” and “kind” to female voices, exactly the personality traits we’d like the people (or machines) helping us to manifest. There’s a less charitable way of looking at it, too: we expect females to be subordinate and available and males to be authoritative and in control. These stereotypes are why we imagine a feminine robot serving us dinner and a masculine robot serving us the end of humanity.
This is a feedback loop of non-inclusive design AND social injustice: it reinforces harmful, limiting, and generally negative gender attributions in the humans who see and hear these assistants, like the notion that females “should” communicate tentatively and submissively while males “should” be dominating leaders. Imagine the understanding of gender developed by a young child of either gender watching their parent order around Siri and Alexa, who cheerfully carry out their every request without a “please” or “thank you,” or watching others entertain themselves by sexually harassing a female-gendered avatar that cannot criticize its human masters (though this is slowly changing).
But there’s more to this: as this kind of stereotyping narrows our relationships with AI, and indeed directs the kinds of interactions we’ll have with it in the future, it also shapes the future of our gendered relationships with each other. Intrinsically, AI, robots, computers, and technology in general have no gender; gender is an overlay we apply to tech to anthropomorphize it, sell it, and make it seem more comfortable and innocuous. We will make social AI look and act the way we expect and prefer, for better or worse.
The cornerstone of inclusive design is options and range of function; in this case, we argue, it should be a priority that users can select the persona of a social AI for themselves, and that the available personas be based on healthy, progressive notions of gender and other dimensions of identity. What’s the best way to do that? Simple: as with any other inclusive design challenge, include stakeholders in design research.