Artificial intelligence study: are you gay or what?

A study says a computer can tell a person’s sexual orientation from their face. Is it really that simple?

I wonder if the rainbow is also scanned? Photo: dpa

Sexual orientation is written all over a person’s face – or at least that’s what you might think if you take a cursory glance at a recent Stanford University study. In it, computers examined photos of more than 35,000 faces with the aim of finding out which of the people pictured were homosexual and which heterosexual.

The result sounds fascinating: based on a single portrait photo, the algorithm was able to distinguish between homosexual and heterosexual men with a hit rate of 81 percent; for women, the rate was 71 percent. With five photos per person, the hit rates rise even higher. This sounds like a lot of certainty – but it is quite misleading.

The scientists used photos of (mainly white) users of a dating website, so they knew which sexual orientation was indicated in each profile. The program was then shown photos of one homosexual and one heterosexual person at a time, and the algorithm had to decide which of the two is gay or lesbian and which is not.

Homosexual people are said to have more "gender-atypical" facial features – and not just in terms of selected attributes such as hairstyle or eyebrows. For example, the study cites narrower jaws, longer noses and higher foreheads for gay men compared to heterosexual men, and broader jaws and smaller foreheads for lesbian women.

Proof that sexual orientation is innate, the study’s authors now proclaim – and thus see the highly controversial prenatal hormone theory (PHT) confirmed. According to this theory, a person’s sexual orientation is dependent on the amount of androgenic hormones to which he or she was exposed in utero. The same hormones are also responsible for sex differences in the face. However, this theory has not yet been scientifically proven.

Methodology of the study not questioned

In many media outlets – from the youth portals Bento and Noizz to Spiegel Online and the Economist – the study has been discussed excitedly in recent days. The British Guardian, for example, writes under the headline "New artificial intelligence can guess whether you’re gay or straight from a photo" that the result of the study opens up "tricky ethical questions." Such technology could, for example, be used against homosexuals in countries where they are persecuted. Forced outings would become easy.

This sounds critical, but – like the other texts – it does not question the study’s methodology at all, nor, as a result, the claim that the prenatal hormone theory is confirmed by it. That, however, would be necessary.

A glance at the method is enough to make you suspicious. If you were to put two men in front of me and say that one is gay and one is straight, and I pointed to one of them with my eyes closed and said, "That one is gay," then my hit rate would be, as with any random number generator, 50 percent.
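For readers who like to see that baseline spelled out, here is a tiny, purely illustrative Python simulation of the guessing game (nothing in it comes from the study itself; the function name is made up for this sketch): a guesser who points at one of the two faces at random lands at roughly 50 percent.

```python
import random

# Toy simulation of the paired test described above: each trial presents one
# gay and one straight face, and the guesser points at one of the two at
# random. Purely illustrative; no real data or study code is involved.

def blind_guess_hit_rate(trials: int = 100_000) -> float:
    hits = 0
    for _ in range(trials):
        pair = ["gay", "straight"]
        random.shuffle(pair)            # the two faces in unknown order
        guess = random.choice([0, 1])   # point at one face "with eyes closed"
        hits += pair[guess] == "gay"
    return hits / trials

if __name__ == "__main__":
    print(f"Hit rate of blind guessing: {blind_guess_hit_rate():.3f}")  # about 0.50
```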

The algorithm achieves 81 percent – but we should not forget that it also picks up self-selected attributes such as haircut and hairstyle, and that these are photos from a dating platform: the people pictured want to appeal to a specific target group.

Above all, however, the test set-up – two people who definitely have different sexual orientations – exists only in the laboratory. It would be far more interesting if a piece of software could determine from a single face whether the person is homosexual or heterosexual. But then, instead of two possible outcomes – right or wrong – there are suddenly four: the software says a man is gay, and is either correct or mistaken; or it says he is straight, and again is either right or wrong. The authors, Michal Kosinski and Yilun Wang, themselves say that the hit rate in such a setting is significantly lower.
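How much lower becomes clear with a back-of-the-envelope calculation. The sketch below is an assumption-laden illustration, not a result from the study: suppose a classifier judged each individual face correctly 81 percent of the time, and suppose 7 percent of the men in a population are gay. Then most of the faces it labels "gay" would in fact belong to straight men, simply because straight men are far more numerous.

```python
# Back-of-the-envelope illustration of the single-face setting. The
# sensitivity, specificity and base rate below are assumptions chosen for
# illustration only; they are not figures from the Kosinski/Wang study.

def precision_of_gay_verdict(sensitivity: float, specificity: float, base_rate: float) -> float:
    """Probability that a person labelled 'gay' actually is gay (Bayes' rule)."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

if __name__ == "__main__":
    # Assumed values: 81 percent accuracy in both directions, 7 percent base rate.
    p = precision_of_gay_verdict(sensitivity=0.81, specificity=0.81, base_rate=0.07)
    print(f"Share of 'gay' verdicts that are actually correct: {p:.2f}")  # roughly 0.24
```

In other words, even an apparently impressive accuracy collapses once the software has to judge people one by one rather than pick the likelier of a hand-picked pair.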

One of the study’s authors has caused a stir before

Michal Kosinski, by the way, is the man who made a name for himself after the election of Donald Trump: as the inventor of the very technology that a company called Cambridge Analytica allegedly used to engineer Trump’s election as president of the United States. At the time, Kosinski told a Swiss magazine that his technology could determine various characteristics of a person from their Facebook likes, in some cases better than the person’s partner could.

That research caused a stir – not least because many people gratefully accepted an explanation for what had just happened. Over time, however, voices also emerged warning against regarding the method as all too powerful.

The vast amounts of data about us that circulate publicly do indeed reveal a great deal. Kosinski says he wants to use the publication of the current study primarily to draw attention to these dangers – and we should pay attention.

But that also applies to the results of studies – and to their presentation in the media. These results by no means constitute proof of the prenatal hormone theory – and a hit rate of 81 percent is not as high as it may sound.