Artificial intelligence (AI) can reportedly guess whether a person is gay or straight from photos of their face, according to new research from Stanford University.
According to the authors of the research, Michal Kosinski and Yilun Wang:
“We used deep neural networks to extract features from 35,326 facial images. These features were entered into a logistic regression aimed at classifying sexual orientation.”
“Given a single facial image, a classifier could correctly distinguish between gay and heterosexual men in 81% of cases, and in 74% of cases for women. Human judges achieved much lower accuracy: 61% for men and 54% for women. The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person,” they explained.
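The pipeline the authors describe — deep-network face embeddings fed into a logistic regression, with accuracy improving when predictions are averaged over several images of the same person — can be sketched in miniature. This is an illustrative toy, not the study's code: the feature vectors below are synthetic stand-ins for real deep embeddings, and the dimensions, learning rate, and class separation are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for deep face embeddings (the study used features
# from a deep neural network); two classes with slightly shifted means.
n_per_class, dim = 500, 128
X = np.vstack([
    rng.normal(0.0, 1.0, (n_per_class, dim)),
    rng.normal(0.25, 1.0, (n_per_class, dim)),
])
y = np.array([0] * n_per_class + [1] * n_per_class)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient-descent logistic regression, the classifier type
# named in the paper.
w, b, lr = np.zeros(dim), 0.0, 0.1
for _ in range(200):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)

def predict_person(feats):
    """Aggregate several images of one person by averaging the
    predicted probabilities before thresholding."""
    return int(np.mean(sigmoid(feats @ w + b)) > 0.5)
```

Averaging probabilities over multiple images reduces the variance of a single noisy prediction, which is one plausible reason the reported accuracy rose when five images per person were available.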
As the Guardian (theguardian.com) reported, the study raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology, and the prospect of this software violating privacy.
The researchers’ program studied 130,741 images of 36,630 men and 170,360 images of 38,593 women that were downloaded from a popular American dating website, as The Economist (economist.com) noted in their publication.
The images were selected using basic facial-detection technology, and software called VGG-Face was then applied to them, as the article added.
In the Stanford study, the authors pointed out that artificial intelligence could be used to explore links between facial features and other phenomena such as political views, psychological traits, or personality, as the Guardian wrote in their article.
“AI can tell you anything about anyone with enough data,” said Brian Brackeen, CEO of Kairos, a face-recognition company.
“The question is, as a society, do we want to know?” he said.
The Human Rights Campaign (HRC) and GLAAD, two of the leading LGBTQ organizations in the USA, described the study as “dangerous and flawed … junk science”, as the Guardian reported earlier.
Michal Kosinski, co-author of the study and assistant professor at Stanford, told the newspaper that he was perplexed by the criticism, stating that the study aimed to expose the potentially dangerous applications of AI and to call for regulation.