Algorithms Trace How Stereotypes Have Changed

  • January 11, 2020

Word embeddings—an algorithmic technique that can map relationships and associations between words—can measure changes in gender and ethnic stereotypes over the past century in the United States.

Researchers analyzed large databases of American books, newspapers, and other texts, then examined how changes in word associations correlated with US Census demographic data and with major social shifts such as the women’s movement of the 1960s and the rise in Asian immigration.

Artificial intelligence systems and machine-learning algorithms have come under fire recently because they can pick up and reinforce existing biases in our society, depending on what data they are programmed with.

“Word embeddings can be used as a microscope to study historical changes in stereotypes in our society,” says paper coauthor James Zou, an assistant professor of biomedical data science at Stanford University. “Our prior research has shown that embeddings effectively capture existing stereotypes and that those biases can be systematically removed. But we think that, instead of removing those stereotypes, we can also use embeddings as a historical lens for quantitative, linguistic, and sociological analyses of biases.”

“This type of research opens all kinds of doors to us,” says coauthor Londa Schiebinger, history professor at Stanford. “It provides a new level of evidence that allows humanities scholars to go after questions about the evolution of stereotypes and biases at a scale never before possible.”

Words, Words, Words

A word embedding is an algorithm that researchers train on a collection of text. The algorithm then assigns a geometric vector to every word, representing each word as a point in space. The technique uses location in this space to capture associations between words in the source text.
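The geometric idea can be sketched with toy vectors: association between two words is typically measured as the cosine of the angle between their vectors. The vectors and words below are hand-made purely for illustration; real embeddings have hundreds of dimensions and are learned from a corpus.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-dimensional "embeddings" (real ones are trained, not written by hand).
emb = {
    "king":  np.array([0.9, 0.1, 0.2]),
    "queen": np.array([0.8, 0.2, 0.3]),
    "apple": np.array([0.1, 0.9, 0.1]),
}

print(cosine(emb["king"], emb["queen"]))  # high: related words sit close together
print(cosine(emb["king"], emb["apple"]))  # lower: unrelated words sit far apart
```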


“Embeddings are a powerful linguistic tool for measuring subtle aspects of word meaning, such as bias,” says coauthor Dan Jurafsky, a professor of linguistics and computer science.

Take the word “honorable.” Using the embedding tool, previous research found that the adjective has a closer relationship to the word “man” than to the word “woman.”
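One common way to quantify such a relative association in the embedding-bias literature is the difference between the word’s similarity to each group term. This is a sketch of the general approach, not necessarily the paper’s exact formula, and the vectors are invented:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def relative_association(word, a, b):
    """Positive: 'word' sits closer to 'a'; negative: closer to 'b'."""
    return cosine(word, a) - cosine(word, b)

# Illustrative hand-made vectors; real scores come from trained embeddings.
man       = np.array([0.9, 0.1])
woman     = np.array([0.1, 0.9])
honorable = np.array([0.7, 0.3])  # placed nearer "man", mimicking the prior finding

print(relative_association(honorable, man, woman))  # > 0: leans toward "man"
```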

In their new research, the researchers used embeddings to identify specific occupations and adjectives that were biased toward women and particular ethnic groups by decade from 1900 to the present.

The researchers trained those embeddings on newspaper databases and also used embeddings previously trained by computer science graduate student Will Hamilton on other large text datasets, such as the Google Books corpus of American books, which contains over 130 billion words published during the 20th and 21st centuries.

The researchers compared the biases found by those embeddings to demographic changes in the US Census data between 1900 and the present.
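That comparison step can be sketched as a simple correlation between per-decade embedding bias scores and the matching census statistic. The numbers below are invented purely for illustration, not the study’s data:

```python
import numpy as np

# Hypothetical per-decade values: an embedding bias score for some occupation
# word, and the census-measured share of women in that occupation.
bias_scores  = np.array([0.42, 0.40, 0.35, 0.30, 0.22, 0.15])
census_share = np.array([0.05, 0.08, 0.12, 0.18, 0.25, 0.33])

# Pearson correlation: a strong negative value means the embedding bias
# falls as measured participation rises.
r = np.corrcoef(bias_scores, census_share)[0, 1]
print(f"Pearson r = {r:.2f}")
```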

Changing Stereotypes

The research findings showed quantifiable shifts in gender portrayals and biases toward Asians and other ethnic groups during the 20th century.

One of the key findings to emerge was how biases toward women changed for the better—in some ways—over time.

For example, adjectives such as “intelligent,” “logical,” and “thoughtful” were associated more with men in the first half of the 20th century. But since the 1960s, the same words have been increasingly associated with women in each subsequent decade, a shift that correlates with the women’s movement, although a gap remains.

The research also showed a dramatic change in stereotypes toward Asians and Asian Americans.


For example, in the 1910s, words like “barbaric,” “monstrous,” and “cruel” were the adjectives most associated with Asian last names. By the 1990s, those adjectives had been replaced by words like “inhibited,” “passive,” and “sensitive.” This linguistic change correlates with a sharp increase in Asian immigration to the United States in the 1960s and 1980s and a change in cultural stereotypes, the researchers say.

Overall, the researchers demonstrated that changes in the word embeddings tracked closely with demographic shifts measured by the US Census.

“The starkness of the change in stereotypes stood out to me,” says electrical engineering graduate student Nikhil Garg, who is lead author of the study. “When you study history, you learn about propaganda campaigns and these outdated views of foreign groups. But how much the literature produced at the time reflected those stereotypes was hard to appreciate.”

The researchers report their findings in the Proceedings of the National Academy of Sciences.

Source: Stanford University

Original Study DOI: 10.1073/pnas.1720347115

Alex Shashkevich

This article originally appeared in Futurity.



