Empowering Social Media Users To Assess Content Helps Fight Misinformation

  • November 18, 2022

When fighting the spread of misinformation, social media platforms typically place most users in the passenger seat. Platforms often use machine-learning algorithms or human fact-checkers to flag false or misinforming content for users.

“Just because this is the status quo doesn’t mean it is the correct way or the only way to do it,” says Farnaz Jahanbakhsh, a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).


She and her collaborators conducted a study in which they put that power into the hands of social media users instead.

They first surveyed people to learn how they avoid or filter misinformation on social media. Using their findings, the researchers developed a prototype platform that enables users to assess the accuracy of content, indicate which users they trust to assess accuracy, and filter posts that appear in their feed based on those assessments.

Through a field study, they found that users were able to effectively assess misinforming posts without receiving any prior training. Moreover, users valued the ability to assess posts and view assessments in a structured way. The researchers also saw that participants used content filters differently — for instance, some blocked all misinforming content while others used filters to seek out such articles.

This work shows that a decentralized approach to moderation can lead to higher content reliability on social media, says Jahanbakhsh. This approach is also more efficient and scalable than centralized moderation schemes, and may appeal to users who mistrust platforms, she adds.

“A lot of research into misinformation assumes that users can’t decide what is true and what is not, and so we have to help them. We didn’t see that at all. We saw that people actually do treat content with scrutiny and they also try to help each other. But these efforts are not currently supported by the platforms,” she says.

Jahanbakhsh wrote the paper with Amy Zhang, assistant professor at the University of Washington Allen School of Computer Science and Engineering; and senior author David Karger, professor of computer science in CSAIL. The research will be presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing.


Fighting misinformation

The spread of online misinformation is a widespread problem. However, the methods social media platforms currently use to mark or remove misinforming content have downsides. For instance, when platforms use algorithms or fact-checkers to assess posts, the effort can create tension with users who interpret it as infringing on freedom of speech, among other concerns.

“Sometimes users want misinformation to appear in their feed because they want to know what their friends or family are exposed to, so they know when and how to talk to them about it,” Jahanbakhsh adds.

Users often try to assess and flag misinformation on their own, and they attempt to assist each other by asking friends and experts to help them make sense of what they are reading. But these efforts can backfire because they aren’t supported by platforms. A user can leave a comment on a misleading post or react with an angry emoji, but most platforms consider those actions signs of engagement. On Facebook, for instance, that might mean the misinforming content would be shown to more people, including the user’s friends and followers — the exact opposite of what this user wanted.
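
As a rough illustration of why those engagement signals backfire, consider a toy ranking function in which every comment and reaction, even a disapproving one, pushes a post higher. This is a minimal Python sketch; the field names and weights are invented for illustration and do not come from any real platform:

```python
def engagement_score(post: dict) -> float:
    # Hypothetical weights: every interaction counts toward reach,
    # regardless of whether it expresses approval or objection.
    return (2.0 * post["comments"]      # a critical comment still counts
            + 1.0 * post["reactions"]   # an angry emoji still counts
            + 3.0 * post["shares"])

posts = [
    {"id": "cat-photo", "comments": 2, "reactions": 10, "shares": 1},
    {"id": "misleading-claim", "comments": 40, "reactions": 80, "shares": 5},
]

# The misleading post ranks first precisely because users objected to it.
ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # ['misleading-claim', 'cat-photo']
```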

To overcome these problems and pitfalls, the researchers sought to create a platform that gives users the ability to provide and view structured accuracy assessments on posts, indicate others they trust to assess posts, and use filters to control the content displayed in their feed. Ultimately, the researchers’ goal is to make it easier for users to help each other assess misinformation on social media, which reduces the workload for everyone.
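
To make the idea of structured accuracy assessments concrete, here is one possible data model. It is a hypothetical Python sketch; the researchers' actual schema is not described in this article:

```python
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    ACCURATE = "accurate"
    INACCURATE = "inaccurate"
    INQUIRE = "inquire"          # asking others about a post's veracity

@dataclass
class Assessment:
    post_id: str
    assessor: str
    verdict: Verdict
    note: str = ""               # optional free-text reasoning

@dataclass
class User:
    name: str
    # Who this user trusts to assess content; kept private to the user.
    trusted_assessors: set[str] = field(default_factory=set)

alice = User("alice", trusted_assessors={"bob", "science-desk"})
a = Assessment("post-42", "bob", Verdict.INACCURATE, "source is a parody site")
```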

The researchers began by surveying 192 people, recruited using Facebook and a mailing list, to see whether users would value these features. The survey revealed that users are hyper-aware of misinformation and try to track and report it, but fear their assessments could be misinterpreted. They are skeptical of platforms’ efforts to assess content for them. And, while they would like filters that block unreliable content, they would not trust filters operated by a platform.


Using these insights, the researchers built a Facebook-like prototype platform, called Trustnet. In Trustnet, users post and share actual, full news articles and can follow one another to see content others post. But before a user can post any content in Trustnet, they must rate that content as accurate or inaccurate, or inquire about its veracity, which will be visible to others.
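
That gating step (rate or inquire before you can share) can be sketched as a simple guard. The function and field names below are hypothetical, since Trustnet's implementation is not detailed in this article:

```python
VALID_ASSESSMENTS = {"accurate", "inaccurate", "inquire"}

def share_post(feed: list, user: str, article_url: str, assessment: str) -> None:
    """Allow a post only once the sharer has assessed the content."""
    if assessment not in VALID_ASSESSMENTS:
        raise ValueError("Rate the article (or inquire about it) before sharing.")
    feed.append({
        "user": user,
        "url": article_url,
        "assessment": assessment,  # shown with the post, visible to others
    })

feed: list = []
share_post(feed, "alice", "https://example.com/news", "accurate")
# share_post(feed, "bob", "https://example.com/rumor", "")  # raises ValueError
```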

“The reason people share misinformation is usually not because they don’t know what is true and what is false. Rather, at the time of sharing, their attention is misdirected to other things. If you ask them to assess the content before sharing it, it helps them to be more discerning,” she says.

Users can also select trusted individuals whose content assessments they will see. They do this in a private way, in case they follow someone they are connected to socially (perhaps a friend or family member) but whom they would not trust to assess content. The platform also offers filters that let users configure their feed based on how posts have been assessed and by whom.
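
A feed filter built on those trust relationships might look like the following sketch. The article describes Trustnet's filters only at a high level, so the modes and field names here are assumptions:

```python
def filter_feed(posts, assessments, trusted, mode="hide_inaccurate"):
    """Keep or drop posts using verdicts from a user's trusted assessors.

    posts:       list of {"id": ...} dicts
    assessments: list of {"post_id", "assessor", "verdict"} dicts
    trusted:     assessor names this user has privately marked as trusted
    mode:        "hide_inaccurate" drops flagged posts; "only_inaccurate"
                 keeps just them, mirroring study participants who chose
                 to seek out misinforming content.
    """
    flagged = {
        a["post_id"]
        for a in assessments
        if a["assessor"] in trusted and a["verdict"] == "inaccurate"
    }
    if mode == "hide_inaccurate":
        return [p for p in posts if p["id"] not in flagged]
    return [p for p in posts if p["id"] in flagged]
```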

Testing Trustnet

Once the prototype was complete, the researchers conducted a study in which 14 individuals used the platform for one week. They found that users could effectively assess content, often drawing on their own expertise, the content's source, or the logic of an article, despite receiving no training. Users were also able to manage their feeds with filters, though they applied the filters differently.

“Even in such a small sample, it was interesting to see that not everybody wanted to read their news the same way. Sometimes people wanted to have misinforming posts in their feeds because they saw benefits to it. This points to the fact that this agency is now missing from social media platforms, and it should be given back to users,” she says.

Users did sometimes struggle to assess content when it contained multiple claims, some true and some false, or when a headline and article were disjointed. This points to the need for more nuanced assessment options, perhaps letting users state that an article is true but misleading or that it carries a political slant, she says.


Since Trustnet users sometimes struggled to assess articles in which the content did not match the headline, Jahanbakhsh launched another research project to create a browser extension that lets users modify news headlines to be more aligned with the article’s content.
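
The article gives no technical detail on that extension, but the underlying check, whether a headline reflects the article body, can be approximated with a crude lexical-overlap heuristic. The Python below is purely illustrative; a real system would use far stronger NLP:

```python
import re

def headline_overlap(headline: str, body: str) -> float:
    """Fraction of headline words that also appear in the body.

    A very low score is one weak signal that headline and article are
    disjointed. This is a toy heuristic, not the extension's method.
    """
    def tokens(text: str) -> set[str]:
        return set(re.findall(r"[a-z']+", text.lower()))

    head = tokens(headline)
    return len(head & tokens(body)) / max(len(head), 1)

print(headline_overlap("Chocolate cures disease",
                       "A small study suggests cocoa may modestly help."))  # 0.0
```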

While these results show that users can play a more active role in the fight against misinformation, Jahanbakhsh warns that giving users this power is not a panacea. For one, this approach could create situations where users only see information from like-minded sources. However, filters and structured assessments could be reconfigured to help mitigate that issue, she says.

In addition to exploring Trustnet enhancements, Jahanbakhsh wants to study methods that could encourage people to read content assessments from those with differing viewpoints, perhaps through gamification. And because social media platforms may be reluctant to make changes, she is also developing techniques that enable users to post and view content assessments through normal web browsing, instead of on a platform.

This work was supported, in part, by the National Science Foundation.

“Understanding how to combat misinformation is one of the most important issues for our democracy at present. We have largely failed at finding technical solutions at scale. This project offers a new and innovative approach to this critical problem that shows considerable promise,” says Mark Ackerman, George Herbert Mead Collegiate Professor of Human-Computer Interaction at the University of Michigan School of Information, who was not involved with this research. “The starting point for their study is that people naturally understand information through the people they trust in their social network, and so the project leverages trust in others to assess the accuracy of information. This is what people do naturally in social settings, but technical systems currently do not support it well. Their system also supports trusted news and other information sources. Unlike platforms with their opaque algorithm, the team’s system supports this kind of information assessment that we all do.”

 

By Adam Zewe
Source MIT CSAIL


