
The Implications Of Open-Source AI: Should You Release Your AI Source Code Publicly?

  • March 20, 2021

As humanity continues to implement new AI solutions across many industries, the usefulness and ethics of these projects are still hotly debated. Experts emphasize that the developers of any new AI tool must verify that it works as intended, take responsibility for its safety, and consider its potential for harm. Still, the question of who should control the creation and use of new AI technologies, specifically synthetic media and deepfake tools, remains open. And a second question follows: is there a risk in making the code of new tools available to the general public?

In this article, I will share my thoughts on why it is better and safer to put new AI tech into the hands of business rather than release it into the wild.

Why full-body swap technology shouldn't be open-source

In 2018, I started building a digital fitting room powered by machine learning. It required overcoming a couple of problems that remain unsolved in machine learning to this day. There was no published work on how to change body proportions according to specific parameters, so no one knew how to make bodies appear thicker or slimmer with AI. Even now, the problem is not fully solved.

At first sight, what is so dangerous about a technology that changes the image of one person into another in a few seconds? This capability underlies every digital fitting room and webshop. In 2018, I had no idea my technology would later be called a “full-body swap” or a deepfake tool.

Only after I published a demo in August 2020 and various shady organizations started approaching me did I realize that the AI body swap is a bomb. A bomb in both senses: a technological breakthrough, and something dangerous if its code gets into the wrong hands.

If the source code of a full-body swap technology leaked (or if I published it myself), it would still hardly become a mass-market technology. It would more likely end up with those who don’t advertise their work on deepfakes: creators of fake porn and political blackmail material. Deepfake tech is simply too appealing for them not to appropriate it and use it with malicious intent.

A team of first-class ML professionals would figure out how my code works in about a week. In roughly six months, they would have a production-grade full-body swap technology. After seeing the feedback and the level of interest in my technology once the demos were published, I realized that the only way to guarantee it ends up in the right hands was to start collaborating with businesses whose actions are public and regulated by law.

Why is it safer to cooperate with businesses?

Keeping AI and deepfake imagery under control is possible only with total transparency and public oversight. The difference between business ownership of deepfake technology and uncontrolled public access to it is whether we know whom to prosecute in cases of misuse, and whether we have the opportunity to provide tools that prevent malicious use.

When the technology is publicly accessible, it is usually used anonymously and without any community rules. Businesses, by contrast, can regulate it at every level: user filtering, content moderation, banning systems, and, most importantly, tracking of their own synthesized content.

Detecting synthesized content is one of the fundamentally critical next steps in the development of the whole AI industry. Businesses that own such tools must provide detectors for identifying their output.

What could deepfake detectors look like?

For instance, they could be neuro watermarks on videos that act like the watermark on a banknote. The imagery stores certain information, undetectable by the human eye, that proves it was synthesized with XXX tech, even if the video has been compressed, resized, or partially cropped.
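
To make the banknote analogy concrete, here is a minimal sketch of one classical way to hide bits invisibly in a frame: nudging mid-frequency DCT coefficients. This is an illustrative assumption, not Reface’s or anyone’s production method; the function names, the STRENGTH constant, and the coefficient positions are invented for the example, and a real “neuro” watermark would use an embedding learned by a neural network to survive heavy compression, resizing, and cropping.

    # Minimal invisible-watermark sketch (illustrative only): hide one bit per
    # agreed mid-frequency DCT coefficient by forcing the coefficient's sign.
    import numpy as np
    from scipy.fft import dctn, idctn

    STRENGTH = 8.0  # minimum coefficient magnitude; larger = more robust, more visible
    # Fixed mid-frequency positions; embedder and detector must share them
    # (in practice this would be a secret key).
    POSITIONS = [(3, 4), (4, 3), (5, 2), (2, 5), (4, 4), (3, 5), (5, 3), (5, 4)]

    def embed_watermark(gray_frame: np.ndarray, bits: list[int]) -> np.ndarray:
        """Encode each bit as the sign of one DCT coefficient."""
        coeffs = dctn(gray_frame.astype(float), norm="ortho")
        for (i, j), bit in zip(POSITIONS, bits):
            magnitude = abs(coeffs[i, j]) + STRENGTH
            coeffs[i, j] = magnitude if bit else -magnitude
        return np.clip(idctn(coeffs, norm="ortho"), 0, 255)

    def extract_watermark(gray_frame: np.ndarray) -> list[int]:
        """Read the bits back from the coefficient signs."""
        coeffs = dctn(gray_frame.astype(float), norm="ortho")
        return [1 if coeffs[i, j] > 0 else 0 for (i, j) in POSITIONS]

    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    frame = np.random.default_rng(0).integers(0, 256, (64, 64))
    marked = embed_watermark(frame, bits)
    assert extract_watermark(marked) == bits  # the mark survives the pixel round trip

Because the mark lives in low- and mid-frequency image structure rather than in exact pixel values, this family of techniques tends to survive mild re-compression, which is what the banknote analogy is getting at.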

However, video watermarking is a long and resource-intensive technology to build, and deepfakes are spreading already. Neuro watermarks only tell us that content was AI-generated, not who generated it. Whom should we prosecute when fake news spreads? It should be a person, not the company that owns the technology. After all, no one is suing Photoshop over abusive content someone created with it, because everyone understands that Photoshop is just a tool.

Now we at Reface are creating a faster and more convenient alternative to video watermarks. Each synthesized video made with our application will have its own footprint (a hash). The user can delete the video from the database, but the hash will be kept anyway.

Therefore, even if someone creates a deepfake, removes it from the platform, and uploads it to YouTube, the company can compare that video’s footprint with the one in the database and trace it back to a specific user.
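
As an illustration of how such a footprint could work, here is a minimal sketch using a simple perceptual (difference) hash over frames and a Hamming-distance match against the stored records. Reface’s actual footprint model is not public; every name and threshold below is an assumption made for the example.

    # Minimal footprint sketch (illustrative only): a perceptual hash per frame,
    # stored with the generating user, survives the video itself being deleted.
    from typing import Optional
    import numpy as np

    def frame_dhash(gray_frame: np.ndarray, size: int = 8) -> int:
        """64-bit difference hash: shrink the frame, compare horizontal neighbors."""
        h, w = gray_frame.shape
        rows = np.linspace(0, h - 1, size).astype(int)       # crude nearest-neighbor
        cols = np.linspace(0, w - 1, size + 1).astype(int)   # shrink to size x (size+1)
        small = gray_frame[np.ix_(rows, cols)]
        bits = (small[:, 1:] > small[:, :-1]).flatten()
        return int("".join("1" if b else "0" for b in bits), 2)

    def video_footprint(frames: list[np.ndarray]) -> list[int]:
        """Per-frame hashes; unlike a byte checksum, robust to re-encoding."""
        return [frame_dhash(f) for f in frames]

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    def matches(fp_a: list[int], fp_b: list[int], threshold: int = 10) -> bool:
        """Same video if every frame hash stays within a small Hamming distance."""
        return len(fp_a) == len(fp_b) and all(
            hamming(a, b) <= threshold for a, b in zip(fp_a, fp_b)
        )

    # The platform records {video_id: (footprint, user_id)} at generation time.
    # Deleting the video does not delete this record.
    footprint_db: dict[str, tuple[list[int], str]] = {}

    def register(video_id: str, frames: list[np.ndarray], user_id: str) -> None:
        footprint_db[video_id] = (video_footprint(frames), user_id)

    def trace(frames: list[np.ndarray]) -> Optional[str]:
        """Given a re-uploaded copy, find which user originally generated it."""
        fp = video_footprint(frames)
        for stored_fp, user_id in footprint_db.values():
            if matches(fp, stored_fp):
                return user_id
        return None

The design point is that the footprint is derived from the content itself, so re-encoding or re-uploading the video does not break the link back to the record stored at generation time.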

Making a model that generates a video hash is easier and faster than retrofitting watermarks. Later, the same footprint detector can be released as an antivirus tool or a browser extension that warns the user that a web page contains video generated with a neural network, along with the logo of the company that made it.

The development of AI technology and the search for new ML solutions is impossible to stop now. Given time, any motivated developer or machine learning engineer can arrive at a valid approach without outside help.

This is why it's better to offer such solutions to businesses with at least some ethical code than to ride a short wave of hype on GitHub.

Companies that work with AI and create synthetic media get a lot of backlash from the public and the mass media. However, the companies that honestly admit to using such technologies are also the ones that play fair: they go out of their way to make misuse of the resulting products impossible, or at least as hard as possible. We need to make a pact, a deal that every company creating synthetic media must include watermarks or footprints, or provide other detectors for identifying it. Then we will always be able to locate the source of “official” deepfakes.

About the Author

Alexey Chaplygin is VP of Product Delivery at Reface (an AI app for swapping faces in videos, GIFs, and images). Alexey is responsible for product development, the implementation of new ML tools, and the regulated use of synthetic media.

This article is republished from hackernoon.com
