Designing Artificial Brains Can Help Us Learn More About Real Ones

  • September 3, 2020

Despite billions of dollars spent and decades of research, computation in the human brain remains largely a mystery. Meanwhile, we have made great strides in the development of artificial neural networks, which are designed to loosely mimic how brains compute. We have learned a lot about the nature of neural computation from these artificial brains and it’s time to take what we’ve learned and apply it back to the biological ones.

Neurological diseases are on the rise worldwide, making a better understanding of computation in the brain a pressing problem. Given the ability of modern artificial neural networks to solve complex problems, a framework for neuroscience guided by machine learning insights may unlock valuable secrets about our own brains and how they can malfunction.

Our thoughts and behaviours are generated by computations that take place in our brains. To effectively treat neurological disorders that alter our thoughts and behaviours, like schizophrenia or depression, we likely have to understand how the computations in the brain go wrong.

However, understanding neural computation has proven to be an immensely difficult challenge. When neuroscientists record activity in the brain, it is often indecipherable.

In a paper published in Nature Neuroscience, my co-authors and I argue that the lessons we have learned from artificial neural networks can guide us down the right path of understanding the brain as a computational system rather than as a collection of indecipherable cells.

Brain network models

Artificial neural networks are computational models that loosely mimic the integration and activation properties of real neurons. They have become ubiquitous in the field of artificial intelligence.


To construct an artificial neural network, you start by designing the network architecture: how the different components of the network are connected to one another. Next, you define a learning goal for that architecture, such as “learn to predict what you’re going to see next.” Finally, you define a rule that tells the network how to change, using the data it receives, in order to achieve that goal.
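To make those three ingredients concrete, here is a minimal sketch, assuming PyTorch as the framework; the particular network, loss and toy data are illustrative choices of ours, not anything specified in the paper.

```python
# Illustrative sketch: the three ingredients named above.
import torch
import torch.nn as nn

# 1. Architecture: how the components of the network are connected.
model = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 32),
)

# 2. Learning goal: "predict what you're going to see next",
#    expressed here as mean-squared error between prediction and the next input.
loss_fn = nn.MSELoss()

# 3. Learning rule: how the network changes, given data, to meet the goal.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# Toy data (random, purely for illustration): pairs of (current, next) observations.
current_obs = torch.randn(128, 32)
next_obs = torch.randn(128, 32)

for step in range(100):
    prediction = model(current_obs)        # what the network expects to see next
    loss = loss_fn(prediction, next_obs)   # how far off that expectation was
    optimizer.zero_grad()
    loss.backward()                        # the learning rule adjusts every connection,
    optimizer.step()                       # with no one specifying what each unit should do
```

Notice that nothing in this recipe says what any individual unit will end up doing; that falls out of the optimization, which is the point made in the next paragraph.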

What you do not do is specify how each neuron in the network is going to function. You leave it up to the network to determine how each neuron should function to best accomplish the task. I believe the development of the brain is probably the product of a similar process, both on an evolutionary timescale and at the timescale of an individual learning within their lifetime.

Neuroscientists have mapped out the various regions of the brain, but how it computes remains a mystery. (Shutterstock)

Assigning neuron roles

This calls into question the usefulness of trying to determine the functions of individual neurons in the brain, when it is possible that these neurons are the result of an optimization process much like what we see with artificial neural networks.

The different components of artificial neural networks are often very hard to understand. There is no simple verbal or mathematical description that explains exactly what they do.

In our paper, we propose that the same holds true for the brain. Rather than trying to understand the role of each neuron, we should look at three things: the brain’s architecture, that is, its network structure; its optimization goals, whether set on an evolutionary timescale or within a person’s lifetime; and the rules by which the brain updates itself, over generations or within a lifetime, to meet those goals. By defining these three components, we may get a much better understanding of how the brain works than by trying to state what each neuron does.


Optimizing frameworks

One successful application of this approach has shown that dopamine-releasing neurons in the brain appear to encode information about unexpected rewards, such as the unexpected delivery of food. This sort of signal, called a reward prediction error, is often used to train artificial neural networks to maximize the rewards they get.

For example, by programming an artificial neural network to interpret points received in a video game as a reward, you can use reward prediction errors to train the network how to play the video game. In the real brain, as in the artificial neural networks, even if we don’t understand what each individual signal means, we can understand the role of these neurons and the neurons that receive their signals in relation to the learning goal of maximizing rewards.
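The logic can be sketched in a few lines. The snippet below is an illustration we add here, not code from the paper: it uses a temporal-difference-style reward prediction error to learn how valuable each state of a toy “game” is. The environment, the number of states, and the learning rate are all assumptions chosen for clarity.

```python
# Illustrative sketch: learning from reward prediction errors.

n_states = 5
value = [0.0] * n_states      # predicted future reward for each state of the game
alpha, gamma = 0.1, 0.9       # learning rate and discount factor (assumed values)

def play_step(state):
    """Toy game: walk along a short track; 'points' arrive only at the end."""
    next_state = state + 1
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for episode in range(200):
    state = 0
    while state < n_states - 1:
        next_state, reward = play_step(state)
        # Reward prediction error: how much better or worse the outcome was
        # than predicted, analogous to the dopamine signal described above.
        prediction_error = reward + gamma * value[next_state] - value[state]
        value[state] += alpha * prediction_error   # update driven by the error
        state = next_state

print(value)  # states nearer the reward acquire higher predicted value, and the
              # prediction error shrinks as the reward becomes expected
```

As the predictions improve, the prediction error fades away, much as dopamine responses diminish once a reward is no longer a surprise; the signal is interpretable only in relation to the goal of maximizing reward, not in terms of what any single number “means” on its own.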

Neurological disorders are the second-leading cause of death worldwide; artificial neural networks may help us understand their causes. (Shutterstock)

While current theories in systems neuroscience are beautiful and insightful, I believe a cohesive framework founded in the way evolution and learning shape our brains could fill in a lot of the blanks we have been struggling with.

To make progress in systems neuroscience, it will take both bottom-up descriptive work, such as tracing out the connections and gene expression patterns of cells in the brain, and top-down theoretical work, using artificial neural networks to understand learning goals and learning rules.

Given the ability of modern artificial neural networks to solve complex problems, a framework for systems neuroscience guided by machine learning insights may unlock valuable secrets about the human brain.

The Conversation

Blake Richards, Assistant professor, Montreal Neurological Institute and the School of Computer Science, McGill University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
