A Language Learning System That Pays Attention — More Efficiently Than Ever Before

  • February 11, 2021
MIT researchers developed a hardware and software system that could reduce the computing power, energy, and time required for text analysis and generation. Image: Jose-Luis Olivares, MIT
Human language can be inefficient. Some words are vital. Others, expendable.

Reread the first sentence of this story. Just two words, “language” and “inefficient,” convey almost the entire meaning of the sentence. The importance of key words underlies a popular new tool for natural language processing (NLP) by computers: the attention mechanism. When coded into a broader NLP algorithm, the attention mechanism homes in on key words rather than treating every word with equal importance. That yields better results in NLP tasks like detecting positive or negative sentiment or predicting which words should come next in a sentence.
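
To make the mechanism concrete, here is a minimal NumPy sketch of standard scaled dot-product attention, the computation at the heart of models like BERT and GPT-3. This is a generic illustration of the technique, not code from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention over one sentence.

    Q, K, V: (seq_len, d) arrays of query, key, and value vectors,
    one row per token. Each output row is a weighted mix of the value
    vectors; the weights are the "attention" each token pays to the others.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)      # pairwise token-to-token relevance
    probs = softmax(scores, axis=-1)   # each row sums to 1
    return probs @ V
```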

The attention mechanism’s accuracy often comes at the expense of speed and computing power, however. It runs slowly on general-purpose processors like those found in consumer-grade computers. So MIT researchers have designed a combined software-hardware system, dubbed SpAtten, specialized to run the attention mechanism. SpAtten enables more streamlined NLP with less computing power.

“Our system is similar to how the human brain processes language,” says Hanrui Wang. “We read very fast and just focus on key words. That’s the idea with SpAtten.”

The research will be presented this month at the IEEE International Symposium on High-Performance Computer Architecture. Wang is the paper’s lead author and a PhD student in the Department of Electrical Engineering and Computer Science. Co-authors include Zhekai Zhang and their advisor, Assistant Professor Song Han.

Since its introduction in 2015, the attention mechanism has been a boon for NLP. It’s built into state-of-the-art NLP models like Google’s BERT and OpenAI’s GPT-3. The attention mechanism’s key innovation is selectivity — it can infer which words or phrases in a sentence are most important, based on comparisons with word patterns the algorithm has previously encountered in a training phase. Despite the attention mechanism’s rapid adoption into NLP models, it’s not without cost.


NLP models require a hefty dose of computing power, thanks in part to the high memory demands of the attention mechanism. “This part is actually the bottleneck for NLP models,” says Wang. One challenge he points to is the lack of specialized hardware to run NLP models with the attention mechanism. General-purpose processors, like CPUs and GPUs, have trouble with the attention mechanism’s complicated sequence of data movement and arithmetic. And the problem will get worse as NLP models grow more complex, especially for long sentences. “We need algorithmic optimizations and dedicated hardware to process the ever-increasing computational demand,” says Wang.

The researchers developed SpAtten to run the attention mechanism more efficiently. Their design encompasses both specialized software and hardware. One key software advance is “cascade pruning”: eliminating unnecessary data from the calculations. Once the attention mechanism helps pick a sentence’s key words (called tokens), SpAtten prunes away the unimportant tokens and eliminates the corresponding computations and data movements. The attention mechanism also comprises multiple computation branches (called heads); as with tokens, the unimportant heads are identified and pruned away. Once discarded, the extraneous tokens and heads don’t factor into the algorithm’s downstream calculations, reducing both computational load and memory access.
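
As a rough software illustration of the pruning idea (the paper’s actual importance criterion, thresholds, and hardware pipeline differ), a token’s importance can be estimated from the attention it receives, with low scorers dropped for good:

```python
import numpy as np

def cascade_token_prune(tokens, attn_probs, keep_ratio=0.5):
    """Toy sketch of cascade token pruning (not the paper's exact criterion).

    attn_probs: (heads, seq_len, seq_len) attention weights from one layer.
    A token's importance is taken as the total attention it receives,
    summed over heads and query positions. Low-importance tokens are
    dropped and -- the "cascade" part -- never reappear in later layers,
    so every downstream computation and memory fetch on them is skipped.
    """
    importance = attn_probs.sum(axis=(0, 1))        # attention received, per token
    k = max(1, int(len(tokens) * keep_ratio))
    keep = np.sort(np.argsort(importance)[-k:])     # top-k tokens, original order
    return [tokens[i] for i in keep], keep

# Hypothetical usage, pruning after each layer so the sequence keeps shrinking:
# tokens, kept_idx = cascade_token_prune(tokens, layer_attn_probs, keep_ratio=0.7)
```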

To further trim memory use, the researchers also developed a technique called “progressive quantization.” The method lets the algorithm work on data in smaller bitwidth chunks and fetch as few of them from memory as possible. Lower data precision, corresponding to smaller bitwidth, is used for simple sentences, and higher precision is reserved for complicated ones. Intuitively, it’s like fetching the phrase “cmptr progm” as the low-precision version of “computer program.”
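
The following Python sketch shows the flavor of the idea. The bitwidths and the confidence test (the `margin` parameter) are invented for illustration and are not the paper’s exact criterion:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def quantize(x, bits):
    """Uniform quantization to the given bitwidth (illustrative only)."""
    scale = (2 ** (bits - 1) - 1) / max(np.abs(x).max(), 1e-8)
    return np.round(x * scale) / scale

def progressive_attention(scores, low_bits=4, high_bits=8, margin=0.1):
    """Sketch of progressive quantization: try low precision first.

    If the low-precision softmax is already sharply peaked (an "easy"
    sentence), keep it and never fetch the remaining bits from memory;
    otherwise fall back to higher precision. The margin test here is
    made up for illustration; the paper's criterion differs in detail.
    """
    probs = softmax(quantize(scores, low_bits))
    top2 = np.sort(probs, axis=-1)[..., -2:]          # two largest weights per row
    if np.all(top2[..., 1] - top2[..., 0] > margin):  # confident everywhere
        return probs
    return softmax(quantize(scores, high_bits))       # fetch more bits, recompute
```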


Alongside these software advances, the researchers also developed a hardware architecture specialized to run SpAtten and the attention mechanism while minimizing memory access. Their architecture design employs a high degree of “parallelism,” meaning multiple operations are processed simultaneously on multiple processing elements, which is useful because the attention mechanism analyzes every word of a sentence at once. The design enables SpAtten to rank the importance of tokens and heads (for potential pruning) in a small number of computer clock cycles. Overall, the software and hardware components of SpAtten combine to eliminate unnecessary or inefficient data manipulation, focusing only on the tasks needed to complete the user’s goal.

The philosophy behind the system is captured in its name. SpAtten is a portmanteau of “sparse attention,” and the researchers note in the paper that SpAtten is “homophonic with ‘spartan,’ meaning simple and frugal.” Wang says, “That’s just like our technique here: making the sentence more concise.” That concision was borne out in testing.

The researchers coded a simulation of SpAtten’s hardware design — they haven’t fabricated a physical chip yet — and tested it against competing general-purpose processors. SpAtten ran more than 100 times faster than the next-best competitor (a TITAN Xp GPU). Further, SpAtten was more than 1,000 times more energy-efficient than its competitors, indicating that SpAtten could help trim NLP’s substantial electricity demands.

The researchers also integrated SpAtten into their previous work to help validate their philosophy that hardware and software are best designed in tandem. They built a specialized NLP model architecture for SpAtten using their Hardware-Aware Transformer (HAT) framework and achieved a roughly twofold speedup over a more general model.


The researchers think SpAtten could be useful to companies that use NLP models for the majority of their artificial intelligence workloads. “Our vision for the future is that new algorithms and hardware that remove the redundancy in languages will reduce cost and save on the power budget for data center NLP workloads,” says Wang.

On the opposite end of the spectrum, SpAtten could bring NLP to smaller, personal devices. “We can improve the battery life for mobile phones or IoT devices,” says Wang, referring to internet-connected “things” — televisions, smart speakers, and the like. “That’s especially important because in the future, numerous IoT devices will interact with humans by voice and natural language, so NLP will be the first application we want to employ.”

Han says SpAtten’s focus on efficiency and redundancy removal is the way forward in NLP research. “Human brains are sparsely activated [by key words]. NLP models that are sparsely activated will be promising in the future,” he says. “Not all words are equal — pay attention only to the important ones.”

 

By Daniel Ackerman
Source MIT News
