
CCMatrix: A Billion-scale Bitext Data Set For Training Translation Models

CCMatrix is the largest data set of high-quality, web-based bitexts for training translation models. With more than 4.5 billion parallel sentences in 576 language pairs pulled from snapshots of the CommonCrawl public data set, CCMatrix is more than 50 times larger than the WikiMatrix corpus that we shared last year. Gathering a data set of this size required modifying the bitext mining approach we used for WikiMatrix: we assume that the translation of any given sentence could be found anywhere in CommonCrawl, which functions as an open archive of the internet. To address the significant computational challenge of comparing billions of sentences to determine which ones are mutual translations, we used massively parallel processing, along with our highly efficient FAISS library for fast similarity search.
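The post does not spell out the scoring function, but the accompanying paper describes margin-based mining over multilingual LASER sentence embeddings rather than raw cosine similarity. As a sketch, the ratio-margin score for a candidate pair (x, y), where NN_k(x) denotes the k nearest neighbors of x in the other language, has the form:

\[
\mathrm{score}(x, y) \;=\; \frac{\cos(x, y)}{\sum_{z \in \mathrm{NN}_k(x)} \frac{\cos(x, z)}{2k} \;+\; \sum_{z \in \mathrm{NN}_k(y)} \frac{\cos(y, z)}{2k}}
\]

Pairs whose score clears a threshold are kept as mined bitext; the threshold trades precision against recall.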

We’re sharing details about how we created CCMatrix, along with the tools other researchers need to reproduce our results and use this corpus in their own work. To demonstrate the value of automatically mining such a large number of parallel texts, we trained neural machine translation (NMT) systems on CCMatrix and compared their performance with established baselines. The resulting models outperformed the state-of-the-art single NMT systems evaluated in the Conference on Machine Translation (WMT’19) competition in four language directions, including Russian to English, despite being trained only on mined translations rather than human-provided ones. When evaluated on the TED corpus, CCMatrix also enabled us to significantly improve NMT performance for many language pairs compared with other approaches.

What it does:

Parallel texts, which pair sentences in one language with their corresponding translations in another, are the backbone of most NMT training methods. And while more bitext examples typically lead to better translation performance, gathering large parallel corpora across a large number of languages is a resource-intensive task. Our method automates and parallelizes this bitext mining process, handling batches of 50 million examples at a time on an 8-GPU server. Using the FAISS library, we calculate the distance between all the sentence embeddings in each batch, with the calculations performed in parallel. This enables rapid extraction of sentence pairs drawn from a greater variety of publicly available texts than similar data sets, including our Wikipedia-based WikiMatrix.
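As a rough illustration of how such a FAISS-based search fits together, the sketch below mines candidate pairs between two sets of sentence embeddings using exact inner-product search and the ratio-margin score above. The random embeddings, batch sizes, and k value are stand-ins for illustration only; the actual CCMatrix pipeline works on LASER embeddings with compressed, sharded indexes across many GPUs.

```python
# Hypothetical sketch of margin-based bitext mining with FAISS.
# Random vectors stand in for LASER sentence embeddings; sizes and k
# are illustrative, not the values used for CCMatrix.
import numpy as np
import faiss

def mine_pairs(src_emb, tgt_emb, k=4):
    """Return (src_idx, tgt_idx, margin_score) for the best target match
    of each source sentence, scored with the ratio-margin criterion."""
    src = np.ascontiguousarray(src_emb, dtype="float32")
    tgt = np.ascontiguousarray(tgt_emb, dtype="float32")
    # Normalize so inner product equals cosine similarity.
    faiss.normalize_L2(src)
    faiss.normalize_L2(tgt)

    # Exact k-NN search in both directions.
    idx_tgt = faiss.IndexFlatIP(tgt.shape[1])
    idx_tgt.add(tgt)
    sim_fwd, nn_fwd = idx_tgt.search(src, k)   # src -> tgt neighbors

    idx_src = faiss.IndexFlatIP(src.shape[1])
    idx_src.add(src)
    sim_bwd, _ = idx_src.search(tgt, k)        # tgt -> src neighbors

    # Average similarity to each sentence's k nearest neighbors.
    avg_fwd = sim_fwd.mean(axis=1)             # per source sentence
    avg_bwd = sim_bwd.mean(axis=1)             # per target sentence

    pairs = []
    for i in range(src.shape[0]):
        j = int(nn_fwd[i, 0])                  # best target candidate
        cos = float(sim_fwd[i, 0])
        # Ratio margin: cosine divided by the mean of both neighborhoods.
        margin = cos / ((avg_fwd[i] + avg_bwd[j]) / 2.0)
        pairs.append((i, j, margin))
    # Highest-scoring pairs first; a threshold would be applied in practice.
    return sorted(pairs, key=lambda p: -p[2])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.standard_normal((1000, 1024))
    tgt = rng.standard_normal((1200, 1024))
    print(mine_pairs(src, tgt)[:3])
```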


CCMatrix’s parallelized approach to bitext mining maps the similarities between millions of sentences in many different languages at once, searching for pairs that can function as training examples for translation models.

Why it matters:

CCMatrix enables the NMT research community to leverage much larger bitext data sets than were previously available for scores of language pairs. This can accelerate the creation of more effective NMT models that work with more languages, particularly low-resource languages that have relatively limited corpora.

Because of its large scale and its use of a broad array of public texts, we believe that CCMatrix will become one of the most commonly used resources for building and evaluating systems across the field of NMT. We also hope that the technique we used to create CCMatrix will help the research community develop new ways to create large-scale data sets that will improve translation tools used by people around the globe.

Get it on GitHub:

Paper: https://arxiv.org/abs/1911.04944

GitHub: https://github.com/facebookresearch/LASER/tree/master/tasks/CCMatrix

 

Source: Facebook AI Blog

