Over the last century, scientists have developed methods to map the structures within the Earth’s crust, in order to identify resources such as oil reserves, geothermal sources, and, more recently, reservoirs where excess carbon dioxide could potentially be sequestered. They do so by tracking seismic waves that are produced naturally by earthquakes or artificially via explosives or underwater air guns. The way these waves bounce and scatter through the Earth can give scientists an idea of the type of structures that lie beneath the surface.
There is a narrow range of seismic waves — those that occur at low frequencies of around 1 hertz — that could give scientists the clearest picture of underground structures spanning wide distances. But these waves are often drowned out by Earth’s noisy seismic hum, and are therefore difficult to pick up with current detectors. Deliberately generating low-frequency waves would require pumping enormous amounts of energy into the ground. For these reasons, low-frequency seismic waves have largely gone missing in human-generated seismic data.
Now MIT researchers have come up with a machine learning workaround to fill in this gap.
In a paper appearing in the journal Geophysics, they describe a method in which they trained a neural network on hundreds of different simulated earthquakes. When the researchers presented the trained network with only the high-frequency seismic waves produced from a new simulated earthquake, the neural network was able to imitate the physics of wave propagation and accurately estimate the quake’s missing low-frequency waves.
The new method could allow researchers to synthesize the low-frequency waves that are missing from human-generated seismic data, which can then be used to more accurately map the Earth’s internal structures.
“The ultimate dream is to be able to map the whole subsurface, and be able to say, for instance, ‘this is exactly what it looks like underneath Iceland, so now you know where to explore for geothermal sources,’” says co-author Laurent Demanet, professor of applied mathematics at MIT. “Now we’ve shown that deep learning offers a solution to be able to fill in these missing frequencies.”
Demanet’s co-author is lead author Hongyu Sun, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences.
Speaking another frequency
A neural network is a set of algorithms modeled loosely on the neural workings of the human brain. The algorithms are designed to recognize patterns in data that are fed into the network, and to cluster these data into categories, or labels. A common example of a neural network involves visual processing: a model is trained to classify an image as either a cat or a dog, based on patterns it has learned from thousands of images that are labeled as cats, dogs, and other objects.
Sun and Demanet adapted a neural network for signal processing, specifically, to recognize patterns in seismic data. They reasoned that if a neural network was fed enough examples of earthquakes, and the ways in which the resulting high- and low-frequency seismic waves travel through a particular composition of the Earth, the network should be able to, as they write in their paper, “mine the hidden correlations among different frequency components” and extrapolate any missing frequencies if the network were only given an earthquake’s partial seismic profile.
The researchers set out to train a convolutional neural network, or CNN, a class of deep neural networks that is often used to analyze visual information. A CNN generally consists of an input layer and an output layer, with multiple hidden layers in between that process inputs to identify correlations among them.
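The layered structure described above can be illustrated with a minimal one-dimensional convolution of the kind a CNN stacks to process a seismic trace. This is a sketch, not the paper's architecture: the filter weights are hand-picked rather than learned, and the "trace" is a toy signal.

```python
# Minimal 1-D convolutional forward pass: input -> two hidden
# conv+ReLU layers -> one pooled output. Weights are illustrative only.

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution: slide the kernel along the signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Standard nonlinearity applied between hidden layers."""
    return [max(0.0, x) for x in xs]

# Toy input "trace" of 8 samples.
trace = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]

hidden1 = relu(conv1d(trace, [0.5, 0.5]))     # smoothing filter
hidden2 = relu(conv1d(hidden1, [1.0, -1.0]))  # difference filter
output = sum(hidden2) / len(hidden2)          # crude pooling to one number
```

In a trained CNN the kernels are learned from data, and each hidden layer's filters pick out progressively more abstract correlations in the input.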
Among their many applications, CNNs have been used to generate visual or auditory “deepfakes” — content that has been extrapolated or manipulated through deep learning to make it seem, for example, as if a woman were talking with a man’s voice.
“If a network has seen enough examples of how to take a male voice and transform it into a female voice or vice versa, you can create a sophisticated box to do that,” Demanet says. “Whereas here we make the Earth speak another frequency — one that didn’t originally go through it.”
The researchers trained their neural network with inputs that they generated using the Marmousi model, a complex two-dimensional geophysical model that simulates the way seismic waves travel through geological structures of varying density and composition.
In their study, the team used the model to simulate nine “virtual Earths,” each with a different subsurface composition. For each Earth model, they simulated 30 different earthquakes, all with the same strength, but different starting locations. In total, the researchers generated hundreds of different seismic scenarios. They fed the information from almost all of these simulations into their neural network and let the network find correlations between seismic signals.
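The size of the training corpus described above follows from simple counting: 9 virtual Earth models, each with 30 earthquakes, gives 270 simulated scenarios. A sketch of how such a corpus might be enumerated is below; the `simulate` function is a hypothetical placeholder for a Marmousi-style wave-propagation run.

```python
# Hedged sketch of assembling the training corpus: 9 Earth models,
# 30 source locations each. simulate() is a stand-in, NOT the actual
# Marmousi solver; here it only records which (model, source) pair ran.

def simulate(model_id, source_id):
    """Placeholder for an expensive wave-propagation simulation."""
    return {"model": model_id, "source": source_id}

n_models, n_quakes = 9, 30
scenarios = [simulate(m, s)
             for m in range(n_models)
             for s in range(n_quakes)]

print(len(scenarios))  # → 270, the "hundreds of different seismic scenarios"
```

Nearly all of these scenarios would go into training, with a held-out remainder reserved for testing, as the article describes next.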
After the training session, the team presented the neural network with a new earthquake that they had simulated in the Earth model but had not included in the original training data. They supplied only the high-frequency part of the quake’s seismic activity, in the hope that the neural network had learned enough from the training data to infer the missing low-frequency signals from the new input.
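What "only the high-frequency part" means can be illustrated with a crude band split. The sketch below uses a moving-average low-pass filter (an assumption for illustration, not the paper's filtering) to separate a synthetic trace into a slow component and a fast residual; at test time the network sees only the fast residual and must infer the slow part.

```python
# Illustrative band split of a seismic-style trace. The moving average is
# a crude low-pass filter chosen for simplicity, not the paper's method.
import math

def low_pass(trace, width=5):
    """Centered moving average with edges clamped to the signal bounds."""
    n, half = len(trace), width // 2
    out = []
    for i in range(n):
        window = trace[max(0, i - half):min(n, i + half + 1)]
        out.append(sum(window) / len(window))
    return out

# Synthetic trace: a slow 0.5 Hz swell plus a faster 8 Hz wiggle,
# sampled at 50 samples per second for 4 seconds.
t = [i / 50.0 for i in range(200)]
trace = [math.sin(2 * math.pi * 0.5 * x) +
         0.3 * math.sin(2 * math.pi * 8.0 * x) for x in t]

low = low_pass(trace, width=11)                # what the network must recover
high = [y - lo for y, lo in zip(trace, low)]   # the network's input at test time
```

By construction `low + high` reconstructs the trace exactly, which is why recovering the low band from the high band alone is equivalent to filling in the missing frequencies.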
They found that the neural network produced the same low-frequency values that the Marmousi model originally simulated.
“The results are fairly good,” Demanet says. “It’s impressive to see how far the network can extrapolate to the missing frequencies.”
As with all neural networks, the method has its limitations. Specifically, the neural network is only as good as the data that are fed into it. If a new input is wildly different from the bulk of a network’s training data, there’s no guarantee that the output will be accurate. To contend with this limitation, the researchers say they plan to introduce a wider variety of data to the neural network, such as earthquakes of different strengths, as well as subsurfaces of more varied composition.
As they improve the neural network’s predictions, the team hopes to be able to use the method to extrapolate low-frequency signals from actual seismic data, which can then be plugged into seismic models to more accurately map the geological structures below the Earth’s surface. The low frequencies, in particular, are a key ingredient for solving the big puzzle of finding the correct physical model.
“Using this neural network will help us find the missing frequencies to ultimately improve the subsurface image and find the composition of the Earth,” Demanet says.
This research was supported, in part, by Total SA and the U.S. Air Force Office of Scientific Research.
Source: MIT News Office