Zachary Ross

Assistant Professor of Geophysics, William H. Hurt Scholar, Caltech

By David Zierler, Director of the Caltech Heritage Project
September 30, October 26, 2022

DAVID ZIERLER: This is David Zierler, Director of the Caltech Heritage Project. It is Friday, September 30th, 2022. I am very happy to be here with Professor Zachary Ross. Zach, thanks so much for having me in your office.

ZACHARY ROSS: Thank you.

ZIERLER: To start, would you please tell me your title and affiliation here at Caltech?

ROSS: Yes, I'm an Assistant Professor of Geophysics.

ZIERLER: What year did you come to Caltech?

ROSS: 2016.

ZIERLER: Did you come straight onto the faculty, or were you here as a postdoc first?

ROSS: No, I was here as a postdoc for three years.

ZIERLER: This is a history that would predate you, but how often is it that postdocs convert to faculty? Is this like a rare event? Do postdocs come on the hope that maybe they will be selected to join the faculty? How does that generally work at the Seismo Lab?

ROSS: Well, a few have. Mark Simons was. I think a lot of the people that have joined the faculty here have had some connection to the Institute, whether it's at a postdoctoral level or undergrad or graduate level.

ZIERLER: Just a snapshot in time: what are you currently working on? Too much, I'm sure.

ROSS: Yes. We work a lot on trying to understand earthquake-based systems.

ZIERLER: What does that mean, an "earthquake-based system"?

ROSS: It means systems that are capable of producing earthquakes. That can include fault zones, volcanoes, or even induced-earthquake settings that are not just of natural origin.

ZIERLER: Are there places on Earth that we understand to be totally aseismic, that do not fit in this category that you're talking about?

ROSS: Generally, yes, there are places that are fairly stable. Some parts of Europe have essentially no activity, as far as we can tell, at least on time scales of decades. But there is actually a lot of micro-earthquake activity even in places where you wouldn't necessarily expect it to be happening.

ZIERLER: What are the areas on the planet that you focus on? Are there particular places where you can extrapolate findings, or do you take a truly global perspective in the things you look at?

ROSS: I do quite a bit with the data from California, and really Southern California, so I work a lot with the data that we collect here as part of the Southern California Seismic Network. That's a real-time monitoring system that has longstanding origins here. I do a lot with that data to understand what is in it, and I essentially try to use all that information—because most of the time, it's not large events—to better understand earthquake processes, basically.

ZIERLER: To get back to my question about extrapolatability, the things that you're studying locally in Southern California may tell us things globally about earthquakes?

ROSS: It can, but I think there's a lot of diversity in the processes that are responsible for earthquakes, so not necessarily everything that we learn here may be directly applicable to everywhere else, for example. The rock types and things like that, which may be very unique to a particular region, can have an extraordinary influence on what happens.

ZIERLER: What about plate boundaries in Southern California? Are they sufficiently unique that that might be a block to extrapolating findings from here?

ROSS: There are aspects that do seem to be relatively unique, I would say. At least relative to what is currently monitored in other places—because California is very well-instrumented—there are aspects of the San Andreas system that look like they are probably unusual, or maybe even anomalous, at a global level.

ZIERLER: I want to ask questions that focus on your harnessing of computational power, but in historical perspective. As I've come to appreciate, in seismology, going back 60 years there was some advance in computation, either hardware or software, that really revolutionized seismology, that allowed for not just new findings but new questions to be raised. If we can go back roughly ten years, maybe, when you were in graduate school, thinking about your research and your dissertation and things like that—so roughly circa 2012, in that era—what were some of the advances, both on the hardware side and the software side, that made possible new areas of inquiry that might not have been possible 10, 20, 30, 40 years ago?

ROSS: My perspective is that most of the advances have come on the software side in the last decade. For many years, it was viewed that most aspects of earthquake monitoring were more or less sufficient, and that we were doing about as good of a job as we could be.

ZIERLER: How do you define sufficient? What is that level of satisfaction?

ROSS: Expert judgment. [laughs] There are really no specific criteria that people used to establish that. To go back 15 years, we weren't even archiving the continuous waveform data that we archive today.

ZIERLER: What is continuous waveform data?

ROSS: That means that all the seismic data that is flowing in here every second, which is mostly noise, we were not archiving that.

ZIERLER: That's a storage issue?

ROSS: It was a storage issue back then, but it was generally believed that there really wasn't anything too important in it, and that we were already extracting pretty much all the useful signal out of it. It has since become clear that that was not the case.

ZIERLER: What's in that noise that's interesting?

ROSS: We have tiny earthquakes that are happening all the time, and not just tiny earthquakes, but other information that can be extracted about the structure of the Earth. That forms the vast majority of the available information on earthquake processes that we can obtain.

ZIERLER: Just a nomenclature question—tiny earthquakes—I've heard Tom Heaton talk about gentle earthquakes. I've heard Allen Husker talk about slow earthquakes. Now I hear "tiny earthquakes." Are they different categories? Do they all mean the same thing?

ROSS: No, they don't mean the same thing. These are regular earthquakes that are just smaller. There's nothing unusual about these compared with some of the other ones that you've listed; they're literally just smaller events. Earthquakes have a well-known property that was established here about 100 years ago, which is that every time you go down a magnitude unit, there's roughly ten times more of them. So, for every magnitude 5 per year, you get about 10 magnitude 4's, and 100 magnitude 3's, and so forth. To the best that we can tell to this day, that initial observation never stops, so it continues all the way down and below essentially the noise level of our instrumentation, to the point where you get to negative magnitudes and so forth, so magnitudes of -1, -2.
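A minimal sketch of the scaling Ross describes, assuming the Gutenberg-Richter relation log10 N(>=M) = a - b*M with b roughly 1; the constant a below is a made-up regional value, used only for illustration:

```python
# Gutenberg-Richter frequency-magnitude relation: log10 N(>=M) = a - b*M.
# With b ~ 1, each step down in magnitude gives roughly ten times more events.
a, b = 5.0, 1.0  # hypothetical constants for illustration only

for M in [5, 4, 3, 2, 1, 0, -1]:
    n_per_year = 10 ** (a - b * M)
    print(f"M >= {M:>2}: ~{n_per_year:,.0f} events per year")
```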

ZIERLER: What does that mean? How do you have a negative magnitude?

ROSS: Well, because Richter just calibrated zero as some arbitrary threshold that he thought was the lowest possible value, but it turns out that within all of the modern sensing capabilities, we can go well below that.

ZIERLER: I understand. So we no longer say "the Richter scale"; it's just the magnitude? But we still use his baseline of zero to define negative and positive scaling?

ROSS: We do, but that statement is not correct; we still do use the Richter scale quite regularly. In fact, almost everything that we do here for magnitudes smaller than about three is all still using the Richter scale.

ZIERLER: I meant colloquially. You don't say in communications, "Richter scale," right?

ROSS: We use the symbol for it, which is ML, the local magnitude, and that was what he defined. We still use it in routine publications and everything. There's a misunderstanding, I think, in the sense that when people communicate to the press and so forth today, they're really only talking about large-magnitude events, which is what people obviously care about. For anything pretty much large enough for someone to feel, we use the moment magnitude scale now, but for everything smaller than that, it's still the original scale.

ZIERLER: There's something counterintuitive about talking about a negative magnitude. It sounds, to an outside observer, that it's smaller than a non-existent earthquake, and that's clearly not the case. What's the utility of hanging onto that zero baseline from 100 years ago?

ROSS: It's just that everything that we do today is calibrated against that.

ZIERLER: It's an inertia thing, more than anything else?

ROSS: Yes, but there's nothing specific to it in the first place. It's not like there is an ideal value that you would set to zero, and from there it would make sense. The idea of this power-law type of scaling is that, in theory, it should just keep going down forever.

ZIERLER: Like an Occam's razor; there's no zero point?

ROSS: That's what I'm saying. To the best that we can tell today, based on what we're able to monitor, there is no lower limit to that.

ZIERLER: Is that because these are not static systems, because zero would suggest a stasis that doesn't exist? Is that what it's about?

ROSS: The argument is that it's a fractal-type system, which means that there's no intrinsic length scale in the process; it just keeps repeating at all scales, and so the more that you zoom in, the more that you see. And, because it's independent of any particular scale, it doesn't matter. [laughs] Physically, there has to be some lower limit, but whether it's the size of my hand or whatever is anyone's guess at that point. Even people who try to simulate earthquakes in the laboratory with rock experiments can generate little lab earthquakes that are around -7 in magnitude. They're looking at the little grains of the rocks moving around on each other at the micron level. Some people think that those are exact analogs of what we see at the larger scale.

ZIERLER: To go back to the original question, you mentioned it was really the software that allowed for this analysis of tiny earthquakes. What was the software? And, if the storage was an issue previously, why wouldn't it also be the hardware, just for allowing the storage of all of this data?

ROSS: It wasn't beyond the capabilities; it was just expensive. There are places that have archived a lot of their data, if not all of it, back to the mid-nineties. Definitely that has been possible. We did have a lot of instruments in the 2000s, more than any other seismic network in the U.S., so it would have been quite expensive, but I think not prohibitively so, had people realized the value of it.

ZIERLER: Just to get a sense of scale, when you talk about continuous archiving, this is huge amounts of data?

ROSS: By modern standards, it's not that large. Today, we produce something like 15 terabytes of data per year, so it's really not anything enormous.

ZIERLER: The software, is this mostly about sifting through the data and pulling out what's interesting from the noise?

ROSS: Yes.

ZIERLER: To go back again to the beginning of graduate school, what was software capable of doing at that point, that it could pull this interesting stuff out? Is it AI? Is that basically what it's about?

ROSS: It's a big part of it. That's not the only piece to all this, but I would say that there has been a general lack of technology for doing these earthquake monitoring tasks. What I mean by this is that historically, people have relied on seismic networks like ours to produce the earthquake catalogs that they work with for the vast majority of their analyses. We have a whole software pipeline that has been developed over decades, and the technology basically dates back to the late 1970s, which does this automated initial analysis of the data. That means detecting the earthquakes as they happen, from the largest to the smallest possible, locating them, calculating their magnitudes, and measuring other important properties of the data as it becomes available. This was the source of most of the information that people had available to them. The software systems that we use here are extremely difficult to transport to other sensors and things like that. If today I said that I wanted to go put 20 sensors in the field, it would be almost impossible to set up that type of workflow to sift through all of the data from this new system. We basically calibrate all this stuff over many, many years to get it working well, and reliably so, for the network that we have. Researchers in particular have dealt with a major barrier in being able to do this basic task, which is to build earthquake catalogs from nothing, just starting with the raw data that's coming in. Like I said, the algorithms that our network uses on its current public system still are from the late 1970s. That is something that I am working on improving in a development mode here, but the stuff that is currently operational and posting to the public has basically not changed in 40 years.
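For context, the late-1970s-era detection technology Ross refers to is commonly a short-term-average/long-term-average (STA/LTA) trigger. A minimal sketch using ObsPy's implementation and its bundled example data follows; the filter band, window lengths, and thresholds are chosen only for illustration and are not the network's operational settings:

```python
# Classic rule-based event detection with an STA/LTA trigger (ObsPy).
from obspy import read
from obspy.signal.trigger import classic_sta_lta, trigger_onset

st = read()                      # ObsPy's bundled example waveforms
tr = st[0]
tr.detrend("demean")
tr.filter("bandpass", freqmin=1.0, freqmax=10.0)

df = tr.stats.sampling_rate
# Ratio of a short-term average to a long-term average of the signal amplitude.
cft = classic_sta_lta(tr.data, int(0.5 * df), int(10.0 * df))

# Declare a detection when the ratio exceeds 3.5; end it when it drops below 0.5.
for on, off in trigger_onset(cft, 3.5, 0.5):
    print("trigger on:", tr.stats.starttime + on / df,
          "off:", tr.stats.starttime + off / df)
```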

ZIERLER: Is this a budgetary thing? Has there not been scientific interest until recently?

ROSS: No, there were not really significant algorithmic advances until about ten years ago that would have done a better job than the existing algorithms. This has historically been a major barrier to the science. Companies started to develop new types of sensing technologies, cheaper and more portable sensors, things like this, so you can go put out a bunch of sensors to test a certain hypothesis or to collect aftershocks, that kind of thing. But I noticed early on that it was remarkably difficult just to take that data and even get the basic stuff out of it. It would pretty much require you to buy some kind of commercial software, and a PhD student would spend most of their PhD learning how to use that software to get to the point where they could even start doing the science that they wanted to do with it. It was like a four-plus-year effort, extremely involved, and it didn't really even perform all that well at the end of it. In terms of what you got out of it, it just was really not that much. A lot of what I have done over the years was trying to address this challenge, because I recognized early on that all the things that I wanted to do scientifically were essentially not possible.

ZIERLER: Were you the graduate student who spent the four years without much result at the end? Was that part of it?

ROSS: Not specifically, because I was addressing this problem at the same time. I recognized the issue, and I started to put a lot of pieces together in terms of trying to advance what we could do in this area. A lot of what my PhD was about was developing new techniques for large-scale automated processing of the seismic data.

ZIERLER: You mentioned algorithms. In the Venn diagrams of algorithms and AI, where is the distinction, and where is the overlap, in just your use of the terms?

ROSS: The distinction is that modern AI, as most people would refer to it, is learning-based, which means that the algorithms learn from data and then essentially make decisions based on what they have learned. Prior to about 2017, nearly all of the algorithms that people had developed in the field, except for a handful of them, including the stuff that I had developed during my PhD, which was basically state of the art at that time, were not learning-based; they were rule-based.

ZIERLER: What's the difference? What does that mean, rule-based versus learning-based?

ROSS: It means that you come up with a set of ad hoc rules where if you follow this step and this step and this step, you achieve success. And so, an expert that is very familiar with the problem can hopefully come up with some set of steps that would achieve that.

ZIERLER: The data would probably be less interesting, would yield fewer surprises, in a rules-based system?

ROSS: Well, yes, there would be fewer surprises, simply because you have restricted what it can do. You know exactly what it will do and how the decisions are made, because each step is specified. In a learning-based system, which is what the whole world has shifted over to since 2012, not just in seismology but in every field, it is all driven by the data. Most of this is based on the fact that you have ground truth, and you have a system that attempts to mimic that type of decision-making. You know what the true answer is, and so you basically try to come up with a system that can make those decisions. The learning works by optimizing the components of the system such that it gets closer and closer to being able to duplicate that decision-making process. You're optimizing that until it gets to the point that it can do as well as a human can, or better. So, there have been a lot of different types of advances in this space over the last decade, and that has really just now blown the door open on all of it.
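A minimal, generic sketch of the learning-based setup Ross describes: a small 1-D convolutional network trained against labeled examples, here with random placeholder waveforms and labels standing in for real analyst picks. This illustrates the general idea only, not any specific published model:

```python
# Generic learning-based classifier for short three-component waveform windows
# (e.g. P arrival, S arrival, or noise), trained on labeled "ground truth".
import torch
import torch.nn as nn

n_samples = 400  # assumed 4 s windows at 100 Hz, for illustration

model = nn.Sequential(
    nn.Conv1d(3, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Flatten(),
    nn.Linear(32 * (n_samples // 4), 3),  # logits for P / S / noise
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch standing in for analyst-labeled waveform windows.
waveforms = torch.randn(64, 3, n_samples)
labels = torch.randint(0, 3, (64,))

for step in range(5):  # in practice: many epochs over large labeled datasets
    logits = model(waveforms)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(step, float(loss))
```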

ZIERLER: To go back to this idea that there was a motivation to work through these problems and to get to more interesting results faster than what would otherwise consume a PhD life, what was the theoretical basis that that endeavor would be worth it? What were some of the big unanswered questions in the field that compelled you to say, "It's worth developing better tools so that we can examine these problems"? What was the motivating factor in that regard?

ROSS: The issue is even more general than that. It's not even about specific big questions. It's literally just every question that you would want to answer, seismologically. The earthquake data form the vast majority of information that we have available to us—

ZIERLER: About how earthquakes work?

ROSS: How the Earth works. Other than a handful of other fairly not-so-important types of data, I would say, which are focused mainly on the shallow parts of the Earth, everything we know about the deep interior of the Earth pretty much comes from seismology. And so, anything that you want to do, if you want to image the Earth with some kind of tomography study, historically you start with an earthquake catalog. That means that you have the locations and the times that the seismic waves hit the different sensors across the globe, or wherever they are, and you use that information to basically focus it back to where the waves originate and track the rays as they move through the Earth. That's how you image everything. If you don't have this catalog of information, you can't even do that. It doesn't matter what the scientific question was that you were pursuing; you can't even do the imaging thing, which is what the whole point of the project is. This underlies pretty much every type of downstream seismological task that there is.
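A minimal sketch of the basic catalog step Ross describes, locating an event from the times its waves hit different sensors. The station coordinates, arrival times, and uniform velocity below are made-up illustrative values, not real network data:

```python
# Locate an epicenter from P-wave arrival times by grid search,
# assuming a uniform velocity (hypothetical values throughout).
import numpy as np

stations = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 40.0], [45.0, 35.0]])  # km
t_obs = np.array([3.1, 5.6, 6.9, 9.0])  # observed P arrival times, s
v_p = 6.0                               # assumed uniform P velocity, km/s

xs = np.linspace(-10, 60, 141)
ys = np.linspace(-10, 60, 141)

best_misfit, best_loc = np.inf, None
for x in xs:
    for y in ys:
        t_travel = np.hypot(stations[:, 0] - x, stations[:, 1] - y) / v_p
        t0 = np.mean(t_obs - t_travel)                  # best-fitting origin time
        misfit = np.sum((t_obs - (t_travel + t0)) ** 2)
        if misfit < best_misfit:
            best_misfit, best_loc = misfit, (x, y, t0)

print("epicenter (km) and origin time (s):", best_loc, "misfit:", best_misfit)
```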

ZIERLER: This means, Zach, if I understand correctly, that questions that seismologists were pursuing 60, 50, 40 years ago, you did not look at any of that as resolved. All of those issues can be revisited through this prism?

ROSS: Yeah. It was just that everybody did this stuff by hand. It wasn't that it was not possible; it was just that most people were using a very tiny amount of the data, because of limitations. Most people, even ten years ago, were just literally having a student do all this manually. It's not fun. You figure it out after about two hours [laughs] and there's really nothing new to be learned beyond that.

ZIERLER: If we can clarify, earlier you emphasized tiny earthquakes, simply because we didn't know about them, we weren't cataloging them, we didn't have the software to analyze them. More generally, though, as the software developed, as you were working on these problems, just so I understand correctly, it was not just tiny earthquakes for which all of this work was relevant. It was—everything.

ROSS: Yes, it's everything. It's just that my particular interests have been with those because—

ZIERLER: It's unexplored territory?

ROSS: It is, but it's just that they represent the vast majority of the information available, just because they happen all the time. Every couple of minutes, there's a -1 or so earthquake happening somewhere in California. That's just during normal times. If there's a big aftershock sequence, they might be happening every few seconds, for years. It's not just random; there are clear patterns in their positions in space, and when they happen. It's a whole sequence of activity where one big event is just a single thing within this much larger cascade.

ZIERLER: I just thought of a metaphor. The public perception of an earthquake is that it is rare and devastating, like a heart attack. But what you're talking about is something that happens constantly, and it's not devastating. Would that be more like a heartbeat? Is that a way of thinking about it? It's just the normal—it's just how the Earth works?

ROSS: Sure, yes. It's just how the Earth works. Sometimes they are accompanied by large events that you feel, and other times, they are not. We have entire sequences that don't ever produce anything that you can feel.

ZIERLER: Before we get to the science, just again to go back to the public policy part, my default assumption would be that the public won't care about this, because it doesn't affect our day-to-day life. But is that true, actually? Are there things to care about with tiny earthquakes?

ROSS: Yes, I think that there are, because what we're able to do now is see so much more about the causes behind these different types of sequences. One example is the earthquake swarms we get down near the border with Mexico.

ZIERLER: Are swarms the same as clustering, or is that different?

ROSS: It depends exactly what you mean. An earthquake cluster is generally some kind of—I don't prefer that term. We have aftershock sequences, and we have swarms, and then some hybrids of those. Earthquake swarms are generally short-lived sequences, and they are very violent when they happen. For example, they will pop off a bunch of moderate-sized earthquakes in the Imperial Valley down near El Centro, California, within a few days, and then they just shut off. Sometimes you might have 20 magnitude 4's within three days or so, and the public is wondering what in the world is going on here. Except we've seen this a bunch of times before, and we know what to generally expect in terms of that behavior. As for the processes that create those sequences, what is actually happening down in the Earth there, we can now see, with a lot of these small events that basically fill in the gaps between all the larger ones, that in some cases there are what we think are pockets of some kind of hydrothermal fluids, which is basically heated water and other minerals, depressurizing and flowing through the Earth. As that happens, it triggers lots of small earthquakes. If you only saw the large ones, you would just see that there is some kind of activity, but now, with the kind of resolution that we have, we can actually see that there is much more order to the whole thing, and that it's probably not going to be a problem. If we've seen this now in 100 different cases, and most of the time it's totally fine, it means that, at least as far as the public is concerned, we can make more concrete statements about what is actually happening, rather than just, "There's a 5% chance of a larger event happening here. We don't really know what's going on." It allows us to be much more specific with the language that we can use. Another example is that we've now seen that there are sometimes these slow earthquakes, where a fault just starts sliding maybe a few centimeters over a couple of days, and as it does this, it triggers lots of earthquakes beneath it. That is what seems to be happening in the Imperial Valley. In all of these cases so far, it pretty much goes away within a few days. So, if we see the same thing again, we can make a more informed statement about what is actually happening. I don't know how much we can actually say, at least at this point, as far as the hazard level goes, in terms of forecasting it specifically, but we can say much more so that someone out there understands what is happening, and I think that is a step forward from where it was even 10, 15, 20 years ago.

ZIERLER: We'll get to the forecasting issue and all of the complexities there, but just to stay on the public policy side with regard to tiny earthquakes, is it useful from a mitigation perspective that—for example, the big study that Professor Stock was involved in with Yucca Mountain—is it possible that an appreciation of tiny earthquakes, the seismicity of tiny earthquakes, is important for big infrastructure projects that should or should not be located in areas that are prone to tiny earthquakes?

ROSS: That gets more on the engineering side.

ZIERLER: The better question is, should engineers care about tiny earthquakes?

ROSS: Not so much, because they're really more focused on—the state of seismic codes today is such that the engineers will tell you that if you can tell them what the expected ground motion is going to be like, they can design a building to withstand it. That's really how everyone is thinking about this problem today. Take one of the most devastating earthquakes of the last 20 years, the magnitude 9 in Japan in 2011 that damaged the nuclear reactor there; the reactor was only damaged by the tsunami that came in afterwards. It wasn't damaged by the earthquake itself, even though that produced some of the strongest shaking ever recorded. It basically shows you that we can mitigate structural issues provided that we can estimate what the ground motions are. That means that we don't care so much about the contribution from those small events. It means being able to accurately characterize the shaking intensity from the large ones.

ZIERLER: What do tiny earthquakes tell us about large earthquakes? What are the points of connection? Where are these independent systems?

ROSS: For public policy?

ZIERLER: Yes. In other words, maybe the public should care about tiny earthquakes because there is actually some relation to the kinds of earthquakes that upend our lives?

ROSS: Yes. One example that directly intersected with the public sphere was that in 2016, there was a swarm of earthquakes that occurred exactly in the same area we discussed before, underneath the Salton Sea, near a town called Bombay Beach. Bombay Beach is where the San Andreas terminates in Southern California. This swarm produced a bunch of magnitude 3's and 4's, but it was located within a kilometer or two of the San Andreas itself. At the time, this triggered a meeting by the—I can't remember what the acronym is—the CPAC? I can't remember what it is, but there's a governing committee in California that is responsible for making recommendations to effectively advise the Governor's Office on earthquake scenarios as they are unfolding in real time, for example whether to issue evacuation alerts or other things like this. The concern was whether this was a sequence of foreshocks that would potentially be leading into a major event on the San Andreas. This committee went and looked at the situation, tried to make some kind of assessment, and delivered a report.

At the same time, on the public side, this was one of the first times that social media had become involved in an unfolding earthquake scenario in the U.S., I think. It just kind of took over the news, and people were basically saying the San Andreas was ready to go, and you could see the public running with all this stuff in a way that was just totally uncontrollable compared with the way things were before Twitter and all this stuff. People who are totally uninformed were pulling figures from our websites and saying things that they don't have a clue about, and it turned into mass hysteria. It forced the USGS to issue a public statement about the level of the hazard in the area, saying that they didn't expect a significant event, and things like this. So, there is definitely a level of all of this that is important. How much we can actually say in a rigorous and quantitative way is, I think, very debatable among the scientific community. Certainly the USGS across the street has folks over there who basically have a machine to try to quantify this stuff, and it's not really clear exactly how reliable that will be in the long term, but at least in the short term, they've now got numbers and things for sequences like this. They try to run various scenarios and estimate outcomes and so forth.

ZIERLER: We'll get to the ultimate gold standard, which would be prediction, of course, but let's start with either cyclicity or periodicity. What have your studies suggested on tiny earthquakes about the extent to which earthquakes are cyclic?

ROSS: They're not. To the extent that we've looked at this problem, and many other people, there's really no evidence that they are cyclic.

ZIERLER: Is that synonymous with periodicity, that they are neither periodic nor cyclic?

ROSS: Right. Here or there, there seems to be a very small number of highly specific studies that seem to find something like this, but it's not a universally observed phenomenon, and so it really is not considered today to be any kind of major source of the earthquake activity.

ZIERLER: Would that lead directly, if the findings so far that earthquakes are not cyclic, that they are not now and will never be predictable?

ROSS: Even for the studies that claim some kind of periodicity, you're talking about very minor differences between—let's make this up—between summer and winter. I don't mean that you're 1,000 times more likely to have an event at one time than at another. These numbers are so small that you have to tease them out of the noise. That therefore would imply very little predictability even if the findings are true, because the effect on the system is not large.

ZIERLER: This inevitably gets into a philosophical area, and that is the statement, "Earthquakes are not predictable, as far as we can tell now." That could delve into two different areas. One is that that is a statement of the limitations of our technology, of our theory, of our observational skills. Or, it can mean something more fundamental than that, and that is, as far as we can tell, the system itself is essentially chaotic, whereby no advance is going to get us to predict something that is fundamentally unpredictable. What is your sense of the distinction?

ROSS: I think it's somewhat of both. "Random" can mean a lot of different things. I think there is definitely an element of effective randomness, in the sense that the rock patterns are so varied from one place to the next. They have been deformed over millions and millions of years, and so it's not just the same thing everywhere. Each rock that you look at, even in its size, is unique. That leads to an effective randomness. That's not the same thing as the system being intrinsically random, because if you could actually somehow measure the properties of every one of these rocks perfectly, pretty much everywhere, I think most people today would believe that the system would then be deterministic. There are actually people who don't believe that, but I think most people today would believe that if you could make those measurements, it would be more or less deterministic. It really comes down to a lack of information. Pretty much everything that we do is looking at stuff measured on the surface, whereas the process itself is happening many miles below the surface, and so we're not actually getting the stuff that we would need in order to measure this and make that kind of assessment.

ZIERLER: Is that because you're focused on the network as opposed to tomography or geodesy?

ROSS: No, it's just not possible. The deepest holes that have ever been drilled in the Earth go down about ten kilometers or so, and that's right around the depth where a lot of these events happen. Even if you drill that hole, that's one point [laughs]; we would need to measure it pretty much everywhere. Knowing it at one point doesn't tell you anything, even if you had it. So, I think it's just fundamentally an unknowable thing.

ZIERLER: Now that you've been at this for ten years, what is understood now that wasn't then? Again, to go back to the beginning of your graduate studies—what are some of the frontiers in knowledge, and what has been resolved or is at least much better understood even in this relatively short span of time?

ROSS: I don't know that there have been any really major scientific revolutions in the last decade within earthquake science. It was a lot more kind of steady progress, I would say. The field today, it has moved far away from the earthquake prediction stuff. That pretty much ended around 1990 and was abandoned entirely. The field shifted from trying to predict events, at least on the engineering side, to instead trying to characterize what the shaking might look like when an event occurs, which ends up being much more tractable. That has moved in a different direction there.

There has been a large push in the last ten years or so towards being able to actually run large-scale earthquake simulations that are sufficiently realistic that an engineer could use those to design any kind of structures against, which historically was not really possible. It requires pretty large-scale supercomputing efforts, which basically started around 12, 14 years ago. In 2008, when they first started these, the engineers looked at some of the ground motions that came out of these simulations and they said, "These are crazy. They're way too high." They basically ignored them. They said, "They're not consistent at all with any of the data that we've seen." There has been a steady effort to try to make those simulations more realistic, adding in elements of the physics and that kind of thing, that brings them closer and closer to now what seems to be consistent with the data. I think now we're finally at a point where a lot of these big simulations are pretty close to being usable. That's quite a big step forward in all of this, because if we're able to run large-scale simulations for different types of scenarios, not only does it allow us to identify potential scenarios that you might have never expected, but it is also really good for communication and other things.

When they first started doing these simulations in 2008, they ran a simulation of basically a large earthquake on the San Andreas Fault, and one of the first things that they observed was—because you see the waves everywhere; it's a simulation—you can see that there was energy being funneled into the Los Angeles Basin through a specific channel there. That was not expected. And so, there was a lot of work after that to study this in detail and see whether that seemed realistic or not, and studying it through various ways. I think today that that is viewed as definitely a possible thing, and that is not captured by any of the empirical hazard models, because we don't have any records of the San Andreas. That is a big thing. It has been a slow-moving target within the field rather than like someone just does a study one day and it's a major change, but it is definitely looking quite positive now. So, there's that.

A lot of the science in geophysics is very regional, because it involves different systems that don't necessarily look all that similar to each other, at least initially, so there has been big progress in understanding there, I think. For example, in Hawaii, there were some big eruptions that happened four years ago, and a massive collapse of the volcano, which was being instrumented. There are a lot of things like that, that have really changed the way that people think about and understand these types of systems, in part because they recorded it now, for the first time in a century, and they did it at very high resolution. A lot of what we learn about the Earth, we have to wait for something to happen first, because we don't have the ability to go out and test things the way that they do in other sciences. Those are some of the big areas.

There were a bunch of large earthquakes that happened around ten years ago, all around magnitude 9 or so, and that was very important for advancing our understanding of certain aspects of earthquakes on the source side—what is happening at the fault itself, where the fault can move, how that interacts also with tsunamis, and where the energy comes from, because they're just so large that you can now observe this almost perfectly, everywhere. That was really important because there hadn't been any large events like that more or less since the 1960s, and they happened all within a handful of years of each other, so there was a sequence of these with the same quality of data being collected, which was important for those aspects. It trickles down to the tsunami modelers and that kind of thing.

We're starting to understand a lot more about the types of earthquake sequences and the processes that are responsible for them, especially sequences that are not just aftershocks; sequences that are much more opaque to us, where our understanding is weaker. That's where a lot of the tiny-earthquake stuff that I'm doing has been quite helpful. It is getting us more quantitative and less hand-wavy [laughs], much more objective about identifying persistent behavior and region-specific behavior and that kind of thing. As I told you, I focus a lot on California, and it is very clear that there are certain regions that have very specific types of behavior that are essentially native to that area, and that if you move 50 miles west or whatever, it can change totally. This region has its own characteristic behavior, and that is important, because it is something that you can only understand after looking at a system for a very long time, to see the repeatability there. That hopefully will move things in a direction where we can work with the people who do the modeling to better understand what's actually happening, because we need a clear sense of what is normal and typical there, to be able to say that it's not just a bunch of random stuff, which historically was really how it was viewed.

ZIERLER: To go back to the artificial intelligence issue, do you look at it as essentially a tool, a very advanced tool but one that you're still telling it what to do? Is AI getting to the point where there are things that you can outsource to it where you're really not so involved anymore?

ROSS: We have the ability to build these catalogs more or less from scratch without almost any user involvement. There is still a little bit of manual stuff, but not much. Especially on a network that's familiar to you, and you've been running stuff on that same network for a while, you really don't need to make any changes at all, so it can be fully automated. Almost half of what we do is focused on advancing the algorithmic technology that underlies all this stuff. The rest of it is basically just using the technology, deploying it.

ZIERLER: There's so much hype with artificial intelligence about what it can do at some point in the future. Realistically, where do you see things headed in terms of what AI might be able to do X number of years in the future, and how that might relate to the breakthroughs that maybe have not yet been forthcoming, as you mentioned, in the past ten years?

ROSS: There are basically three different categories that I view this being a part of. We work on all of these different areas. The first is basically automation of tasks. That means everything from just making measurements to automating those things—that's where the big initial breakthroughs were in using machine learning in seismology. That started in 2017, and it just exploded, because it was very clear that for the first time, we had techniques that could do as well as humans could do. We have people who work downstairs whose full-time job is literally just to fix the mistakes that the ancient algorithms are still making, to this day. There's a big need for that, and this was very well suited to it, because it's all learning-based, and we have lots and lots and lots of examples from these guys [laughs] who are doing this and making these measurements manually, so it's perfectly suited for this. Of the geological sciences, or whatever you want to call them, seismology, and this area of seismology in particular, was basically the first to really just explode, because you need tons of data, and we have it. Other areas don't quite have that type of data, or not in large quantities. So, that moved forward big-time. As for how much more there really is to do here in terms of task automation, I think there is still probably a good amount, but just the idea of moving to a learning-based set of approaches, that alone was the real big advance in all of this. That translates into what Zhongwen is doing. Did you already talk to him?

ZIERLER: No. Next week. [laughs]

ROSS: What he does with the fiber optic data, which is a totally different type of data, but you can still extend these types of techniques to that data. They have their own problems, which are going to require their own solutions, but still it's the same kind of thing. The framework is still in place, more or less, to do all of that. So, that is number one: task automation.

The second one is essentially improving, or really accelerating, computational capabilities. This is not dealing with anything that is data-based; it is just trying to break down computing barriers. Because like I said before, the simulations that are done, whether it is earthquake simulations or imaging experiments, the tomography studies, that kind of thing, require you to simulate wave fields, and those are extremely expensive to perform. If you want to do an imaging experiment, what the field is really moving towards now is using the wave field itself to image the Earth. That basically means comparing an observed wave field with a modeled one, a synthetic one, and trying to adjust the parameters of the Earth to match what you observe. That means you need to run lots and lots of earthquake simulations in a big loop, and it's really, really expensive, because you have to do it, in many cases, at the scale of Southern California, at a resolution of maybe every 500 meters or something like this, which is quite demanding computationally. For a lot of this, it might require supercomputers to do it. We're doing a lot right now with trying to accelerate all of this using machine learning techniques and basically learning to rapidly approximate the simulation process. That will move a lot of things forward, for sure.
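A minimal sketch of the acceleration idea Ross describes, in which a network is trained offline to approximate an expensive forward calculation and then queried cheaply inside a big loop. The 1-D "solver" and velocity model below are toy stand-ins, not the group's actual simulators or methods:

```python
# Surrogate-model idea: learn to approximate the output of a costly forward
# calculation, then replace the slow call with the fast network in inversion loops.
import numpy as np
import torch
import torch.nn as nn

def expensive_solver(src_x, rec_x):
    # Stand-in for a costly wavefield simulation: travel time along a path
    # through a smoothly varying, entirely hypothetical 1-D velocity model.
    xs = np.linspace(min(src_x, rec_x), max(src_x, rec_x), 2000)
    velocity = 4.0 + 2.0 * np.sin(0.1 * xs) ** 2        # km/s, made up
    return np.mean(1.0 / velocity) * abs(rec_x - src_x)  # mean slowness * distance

# Generate training pairs by running the "solver" offline (the expensive part).
inputs = np.random.uniform(0.0, 100.0, size=(2000, 2))
targets = np.array([expensive_solver(a, b) for a, b in inputs])

x = torch.tensor(inputs, dtype=torch.float32)
y = torch.tensor(targets, dtype=torch.float32).unsqueeze(1)

surrogate = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for epoch in range(2000):  # training is cheap once the data exist
    pred = surrogate(x)
    loss = nn.functional.mse_loss(pred, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final training loss:", float(loss))
```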

ZIERLER: You mentioned previously how not fun it is to spend your graduate life manually going through all of this stuff. Obviously you're pursuing the best science, but is there a concern that AI gets to the point that PhDs, professors, humans, are not—just from a labor perspective, that the kinds of things historically that seismologists have done, we don't need as many of them anymore?

ROSS: I don't think that this is really an issue. Definitely students don't want to be doing this. [laughs]

ZIERLER: That suggests that there is stuff for them to do. There is better stuff for them to do.

ROSS: Right. We want them to be spending their time thinking about the science, rather than wasting it on these boring tasks.

ZIERLER: That gets me back to my question about viewing the AI still as a tool. The AI is not doing the science.

ROSS: No, no, it's not doing the science. It's just a tool for navigating high-dimensional datasets and finding valuable structure within them, whether they are these small events or other things that you might care about. It is definitely just a tool.

ZIERLER: Another tool you work on, and which you could explain—signal processing. How do you use that?

ROSS: The seismic data is just ground motion data sampled every fraction of a second. Historically, it has just been treated entirely with basic signal processing techniques, filtering the data based on known characteristics of the signal, to remove various types of noise or other things. So much of this now, not just in seismology but in probably every field, is that the signal processing has basically just been taken over by machine learning. The end goal of it is the same: modifying or working with the signals that you're collecting to better do what you want to do, whether it is removing noise or measuring stuff. In a lot of ways, the standard types of signal processing are somewhat disappearing as it moves towards a more learning-based system. The historical types of signal processing were all really rule-based, like we discussed before.
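A minimal sketch of the traditional, rule-based signal processing Ross contrasts with learning-based approaches, using ObsPy's bundled example data; the taper setting and corner frequencies are arbitrary choices for illustration:

```python
# Rule-based signal processing: detrend, taper, and bandpass filter a seismogram
# with fixed, hand-chosen parameters.
from obspy import read

st = read()              # ObsPy's bundled example waveforms
tr = st[0].copy()
tr.detrend("demean")
tr.taper(max_percentage=0.05)
# Keep energy between 1 and 10 Hz (arbitrary corners for illustration).
tr.filter("bandpass", freqmin=1.0, freqmax=10.0, corners=4, zerophase=True)
print(tr)
```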

ZIERLER: What about statistics? Do you view that as a subset of AI at this point, or is it still its own discipline?

ROSS: It is definitely merging. All of these areas are being sucked in that direction. Many aspects of statistics are starting to overlap with modern machine learning. You can see it in the number of statisticians across the world who are starting to pay attention to machine learning conferences and to research being published in those types of journals now, whereas they never would have done that ten years ago. The AI conferences that would discuss this kind of stuff had a few hundred people in them before 2012, and now they have tens of thousands of people going, and they're from every field imaginable. I think some of it is that they want to resist it and want to preserve their specific identity, but at the same time, there is now very clearly a large overlap between what they do and what is being developed within machine learning in parallel. The boundaries are disappearing.

ZIERLER: To bring this close to home, in your own research group, as you mentioned earlier, you don't want students to be slogging through the manual stuff. If you could bring that to life for me, what are the kinds of things substantively that your PhDs, your postdocs, are doing right now, where it's really much more fundamental than what might have been possible in earlier generations?

ROSS: I do still emphasize to them that they should spend some amount of time looking at the data.

ZIERLER: There's tactile value there?

ROSS: I view that as still essential. You can't just run stuff and not have a clue whether it's right or wrong. You need to have some sense of intuition about whether things are working properly or not. Also, when it comes to developing new algorithms and that kind of thing, it requires having a very concrete understanding of the mathematical structure of the data, the problems that can arise from it, and all sorts of stuff. You're not going to get that intuition from just running other people's algorithms all the time. You need to be familiar with where the current shortcomings are. It's like, "Well, my method works 99.8% of the time," which is great, but then the question is why it is not working on the other 0.2%, what the cases are that it's not working on, and where it might be having problems or where there are opportunities to improve it. That really requires having an understanding of the structure of the data. So, I definitely emphasize to them that they still need to get their heads into the data itself.

ZIERLER: Is there an example of they don't need to slog through—you want to expose them to the data, but they shouldn't be inundated by it, because we can outsource this. Then, if they are unburdened from that for as much as you want them to be, what does that then free them up to do more substantively in science?

ROSS: It's just the analysis itself. It used to just drive me crazy, because I would think about and talk to people that would write these three-year NSF proposals. They'd get all this money; they'd put out these instruments. You'd think that putting out the instruments would be the hardest part of the whole thing, and putting them in the middle of nowhere in New Zealand or whatever it is. Instead, they would spend two and a half of the three years just trying to get the earthquake catalog there, and then in year three of the project, with six months remaining, they're going to write their initial science evaluation of what they learned from this data, which is obviously not complete at that point, and then the project is over. [laughs] And good luck getting extra funding to actually do what you really wanted to do, at that point. My whole thing is trying to remove these barriers for people, to do that, to think about the problems that they actually want to address. We should be training them to spend most of their time on the analysis of it. What did you learn that is new here, and how does that relate to the way that people have thought about that previously?

ZIERLER: That's better science, and it's more interesting.

ROSS: Yes, and it's a more efficient use of the government's money, frankly. [laughs] They are actually funding real research as opposed to data collection and other things. Also, it's better training for the students. They're spending much more time developing analytical skills over troubleshooting code and other things. That's fine, but it's not what they are here for.

ZIERLER: For the last part of our talk today, some Caltech-oriented questions, institutionally. I've heard stories—you probably have, too—about the old Seismo Lab up in the mansion, and how it was literally and metaphorically an island. It was really separate up there. Part of the impetus to bring it to campus was the need to modernize seismology by making it more part of geophysics, planetary science, and geology. As a young member of the faculty, does that process that has been ongoing since the 1970s feel more or less complete? Are there aspects of the Seismo Lab that remain an island, for better or worse, or do you feel intellectually really fully integrated within GPS?

ROSS: It is definitely still a fairly self-standing unit, I would say. I think it is much more cohesive than any of the other options within the Division. We still have our own faculty meetings within the Seismo Lab. Not regularly, but we still have them. We have all sorts of traditions, still, that are done specifically in this building. It functions administratively as more or less a self-contained unit. I definitely see it. It is here, but that history is definitely still part of it. You've probably heard all about the coffee-hour stuff, more than you want to know.

ZIERLER: Oh, yeah. I think coffee hour is the most important aspect of seismology at this point. [laughs]

ROSS: Yes. That's definitely a key component of our lives here. It's a big selling point, or at least we try to pitch it that way to prospective students and postdocs when they come here, in terms of what they can get out of it—learning, getting to meet senior faculty by talking with them every few days about what they are working on, that kind of thing. People actually have an idea of who you are and what you are doing here. Just listening to intellectual discussions and that kind of thing. Even though most of the real decisions are made at the division level, it still is very much a self-contained unit.

ZIERLER: Two questions relating to computation and bigger trendlines at Caltech. First, as I'm sure you know, 50 years ago, physics was the overwhelming interest among undergraduates, and today that is computer science.

ROSS: Right.

ZIERLER: Is that relevant for you, that undergraduates are really excited about computer science, fully appreciate its utility? Does that translate to both the way that you teach and how you might interact with undergraduates?

ROSS: In some sense, yes. GPS has very few undergrads, so most of my teaching is at the graduate level, although it does draw a few undergrads here or there. Most of my interactions with the undergrads from computer science, and I have quite a bit of those, are basically through Computing and Mathematical Sciences (CMS). I do a lot of work and collaborations with those folks, and they often run these classes—they are called Projects in Machine Learning, CS101a. The instructor will come to me and ask if I have an interesting problem that would be good for a team of undergrads to work on for the quarter. Then we all kind of work together and discuss progress each week and that kind of thing. I have done this ten times already, with many different groups of those undergrads, and a number of those projects have turned into research papers. So, I have interacted quite a bit with them. It is good, because it gets them thinking about new problems that are not solved, getting experience with that, and doing things in a research capacity as well. It gets them working on stuff outside of just homework assignments, which can be not so exciting sometimes.

ZIERLER: As you alluded with your partnership with CMS, one of the big trendlines at Caltech is that computation is the connecting point for all of these really interesting interdisciplinary partnerships. You have neurobiologists talking to behavioral economists. All across campus, it is computation as the connecting point. Where do you slot in on that, given your expertise?

ROSS: These days, I'm pretty far into that group. Officially, I am only appointed within geophysics, but I spend a lot of my time working with folks from that side, even participating in research on that side of the fence as opposed to geophysics research that uses machine learning.

ZIERLER: What would be an example of a fellow faculty member coming to you with something?

ROSS: We are working with folks over there on accelerating the simulations of the seismic wave fields. Some of the technology that we've developed, the math and the algorithms to do that, is motivated by the real problems that we have here: me saying, "I want to be able to do this," and them not even realizing that that is a problem, or realizing that the math doesn't necessarily exist yet to do it. As my background on that side has expanded, I have been able to talk with them more closely about all of this and collaborate with them more specifically on pieces of it as needed. I do quite a bit in that space. I've worked with probably all of the machine learning faculty in CMS at this point on different papers and things like that. I definitely try to make a very conscious effort as to where I position myself within this stuff, because I am very much focused on doing things that have real implications for the science afterward. A lot of the stuff that those folks do might not have any real-world utility, but it might be mathematically interesting. Which is totally okay; it's just that not everything they do is directly usable.

ZIERLER: What about all the quantum science that is happening on campus? I know that earthquakes are classical systems, and there might not be obvious intersections there, but when you look at quantum computation and all of the possibilities that it might have at some point in the future, do you see avenues of mutual benefit?

ROSS: Not now. [laughs] That still seems a bit far removed from all that stuff. But, maybe? You never know.

ZIERLER: You're obviously only here, and you don't have anything to compare it with, but how important is just the culture of the Seismo Lab, the capabilities that you have here, for your overall research agenda? In other words, in a parallel universe, would you be doing the same thing at a different institute, or is there something unique about Caltech that really put you on this line?

ROSS: I would be trying to. I think that they would probably think it was crazy or not appreciate it, or anything like this. When I first got hired here or was even being considered for the faculty appointment, I was one of like two or three people on Earth who was doing this stuff with machine learning in seismology, and nobody was getting it at that point.

ZIERLER: That is a very Caltech thing, both to be able to see where you were, and also to be adventurous enough and support you in doing it.

ROSS: Yes, I could see where this all was going and the potential for it, but most people across the field did not believe in it at that point. It took time. It did move unusually quickly even for our field, but it still took a lot of convincing. Here, it was very much embraced, and people just looked at it as something different, something that wasn't just a pie-in-the-sky type of thing. It seemed crazy but still grounded in something rigorous. I never had any resistance here on anything, and so this allowed me to do what I wanted to do.

ZIERLER: Which flows perfectly into my last question for today's session, which is that Caltech, as I'm sure you know, institutionally, it really prides itself on its support of junior faculty, that the success of the junior faculty is Caltech's success. I wonder if you can speak to, at the local level, how that has played out in your experiences.

ROSS: Yes. Very simply, the Institute has given me everything that I have needed to succeed here.

ZIERLER: On the resources side, what is it? What is it that you need?

ROSS: Everything from financial support to personnel support. There is very little red tape here. They hire very good people across the board. They staff the Institute with people that are very competent and know who to talk to about things. Everything runs very smoothly. There is just never any friction or anything like that. Everything just moves very well.

ZIERLER: To the extent that—I don't know if risk is the right word—but where, as you said, only two or three people were working on this, Caltech invested in you, in this; X number of years later, what is the return on investment to Caltech? What can you say to Caltech in terms of, "This is what I've accomplished so far"?

ROSS: I'll leave that up to them to decide.

ZIERLER: You make the case, too, to some degree, right?

ROSS: Oh, I have to, yes.

ZIERLER: What's the case? What's the short version of what you've been able to accomplish so far? How would you couch that?

ROSS: I think everything is moving exactly as I would have hoped. We're doing all sorts of really exciting things that would have totally surprised me if you had mentioned them to me a year ago. We're discovering new things all the time. I'm getting a very clear sense of the impact of all that, and it's just very nice to see it all working out. I'm having a good time.

ZIERLER: The two or three people that you were a part of ten years ago, how big is that community now? Is it basically that everybody has bought into it?

ROSS: Oh, yes, everybody has bought into it at this point. Whether they understand it or not is a different story, but there are tons of universities across the country now that list something-something-machine-learning-blah-blah-blah in their faculty search descriptions, even though they don't really know exactly what it is. Now, all of a sudden, they don't want to be left behind, so they're struggling to find somebody who can do this.

ZIERLER: Your grad students and postdocs must do well in that regard. They're in demand now?

ROSS: Yes, they are in demand, and they get very well trained in all of this stuff, both at a theoretical level as well as how to use this to achieve whatever they want to. It has definitely changed; that is for sure.

ZIERLER: Zach, it has been a great conversation. Next time, we'll go back and we'll figure out how early in your childhood you started to get interested in earthquakes.

[End of Recording]

ZIERLER: This is David Zierler, Director of the Caltech Heritage Project. It is Wednesday, October 26th, 2022. It is great to be back with Professor Zach Ross. Zach, once again, thanks for joining me.

ROSS: Thanks for talking to me.

ZIERLER: Today what we are going to do is go all the way back, learn about your family background, your early interest in science, after our first conversation, where we took a really interesting tour about your research and all of the things that are interesting to you in geophysics and seismology. Let's start first with your family. Do you come from a scientific family? Does that run back in your family?

ROSS: Not really. I'm definitely the only one in even my extended family who does anything related to science.

ZIERLER: Where are your parents from?

ROSS: They're from L.A.

ZIERLER: You grew up in L.A.?

ROSS: Just outside, yeah.

ZIERLER: What did your parents do for a living?

ROSS: My mom was a speech pathologist, and my dad, for most of his career, worked at NBC. He worked in videotape stuff. Very far removed from anything I do or think about on a daily basis.

ZIERLER: What about just growing up in Southern California? Did earthquakes register with you? Were you interested in them when you were a kid?

ROSS: Not specifically, no. I became very interested in science I'd say probably around middle school or so. I remember even earlier going to summer camps and stuff that were science-themed and learning about how things work in the world and stuff like that. I always really enjoyed that. I think for me, my interest in science really picked up when I got into high school. Particularly I became interested in physics. But the earthquake side of it was not really there for me at that time. I lived through the Northridge earthquake and all that stuff, so it was something I thought about but not something that I was scientifically curious about, at least in those years.

ZIERLER: You went to public schools growing up?

ROSS: I did.

ZIERLER: Was it a strong curriculum in high school in math and science?

ROSS: Yeah, I would say so.

ZIERLER: You said physics. Did you like more the theory or the experimental side?

ROSS: I wouldn't even necessarily try to break it into those groups as much as it was I just enjoyed learning about how things are and how things work. At a high school level, you're not thinking about theory versus experiment; you're talking about stuff that was done and solved hundreds of years ago. I always really enjoyed those aspects, and also I guess some of what we're doing right now, which is the historical aspects of a lot of that. I read a lot of biographies early on about scientists—Newton and so forth—and I always really enjoyed learning about the person behind all that stuff and how they got to where they were, and what put them in the context of being able to do the things that they did.

ZIERLER: That's what I do; I just went the history route. [laughs]

ROSS: Right.

ZIERLER: At UC Davis, did you focus on physics? Was that the game plan for you?

ROSS: It was, yes.

ZIERLER: What kind of physics? Maybe as you were starting to think more about theory and observation and experiment.

ROSS: It was just general physics, I would say, at least as far as the curriculum goes, but at that time, I was mostly drawn to optics, wave physics, and that kind of thing. I always had an interest in wave physics, but I just wasn't quite sure exactly what the right path was at that point.

ZIERLER: What about on the computational side? Were you always good with computers when you were a kid? Was that something that was interesting to you?

ROSS: Yes. My parents have photos of me on an old [laughs] PC back in the 1980s when I was probably three or four years old, writing programs on DOS computers and things like that. I taught myself how to program from a fairly early age. That whole mindset more or less stayed with me, but not in any obvious way until I got to college. I was just more tinkering around with stuff and learning what you can do. It's different when you are all of a sudden given specific tasks and need to solve a real problem of some kind. Then it takes a very different perspective.

ZIERLER: I'm curious if it was in undergraduate that you started to think about the value of bringing a computational perspective to physics?

ROSS: Probably not. I think my view of all of this really started to emerge when I was working on my PhD. That wasn't until some years after that.

ZIERLER: Did you have any exposure as an undergraduate to seismology or geophysics?

ROSS: Essentially, no. Towards the end of my undergraduate years, I was trying to figure out what I wanted to do next, and I didn't really want to stay in that traditional physics track for grad school. I think it was just a bit too esoteric for me.

ZIERLER: Like string theory and those kinds of things?

ROSS: Yeah, a lot of the modern research topics. There's a big gap between what you learn about in the undergraduate curriculum and what you're actually doing research on, and the research side of it felt quite distant to me. I was considering trying to move instead in an engineering direction, and—this must have been my senior year of undergrad—I had seen a poster on the wall in the hallway about something related to simulations of the 1906 San Francisco earthquake and the effect on hazard for buildings in San Francisco. It was kind of trying to reconstruct some of what happened there.

ZIERLER: What was the home institution for this project? Was it UC Davis?

ROSS: Yes. I thought that that was pretty interesting, and so I started to just read about stuff in this area and decided that I was going to go into the earthquake engineering direction. I applied for this master's program in civil engineering, specifically in earthquake engineering, really with the full intent of becoming a card-carrying engineer. So, I did that. I ended up going to Cal Poly in San Luis Obispo. That ended up being a good choice for me, because it was a totally different field. I had to learn a lot of stuff.

ZIERLER: This was an MS in engineering?

ROSS: It was, yeah. A substantial amount of the coursework that I really needed to have going in there, I didn't have. I kind of just assumed that a lot of it was—

ZIERLER: Just recycled physics kind of thing?

ROSS: And it really was not. It was quite a bit different. I had a lot of room there and flexibility to work my way through that program. I was able to cover a lot of the material on my own. They offered a lot of flexibility there in terms of the way that their program was structured, and so it actually worked out pretty well for me. I think if I had just been dropped right into a program somewhere else that was more structured and focused, it would have been quite a bit different.

ZIERLER: Was the focus of this program really applied? This was for training future engineers?

ROSS: Yes. It wasn't meant for research purposes. It was really meant to get you ready to take the test for an engineering license and go off into that world.

ZIERLER: Did you retain whatever wisp of an interest you had in seismology from that poster you saw? Were you aware of people like Tom Heaton and the field of engineering from a seismological background?

ROSS: Yes. Basically, the program that I was in was really quite flexible and it focused largely on geotechnical earthquake engineering, so I was doing a lot of stuff and particularly my master's thesis was right at the interface between seismology and engineering, so really engineering seismology. I started to become very familiar with all of this stuff at that point. But in the middle of all this—really quite early on; not even in the middle—I realized that I had no interest in actually being a card-carrying engineer.

ZIERLER: [laughs] Time to start thinking about PhD programs.

ROSS: Yes. Once I started to actually learn about seismology and I realized that it was just wave physics in a different flavor from a lot of the stuff that I was learning about as an undergrad, except it's learning about wave propagation in the Earth, and that there was kind of a rigorous quantitative description for all of these processes and that kind of thing, that's when it all clicked for me. That it wasn't just talking about earthquakes, but it's actually talking about all the math that describes the phenomena. Then I also really valued the real-world implications for hazard and what all that meant. So, I decided fairly early on during that master's degree to basically shift a bit and take coursework that was specifically going to help me in a research capacity moving forward. Really that was quite good for me, because of the flexibility that it had in terms of what I could take and that kind of thing. Immediately I began to think about reorienting in a geophysics context for a PhD program after that.

ZIERLER: While we're in the narrative, just a reflective question: has the engineering degree, that perspective, been a value to you? Is that an asset in your research?

ROSS: In some ways, yes. It helps me to think about the way that that community thinks about hazard, and what they need and so forth. It is a very different perspective on the whole problem. They don't really care about how any of it works; they only care about what they can input to their big hazard analysis codes. So, it has been helpful, more so relatively recently, I would say, than over the previous decade, because I've started to do some work more recently that dives back into that direction and is on the engineering seismology side of things. So I am relying on it a bit more now than before.

ZIERLER: Once you settled on seismology and geophysics for the PhD, what programs were you looking at?

ROSS: I was looking at programs that were more focused on engineering seismology; at the time I didn't quite realize it, but there were actually very few of those left in the U.S. at that point. For various reasons, that field itself has kind of disappeared. The two sides of this problem, the engineering aspects and the scientific aspects, have diverged, for a number of reasons, and there are very few people who run active research programs in the U.S. that are right in the middle of those anymore. Historically, that was not the case. There were always a lot of people working right at the intersection between those two, and now it's basically become two completely different communities. At the time, I was looking for more specific programs that actually had people working in this area. That led me to a handful of people who were close to retirement age.

ZIERLER: Because this was considered sort of an older field?

ROSS: Well, it's not "older"; it's just that who studies this has changed. There are very few people within geophysics programs who tend to study engineering seismology anymore. Maybe you could say that earthquake early warning is a modern form of this, which didn't really exist even a handful of years ago. That's one way of thinking about it. But there used to be a whole community of people who were trained as seismologists, and their research was focused specifically on modeling of the ground motions that could be used by the engineers to do their various types of analyses. That's really not the case anymore. The engineers have themselves taken control of all the stuff related to ground motion modeling.

ZIERLER: What accounts for this? Why the shift?

ROSS: I wouldn't describe it as a shift as much as a trend, just moving in that direction. Why? I think there have been a number of reasons. One is that the engineering community went in a direction that was heavily on the probabilistic side of modeling all this stuff, and one that is heavily based on empirical analyses rather than physical analyses. They are basically just doing statistics on earthquake ground motion records, and they have moved in a direction that doesn't really care about the process by which those are generated at all. For a long time, the physics itself was not far enough along that the models could produce ground motions sufficiently reliable that the engineers would trust them. For many, many years, the ground motions predicted by the physical models were substantially larger than anything that had ever been observed before, and engineers would basically look at these simulations and say, "Those numbers are crazy. We can't actually design buildings with this kind of stuff."

I think these are the kinds of factors that ultimately led to these two groups starting to diverge. It became a lot of echo chamber type stuff where people are starting to talk more to the people who are willing to listen to them. Ultimately I think that that is a big part of what happened. This is all just my personal take on the whole thing. Anyway, these groups have more or less drifted apart in that regard. There are a few people who still work somewhat at the intersection, but they are mainly within the U.S. Geological Survey, not so much in terms of running active research programs at the university level, at the interface there.

ZIERLER: This exchange all goes back to my original question about the kinds of programs you were looking at for the PhD.

ROSS: Yes, so I was looking at this area and not really finding a whole lot. Ultimately, one of the people who I had been looking to work with, whose research I was really inspired by, at least what he had done in the past—this was at the University of Nevada, Reno, which historically had a very strong program in earthquake science and engineering, very integrated. He was on his way out the door, and had effectively said as much to me back then. I met with him and talked with him, and he basically just said, "I can't support a student right now." He didn't have too many years left in his appointment, basically. He ultimately ended up forwarding me an email that my eventual PhD advisor at USC had sent around to his department looking for students. My advisor was not an engineering seismologist at all. [laughs] He's very much a pure geophysicist focused on the earthquake science problem, and relatively little on the hazard modeling side of all that stuff.

ZIERLER: What about at Caltech? Was Tom Heaton still taking students at that point, or did you think about working with Nadia Lapusta?

ROSS: No. I just didn't think that I could get into Caltech, quite honestly, at that point.

ZIERLER: [laughs] You'd wait to become a professor, of course. [laughs]

ROSS: Yeah, something like that.

ZIERLER: [laughs]

ROSS: At that time, it just wasn't, I don't think, an option. He had forwarded this email and copied this professor on it. I started to talk to him, and the guy was very motivated to look for students.

ZIERLER: Who is this? Who was your advisor?

ROSS: Yehuda Ben-Zion at USC, who is now the Director of the Southern California Earthquake Center. I thought about it. He was very motivated to bring me there and made it all happen right away. That's the direction that it all went into.

ZIERLER: I have heard that he is a prolific collaborator and that his research agenda is very varied. What was he working on? What were some of the big projects when you arrived at USC?

ROSS: When I got to USC, he had just gotten quite a large NSF project funded, a multi-institutional project that was like a five-year NSF thing, all focused around understanding the San Jacinto fault zone, which is one of the major systems in Southern California. It included a large instrumentation component over the five-year period to monitor the fault zone at a resolution much higher than you could get with the permanent instrumentation that was there, long-term modeling of the system, analysis of the data that was collected there, and so forth. Also geodetic measurements of the fault zone, to look for various signals that people had been proposing might be there. It was a very broadly integrated project with about five different PIs on it. I showed up at USC basically around the day that it was funded, and I had a copy of the proposal in my hand [laughs] six months before I got there, looking at it and thinking, "This looks really quite exciting." That ended up being a very large part of what I was involved with during my PhD.

ZIERLER: You mentioned it was during the thesis stage that you really came to appreciate the computational value. I wonder if you could speak to that in a little more detail.

ROSS: Right. When I got to USC, I pretty much walked into Yehuda's office and told him that I was really interested in doing statistics on earthquakes. He said, "Okay, okay." During the master's, a lot of the coursework I took related to statistics and that kind of thing, because on the engineering seismology side, that's a central part of how they model all this stuff, the hazard analysis. They model the ground motions. They model the magnitude scenarios of earthquakes. They model all this stuff, and it's heavily statistical and empirical. So I took all of this statistics coursework, showed up there, and thought, "Okay, now I've got this whole mindset, and here's what I'm going to do."

Meanwhile, they were starting to collect all this data, and I realized almost right away that it was next to impossible to do any of the stuff that I wanted to do with that data. Going from that raw data to basically a catalog of earthquakes, which is just a list of earthquakes and their hypocenters, magnitudes, and origin times (basic information that you would use as the foundation for all of the stuff you do downstream), was nearly impossible. There were a few ways to do this kind of thing, but they were not very effective, and they required quite large time investments to learn how to run all this software, tune it properly for the region that you're working on and the dataset that you've collected, and learn how to recognize false detections of earthquakes and all sorts of problems that can arise from this whole thing, and in the end it still didn't work out very well.

So, a large part of what I ultimately did during my PhD was relating to the development of better techniques for this whole earthquake monitoring problem—detecting them, locating them, measuring the phase arrival times automatically, that kind of thing, which is fundamental to what a seismologist does, because in most cases you take all that information as the starting point for your analysis. If there's a seismic network like Caltech's that's already running and established, you can just download all their information, which has been manually reviewed by human experts. But if you put out a bunch of new sensors in the field like this, it's a total nightmare to try to be able to do all that from scratch for a totally new sensor configuration. This was a very big motivator for me in terms of the research to help me get towards where I wanted to go.

ZIERLER: What aspects of your thesis research had an interest in applications, in mitigation and preparation, early warning, things like that, and what aspects were purely basic science, just figuring out how these things work?

ROSS: I think at that time, most of it was focused more on basic science, but there were obvious applications for a lot of what I was doing as well, which I was aware of. Just in terms of what I would personally choose to pursue at that time, it was more focused on the basic science. The main connection that I thought was still rooted in something very applied, I guess you could say, was the fact that all this stuff was translatable to the real-time earthquake monitoring aspects. It wasn't even just that researchers were essentially unable to do this with any degree of ease on their own data, but that even the techniques being used for the seismic networks, which are the backbone of the earthquake monitoring program in the U.S. and around the world, didn't work very well even though they had them working, which is why we have these seismic analysts that spend all day long just cleaning up the mistakes that these things make. It also became very obvious to me that, beyond just fixing mistakes, there were so many more smaller earthquakes being missed by the existing algorithms, and that this was important.

ZIERLER: Just being at USC and its connections with the network and SCEC, I'm curious just institutionally, administratively, how those things might have been of value for your research?

ROSS: I think being at an institution that has a heavy emphasis on earthquake science was certainly a very important part of my time there. There are really not a lot of those, and they're all concentrated on the West Coast, even though today there's no reason for that to be the case, because most of the time we're not using our backyard as a natural laboratory for this stuff, and you can ship sensors around the world. But really, earthquake science is heavily concentrated in the Western U.S. at a handful of institutions that have gone all in, in that area, and they have a large number of people working on that topic. There are a handful of really big players in this space. So, being there, and also the fact that SCEC is headquartered there, was definitely something that left an impact on me. Being a part of SCEC every year, the annual meeting and so forth, which is in Palm Springs, it definitely played a big part.

ZIERLER: As you mentioned earlier, this shift in the field where engineers were taking on more and more of this themselves, and even in generational terms how the initial interest you had was really led by an older generation, I wonder how that played out, how that might have affected how your thesis progressed.

ROSS: For me, I started to recognize this niche area; it just didn't really seem like a lot of the community had picked up on it. But if you talk to anybody and you ask them about these NSF projects that they got funded—for a three-year project, the first year is all about deployment of the instrumentation. The second year or more, at least on paper, is building the earthquake catalog. Then in the final year of this whole thing, you can finally do the science with all of it that you wanted to do. Often it doesn't even get to that point, or it's very thin, before you're already writing your next proposal and thinking beyond this. It seemed to be something that pretty much everybody in the field agreed was a big problem, but very few people were really devoting significant attention to it at a larger scale. There weren't major community efforts to try to solve these problems. I kind of just started to try to do all this stuff by myself.

ZIERLER: Were you thinking about AI and machine learning as a graduate student at all?

ROSS: Yes. I first was introduced to this topic probably around 2013 or 2014 by a friend of mine who had taken a class over at UCLA on this when he was a grad student there. Not in our field; he was in neuroscience. But at that point there was already starting to be a lot of buzz about this topic. The words "machine learning" went from being totally unheard of prior to 2012 to just exploding in usage after that. December 2012 was the moment, basically, of the big revolution in modern deep learning. That's when everything just exploded at a single talk. Within a very short amount of time, the interest in offering courses exploded. You went from having 500 people at the annual meeting listening to these talks to 10,000 or 15,000 people within a few years, and so forth. So it was already starting to make its way around by word of mouth.

I bought a textbook, started to read it. Unfortunately for me, at that time, the textbook itself was a bit too out of date and it didn't cover any of the advances that had happened in 2012, which were extraordinary, revolutionary, in this field. So it wasn't entirely obvious to me how to use that stuff at that point, for good reason—because it just was not exposed in this book at that time. This was a book from like the mid 2000s, and I figured, "Okay, well, how old could it really be?" But it turned out to be quite old. I read the whole thing and just put it in the back of my mind for when it might be useful down the road. I tried a little bit of what I learned about in just toy research settings, but it didn't quite have the success that I really wanted it to. It wasn't really until I was at Caltech that I started to make the connection between all of this stuff.

ZIERLER: Were you following more generally the embrace of really high-powered computing for geophysics that was well underway at this point, the sort of broader trends in the field?

ROSS: To some degree, yeah. The most noticeable of those I think was the usage of GPU devices for computations, which started to really pick up within the last ten years. Geophysics at least at Caltech was kind of at the forefront of this. The Seismo Lab had a cluster in the basement that had a large number of GPUs on it, and that machine was viewed at the time by other outside evaluators as kind of being crazy, because you're putting all this money into these devices that nobody has ever heard of, and betting that they're going to be useful. When I got to Caltech, that machine was there. It was already even kind of old at that point. But it allowed me to do the first really big project that I pursued when I got here. I came to Caltech as a postdoc working under Egill Hauksson. Have you already spoken with him?

ZIERLER: Yes.

ROSS: That was in 2016. He brought me there on an NSF project; the Caltech contribution was to build the first new catalog of earthquakes using modern techniques, by going back and reprocessing the entire waveform archive that we had accumulated basically since 2008. The technique that was proposed to be used was something called template matching, which is basically using the records of previously detected earthquakes as templates to scan the data and search for similar signals. It was kind of like a guided search, in some sense, which had been shown to be effective in a number of very small-scale studies over the previous decade, since probably 2006 or so. But it was very computationally demanding to do this. I was going to basically scale up this technique to a dataset orders of magnitude larger than anything anybody had attempted, and build this whole thing.

That was a multiyear project, a two-plus-year thing, for which I wrote all the code from scratch, and I was immediately made aware of the potential for using these GPU devices to do it. I pretty much took over the computing clusters on campus during that time and did this really massive search for all these events. That led to the Science paper that was published in 2019, where we built and documented this new catalog and all the stuff that you could see in it which you couldn't before, what was revealed in the process. You detect essentially ten times more events than we had in the previous catalog; something like two million of them. I think it came at a very pivotal moment in the field and it started to make people recognize what they were missing in the data that they had.
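For readers less familiar with the technique, template matching boils down to sliding a short waveform from a known earthquake along the continuous record and computing a normalized cross-correlation at every offset. The snippet below is a minimal, purely illustrative sketch of that core operation in plain numpy; it is not the GPU code described above, and the array names, sizes, and threshold are hypothetical.

```python
import numpy as np

def sliding_normalized_cc(template, data):
    """Normalized cross-correlation of a short template against every
    window of a longer continuous trace; returns values in [-1, 1]."""
    n = len(template)
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    cc = np.empty(len(data) - n + 1)
    for i in range(len(cc)):
        w = data[i:i + n] - data[i:i + n].mean()
        denom = t_norm * np.linalg.norm(w)
        cc[i] = np.dot(t, w) / denom if denom > 0 else 0.0
    return cc

# Toy example: hide a copy of the template in synthetic noise and recover it.
rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100))  # 1 s wavelet at 100 samples/s
data = 0.3 * rng.standard_normal(6000)
data[2500:2600] += template                                # the "repeating earthquake"
cc = sliding_normalized_cc(template, data)
print(np.where(cc > 0.8)[0])                               # indices near sample 2500
```

In practice this kind of scan is repeated over every station, channel, and template in the archive, with detections declared where the correlation exceeds a threshold set from the statistics of the correlation trace, which is what makes the full-scale problem so computationally demanding.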

ZIERLER: To go back to this idea that when you were looking at PhD programs Caltech did not seem to be in the cards for you, and then to fast-forward to the postdoc, that really does beg the question, what was the significance of your thesis research that shifted that, that made a place like Caltech available to you for research?

ROSS: I was definitely in a very good position upon finishing my PhD. I had a number of postdoc offers in hand. Without applying for anything or even talking to people, I think I had four offers. But at that point there was no question in my mind that I was coming here. This was the center of everything earthquake-science, and you see it in a million different forms, from just looking at all the web pages of faculty around the country and the fact that a very disproportionate number of them have come out of our program here. There's just an outright dominance in this space. At meetings and so forth, you walk around, and you're just swamped with posters from the Seismo Lab and all of this stuff. So, there was really no question to me that I was going to be coming here. Egill Hauksson was very focused on getting me here and jumped at the chance early on, and I definitely decided to accept that.

ZIERLER: Of all people, why Egill? What was he working on that might have made this a natural fit for you?

ROSS: Ultimately he was willing to put up the money for it. That's a big aspect about the whole thing. I had applied for the various postdoc fellowships that we offered and was not given anything, which is still I think an interesting topic for some of my colleagues, kind of in hindsight, of "how did that happen" kind of thing. For whatever reason, he was the one who was expressing the strongest interest rather than any one of them. I had talked to various people, but there was never a clear fit or there wasn't any strong initiative to make something happen. That's how I came to Caltech.

ZIERLER: What were some of the key challenges in setting up this ambitious program with Egill once you got to Caltech?

ROSS: There's a million key challenges. I had already worked extensively with data of that scale during the PhD, building catalogs and that kind of thing before that. I had already done some of the largest automated runs worldwide at that point. But then this whole thing was at a different scale even beyond that. Scoping out the whole extent of the project, trying to plan for all of the potential problems. One of the big ones was just trying to keep track of all of the processing and making sure that there weren't pieces missing and so forth. Because everything is done in parallel but at a totally independent level. So, you process one chunk of data, a small chunk, totally independent of all the others, and so if something breaks somewhere, you've got to go back and find out what happened there and fix it, even though the rest of them are all running. It's just the kind of thing where the whole thing can fall apart; we didn't want big gaps in the data because small pieces here and there had issues. I was writing the code to be able to do what I wanted to do efficiently, and trying to actually estimate how much time it was going to take to do this. Rewriting it over and over again until I got it into a form that I thought I was going to be able to work with, and actually make it happen. It was just an absolutely massive dataset to process.

ZIERLER: How much of the challenge was about the computers and how much of it was about the people, just in terms of resources?

ROSS: The computers were a factor somewhat, but I wouldn't say it's most of it. At one point I pretty much took an entire month using all of the GPUs on the Caltech central machine. So we had the capabilities to do it. It's not like you push a button and it's done an hour later. It was running one year at a time, and coming back days later, and verifying that everything looked okay, or fixing things that hadn't worked, and then running the next year, repeating this, and so forth. Recognizing that there were various problems that showed up in the results at certain stages and having to go back and try to troubleshoot what those were in the middle of this huge run. Because it's not like you can just easily backtrack what had happened when it's something at that scale. So, learning how to diagnose problems in that situation was quite challenging. But yes, the resources were just not really there, and I had to pretty much do everything myself. I got some really good support from NVIDIA at the time. I can't even remember how they got involved, but one of their engineers I guess had heard about it and volunteered some of their time to look at a very fundamental part of my code and make some really important optimization suggestions in there that sped it up quite a bit. Something that is just being looped over, many, many, many times at the lowest level of the code. So, yes, I had some external support for this, but not a whole lot.

ZIERLER: As you explained, the name of the game here is in recognizing all of the signals in the noise, that there's so much valuable data there. Do you have a specific memory, even a eureka moment, of when that was borne out, that you actually started to see this stuff, and that all the effort was paying off?

ROSS: Maybe the closest thing to that was in July of 2016. It was not too long after I first got here. We had a magnitude 5.2 in the San Jacinto. I already had a very initial version of the code written at that point, to do this, so I processed this small sequence basically with this code and could see everything that came out of it. That by itself turned into a fairly important paper. So, I was pretty convinced at that point that it was going to be a worthwhile effort.

ZIERLER: More broadly at Caltech as you admired from afar just all of the authors, the papers, the people who had come through the Seismo Lab, what were your impressions when you got there about the research culture, beyond working with Egill? About coffee break and the collaborative nature of postdocs and graduate students and faculty? What struck you about the Seismo Lab in that regard?

ROSS: That even though the size of it was really—it's quite large. We have a pretty big operation and program at Caltech in this area. When I got there, we had almost 20 postdocs in seismology. We had maxed out the space on the second floor. There were 30 graduate students in this area. It was a huge program. Yet it still felt very close in that sense. Everybody just knows everybody very well and knows what they're working on. This program is much, much larger than what USC had, but it still felt like a very tight-knit community that just valued lots of discussion and science and everything else.

One of the things that stood out for me, probably because I spent a lot of time at institutions that were not necessarily the crème de la crème, was that everybody who was here, they really wanted to be here. You don't need to work really hard to convince someone why they should care about being here and what that meant in terms of their career down the road or anything. It was just you were surrounded by a bunch of people that just really wanted to learn about stuff rather than just going through the motions of doing something because that is what they had always been doing. I saw a lot of that before, but I see very little of that here.

ZIERLER: In talking about your work, in conveying really the excitement about all of the important data that could be gleaned in the way that you were doing this, what was the process of people who might have been orthogonal to this field, in terms of them understanding its importance and its relevance for the things that they were working on? Did this catch on immediately? Was it an elongated process?

ROSS: No, it didn't catch on immediately. I think especially, once I started to work on the machine learning side of all this, which was mid 2017—and we were basically the first group in the world to start doing this at that time—people didn't really quite know what to make of all of this stuff. The level of interest was increasing very quickly, so we had basically two initial papers on the deep learning seismology stuff by the end of 2017, and I gave my first public presentation on this topic at AGU in December of 2017.

I knew the value of all this. Within our group at Caltech that was working on this, it was very obvious to me and to the rest of us that this was going to be a major change in the way that the field did stuff. But we hadn't really discussed this with anybody outside, at that point. I remember just getting up on that stage at AGU, and the room was packed. People were just walking in, and they were standing outside even in the hallway trying to listen. I remember just looking around the room and seeing everybody while I'm talking, and I thought, "There's something happening here." I could see all the heads nodding in agreement with everything I was saying. There were lots of claims that I was making about the way that things had been done before and the challenges with all of that, and what this was going to do differently. So there were already a lot of whispers about all this, and this was before the funding agencies started announcing that they want people working on these types of things. It hadn't followed that whole process yet. This was still very early on in this whole trajectory. It's been five years since then.

At the same time, over the next few years, it was just that we were attacking one problem after another with all of this, and solving big things, and saw all sorts of doors opening. But the community itself, even though it was becoming very interested in all this, I think that not just in seismology but beyond seismology, geoscience and geophysics was looking at a lot of this and trying to make sense out of everything, about what it was really going to lead to scientifically. I remember applying for faculty jobs in the middle of all this stuff, when it's just exploding, and people who were just not sure what to do with it.

ZIERLER: I know the timescale we're talking about here is really only five years, but it's really incredible progress in machine learning and AI in these short past few years. What was most relevant for you in terms of the advances? In what ways did that supercharge what you had already undertaken?

ROSS: The connections came back to what I had learned about machine learning when I was in grad school. I don't remember exactly the way that it happened, but I remember coming across modern deep learning, which had not made it into that textbook at that point, even though it was a sea change in this stuff. Or it might have been in the book, but with only a tiny amount of space devoted to it. Whereas today there are entire fields based on all of this, back then it was the kind of thing you might just skip past and not even know that it was a big deal. But I recognized that basically there's all these people working in computer vision, which is essentially now a subfield of AI focused on learning structure from images and being able to do that automatically, whether that means recognizing objects within images, or localizing them within the image, potentially drawing a bounding box around the object, all this kind of stuff.

The connection for me was when I recognized that this type of technology was like the state of the art and able to solve that type of problem extremely well. Specifically, it was kind of the bounding box problem, because mathematically it's identical to one of the major types of problems that we have on detecting the earthquake stuff. Recognizing that was just immediate. Within a short amount of time, I tested the stuff out and it worked so well that I couldn't even really believe the results initially. I had to spend some amount of time convincing even myself that it was real.

Then beyond that, once that was working, it was just recognizing that this was an entire paradigm. It wasn't just about solving one specific problem, but a way of generically solving problems. That there was going to be a paradigm shift in the way that you approach all sorts of task automation and all sorts of things. Just immediately you can see the potential for all of the different applications to the whole field, where it was going to go and all that stuff. We had a huge list of things that we just started checking off, one after the next, because we could realize that this is how you would now solve this problem, which nobody really knew how to do before.
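To make the analogy concrete: in computer vision a network learns whether (and where) an object appears in an image; in this setting the "image" is a window of seismic data and the "object" is an earthquake signal. The toy PyTorch model below is only a schematic sketch of that idea, not the architecture used in this work; every layer size and name is invented for the example.

```python
import torch
import torch.nn as nn

class TinyWaveformCNN(nn.Module):
    """Toy 1-D convolutional classifier: noise vs. earthquake for a
    fixed-length, single-component waveform window (all sizes hypothetical)."""
    def __init__(self, n_samples=400):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(8, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
        )
        # Two pooling stages of 4 shrink the window length by a factor of 16.
        self.classifier = nn.Linear(16 * (n_samples // 16), 2)

    def forward(self, x):                  # x: (batch, 1, n_samples)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = TinyWaveformCNN()
dummy = torch.randn(4, 1, 400)             # four windows of 400 samples each
logits = model(dummy)                      # shape (4, 2): scores for noise vs. earthquake
print(logits.shape)
```

In a typical workflow of this kind, windows labeled using an existing, manually reviewed catalog serve as training data, and the trained network is then run across continuous records to flag candidate events.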

ZIERLER: In thinking about all of these advances in AI and machine learning, did you ever consider this as a two-way street, in other words, that the kinds of things you were developing were good for AI itself? Or did you see yourself primarily as a consumer of these technologies?

ROSS: No, initially it was just trying to make sure that I learned enough not to make myself look stupid within that community and that I was following good practices and things like that. I was entirely self-taught on this stuff. I never took a course in any of it. At that point, there weren't even really textbooks available on it either, so it was a lot of scouring the web for blog posts written by researchers and other things, and trying to figure out the math behind this stuff, and reading some research articles, and just trying to make myself feel like I was actually learning the proper ways of doing all of this stuff.

ZIERLER: I'm always fascinated by, when there's major discovery, there's a real moment of, as you call it, a paradigm shift, were you aware of any what we call multiple independent scientific discovery? Was anybody else beyond Caltech in the field, even beyond this country, on this same track that you were, at the same time, that you might not have even known about at first?

ROSS: Yeah, there were two other groups. I think timing-wise, there was really one other group that was at Harvard that had been thinking about this. They submitted their first paper on this about a month or so before I submitted mine, and I didn't know that they had submitted that until I saw a presentation on it at AGU that year. It wasn't even clear how similar it was. Then there was a group at Stanford that emerged very shortly after that, that ultimately became kind of my main competitor in this space over a period of several years.

ZIERLER: What has been healthy about that competition?

ROSS: Everything. They're a very good group, and they're good people, super smart people and experts in this stuff as well. We kept a very close eye on what each other was doing and tried to work towards a common advance in this area. Of course it just opened the floodgates after that, and there have now been hundreds and hundreds of papers published on the same topic as what we did initially with this.

When I started all this, my math background was not the strongest. I had a lot of familiarity with all this stuff, but nothing like it is today. To get back to your question, I initially moved into this, and then I got connected with Yisong Yue across campus in CMS. I got him to agree to work on a project, and so I started to get real experience working in this space. The more that I learned, the more I moved very strongly in that direction. With that came having to really strengthen my mathematical background in all of this. We started to talk to them initially about just trying to solve our problems with existing technology, but eventually it got to the point where some of the stuff that we want to do, they don't know how to solve, or at least they don't have straightforward solutions right away. It has almost come full circle, in some sense, over the last year or two, because we've published a few papers now in machine learning research journals [laughs] on new ML methods that came out of what we had been doing on the seismology side. We developed them specifically with seismological applications in mind, but we realized that they had not been done before, even within the ML community itself. So, I've been working very closely with a lot of the machine learning researchers, some that are still here, some that were formerly here, and that's a very active part of what we're doing now. We not only apply the technology, but we're also very actively involved in developing the technology that we use when it becomes important.

ZIERLER: Moving to 2019, what was the process for you, joining the faculty? Did they open up a position for you? Did you apply to something that was open? How did that work?

ROSS: There was an open position that had been posted and I applied for it. I think that was around March of 2018. It was in the spring of 2018. This was right in the middle of all of this stuff when it's happening. Back then at least, there was a lot more of me doing the big sell on everything, than I need to do today. Today, I know already what it has accomplished.

ZIERLER: Did transitioning from postdoc to faculty change your research at all, or was it seamless?

ROSS: It has changed it.

ZIERLER: In what ways?

ROSS: It has forced me to broaden a lot of what I do.

ZIERLER: In terms of teaching, in terms of mentoring graduate students, that kind of thing?

ROSS: No, in terms of my research program itself. "Forced" is probably not even the right word, but it's kind of a natural outcome, at least for me.

ZIERLER: In a good way, that it has broadened you out?

ROSS: Yeah, because you have a new student that comes in, and you don't want to put them working on something that is almost the same as what somebody else is already doing. Each student wants to have their own space and research identity and to be the local expert on this or that or whatever. Even when someone comes and says, "Oh, I want to work on that, too," it's kind of like, "Well, let's find you something else to do instead." I've definitely broadened a lot of what I do, on the scientific side, so we've expanded in the last two years into volcano seismology, which I never did before. That's because one of my students had arrived and basically said he wanted to apply all of this technology that we had developed to that kind of setting, and so we have done a pretty heavy dive into some of that now, and it's having similar successes within that space as well. We were well positioned to take advantage of that kind of thing also.

I would say that the balance of what I do, I've tried to keep it the same, so my research program is about half on the methods and mathematical side of this, which is the machine learning and statistics and that kind of stuff. So it's all about techniques that we develop to do all these different things better. Then the other half is applying this stuff to solve various scientific problems. I have tried very hard to maintain that balance. But definitely within each side of this, I have expanded as well. Like I said before, my math background and computer science background have expanded substantially to the point where I can read current research papers in this area. I can attend the conferences—I've done it a number of times before—I have worked very closely with people who are leaders in this space and feel very comfortable talking with them today and that kind of thing. I understand all their terminology. I have the right math background to follow it all. And we're doing research even at that end of it as well. That whole thing has broadened, but then also scientifically we've expanded the range of topics that we look at and that kind of thing.

ZIERLER: Being a new faculty member and then not too long after, COVID hits, I wonder, on the negative side, what might have been some of the real difficulties, the isolation? On the positive side, because so much of what you can do remotely, was it an opportunity to really do a deep dive and focus, and not get pulled into all of the other things that happened in pre-pandemic days?

ROSS: I had been already at Caltech for almost four years at that point. The good thing was that I was very familiar with the Institute and the way that it used to be, and everything else. I had been a faculty member for about eight months at that point. That was helpful just to even have a very basic sense of a reference point for what things should be like, as opposed to some of my colleagues that started right in the middle of it. That was a totally different thing, because they got there and just didn't know what to expect, and thought that that was just the way things were. You don't know what's recent versus what's not, and it's a mess. Personally, I don't think that it really caused any major problems other than, it was certainly not the greatest time, and a lot of isolation, obviously. I was still in my office much of that time working, but I was the only one in the building, basically. Having to be the support person for a number of people who are all living far away from their families—a lot of them are foreign nationals. 2020 in this country was a pretty intense time period to be a foreign national. They can't leave the country also, and there's all sorts of stuff happening. It was very hard for a lot of people, and so having to look after everybody and to be sure people aren't falling through the cracks and that kind of thing was certainly challenging.

ZIERLER: To go back to this idea that in the initial part of this journey for you, you were really proclaiming the value of this approach, and now hundreds of papers are coming out as a result, just at a high level, what has been possible now that wasn't five years ago?

ROSS: What is possible now, at least for us, not necessarily—I'm still talking to people all the time that still can't figure this out, but that's a different story—we can pretty much acquire an arbitrary dataset from almost anywhere in the world.

ZIERLER: What does "arbitrary" mean here?

ROSS: The configuration of the seismic sensors, how they are positioned and that kind of thing, the number of them—it doesn't matter whether they're in holes in the ground or they're on the surface where it's very noisy or whatever it is; just kind of any seismic dataset that could be turned on today—you flip a switch and the data starts flowing—you've never seen anything in it before, you don't know whether there are any earthquakes at all in it, it could be a totally new region of the world that has never been studied before—we can basically flip another switch and build a big catalog of earthquakes with very high quality that can reveal so much more information about the Earth than it ever could before. That's one of the main things. It's not the only thing, but it's certainly a very central part of a lot of what we do.

ZIERLER: Why the move into vulcanology? What prompted that?

ROSS: The main thing was because I had a student who was very interested in that and saw all the cool stuff we were doing in earthquake seismology and said, "There's immediate applications of all this stuff to that setting as well." Volcanoes produce tons of micro-earthquake activity around them, and so you can use all this same stuff to study those systems and better understand them. So it was a very natural and obvious extension of these techniques we had developed, with almost no extra effort. It just happens to be that a lot of the volcanic systems naturally produce lots of earthquake activity.

ZIERLER: Are there other students who have pulled you into different fields similarly, or are you looking forward to the next student who will help you do that?

ROSS: That's probably the biggest change in my research program since I started my faculty appointment, and it was a big time investment as well, at least on the scientific side. I knew nothing about volcanoes before that student arrived, and I had to do a massive review of the literature to get to the point where I felt comfortable enough working in the space, at the forefront of it and that kind of thing. I invested a huge amount of time reading books and papers; it was probably a year-and-a-half effort with a substantial time commitment associated with that alone, to get there.

ZIERLER: Do you think this will remain in your research agenda beyond your affiliation with the student?

ROSS: Oh, yeah. At this point, absolutely.

ZIERLER: What is the frontier for you in vulcanology? What does that look like?

ROSS: Right now, we're doing a lot of really exciting stuff in Hawaii, which is the best understood volcanic system on Earth, but we're finding there's all sorts of really fundamental things there that have been missed, not for a lack of trying but because they didn't have the resolution to see things that we can see. So we're very focused on imaging the subsurface structure of the volcanic system and looking at the pathways for magma transport from the upper mantle to the individual volcanoes, and where the magma reservoirs are that store it. It's kind of like, how does that even get there in the first place? Which historically has been very, very poorly understood, for something that is so critical to these systems—how it moves from deep to shallow. We're starting to be able to see very clear evidence for what these pathways look like, the geometrical aspects of them, and where they're positioned and that kind of thing, at a level of detail that is unprecedented. So yeah, pretty much anything that has seismicity associated with it is an opportunity for applying these types of methods.

ZIERLER: This could bring you not just to Hawaii but anywhere else on the globe?

ROSS: Yeah.

ZIERLER: Now that we've worked right up to the present, for the last part of our talk, if I could ask a few retrospective questions about what you've accomplished so far, then we'll end looking to the future. Again, it's such a short timescale in terms of this field or this subdiscipline that you've helped found. At this point, would you say that this is a mature discipline in terms of its acceptance, in terms of its utilization? What are the benchmarks that you would use to answer that question?

ROSS: Yes, I think that at this point it is pretty much undisputedly the state of the art for doing all this stuff today. The field has come a long way in that time as well, in that it has established very rigorous benchmarks along the way for evaluating the performance of these methods, which really did not exist in a rigorous form before. It was every researcher off on their own, testing things on whatever dataset they chose, without any clear commonality between them, which was also the case in machine learning until about ten years ago. So the community-wide benchmarks have helped move this forward and provide validity to the techniques. So it's very clear. Like I said, there have probably been between 500 and 1,000 papers on machine learning in seismology published in the last four years. Everywhere I go, I talk to people who are using these things now, on this dataset or that dataset, and I see the results of the research coming out of it. So I know exactly how well it works. I knew that by myself, but I can see that other people are doing these things as well, so it's not just me, isolated, looking at it. We've somewhat moved on to other problems in this.

I recognized already three-plus years ago that a lot of the research on using AI for detecting earthquakes was approaching a saturation point, and I started working on other types of problems with machine learning that are not just in this exact area. It's having impact in those spaces as well. It's not just the earthquake-detection side; being able to talk with and work with computer scientists at a pretty demanding level has been very helpful for me in all these aspects. One of the big frontiers we're working on right now is accelerating simulations with machine learning. It's not even data-driven at all; it's trying to take big earthquake rupture simulations that currently require supercomputing scale and run them efficiently, hopefully on your laptop instead. This has been a big challenge. We've been working a lot with folks like Anima Anandkumar and some of her former students and postdocs, and Katie Bouman and other people on campus, on this problem. That will keep our focus for some time now, I think.
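
As a rough illustration of the surrogate-modeling pattern Ross describes, here is a hedged sketch: a small network is fit to input/output pairs precomputed by an expensive solver, and new scenarios are then evaluated cheaply. The "simulator" below is a stand-in function and all data are synthetic; this is not the group's actual method, only the general train-once, query-fast idea.

```python
# Hedged sketch: learn a fast map from a 1-D "initial stress" profile to a
# "final slip" profile using pairs precomputed by a stand-in for the real solver.
import torch
import torch.nn as nn

def expensive_simulator(stress):                 # placeholder for a rupture code
    return torch.tanh(stress).cumsum(dim=-1) / stress.shape[-1]

n_points = 128
inputs = torch.randn(512, n_points)              # 512 precomputed scenarios
targets = expensive_simulator(inputs)

surrogate = nn.Sequential(
    nn.Linear(n_points, 256), nn.GELU(),
    nn.Linear(256, 256), nn.GELU(),
    nn.Linear(256, n_points),
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for epoch in range(200):                         # a few seconds on a laptop
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(inputs), targets)
    loss.backward()
    opt.step()

# A new scenario is now a single forward pass instead of a full solver run.
new_stress = torch.randn(1, n_points)
fast_slip = surrogate(new_stress)
```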

ZIERLER: From the initial competition with Stanford, how has the field grown? Where are there other research centers where this is their central research focus?

ROSS: The machine learning specifically?

ZIERLER: Yes. Or is it still early on in that regard? It's still Caltech and Stanford?

ROSS: We're still I think the two big players in this space. In fact, most of the competition that I had up there was really from one graduate student who is now a postdoc here.

ZIERLER: [laughs] That's awesome.

ROSS: I kneecapped them a little bit by taking him away. Not that he would have stayed there anyway, but we're certainly benefiting from having him down here. There's a handful of groups around Europe that are pretty rapidly moving into this space. The whole community was forced to learn what machine learning is, in whatever way they did, so that they could talk to experts on that side. We're definitely seeing that.

ZIERLER: For you looking to the future, what areas might you move into that you haven't yet? For example, taking machine learning seismology to other planets?

ROSS: Yeah, certainly that's a possibility. I haven't worked on the Mars stuff just because for me, that's still pretty small data.

ZIERLER: It's one seismometer, right?

ROSS: It's one sensor, yeah, so intellectually it's not very appealing to me, because it feels very limiting compared to what I am used to working with. Many people enjoy that aspect, the feeling of being in a straitjacket where you can only focus on the one thing in front of you. But for me, working routinely with the largest networks on Earth, it's—

ZIERLER: More than enough to do.

ROSS: Yeah. But certainly there are discussions about putting networks back on the Moon. That could be interesting. We're doing stuff now with the distributed acoustic sensing data that Zhongwen is collecting, which is very, very large-scale, so we're working on translating some of our techniques to that context, and it is already starting to look very successful, which is quite exciting. It has enough of a twist relative to the conventional stuff we do that it's not just a straightforward application of what we've already done, which is nice. It's also more fun that way.

As I mentioned before, I've moved back a bit toward the engineering seismology side of this, so I'm working with Domniki Asimaki in civil engineering on machine learning models that generate stochastic realizations of ground motions, which engineers can use to run structural simulations of buildings and that kind of thing, but it's all learning-based. You give it a big dataset of earthquake ground motion time histories, of the shaking. We have a dataset of something like 500,000 records from Japan, including records of large shaking, and the model can basically learn from that and then generate new shaking records that are totally synthetic but fully consistent with the real ones, conditional on things like the earthquake magnitude, the distance from your site, and how soft the soil is. We've been working on this for several years now, and I think it's really going to be a big deal in that space as well, once we can fully convince the engineers that this is real [laughs]. So there are a lot of different directions from here. It's a very exciting time to be in this business.
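
The interview does not specify the architecture, but the conditioning Ross describes can be sketched as a generator that takes random noise plus scenario parameters (magnitude, source-to-site distance, and a soil-stiffness proxy such as Vs30) and emits a synthetic time history. Everything below is hypothetical and untrained; it only illustrates the input/output contract of such a conditional generative model, not the actual model used with the Japanese dataset.

```python
# Hypothetical sketch of a conditional generator for ground motion time histories.
import torch
import torch.nn as nn

class ConditionalWaveformGenerator(nn.Module):
    def __init__(self, noise_dim=64, n_samples=2000):
        super().__init__()
        # Three conditioning variables: magnitude, distance (km), Vs30 (soil stiffness).
        self.net = nn.Sequential(
            nn.Linear(noise_dim + 3, 512), nn.GELU(),
            nn.Linear(512, 512), nn.GELU(),
            nn.Linear(512, n_samples),     # acceleration time history
        )

    def forward(self, z, magnitude, distance_km, vs30):
        cond = torch.stack([magnitude, distance_km, vs30], dim=-1)
        return self.net(torch.cat([z, cond], dim=-1))

gen = ConditionalWaveformGenerator()
z = torch.randn(16, 64)                       # 16 stochastic realizations...
mag = torch.full((16,), 7.0)                  # ...of the same scenario
dist = torch.full((16,), 20.0)
vs30 = torch.full((16,), 300.0)
synthetic_motions = gen(z, mag, dist, vs30)   # shape (16, 2000); untrained here
```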

ZIERLER: Finally, last question, looking to the future. What is the pace of advance in AI and machine learning? Is there a Moore's Law or some kind of schedule of advancement that you can basically see, and that would help you plan out what you're capable of doing in lockstep with those advances?

ROSS: I don't know exactly what it is right now, but at least up until a couple of years ago, it was outpacing Moore's Law. The number of papers per year is just extraordinary, and it's impossible to keep up with all the literature, even for the people in the field. For my research, like I said, we've started to recognize a big frontier in accelerating the simulations, which is very important for a lot of different purposes, from imaging the Earth to simulating ground motion scenarios to all sorts of things. It's also another major roadblock in terms of doing the science that we want to do. So doing this stuff faster is very important, and there's a whole framework that was developed at Caltech a few years ago that we got in on very early, before it was ever published, and it's extremely promising. So we're focusing more on stuff related to this one area, I would say. That means I'm working a lot with some of the same experts who are already very familiar with the new developments in this specific space, solving partial differential equations with machine learning, so I'm kept in the loop on where the forefront is and don't have to constantly search the literature myself. I let them do that and report back to me on the new developments we should be aware of.
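
The Caltech framework Ross alludes to is not named in the interview, but one widely used family of methods for solving PDEs with machine learning is the neural operator, whose core building block is a spectral convolution: transform the field to the frequency domain, learn weights on the lowest modes, and transform back. The layer below is a generic, hedged illustration of that idea, not the specific framework or code his group uses.

```python
# Generic sketch of a Fourier-style spectral convolution layer (1-D case).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralConv1d(nn.Module):
    """FFT along the grid, learned weights on the lowest modes, inverse FFT,
    plus a pointwise skip connection."""
    def __init__(self, channels=16, n_modes=12):
        super().__init__()
        self.n_modes = n_modes
        self.weights = nn.Parameter(
            (1.0 / channels) * torch.randn(channels, channels, n_modes, dtype=torch.cfloat)
        )
        self.skip = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):                            # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x, dim=-1)             # to the frequency domain
        out_ft = torch.zeros_like(x_ft)
        out_ft[..., : self.n_modes] = torch.einsum(
            "bim,iom->bom", x_ft[..., : self.n_modes], self.weights
        )
        x_spectral = torch.fft.irfft(out_ft, n=x.shape[-1], dim=-1)
        return F.gelu(x_spectral + self.skip(x))

layer = SpectralConv1d()
field = torch.randn(4, 16, 128)                      # e.g. a discretized 1-D field
print(layer(field).shape)                            # torch.Size([4, 16, 128])
```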

ZIERLER: It's taking on a life of its own beyond even what you might have envisioned.

ROSS: It is. And like I said, it's feeding back in, because we're also developing the technology on the computer science side. We're being made aware of what they're doing, but we're also contributing to advancing it. So it's pretty fascinating.

ZIERLER: This has been a great couple of conversations. I'm so glad we were able to do this and capture your contributions to the Seismo Lab. Thank you so much.

ROSS: Thank you.

[END]