July 22, 2022
With the research breadth and support of the United States Geological Survey (USGS), David Wald has pursued a wide range of research projects across the many subfields of seismology. He has analyzed rupture processes of both recent and historic earthquakes, done pioneering work on waveform modeling and inversion, and probed the fundamental physics of earthquake sources, among other topics.
Beyond the fundamental science, Wald has made enormous contributions to the mitigation of earthquake hazards, from devising emergency drills and alert systems to improving public communication.
Prior to his graduate work at Caltech, Wald completed his undergraduate degree at St. Lawrence University and his master's degree at the University of Arizona. At the Seismological Laboratory, Wald focused on mapping out the temporal and spatial slip distribution on faults. At the USGS, he has remained active in mentorship and teaching, and he has developed the ShakeMap and Did You Feel It? programs.
Interview Transcript
DAVID ZIERLER: This is David Zierler, Director of the Caltech Heritage Project. It is Friday, July 22, 2022. I am delighted to be here with Dr. David J. Wald. David, it's so great to be with you. Thank you for joining me today.
DAVID WALD: My pleasure.
ZIERLER: To start, would you please tell me your title and institutional affiliation?
WALD: I'm formally a Supervisory Research Geophysicist at the US Geological Survey National Earthquake Information Center, NEIC, in Golden, Colorado. I've been with the USGS for almost 30 years now. I run research and development on earthquake information products, and the majority of the ones I work on for the USGS are for post-earthquake response and analysis.
ZIERLER: Just as a snapshot in time, what are some of the current projects you're working on?
WALD: Well, a lot of it's enhancing existing systems we have for post-earthquake information, such as ShakeMap, Did You Feel It?, PAGER, and ShakeCast. We've also recently developed, and are enhancing, what we call Ground Failure, a series of maps of the secondary hazards of liquefaction and landsliding after earthquakes. It's a challenging thing to do. It uses ShakeMap shaking as the input, and we try to map out which earthquakes around the globe are going to have considerable secondary effects due to landslides and liquefaction. And this is pretty important because they tend to add to the woes due to shaking, often limiting access by roads, ports, and the like. But a lot of the products are in a constant state of flux because of technological advancements, new datasets, new algorithms, and new strategies to reduce uncertainty and do things faster. Everything's always being iterated on and improved with advancing science and technology.
Fundamental and Translational Seismology
ZIERLER: An overall question about your research overview at the USGS. What aspects of your job are fairly academic, where you can follow things that are of interest to you, and what are more service-oriented, where, as a federal employee, there's an agency mission, and you're primarily responsive to that mission?
WALD: Luckily, I have a foot in both worlds. The reason for that is, everything is basically applied research. It's fundamentally applied. It's meant to be useful, it's meant to be used, and it is widely used. But to do that, one has to take advantage of what's already been generated, what's already been developed in academia and in industrial research and development. I get to pay attention to, and participate in, what's going on in the academic and more basic research realm, but with an eye to applying it in a very practical sense. And the tools that we have are fundamentally leveraged off of what's been developed in the academic world. After my graduate work, I was with the USGS for 10 years in Pasadena, where I was an adjunct at Caltech, and ever since moving to Golden, Colorado in 2002, I've been adjunct faculty at the Colorado School of Mines. I'm supervising three undergraduate interns and one graduate intern right now as USGS interns, but they're from the Colorado School of Mines Geophysics Department, where I have a position. It's a lovely place to be. Not only were we on campus at Caltech (the USGS building is one of the houses on the Caltech campus, right across from the Seismo Lab), but right now, we have a USGS building in the middle of the Colorado School of Mines campus. I can pretend I'm an academic and work for the government.
ZIERLER: What about partnerships with industry in terms of instrumentation and technology? What opportunities do you have to interface with industry?
WALD: That's an interesting question because it really has two facets to it. One, from a technological perspective, we work at the USGS more broadly with makers of seismic instruments, strong-motion and broadband instruments. We're always trying to follow the latest technology, such as using much cheaper MEMS sensors for ground-motion recordings. We're technologically connected to the state of the art in developing seismic networks and other technologies, radar, InSAR, things like that. But my time is much more connected to industry in the sense that they're users of our products. I spend a lot of my time interacting with critical earthquake information users such as lifeline utilities, critical infrastructure operators, departments of transportation, and many types of financial-risk modelers that are interested in insurance products and catastrophe bonds, all of whom are interested in the kinds of products we produce after an earthquake. A lot of the work we do is actually accommodating the needs of these sophisticated engineering, financial, and aid agencies that use our products, and our products then evolve to match the needs and desires of some of these agencies. It's both the technological front and the interaction with downstream users of what we produce.
ZIERLER: A technical question that has a historical dimension to it. First, just a terminology question, what does near-real-time earthquake shaking mean? What's the threshold there?
WALD: It's a fairly ambiguous term, obviously, because you can interpret it how you want to. If you go back 20 years, it took us about a half hour or more to generate a magnitude and location for an earthquake, say, the Northridge earthquake in 1994, and we were happy about that. At the time, we called that near-real-time earthquake information. Now, we're looking at trying to get earthquake early warning to people in the seconds after an earthquake initiates. Early warning has the goal of getting information to people before the shaking reaches them. It's not always going to be able to do that, but that's the goal. For what I work on, as soon as the magnitude and location are determined, we can then infer what's going on in terms of the shaking level and potential impact of the earthquake. In that sense, near-real-time is dependent on where you are. In California, that would be a minute and a half to two minutes after an earthquake. We can get a magnitude and location that's robust enough to start generating our secondary products, like ShakeMap and PAGER. But around the world, we don't have the same density of instruments, so it takes more time for the waves to reach a sufficient number of instruments for us to get a robust magnitude and location. And that can be up to 15 minutes if you're in a country or on an island where there aren't a lot of seismic stations around. Near-real-time is a flexible term that means as quick as possible. [Laugh] I think in the lexicon, it's commonly assumed to be the minutes after a significant earthquake, but not hours.
ZIERLER: What is the societal motivation? Obviously, for earthquake early warning, it's to give people a few precious moments to prepare themselves. What's the value in real-time earthquake shaking assessment and monitoring?
WALD: Well, earthquake prediction was always a goal, and that hasn't borne fruit. It seems like it's almost intractable, potentially impossible. But earthquake early warning's certainly tractable, and yet, even if fully successful, it's not going to change the outcome, to a large degree, of many of the devastating earthquakes we'll see in the future. The buildings will still shake and collapse, there will still be catastrophic losses, human, economic, and infrastructure. Knowing that, we're trying to get information out as quickly as possible to respond appropriately, to have the proper resources and knowledge of what's happened as quickly as possible, to trigger the downstream aid and response agencies that need to get into the field, including Urban Search and Rescue, and really facilitate the response to earthquakes. For an earthquake in a very rural or remote country with bad infrastructure in the middle of the night, it can be half a day or longer before we know what's going on from the ground. Working from our median model estimates rather than waiting for ground truth, we can infer what's going on within 10 minutes of the earthquake now.
We can't know everything about what's happened unless we have detailed infrastructure like we have in California, with thousands of seismic instruments. But we can make inferences, and that's what we do. It's a combination of what we record remotely and the inferences we've developed over time with seismological tools: estimate the shaking distribution, determine if there's a significant population exposed to that shaking, and then, with some understanding of how vulnerable that population is to shaking, make a rough assessment of how deadly and damaging that earthquake is going to be. These differences in the vulnerability of the population are just extraordinary. The difference between a really well-engineered, building-code-centric area like California and, say, Iran or China is roughly three or four orders of magnitude in the fatality rate. With that knowledge, we can make a quick assessment of which earthquakes around the world are going to be important, which ones are going to be deadly, and which ones are going to need what level of response.
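To make that arithmetic concrete, a minimal sketch of the exposure-times-vulnerability chain follows. All exposures and fatality rates below are invented for illustration; PAGER's actual rates are empirically calibrated country by country.

```python
# Sketch of an exposure-times-vulnerability loss estimate, in the spirit of
# PAGER. All rates and exposures here are made-up illustrative numbers, not
# the USGS's calibrated values.

# Estimated population exposed at each Modified Mercalli Intensity level,
# e.g. extracted from a ShakeMap grid overlaid on a population grid.
exposure = {6: 2_000_000, 7: 500_000, 8: 100_000, 9: 10_000}

# Fatality rate per person exposed at each intensity level. Note the orders-
# of-magnitude gap between a well-engineered building stock and a highly
# vulnerable one, the point Wald makes about California vs. Iran or China.
rates_california = {6: 1e-7, 7: 1e-6, 8: 1e-5, 9: 1e-4}
rates_vulnerable = {6: 1e-4, 7: 1e-3, 8: 1e-2, 9: 1e-1}

def expected_fatalities(exposure, rates):
    """Sum exposed population times intensity-dependent fatality rate."""
    return sum(pop * rates[mmi] for mmi, pop in exposure.items())

print(f"California-like stock: ~{expected_fatalities(exposure, rates_california):,.0f}")
print(f"Vulnerable stock:      ~{expected_fatalities(exposure, rates_vulnerable):,.0f}")
```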
ZIERLER: You mentioned how earthquake prediction is an intractable problem at this point. I wonder what your feeling is in terms of future prospects. Will we get there at some point with the right theory and instrumentation, or is earthquake prediction impossible because the earth itself simply doesn't know when an earthquake is going to happen?
WALD: There are two ways to look at that. One, earthquake prediction, like rapid or near-real-time information, depends on how you define it. If you want to know exactly when an earthquake will be and how big it'll be, it'll never happen. If you want to know how big an earthquake might be in the next 30 years in a particular area, it's a tractable problem. You wouldn't really call that prediction, because to be useful you need a narrow time period and range of magnitudes. If the prediction is only good to within a year or two magnitude units, is it useful? I don't know. If you make it, "I need to know within a day what magnitude range it's going to be," I just don't think we're going to get there, except in rare circumstances where Mother Nature provides some clues beyond what she normally provides.
But I'll take a harsher stance on that. Let's say we could do it with inherent uncertainties. A couple days, a couple months, a couple units of magnitude. What are we going to do? Are we going to harden our infrastructure? Just like with earthquake early warning, it's going to be helpful to know what's coming, but it's not going to change a majority of the outcomes that are unfortunately going to be with us due to our inability to change the landscape of vulnerable buildings, particularly around the world in vulnerable countries. I'm not optimistic about earthquake prediction, but I also just think it's a red herring. It's not the right conversation to have because what we know is wrong is that buildings are vulnerable, and people in buildings are vulnerable. Same is true of infrastructure. And earthquake prediction won't change that.
ZIERLER: Are you involved in earthquake engineering and mitigating damage from large earthquakes?
WALD: Yeah. In fact, one of my roles is editor-in-chief of one of the best-ranked journals in earthquake engineering, Earthquake Spectra. I have 30 associate editors who are all effectively earthquake engineers, and we publish articles about building codes, building behavior, societal response, post-earthquake reconnaissance, and lessons learned from earthquakes around the world, and that all feeds back into, hopefully, a better-built environment down the road. I've had a very fortunate connection to the earthquake engineering community pretty much since day one, when I started in seismology as a graduate student and joined the Earthquake Engineering Research Institute (EERI), which is very well-connected to Caltech.
A lot of the players who have evolved from the Caltech engineering community have been involved with EERI. In fact, this year, I'm the Joyner Lecturer, which is the memorial lecturer for Bill Joyner, who was initially at the interface between seismology and earthquake engineering. And he developed relationships to estimate ground-motion shaking relevant to engineers, not just seismologists. Getting magnitude and location is fine, but you really want to predict the shaking levels relevant to building response. And that's the connection he made, and I was honored with the lectureship this year for following in the footsteps of making the connection between seismology and earthquake engineering. It's been a fundamental part of my pathway in developing useful products from seismology that can feed into engineering.
Advances in Earthquake Engineering
ZIERLER: What are some of the encouraging trend lines in earthquake engineering that would suggest that earthquake damage in the future, at least for highly developed countries like the United States, will be less severe?
WALD: I think the most encouraging thing is that we know how to build safe buildings. We have learned enough over time that it's really only a balance between societal interests and the ability to implement what we know works. That said, the more pessimistic part of me recognizes that anyplace, even well-developed places like Japan, New Zealand, and California, has what we call "inherited vulnerabilities". It doesn't matter what the building code is today, it matters what the landscape of buildings is in a city. If you look at any city around the world, Mother Nature really doesn't care what you do right, she only cares what you've messed up in the past. These inherited vulnerabilities are going to be with us for a long time, despite the fact that we know they're there and know how to do better. It's a huge balance between other very important social interests and the risk involved. The interesting challenge that we have going forward is not only communicating the risk, but understanding it better, and being practical about what we should be doing. We can't just replace all the buildings on the entire planet. We have to make practical decisions. And unfortunately, those practical decisions are much more difficult in countries that are developing and worrying about food scarcity and other basic issues.
ZIERLER: Tell me about the origins of ShakeMap and your involvement in it.
WALD: Early on, while I was at Caltech from 1988 to 2003, there was a slowly evolving seismic network of broadband instruments called TERRAscope, which evolved into TriNet. It expanded further out into Southern California and ultimately turned into CISN, the California-wide seismic network collection. In my graduate work, I was studying earthquake sources, but also looking at the new data that was coming in from these new instruments, which were now in real time with continuous telemetry, so we could see what was happening as the shaking occurred. When I crossed the street and did a post-doc with Tom Heaton, Jim Mori, and Steve Hartzell of the USGS, continuing to work with Caltech, of course, I started developing ways to take the seismic records and, rather than trying to understand the earthquake's source, just ascertain the shaking at each seismic instrument. At first, it was a pretty sparse dataset.
We'd have a few key points where we'd have shaking recordings, and I wanted to make a useful product that would infer the shaking everywhere else with the seismological tools and inferences we'd developed. We really had a problem with sparse data and inferring the shaking levels everywhere else. It was a really fun problem. Over time, as the instruments became more ubiquitous and denser, that shake map, a map of the shaking distribution after an earthquake, became better-informed because we'd just have more instruments, and the inferences became less important. But if you think about it, when you go to some other place around the world that doesn't have dense instrumentation, you're going to rely more on inference. That's built into ShakeMap.
We've developed the tools to predict the shaking even without any recordings, from just the magnitude and location and what we know about the earthquake's source, its depth, the amplification of shaking by soil versus rock, and other site features. We made a system called ShakeMap that will give us an estimate of shaking, and the more instruments you put into it, the better the constraints and the more accurate that map's going to be. California gets a very good ShakeMap after any earthquake in the region, and that can be used to infer what's happened in terms of potential damage and what people would experience. And it happens within 5 to 10 minutes of an earthquake. Again, the challenge is to rapidly ingest the seismic information, infer from that what really happened with additional information about what you know about the magnitude and location, and make your best estimate of shaking. That's really what ShakeMap does.
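A highly simplified sketch of that conditioning idea: predict shaking everywhere from magnitude and distance, then pull estimates toward what nearby stations actually recorded. The attenuation constants, weighting scheme, and function names below are all illustrative, not the operational ShakeMap algorithm, which uses published ground-motion models, site terms, and a full geostatistical treatment.

```python
import math

def predicted_pga(magnitude, dist_km):
    """Toy attenuation relation: log10(PGA in g) decays with distance."""
    return 10 ** (-2.1 + 0.5 * magnitude - 1.3 * math.log10(dist_km + 10.0))

def shakemap_estimate(site, epicenter, magnitude, stations, corr_km=30.0):
    """Blend the model prediction at `site` with nearby station residuals."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])  # flat-earth km, toy only

    log_est = math.log10(predicted_pga(magnitude, dist(site, epicenter)))
    num, den = 0.0, 1.0  # the model prediction itself gets unit weight
    for st_loc, st_pga in stations:
        resid = math.log10(st_pga / predicted_pga(magnitude, dist(st_loc, epicenter)))
        w = math.exp(-dist(site, st_loc) / corr_km)  # correlation decays with distance
        num += w * resid
        den += w
    return 10 ** (log_est + num / den)

# A station 20 km out that recorded stronger shaking than predicted pulls
# nearby estimates up; 200 km away, the pure prediction dominates.
stations = [((20.0, 0.0), 0.35)]
print(shakemap_estimate((25.0, 0.0), (0.0, 0.0), 6.7, stations))
print(shakemap_estimate((200.0, 0.0), (0.0, 0.0), 6.7, stations))
```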
ZIERLER: The Did You Feel It? program, did that develop in tandem with ShakeMap? Or was it more of a separate project?
WALD: I like to think of things as all having some relationship to each other and some grand vision, but in some sense, it was independent, and yet, it didn't take long for the connection to be made. ShakeMap is about seismic instruments. It's fundamentally taking what's recorded in the ground, the acceleration of the ground, and inferring what people would experience and potentially what damage there would be. Basically, the intensity of the ground shaking. Intensity of shaking has always been, for the last 150 years, a measure that basically reflects what people would feel, what they would experience, and what would happen to buildings. And it could be done without seismic instruments. You look at what happened around you, whether you felt the earthquake or not, whether everyone felt it or just a few.
And as you go to higher intensity levels, things become very apparent, like things getting knocked off tables or shelves, or a picture getting knocked askew. Then, as you get to even higher intensity, you get to damaging levels, cracks in the walls, potentially chimney damage, structural damage. Those intensity levels are easily described by humans in past and current earthquakes. And that intensity scale is much more intuitive than magnitude and location. Nobody understands magnitude and location. Most seismologists get confused about it. We wanted to do two things. We wanted to make ShakeMap useful, and doing that meant not leaving the engineers behind, but giving them the acceleration, spectral response, and other things that they use for building analysis in the ShakeMap as different layers, while making the signature product of ShakeMap a color-coded intensity map that was easy to read (for the general public), like the USA Today weather map.
Yellow, orange, and red are bad, where damage is possible. That color mapping maps out intensity from one, which is not felt, to three and four, which are widely felt, five and six, where you start to get damage, and seven, eight, and nine, where you certainly will have damage. And that color coding was very useful at communicating what we recorded in the ground converted to this intensity scale. At the same time, we can ask people what they actually experienced on the same scale. We can say, "Did you see things fall off the shelves? Did everyone around you feel the earthquake?" We tapped into the ability to use the internet in the late 90s to get rapid reports of what people experienced and turn those reports into shaking intensities in a system we ultimately called Did You Feel It? The nice thing about it is, it ends up getting location-specific intensities that can be used in ShakeMap. ShakeMap generates intensities from what's recorded in the ground, and Did You Feel It? tells us what people actually experienced, and the two can be combined in the same layer in ShakeMap.
That vision was realized perhaps accidentally, but it was a very nice connection between what humans feel and what engineers record or care about for structures. That, I think, increased not only the popularity of ShakeMap, but it actually reinvigorated the whole notion of using intensity in the United States to describe earthquakes. If you get an earthquake early warning now, it's going to describe what intensity you feel rather than what the peak or spectral acceleration of the ground is, which is what engineers tend to desire and think in. They grew up in parallel. They were developed separately, but the connection became very obvious when we tried to communicate these things to the general population. Going beyond magnitude and epicenter was the fundamental goal there. We want to show not just where the earthquake occurred, but what the shaking pattern was, how intense it was, and over what region that shaking occurred.
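The questionnaire-to-intensity conversion works roughly as follows. The answer weights and regression constants below follow the published Community Decimal Intensity formulation associated with Did You Feel It? (Wald et al.), but treat them as illustrative; the operational system also aggregates responses by geocoded block.

```python
import math

# Sketch of turning a "Did You Feel It?" questionnaire into a decimal
# intensity via a community weighted sum (CWS) of indexed answers.

WEIGHTS = {
    "felt": 5, "motion": 1, "reaction": 1, "stand": 2,
    "shelf": 5, "picture": 2, "furniture": 3, "damage": 5,
}

def community_decimal_intensity(indices):
    """indices: dict of questionnaire answer indices (0 = none/no effect)."""
    cws = sum(WEIGHTS[q] * indices.get(q, 0) for q in WEIGHTS)
    if cws <= 0:
        return 1.0  # not felt
    return max(1.0, 3.40 * math.log(cws) - 4.38)

# A response reporting strong shaking, a few items off shelves, no damage:
answers = {"felt": 1, "motion": 4, "reaction": 3, "stand": 1,
           "shelf": 1, "picture": 1, "furniture": 1}
print(round(community_decimal_intensity(answers), 1))  # ~intensity 6.4
```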
ZIERLER: Did you intend to include a software system at the beginning for ShakeMap? Or ShakeCast could've only come about once you saw how ShakeMap was operationalized?
WALD: ShakeMap was always algorithmic, kind of a geospatial interpolation of what we recorded and what we inferred, and that became a rapidly developing software application. Through the Earthquake Research Affiliates (ERA), a program that existed in the late 90s and early 2000s at Caltech, a consortium of corporate, utility, and engineering interests that wanted to know what the Seismo Lab was doing at the time and what novel lessons and tools were available for earthquake information, I got the opportunity to work with a lot of these industry, utility, and critical lifeline folks. As ShakeMap became realized, useful, and publicly available, that work kept surfacing the same question: "What happened to my stuff? What happened to my facilities? What was the shaking at my facility right after the earthquake?" The ShakeMap gives an uncertain estimate of the shaking; it can be very well-constrained in places where there are stations and inferred elsewhere. With that distribution of shaking, we realized we could build a layer on top of it, which would query the shake map and say, "What's the shaking at this location?"
If the utility operators or engineers involved with the ShakeCast system know what level of shaking would cause concern, then whether a facility needs an inspection, needs to be shut down, or needs a critical look can be determined in an uncertain but very practical way. We were fortunate to work with the California Department of Transportation early on, who funded the development of ShakeCast specifically to look at bridges after earthquakes. If you think of Caltrans, they have over 25,000 bridges in California. And a major earthquake could shake thousands of them. They don't have thousands of inspectors. They don't have the capacity to inspect that many structures immediately, so they want to prioritize. Groups that have a huge footprint of things (infrastructure, lifelines, critical facilities, buildings) and want to know rapidly what assessments are needed, or whether they even need to get out of bed, can use the ShakeCast system to rapidly pull up a shake map, look at their stuff, and determine whether they need to respond, how they're going to respond, and how to prioritize it.
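As a sketch of that query-and-prioritize loop (the facility names, thresholds, and toy grid lookup below are all hypothetical; the real ShakeCast ingests ShakeMap grids and engineering fragility information):

```python
# Sketch of the ShakeCast idea: sample the ShakeMap at each facility, compare
# against owner-supplied thresholds, and emit a prioritized inspection list.

def shaking_at(lon, lat, shakemap):
    """Stand-in for sampling a ShakeMap grid; here, nearest tabulated point."""
    return min(shakemap, key=lambda p: (p[0] - lon) ** 2 + (p[1] - lat) ** 2)[2]

def prioritize(facilities, shakemap):
    report = []
    for name, lon, lat, thresholds in facilities:
        pga = shaking_at(lon, lat, shakemap)
        if pga >= thresholds["close"]:
            action = "CLOSE / urgent inspection"
        elif pga >= thresholds["inspect"]:
            action = "inspect"
        else:
            action = "no action"
        report.append((pga, name, action))
    # Highest shaking first: this is the inspection priority order.
    return sorted(report, reverse=True)

shakemap = [(-118.5, 34.2, 0.48), (-118.2, 34.0, 0.12)]  # (lon, lat, PGA in g)
facilities = [
    ("Bridge 042", -118.54, 34.22, {"inspect": 0.15, "close": 0.40}),
    ("Bridge 107", -118.21, 34.05, {"inspect": 0.15, "close": 0.40}),
]
for pga, name, action in prioritize(facilities, shakemap):
    print(f"{name}: PGA ~{pga:.2f} g -> {action}")
```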
Some of the critical lifeline and heavily invested ShakeCast users have developed protocols based on what comes out of ShakeCast. "Here's a hierarchical list of bridges by inspection priority. Go inspect them in this order." Up until this time, utilities would normally just draw a circle on the map around the epicenter, and some still do. And we know that for very long faults, the epicenter's not going to tell you everything you need to know. The shaking pattern's complicated, and it can be very elongated along the fault, so between ShakeMap and ShakeCast, these operators can get a better sense of what's going on. I should contrast that with PAGER, the Prompt Assessment of Global Earthquakes for Response, which is much more geared at the societal response. What happened overall to the population in terms of fatalities and economic losses? An aid agency isn't as worried about Caltrans bridges or the bridges in a particular location; they're interested in the overall impact.
PAGER is meant to be used for the big picture, "What happened in this earthquake? Is it of significance? What level of significance?" Green, yellow, orange, red is how we send out alerts. And ShakeCast is much more of a user-centric focus on, "What happened to my stuff? What happened to my bridges, my buildings?" It's a user-centric view of the world, rather than the overall impact. The key there is that the user has more information about their infrastructure, their portfolio of insured properties, their bridges. They have much more information about how those things might respond than any one entity could have for that set of facilities in a bigger project that would cover the whole world. It's a kind of focused response for critical users who know a lot about where and what they have in the ground.
ZIERLER: I wonder if another contrast between PAGER and ShakeMap is that you were looking simply to globalize a system and project that was, in some regards, American-centric.
WALD: Yeah, that's a good point. As I mentioned, ShakeMap is more uncertain in places where we don't have seismic stations at the density we have in California. Technically, it's no problem to make a shake map anywhere, but it has a higher uncertainty. The level of use internationally would be less than that in California. For critical infrastructure, you really want some certainty in the shaking estimates. But even with the uncertainty in global ShakeMaps, we can infer what's happened because the population density is pretty well-known, and the vulnerability of the population, basically the buildings in the ground, gives us enough information to make a general inference about what's going on. Only a handful of countries around the world have really capable post-earthquake analysis and information flow. As the National Earthquake Information Center, we have an international mission to try to get out information that's useful not only to our country, expats, aid agencies, and other people living around the world, but also to in-country users who don't have the types of seismic networks we have in the United States.
ZIERLER: One technical term that came up in your publication list that I wasn't familiar with, liquefaction. What is that?
WALD: Liquefaction is what happens when you strongly shake wet, sandy soil. The wet is required because as you shake it, the pore pressure builds up, the sand-to-sand contacts get basically suspended, and the soil loses its strength. Sand that could hold up infrastructure or roadways can basically liquefy and effectively be squeezed up. Water can get ejected, sand can get ejected, and a lot of deformation happens underground that can really damage underground infrastructure: pipelines, utilities, electricity, roadways, and anything else on top of the area that liquefied. Liquefaction is not all that deadly, and it doesn't cause major collapse of structures, for the most part, but it does destroy them. Bridges and critical infrastructure can get disrupted, which leads to very, very expensive repairs. Because it's wet sand, it can also affect ports very heavily, and ports are usually where you need to have response capabilities for major earthquakes and where you have major infrastructure, all over the United States and the world.
Liquefaction isn't the biggest source of additional fatalities, but it's a very expensive problem. Landsliding, on the other hand, is something we have trouble analyzing well enough to understand what additional fatalities will occur. One of our current new products is a landslide and liquefaction estimate that's really important for trying to understand where landslides occur, but our limited ability to describe how far they move or what areas they affect keeps us from also adding the likelihood of fatalities due to landslides. But in some earthquakes, like the 1906 earthquake in California, there were tens of thousands of landslides, and the Northridge and Loma Prieta earthquakes had thousands of landslides. We've had earthquakes around the world, like Wenchuan, China in 2008, where 20,000 people were killed by the landslides alone. Our ability to add that to an estimate after a major earthquake is something we're working really hard on right now.
Liquefaction and Landslides
ZIERLER: To clarify, is the dynamic of liquefaction always a factor in earthquake-induced landslides?
WALD: Interestingly enough, landslides and liquefaction are almost completely separable on the map. The reason is, liquefaction happens in soils, and soils tend to be flat. They tend to be the result of a depositional environment, from fluvial, oceanic, and lake sedimentation. As you go steeper and steeper, the particles of the sedimentation get larger and larger, and in Los Angeles, say, you go from an alluvial plain to an alluvial fan to very steep, rocky slopes. As you get to these rocky slopes, you have the potential for landsliding. You tend to separate landsliding on steep slopes from liquefaction in flat areas that have soft sediments, which can ultimately be wet through saturation from the groundwater table. Our estimates of landslide and liquefaction could almost go on the same map, because the areas that are steep and that shook have potential for landsliding, and the areas that are flat and that shook have potential for liquefaction. It gets more complicated than that, and you want more geotechnical information to understand the strength of the soil and the rock on the slopes. As you bring those parameters in, you get a better sense of what areas are going to slide, given a certain level of shaking, and what areas are going to liquefy. It's really an interesting course of evolution for how we make ShakeMaps. We need to know how the ground is going to amplify the shaking.
In basins, shaking amplifies due to the soils of the basin, and the shape of the basin can reverberate energy. Steep slopes and rock tend not to amplify as much as sediments do. We did not have a map of amplification around the world, so we developed a proxy for it by simply taking the topographic slope, which is really well-known around the world, and estimating the soil properties as a function of slope. We now have maps of the world we can use in ShakeMap to estimate shaking levels, where we predict the amplification based on the slope. It turns out, further, that people live where it's flat, and they don't live on slopes, with the exception of the Hollywood Hills and other really expensive real estate. In general, populations are dense where there's water and in valleys around big mountains. Those tend to be places that, one, have earthquakes, that's where the mountains came from, two, tend to have water, and three, are on flat land, and those flat lands amplify shaking the most.
These things all go together. The reason we made topography the background layer on the ShakeMap is that if you look at Los Angeles, Pasadena, and La Cañada, you're looking at slightly different basins, and they all show up really well when you look at topography. The Angeles Crest Highway's going to be up through the rocks and mountains, Pasadena is going to the east, and then Los Angeles, going to the south, is in the flat area that's basin. And those flat areas, again, tend to shake the most. Showing topography on a ShakeMap is not just for geographic location; it's also a description of what areas are going to shake, and it certainly rings true when we get the actual measurements.
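Schematically, the slope proxy and the landslide/liquefaction separation might look like the following. The slope thresholds and Vs30 (shallow shear-wave velocity) values are invented; the actual USGS proxy maps slope to Vs30 using published, region-dependent tables.

```python
# Schematic of using topographic slope as a proxy, per Wald's description:
# steep ground ~ rock (landslide-prone, less amplification); flat ground ~
# soft sediment (amplifying, liquefiable if wet). All numbers are invented.

def classify_cell(slope_deg, wet=False):
    if slope_deg > 15:
        return {"vs30_proxy": 760, "amplifies": False,
                "hazard": "landslide potential"}
    if slope_deg > 3:
        return {"vs30_proxy": 450, "amplifies": True, "hazard": "mixed"}
    return {"vs30_proxy": 250, "amplifies": True,
            "hazard": "liquefaction potential" if wet else "strong amplification"}

for slope, wet in [(25, False), (8, False), (0.5, True)]:
    print(f"{slope:>4} deg -> {classify_cell(slope, wet)}")
```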
ZIERLER: What tsunamis have you been involved with, both in understanding the underlying earthquake dynamics, and also mitigating the damage that comes from flooding, fires, and everything else?
WALD: My connection to the tsunami world is academic and of great interest, but I don't spend a lot of time working with tsunamis. Every time there's a tsunami, though, it is part of our response to understand the scope of that, and yet, the USGS in particular is not the responsible agency for tsunami alerting or response, NOAA is. That's sort of a little bit out of our playing field. And yet, for earthquakes, tsunami can be the dominant loss-generator. For the earthquakes in 2004 in Sumatra and 2011 in Tohoku, Japan, the shaking was not the dominant source of problems, it was the tsunamis. This is something we understand and follow academically and intuitively, and it's of great interest, but we're not responsible for that modeling. It turns out that those recent earthquakes really ramped up the ability to understand and predict tsunami inundation and the damage from tsunami. We understand still a lot more about building response to shaking because we've had so many more examples of that over the years. And in California, the obvious engineering development that's happened over the last century has really been focused on shaking due to earthquakes and not tsunami because that's the main source of damage in this part of the world.
ZIERLER: Between geospatial interpolation and geodesy, what have been some of the advances in the satellite industry and technology that make all of this research possible?
WALD: Satellite imagery has really been a game changer in a number of ways. One of them is topography. We just talked about how we use topography, which is extremely accurate around the planet, to infer other things that are going on: the potential for landslides, the soil properties, and other inferences. Just from topography, which was based on shuttle radar originally and has been updated, we've learned enormous amounts about how earthquakes behave. There are also commensurate layers that come out of satellite imagery, soil properties, material properties, wetness, and other observations, that are fundamental in trying to understand earthquake response. But probably the more interesting stuff has happened of late. Of course, everything's happening at higher resolution over time, so we're getting more detail about the same things, but one of the interesting things we're doing now is recognizing that our models of the losses after an earthquake are very transient.
As you learn more and get imagery after an earthquake, you can rapidly update your models to accommodate new observations and get to the actual ground-truth losses not only more accurately and with less uncertainty, but in a timeframe that's still very useful for different types of responses. Our Urban Search and Rescue teams have to be out there within the first three days, but you then have a long, protracted period of response, recovery, and rebuilding. And with imagery, we can look at a much more spatially detailed likelihood of damage from what's called a Damage Proxy Map that NASA/JPL, whom we work with very closely, puts out. The DPM, Damage Proxy Map, tells us what's changed between before and after the earthquake. We don't know what's changed; we just know that at this location, in this 10- or 30-meter cell, something has changed. And lately, in the last year or two of research, and ongoing, we can take that change, and with our models of potential landsliding, potential liquefaction, and potential building damage, we can say, "This change is most likely due to this, that, or the other physical cause."
It can be noise. It can be a parking lot where people were parked before the earthquake and aren't parked after. There's always a lot of noise. These change maps from before and after the earthquake, InSAR or optical change, are very accurate, but we don't know what caused the changes. With these prior models, we can say, "It's on a steep slope, so it's not going to be liquefaction," or, "It's flat, so it's not going to be landsliding," or, "There's no building there, so it's not going to be building damage." With these prior models, we can then attribute that change to the physical processes and basically have a better map of what actually happened. That process, to me, is just fascinating. It's taking advantage of not only the latest satellite imagery, but also machine learning and many of the newest tools available for processing these big datasets and updating models within a Bayesian updating framework. Ultimately, it's going to be quite practical.
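The attribution step is classic Bayes' rule applied per pixel. In the sketch below, the priors, which in practice come from slope, soil, and building-footprint layers, and the likelihoods are invented numbers for illustration.

```python
# Sketch of attributing a satellite-detected change pixel to a physical cause
# with Bayes' rule, as Wald describes. All probabilities here are made up.

def attribute_change(priors, p_change_given_cause, p_false_alarm=0.3):
    """Posterior over causes for one pixel flagged as 'changed'."""
    causes = dict(priors)
    causes["noise"] = 1.0 - sum(priors.values())  # parked cars, artifacts, etc.
    likelihood = {c: p_change_given_cause.get(c, p_false_alarm) for c in causes}
    joint = {c: causes[c] * likelihood[c] for c in causes}
    total = sum(joint.values())
    return {c: joint[c] / total for c in causes}

# A flat pixel with a building footprint: landslide prior ~0 (not steep),
# so the detected change gets attributed mostly elsewhere.
priors = {"landslide": 0.001, "liquefaction": 0.10, "building_damage": 0.15}
p_change = {"landslide": 0.9, "liquefaction": 0.7, "building_damage": 0.8}
for cause, p in attribute_change(priors, p_change).items():
    print(f"{cause}: {p:.2f}")
```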
ZIERLER: You alluded to it, but if I can ask more directly, with these new capabilities in real-time shaking analysis, best-case scenario, what do those loss models that come as a result allow you to do, both in terms of preparation as well as the basic science?
WALD: Our basic mission is to reduce losses due to earthquakes. It's basically loss or risk reduction. One of the attributes of what I'm doing is the real-time sense, and that's an important component. Everything you use to respond to an earthquake can be used to plan for an earthquake. Let's do the former first. Once you have a shake map, and you generate, say, a PAGER loss model and Ground Failure models, there's a variety of different use sectors, and that ranges from the general public and the media, who just want a general sense of what happened ("There was an earthquake in Afghanistan an hour or two ago, we're estimating 1,000 fatalities. This is a big problem."), on down the line. At the same time that's out publicly, that information is going to international aid agencies. It's going to USAID, the United States Agency for International Development, which also calls up the Los Angeles and Fairfax, Virginia heavy USAR teams for international response, if it's asked for by the country.
You have aid agencies deciding what resources need to be sent, what they're going to do monetarily and getting their contacts in-country to get ground-truth assessments and decide what resources they need to put out, and those resources depend on the area impacted, the nature of the impact, and how many people are affected. Those are the end-members, the general public and responsible agencies. At the same time, we have many different use sectors in the actual response and emergency response sector. Urban Search and Rescue teams. The governments of countries around the world get this information. It's all available not only online but through feeds and notifications we send out. There's a whole range of different uses and levels of situational awareness or actual response orientation. One of the latest developments in the use of these products has been in the financial sector. There have always been very important decisions made on insurance and reinsurance based on the products we make, but there are new financial instruments out there called catastrophe bonds.
CAT bonds are triggered by our ShakeMap, magnitude and location, and PAGER results. CAT bonds are effectively an insurance product that a country, municipality, city, or urban area can acquire, and if something bad happens, like the earthquake you were dreading, you can actually get an immediate payout within days of an earthquake without having to know what actually happened, just that the conditions you were insuring against were met. There's a CAT bond for around $1 billion in Tokyo that is based on having a magnitude-7 earthquake within a box around Tokyo and shallower than a certain depth. If that happens, Tokyo will get that payout. There are industries, utilities, and companies that have these kinds of bonds insuring against the bad happening. We're the independent information broker who says, "The bad happened. This is the magnitude and location the USGS has. Here's the shaking map. Does that trigger the payout you were insured for?" These products find a life of their own. ShakeCast is very critical for lifelines and other parties responsible for getting us back on our feet after an earthquake. We look at the social impact, but if the roads aren't available, we're not going to have a recovery. We can't reopen businesses, can't communicate, can't do transportation and response.
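A parametric trigger of the kind described can be expressed in a few lines. The box, magnitude, and depth values below are invented for illustration, not the terms of any actual bond.

```python
# Sketch of a parametric catastrophe-bond trigger: payout if the reported
# event meets pre-agreed conditions. All numeric terms here are hypothetical.

TOKYO_BOX = {"lon": (138.5, 141.0), "lat": (34.5, 36.5)}  # hypothetical box

def cat_bond_triggered(magnitude, lon, lat, depth_km,
                       min_mag=7.0, max_depth_km=70.0, box=TOKYO_BOX):
    """True if the reported event satisfies the bond's parametric terms."""
    in_box = (box["lon"][0] <= lon <= box["lon"][1]
              and box["lat"][0] <= lat <= box["lat"][1])
    return in_box and magnitude >= min_mag and depth_km <= max_depth_km

print(cat_bond_triggered(7.2, 139.7, 35.7, 40.0))   # True: shallow, in box
print(cat_bond_triggered(7.2, 139.7, 35.7, 300.0))  # False: too deep
```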
Every utility that uses this information is also able to respond better. I said that these tools can also be used for planning and mitigation. ShakeMap is a very nice way of portraying what an earthquake results in. It's the shaking distribution over a large area. "Severity's really high here where there are oranges and reds." And it's intuitively useful for understanding what might happen in a future earthquake. We do a lot of planning exercises, which we call scenario earthquakes, where we'll look at the southern San Andreas, which hasn't ruptured since 1857, and say, "What if that happened again?" This is the basis of the Great ShakeOut. We generate the shaking from a model of the earthquake, run that through ShakeMap, and portray the shaking so that people can plan for the inevitable, or some consequence like it. We have a lot of scenarios developed that can be used as ShakeMap scenarios, but we can also run the scenarios through PAGER, ShakeCast, and other tools like Ground Failure to show people what would happen to their infrastructure or the population in general if this, that, or the other earthquake happened.
It becomes a tool for, one, response, but two, "Here's what you could see in the future. What are you going to do? Is there a change in your strategy? Are you going to mitigate, try to improve weak links?" That's a very important role this plays. I always argue that if you're going to expect a shake map after a major earthquake, the best way to understand how to use that shake map is to plan ahead. You don't know when you're going to get earthquake information. If you use the same tools you'll have after an earthquake in your planning exercises, you'll be in good shape after an earthquake to rapidly absorb that information and make decisions appropriately.
Computational Advances
ZIERLER: All of these capabilities and research programs, it's obvious this is an enormous amount of data you're working with. What have been some of the computational advances that have allowed you to make sense of all the data, from simulation, to AI, to more human resources poring over the material? How do you keep track of it all?
WALD: It's difficult. The systems are actually fairly simple, and the amount of data flow is not extraordinary. The biggest challenge is not really the innovation, it's the day-to-day maintenance, operational responsibilities, and response capabilities that are fundamentally mundane. We have to be ready 24/7 at the National Earthquake Information Center to generate these products, and that means very hardened systems, where you don't have scientists writing software that only works on their laptop in some hacked-up code. These things need to be generated in a very operational sense. We have very good programmers who take the algorithms from us and turn them into operational code with redundant systems and hardened environments. The data flow around the world is substantial. We receive data from a lot of seismic stations in real time in the basement of the National Earthquake Information Center in Golden, Colorado, as well as at Caltech, Berkeley, and the USGS in Menlo Park. The data flow is one challenge. The robustness of the data flow is a big challenge.
We're not a military operation. We have commercial internet and phone lines as well as satellite and other flavors of communications, so acquiring data is a big challenge. From a ShakeMap perspective, the actual calculations use only the peak shaking at each station. We could have hundreds of stations on a map, but it's only parametric values, so that's straightforward. The calculation that infers shaking everywhere else can be very expensive, so that's limited, and there's a tradeoff between how fine we make that resampling mesh and how quickly we get it out. Not only how quickly we get it out, but how fast people can then download it. We want to provide the grid to people in a form that's less than 50 megabytes so they can just grab it electronically. There's always a tradeoff between speed and efficiency in terms of what people can make use of. Some of the other calculations can be expensive, but we tend to develop systems that are empirical. This is an important story and lesson. You can simulate ground motions in a completely mechanistic, mechanical, numerical model of the earth, a 3D structure, propagate the fault rupture through that 3D cube of cells, and generate estimated shaking at any location.
But it's computationally very expensive and slow, and doing it in real time would be not only cost-prohibitive, but the current state of the art is that the accuracy of those results is not as good as what you can do with empirical models of what's happened in the past. We call those ground-motion prediction equations. Simply put, given a magnitude, a distance, and a soil property, they allow us to predict the shaking at any distance from the fault. That's very cheap in comparison to 3D numerical simulations. And it's the state of the art, unfortunately. We can do that for all frequencies, and we can do it very quickly. The challenging computation is to take these predictions on a grid and infer the shaking at each point, which takes a combination of the prediction, the seismic station recording the acceleration of the ground, and the Did You Feel It? report, all at different locations, resamples those, and weights them accordingly. When you care about the intensity of the shaking, what people would experience, the most accurate report is the one that came in from somebody who experienced it. [Laugh] If you have the shaking recorded at the location, you can say, "I know exactly what the acceleration is. What did you feel?" "I don't know, I have to infer that."
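In their simplest generic form, ground-motion prediction equations look like the sketch below: a magnitude term, a distance-decay term, and a site term. The coefficients are invented for illustration, not from any published model.

```python
import math

# A generic ground-motion prediction equation, as Wald describes: predict
# shaking from magnitude, distance, and a soil property. Coefficients are
# hypothetical; real GMPEs are regressed from large strong-motion datasets.

def gmpe_ln_pga(mag, rupture_dist_km, vs30):
    c0, c1, c2, c3 = -4.5, 1.0, -1.2, -0.5   # invented coefficients
    site = c3 * math.log(vs30 / 760.0)        # softer soil (lower Vs30) amplifies
    return c0 + c1 * mag + c2 * math.log(rupture_dist_km + 10.0) + site

for dist in (5, 20, 80):
    pga_rock = math.exp(gmpe_ln_pga(6.7, dist, 760))
    pga_soil = math.exp(gmpe_ln_pga(6.7, dist, 250))
    print(f"{dist:>3} km: rock ~{pga_rock:.3f} g, soft soil ~{pga_soil:.3f} g")
```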
If you're interested in the response of a building, you want to know the acceleration of the ground. If you're interested in the human element, you want to know what people experienced. If you want to know if the chimney is broken, you can infer that from the shaking and what you know about the building, or you can ask the human if the chimney broke. [Laugh] What we do in ShakeMap, the layer of acceleration is highly dependent on the seismic instruments, and the layer of intensity's highly dependent on the Did You Feel It? responses. And there's some tradeoff between the two that we map together that allows us to get the best possible layer for each of these different intensity measures. Getting to your question about AI and higher-level calculations, those inferences can be pretty expensive. We have a number of different layers we're looking at at the same time, the soil map, the population map, the vulnerability of each country, so those get to be effectively just expensive grid calculations.
But those can be done in the cloud and on fast machines fairly readily, in the course of a minute or two. With ShakeMap, you have effectively minutes to get this generated and put out there. We're looking at getting a magnitude and epicenter within 5 to 15 minutes around the world, a minute later a ShakeMap, and a couple of minutes later a PAGER run. Same thing with ShakeCast, a couple of minutes after the ShakeMap. ShakeMap is the fundamental input to PAGER, ShakeCast, and the Ground Failure products. It's the hazard layer needed for those, so it always goes in that order. Some of the things we were talking about, updating these models on the fly, do take machine learning and updating processes that are more interesting in terms of the computation itself, but we're not limited by current computing speeds, we're typically limited by data. We need more data. If we had more sensors everywhere, which is another story in its own right, we'd have more accurate maps and more accurate loss assessments. That could be handled computationally; it's just that we're lacking the data flow from a lot of places around the world.
ZIERLER: One more question before we go back and develop your personal narrative that gets you to Caltech. All of this research obviously is of extreme interest to the general public, particularly those who are vulnerable to earthquakes. What has been your involvement in and what have you learned about the art of public engagement and science communication, perhaps particularly about managing expectations for what seismology can and cannot do?
WALD: I initially did a lot more direct interaction with the public when I was at USGS on the Caltech campus, in part because you're in earthquake country, and you have earthquakes, and you have the opportunity to interact with the media and through other venues, like the products we put out through the internet. My focus has been more geared towards, let's say, professional users of our products over time. A lot of what I've done has been really working backwards from what people like the California Department of Transportation need and figuring out the missing science to make that happen. Going to meetings, to workshops, to forums where these utility operators, government officials, government agencies are available to chat with, you learn a lot about what you know they need and what they say they need, and that combination is something we reconcile all the time.
We can't produce what everyone wants, we can only produce what's scientifically legitimate to share. And that becomes a really interesting problem in communication. All along the way, we've had products that are useful but uncertain, and we've had to decide when these things are sufficiently beneficial that the uncertainty is just something inherent to the process that people need to understand. Communicating uncertainty has always been a challenge, but we've taken it on in some serious ways, and we've tried to be honest about what we can and can't do. The PAGER system was sort of a really bold step forward because nobody really was out there publicly estimating fatalities after an earthquake. It's a pretty risky business because you can be wrong. Uncertainty means you can be wrong. [Laugh] There's a distribution of possibilities, and we try to give our best estimate and let people know what the uncertainty is.
With PAGER, we did that a couple of ways. One, we estimate the median number of fatalities. But we never share that number. It's never going to be right. It's simply the middle of the model. Instead, we share an alert level that encompasses the middle of the model. We have green, yellow, orange, and red alert levels. Roughly, green is zero expected fatalities, and the other levels correspond to fatality ranges of 1 to 100, 100 to 1,000, and 1,000-plus. Those are pretty wide ranges, but they're very useful ranges. If the alert level's green, we don't expect fatalities. There could be a few fatalities, but there aren't going to be thousands of fatalities. Likewise, if we predict 1,000 fatalities and there are hundreds, it's not a huge loss in terms of information flow because people will be geared up to do what they need to do. We portray those uncertainties not by putting out the exact value, which people would focus on and say, "It wasn't that, it was this," but by putting out the ranges that are possible and the probability of being in adjacent ranges.
This is the whole nature of this one-page product that has an alert level and a histogram that shows the probability of being in adjacent loss levels. We do that for financial losses as well as fatalities. It's still one that haunts me because we can be wrong, and we have been wrong. And we have another approach for solving that problem, which is to update these things as soon as we get more information. The first magnitude and location are uncertain. If you wait another half hour, the magnitude and location tend to stabilize. Another hour or two, and you might have a sense of what fault ruptured. Each time we get more, or more accurate, information, we update the products, and we re-notify if necessary, if things change significantly. But people who use these products regularly know they're uncertain and know that they need to check back and get the updates as time goes on before they put boots on the ground or take serious action. They also know there's ground-truth information they should be pulling in at the same time they're working with our model, so they don't go in blindly, but have as much information flow from other sources as possible.
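The alert-level logic can be sketched by placing a lognormal uncertainty around the median loss estimate and integrating the probability mass in each bin; that is exactly what the one-page histogram conveys. The sigma and bin edges below are illustrative.

```python
import math
from statistics import NormalDist

# Sketch of turning an uncertain fatality estimate into PAGER-style alert
# probabilities: lognormal loss distribution, probability mass per alert bin.

BINS = [("green", 0, 1), ("yellow", 1, 100),
        ("orange", 100, 1000), ("red", 1000, math.inf)]

def alert_probabilities(median_fatalities, sigma_ln=1.0):
    n = NormalDist(math.log(max(median_fatalities, 0.1)), sigma_ln)
    probs = {}
    for name, lo, hi in BINS:
        p_lo = n.cdf(math.log(lo)) if lo > 0 else 0.0
        p_hi = n.cdf(math.log(hi)) if hi < math.inf else 1.0
        probs[name] = p_hi - p_lo
    return probs

# A median estimate of 500 fatalities: mostly orange, with real probability
# of yellow or red -- which is why the alert level, not the median, is shared.
for name, p in alert_probabilities(500).items():
    print(f"{name}: {p:.2f}")
```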
ZIERLER: Let's go back and set the stage to how you get to Caltech. As an undergraduate at St. Lawrence University, were you always, at that early stage, interested in earthquakes and seismology?
WALD: I was not. I started in physics. I was going to do a 3-2 engineering program. And I got a ride home in my sophomore year from a guy named Peter de Menocal, who's now a geologist at Lamont-Doherty, and he gave me a tour of the geology of the Adirondacks during a seven-hour drive back to Rye, New York. I signed up for geology courses when I got back. And ultimately, taking physics and geology, the natural course of evolution is geophysics, so that's what I did for my graduate degree. But at St. Lawrence, I did feel my first earthquake. It was 1983, the Goodnow, New York earthquake. I think it was maybe 5.3. I definitely had an inkling of what Mother Nature was capable of there. But I really got interested in geophysics, and then not until I got to the University of Arizona did I get interested in earthquakes. That was a completely fortuitous experience: my advisor, who accepted me into the program at Arizona, was Terry Wallace, who had been a Don Helmberger PhD student just a year before.
He was fresh out of Caltech, and at the University of Arizona, did I hear Caltech stories, oh my gosh. I got very interested in Caltech over the two years at Arizona, learning about earthquake seismology from Terry. At the end of my master's degree, I secured a job in Pasadena at a consulting company, Woodward-Clyde Consultants, which was ultimately taken over by AECOM and many other companies over the years, and which, coincidentally, was founded by Don Helmberger and David Harkrider from the Seismo Lab. I worked with people at Woodward-Clyde for two years and got to know Don Helmberger pretty well, mostly from playing football with Don. [Laugh] Actually, I just got this yesterday, the Don Helmberger memorial issue of Earthquake Science, edited by one of his students. I contributed to that memorial issue. Don and I not only worked on science at Woodward-Clyde together, but we played football a lot. When I applied to graduate school for a PhD after working in industry for two years, I only applied to Caltech. I really liked Pasadena. By that time, my future wife had moved to Pasadena to work at the USGS ahead of me, and I applied only to Caltech to work with Don, and that was the best move I've made on the academic front.
ZIERLER: Your master's in geophysics from the University of Arizona, did you see that as a stepping stone to Caltech? In other words, was Caltech always the dream, but you needed a certain amount of preparation before getting there?
WALD: I thought a master's was pretty much a complete story when I applied for it. That was more than I was initially intending to do, but once you're in that program, you get a sense of the excitement Terry had for academic pursuits, and you see what Terry had become with his PhD from Caltech. Terry actually ended up being the Director of Los Alamos National Lab after his faculty tenure at Arizona. Once I got a sense of the academic side and the PhD program at Caltech, I got much more interested in it. But I did go to work first. I didn't go directly to an academic program. And that also enlightened me to the possibility of what could be done at a PhD level. I then realized I might have the chops to do that. To be honest, I really think I got into Caltech on a sports scholarship through Don Helmberger's football league. [Laugh] I'll never actually know the truth. I don't know if it was for my ability to catch a football he threw or my mediocre grades. But luckily, I survived.
ZIERLER: Was the program at Arizona a terminal master's, or could you have stayed on for the PhD if you'd wanted to?
WALD: I could've, and I had some colleagues who got their PhDs there. I applied for the master's degree and definitely was not in the mode of pursuing a PhD at that time. It really was meeting people affiliated with Caltech that opened my eyes to that possibility.
ZIERLER: You mentioned you went to work after the master's. What opportunities were you looking for? What was compelling to you at that point?
WALD: A job. [Laugh] It was very fortunate that with my master's in earthquake seismology, there was a position in California looking at seismic hazards to the Diablo Canyon Nuclear Power Plant. This consulting group was really an earthquake seismology group; Paul Somerville was the principal there, an engineering seismologist who was fun to work with and who had great problems to work on. And they were always academic problems. We had to solve problems that were much more in line with a PhD thesis than a typical geotechnical consulting evaluation. And that's why it was connected to Caltech through Don Helmberger and Dave Harkrider. It was really much more academic than your typical job. But I was very fortunate to get that job. I didn't really know what I was getting into, but I had the background to fit right in, and things went from there.
ZIERLER: What year did you start the graduate program at Caltech?
WALD: I finished at Arizona in '86, and after two years at Woodward-Clyde, I started, in the fall of '88, at Caltech. I finished in early '93. I went to the USGS as a National Research Council post-doc and then stayed on. I was only in the Caltech program for three and a half to four years in total, but I came in with a master's.
ZIERLER: Between the master's and the industry experience, how well-prepared did you feel relative to your classmates?
WALD: I felt very well-prepared in doing research, but I wasn't as prepared from a basic core physics and math background as some of my peers. I heavily relied on Kuo-Fong Ma to get me through Physics 106. [Laugh] But it turned out there are different competencies, and you have to have a core background of physics, math, and the core tools. Geology was very helpful in my program. But it's also about innovation and finding the right problems to work on, and that's where I thought Caltech was spectacular. When I was there, we had coffee hour every day. It was Don Helmberger, Tom Heaton, Hiroo Kanamori, Don Anderson, Clarence Allen. And unlike most students, I went every day. I just loved it. And I did it for 15 years because I went from Caltech to the USGS, and I would just come back across the street for coffee hour. What I learned in 15 years is just remarkable. I was at Caltech four years, took classes, did my research, but coffee hour was where you learned how to pick problems and how to avoid problems–how to avoid a problem that Hiroo Kanamori solved in 1962–and find which ones are going to be either really interesting or really valuable, and I tended to go towards the ones that were really valuable. Doing things that were pragmatic was just where I was at. But the ability to just bring an idea down to coffee hour, and after a while, having the bravery to throw it out there, it'd either be shot down or embraced, and to not spend two years on the wrong problem is a huge asset in science.
The Seismo Lab and the New Seismic Network
ZIERLER: Once you got the lay of the land, circa late 80s, early 90s, what were some of the big ideas and debates happening around the Seismo Lab?
WALD: The innovations were, in part, coming from the new seismic network. That had a couple of different ramifications. One was better understanding the earthquake source. My PhD thesis was mapping out the temporal and spatial slip distribution on a fault. We would take the seismograms that were recorded locally, the strong-motion instruments, as well as geodesy, and try to formally invert to get the slip distribution, how it happened, and the timing on the fault. And there were some basic questions about how earthquakes slip. There was a debate between a crack model and a pulse model that Tom Heaton was advocating. The question was whether the fault slips all at once, like a crack, or more like a Persian rug, where you make a bulge at one end of the carpet and push that pulse to the other end, and the whole carpet's moved over by the time you get there. We were able to start inferring that. My PhD thesis was to take some of these new instruments and some of the new datasets–geodesy, including GPS, was one of them–and combine them to better infer what's happening on these faults.
I studied the Landers earthquake, the Northridge earthquake, the Loma Prieta earthquake, and ultimately, the 1906 earthquake, to try to infer which of these modes was happening. I don't think the jury's still out; I think the slip-pulse model Tom Heaton offered was pretty much accepted and realized both in the lab and in real-earth measurements. The other issues were related to ground shaking, and the most exciting to me was that in the 1971 San Fernando earthquake, there was a 1.25 g acceleration record that was routinely dismissed as not being physically possible. There were 100 explanations for why that happened. Engineers did not want to build for over a g in terms of accelerations. As we put in more instruments and started getting more of these measurements, we had to try to understand them better–how to reproduce them and how they're generated.
And that does come back to the earthquake source challenge–how the radiation works in terms of the slip pulse, but also how the energy propagates through the earth. And that just became fascinating. There was an earthquake in Nahanni in Western Canada that produced a recording of over 1.5 g, and immediately, the joke was, "Let's explain that away with a moose kick. It couldn't have been real, right?" [Laugh] What we were really trying to do was understand that source and how it could've generated over a g of acceleration. And over time, we replicated these things, and it became kind of standard operating procedure. Since then, we've gotten hundreds of recordings over a g, especially with the dense network in Japan and many large earthquakes. Better understanding the source, how ground motions could get so large, and how predictable they could be–those were really the things at the time.
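As a concrete illustration of the crack-versus-pulse distinction described above, here is a minimal numerical sketch; the fault length, rupture speed, and rise time are made-up parameters for illustration, not measurements from any earthquake discussed here.

```python
import numpy as np

# Toy comparison of the two rupture styles: a "crack," where each point keeps
# slipping until the whole rupture is over, vs. a narrow "slip pulse," where
# each point slips only briefly after the rupture front passes (the bulge in
# the carpet). All parameter values are illustrative.
L, v_r, t_rise = 50.0, 2.8, 1.0   # fault length (km), rupture speed (km/s), rise time (s)
x = np.linspace(0.0, L, 201)      # positions along strike (km)
t = np.linspace(0.0, 30.0, 301)   # time since rupture onset (s)
X, T = np.meshgrid(x, t)          # shape: (time, position)

front = X / v_r                                 # rupture-front arrival time at each point
pulse = (T >= front) & (T < front + t_rise)     # slipping only briefly after the front
crack = (T >= front) & (T < L / v_r + t_rise)   # slipping until the rupture ends

# Fraction of the fault slipping at any instant: a thin band for the pulse,
# eventually the entire fault for the crack.
print("peak fraction slipping, pulse:", pulse.mean(axis=1).max())  # ~0.06
print("peak fraction slipping, crack:", crack.mean(axis=1).max())  # 1.0
```

The shaking implications follow directly: the pulse concentrates slip into a short rise time at each point, which changes the frequency content of the radiated ground motion relative to a whole-fault crack.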
ZIERLER: What was the process of determining who your thesis advisor would be?
WALD: I had it easy because I decided to work with Don from the get-go. The bigger problem was what problem to work on, but I had some inklings based on what I'd been working on with Terry, understanding the source as a point, and at Woodward-Clyde, understanding the source of an earthquake as a fault with large dimensions. As soon as I got to Pasadena, Mother Nature really informed my thesis. I got to Pasadena a week after the North Palm Springs earthquake in '86, and I was there for Whittier in '87; '89 was Loma Prieta, '91 was Sierra Madre, '92 was Landers, and so on, and so forth. At Landers, I basically said to Don, "That's it. I'm out of here before there's another earthquake." Because each of these turned out to be a chapter in my thesis. After Landers, I was moving across the street, and I ultimately did study that earthquake at the USGS. But I was lucky enough to have Mother Nature outline the chapters of my thesis. [Laugh]
And we were basically doing source studies of these different earthquakes, but developing the tools to do that in a way that had never been done before. One of the innovations in my thesis was to combine seismic data and geodetic data at the same time. You have a high-frequency view of the source from the ground-motion data, and you have a static view of the source from the geodetic data. The geodetic data only knows that between the time the earthquake started and ended, a point had this displacement. The seismic data doesn't have the geodetic data's resolution of the slip distribution, but it tells you how the slip happened in time. When you combine the two, the geodetic data constrains where the seismic data can put the slip, while the seismic data offers better resolution of what happened in time. And this whole slip-pulse question made it really important to know how fast the rupture was propagating and over what dimension it was slipping at any one time. That relates back to what we call the rise time, which is the amount of time it takes for a piece of the fault to slip, and that imaging allowed us to further understand the physics of the earthquake and much better estimate the shaking. A crack-like model gives you very different shaking than the pulse does, and that's something we could reconcile.
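As a rough sketch of the kind of joint inversion described above–not Wald's actual codes–one can stack the seismic and geodetic systems into a single regularized least-squares problem. Everything here is synthetic: the Green's-function matrices `G_seis` and `G_geod` are random stand-ins for quantities that would normally be computed from an earth model.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic stand-ins for precomputed Green's functions relating slip on
# n_patches fault patches to seismic waveforms and geodetic offsets.
rng = np.random.default_rng(0)
n_patches, n_seis, n_geod = 20, 200, 30
G_seis = rng.normal(size=(n_seis, n_patches))
G_geod = rng.normal(size=(n_geod, n_patches))
true_slip = np.abs(rng.normal(size=n_patches))
d_seis = G_seis @ true_slip + 0.05 * rng.normal(size=n_seis)  # noisy waveforms
d_geod = G_geod @ true_slip + 0.01 * rng.normal(size=n_geod)  # noisy offsets

w = 2.0    # relative weight on the geodetic misfit (a tuning choice)
lam = 0.1  # smoothing weight penalizing rough slip distributions

# First-difference operator: discourages wildly varying slip between
# neighboring patches.
D = np.diff(np.eye(n_patches), axis=0)

# Stack both datasets and the regularization into one system; nonnegative
# least squares keeps slip in a single rake direction.
A = np.vstack([G_seis, w * G_geod, lam * D])
b = np.concatenate([d_seis, w * d_geod, np.zeros(D.shape[0])])
slip, _ = nnls(A, b)
print(np.round(slip, 2))
```

The geodetic rows pin down where the slip occurred; in a real time-dependent inversion, the seismic rows would be windowed in time so the solution also recovers when each patch slipped.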
ZIERLER: As you mentioned, each of these earthquakes formed a chapter of your thesis. With that in mind, and thinking about your thesis as a whole, what aspects of each of these earthquakes had similarities, and where were there really unique aspects to each of them?
WALD: The similarities all came together in the final chapter of my thesis, which was to understand the rupture processes that carry through from event to event and are not unique to a specific earthquake rupture. One of those features was this narrow, self-healing pulse. And that was probably the most interesting thing to resolve. When I first started combining the geodetic data with the seismic data, it allowed us to image that a lot better. But each earthquake is surprising in its own way. The reality was that once you image these different earthquakes, you find the similarities–in addition to this pulse of slip, there are limits to how fast slip can happen and to the rupture velocity–and those things carry over and have physical ramifications for how earthquake mechanics work. But there are components to each rupture that are unique and unpredictable, and that became really obvious.
You can plan for a scenario earthquake over a certain part of the fault, but whether it starts from here, there, or in the middle, or whether it slips in a uniform fashion or stops and starts–which we resolved in a number of different earthquakes–these things are not predictable. Nor is whether it jumps to another fault, or whether it starts small, stays small, and never grows into a big earthquake. There were a lot of things that were deterministic and could be explained with physical models, but there were a lot of things that were also stochastic, that were just not predictable with our current knowledge of the underground. It simply became a statistical exercise. "We can infer maybe this segment of fault will rupture. If it does, here are the possible outcomes you'll have." And that gives you a range of shaking levels at each location, a range of possible magnitudes, and a range of times. You'd have no idea when it was going to happen. There are certain things you can constrain and certain things Mother Nature keeps secret. [Laugh]
ZIERLER: Was field research a big part of your thesis?
WALD: It wasn't. I got a chance to go out and do aftershock deployments after Northridge and Landers. I got out in the field as part of classes and as part of reconnaissance. And later on in my career, I got a chance to go out after deadly earthquakes to look at and understand the engineering components. But I was never a field person. My world was in front of the computer, doing modeling, rather than getting samples out in the field. The field came back to me through the data that flowed from the field work. I helped others with field work–geodetic monuments, GPS, active-source seismology–but in general, it wasn't my own work I was helping with.
ZIERLER: That is to say, generationally, for the people who were at the Seismo Lab in the 50s and 60s, the Lab was really a center of data, whereas the decentralization of data collection in the field was fully mature by the time you got there?
WALD: For me, that's a very good way to describe it. There were others who were still doing unique field work. Those doing geodesy in the 90s were doing strictly field work; the GPS monuments were actually deployed by hand. Then, my colleagues in the geology department were doing trenching at faults and geotechnical investigations, putting instruments downhole, and active-source monitoring, like Rob Clayton still does. That was all in the background. Those ingredients were always there, but I was more involved with the data that came back from the field to the lab.
ZIERLER: When did you know you had enough to defend?
WALD: In some ways, you look around you and see what other people are producing. One of the nice things about a Caltech thesis, or a modern PhD thesis in our field, is that it can be a series of publications. I had a publication for each earthquake in a good peer-reviewed journal. But each earthquake had a different focus and a different new tool that was brought into it. The way I see it, Don and I became more like peers, and at some point, I could tell him when I thought one of his ideas was crazy and when one was a nice path to go down. [Laugh] At some point, it was like in Kung Fu, where the disciple snatches the pebble from the master's hand, and then it's time for you to leave.
But I had all those papers together, so it was easy to wrap them into a thesis because they did have a thematic connection. By then, I was looking at the next step; I had applied for a post-doc, and that was successful. I think we all agreed it was time to move on. But it was just fun working with Don, where he would throw out 100 ideas, and you couldn't do 100 ideas. The whole goal towards the end of my PhD was to figure out which of those nuggets to go with and which to say no to because they were just more rabbit holes. [Laugh] I think that was the fun part. And I do the same thing now that Don did. I throw out so many ideas, and I have so many more unpublished papers than published papers. Between Don and the coffee hour, you try to figure out which rabbit holes to avoid and which to dig into. [Laugh]
ZIERLER: Besides Don, who else was on your thesis committee?
WALD: Don Helmberger, Hiroo Kanamori, and Tom Heaton. Hiroo and Tom were as important as Don was. Don was my formal advisor, but I worked with some of Tom Heaton's codes, and I worked with Hiroo on just understanding earthquakes and earthquake data. All three of them made great contributions. I also had Don Anderson on my committee, giving me more rounded, whole-earth geophysics and seismology input. But ultimately, it was those three I really spent my time with, both on my PhD and at coffee hour.
ZIERLER: After you defended, did you consider an academic track at that point? Were post-doc appointments that ultimately would've led to faculty positions something you were considering?
WALD: Yes, but I didn't want to go anywhere. I applied for a position at MIT, and I remember interviewing with Tom Jordan, and with the future Director of the USGS, Marcia McNutt, at Woods Hole, but I would say I wasn't wholeheartedly interested in an immediate academic position. There was so much fun stuff going on at Caltech and in the evolution of the seismic network that the opportunities there were just all over. I could've gone back to Woodward-Clyde, I could've stayed on as a post-doc at Caltech, but the opportunity to do an NRC post-doc with Tom Heaton across the street was just a perfect match for what I thought was next for me.
ZIERLER: That was more short-term thinking, though. You weren't thinking at that point this would be a lifelong career at the USGS, you just wanted to stay nearby because that's where the action was.
WALD: Yeah. At the time, my wife had already been working at the USGS for four years, so we had to get a nepotism waiver (as if she could've hired me). But I didn't miss out on the academic environment at Caltech by being at the USGS. I sort of had all good things. I was working where my wife was, we were in Pasadena, very happy about being there, and we had Caltech. We played softball with the Love Waves, and we had just a lot of good community connections to Caltech. I just couldn't think of a more enriching environment. Later, of course, we had kids and were going through the evolution of raising kids in Southern California, with schools and all, and my wife and I both had the opportunity to get transferred to the National Earthquake Information Center in Golden, Colorado by a colleague who'd just taken over the office.
The opportunity was to move to Colorado, live in the mountains, and continue doing what we were doing. I didn't want to go. I just had such a good, ingrained experience with the USGS and Caltech, with an adjunct faculty position and all my friends in the Seismo Lab. But my wife had grown up in Texas and spent her summers in Colorado, and she convinced me to at least come visit for a couple of weeks. And that was probably the second decision she made that was absolutely right. [Laugh] I just couldn't be happier than being here (in Golden) and having a lot of the same opportunities at the National Earthquake Information Center. But I still miss Caltech, and I love going back to Pasadena.
The Importance of Collaborations
ZIERLER: For the last part of our talk, I'd like to ask a few broadly retrospective questions, then we'll end looking to the future. First, it's obvious, but just to hear you narrate it yourself, what has stayed with you that has informed your career and approach to the research that you really picked up at the Seismo Lab?
WALD: Never do anything on your own. There are always people smarter than you, or who have thought about problems in a different way, or who have the tools you'd otherwise spend the next five years developing. Why put your head in the sand and try to repeat something that's been done? If you're trying to solve a problem, use the tools available and the people who think about such problems. I haven't done a single project where I just disappeared into a hole and did it on my own. I love talking to people. The coffee hour was sort of an education in itself–it was never just that core group, it was always the post-docs, and graduate students, and visitors coming through the Seismo Lab at the time. Those relationships are still with me. You have a question on something you can't figure out, you have someone to talk to about it. I learned a lot over 30 years. I can guide people, advise people, and push them in a direction I think is going to be fruitful. But it's always so nice to be able to bring in other people in a collaborative way. I think that's probably the most useful lesson besides the hardcore coding, physics, geophysics, and everything else I learned at Caltech. It's the human element.
ZIERLER: Because so much of your research is so applied, it's for the real world, where can you point to a project or collaboration where you can feel specific satisfaction that it's helping people, either saving lives, saving infrastructure, or just making people feel more secure in their decisions?
WALD: I've put a summary into a single slide of the earthquake information system that the USGS puts out. And when I say USGS, the regional networks that Caltech, Berkeley, and others run all play into these products. The data that's collected, the contributions to ShakeMap, and all the other pieces are very important. But USGS is the overarching entity that runs these products and delivers them. ShakeMap itself, to me, is the most astonishing one. I've got a picture of Arnold Schwarzenegger as governor pointing to the ShakeMap for an earthquake in Southern California and trying to explain it. I've got CNN with pictures of ShakeMap describing where things have happened after a major earthquake. My wife does a lot of education outreach for the USGS, and she gets literally thousands of emails a year that she goes through, from everyone from the general public to professionals, and I get the same.
I interact with so many people who use these products in a variety of different ways to plan for or mitigate their earthquake problem, and the feedback from that is just wonderful. To see it on TV is one thing, but to hear from people who use it and say, "Wow, this is exactly what I needed," that's what counts, and that's where it all comes home and gives me the strength to do this for 12 hours a day. [Laugh] And to be on call pretty much 24/7 for earthquakes around the globe. It's exciting, it's rewarding, and I think it's making a difference. ShakeMap runs in probably 20 different countries now with the same software we share. As government employees, everything we do has to be open. The products are sent to specific critical users, but the underlying products also go out to everyone through the web and the feeds we produce.
And every ingredient along the way–the Vs30 maps, this topographic slope proxy that I mentioned–all those things become openly available public products that can be used for all sorts of other mitigation. In contrast, we work with consultants and risk modelers whose business model depends on proprietary data about specific portfolios, engineering aspects of structures, and things like that, which are not in the public domain; we're fortunate to work in a realm that has to be in the public domain. There's no money to be made, there's only usefulness to be made. You aim for different things. You aim for things that are useful, and you want to communicate them in a useful way. And it's a luxury, I think, not to have to turn any of it into a successful business model. But it has brought resources into the USGS, so in that sense, it's been successful.
ZIERLER: To flip that question around, not focusing on satisfaction, but perhaps frustration, earlier in your career, what are some projects or initiatives where there was a lot of optimism, but that turned out to be intractable, either from a bureaucratic, administrative, or even psychological or technical perspective?
WALD: When we first came up with Did You Feel It?, the seismic network was being expanded from Caltech and the USGS to include the California Geological Survey, and basically, we got advisory groups together that consisted mainly of engineers. And some of the engineers did not like Did You Feel It?. It was qualitative in their minds, even though observing something falling off a shelf (or not) is absolutely quantitative and can be measured. But they didn't like that it could potentially replace the need for more instruments. And I didn't find that argument credible. But for a time, we were not able to put links to the Did You Feel It? system on the seismic network webpages, though we were able to put them on our own USGS pages. I think the popularity of Did You Feel It? overwhelmed that concern, and the evolution of the seismic networks succeeded nonetheless. With earthquakes came more funding and more instruments.
That was one sort of bureaucratic and political hurdle. Releasing real-time information that is uncertain is always a challenge, and it takes a lot of convincing. We were fortunate to be able to put things out, show they were useful, and get funding later. We would get a little bit ahead of where we wanted to be, show the utility of a product, and then be able to convince the USGS and other entities to fully support it. But I've always said the idea is 5% of the problem; 95% of it is the implementation. And that's where working for the federal government gets very difficult. You're facing uphill battles on the IT front and in the ability to hire quality programmers and technical experts in a domain where they could easily do much better elsewhere. Resources have always been an issue once you have something that needs to be maintained. People don't understand that today's software is obsolete in a year. In two years, the technology and the platform you're using are going to change.
If you depend on ArcGIS, for example, and it changes, you've got to switch platforms. You have to think ahead and get the resources to stay flexible, and that's always been a challenge, despite the success of the products.
This topographic slope proxy was a really easy way to get a global map of site amplification, but it was very approximate. It was extremely useful because we don't have geotechnical details or geologic maps everywhere. But people who develop geotechnical details or geologic maps were really annoyed that there was such a simple and cheap way of getting a site-amplification map for the planet. There was a lot of pushback, like, "This isn't as accurate as doing it this way." I'd argue, "Do it that way, and we'll incorporate that into our map. But in the meantime, we need to take a shortcut." A lot of what we've done has been with empirical solutions rather than physical solutions to problems.
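The idea of the proxy is easy to sketch: bin the topographic gradient and map each bin to a representative Vs30 (the time-averaged shear-wave velocity in the upper 30 m), on the logic that steep terrain implies stiffer material. The breakpoints and velocities below are illustrative placeholders, not the published coefficients, which also differ between active and stable tectonic regions.

```python
import numpy as np

# Illustrative slope-to-Vs30 lookup in the spirit of the topographic slope
# proxy discussed above. The numbers are placeholders for illustration only:
# steeper terrain is mapped to stiffer (higher-Vs30) site classes.
SLOPE_BREAKS = np.array([1e-4, 2e-3, 6e-3, 0.018, 0.05, 0.10, 0.14])  # gradient (m/m)
VS30_CLASSES = np.array([150, 210, 270, 330, 420, 560, 700, 800])     # Vs30 (m/s)

def vs30_from_slope(slope_mpm):
    """Map topographic slope (m/m) to a proxy Vs30 class (m/s)."""
    idx = np.searchsorted(SLOPE_BREAKS, np.asarray(slope_mpm))
    return VS30_CLASSES[idx]

# Example: a flat sedimentary basin vs. a steep mountain front.
print(vs30_from_slope([5e-5, 0.08]))  # -> [150 560]
```

Lower Vs30 implies softer ground and stronger amplification, so even this crude lookup gives a usable first-order site-amplification layer anywhere a digital elevation model exists.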
And a lot of people–the Southern California Earthquake Center, for instance–are very adamant about employing the physical solution to a problem. For an academic, a complete end-to-end physical model is the important way to understand the problem. Absolutely, you learn more about the problem when you do it with a full physical model. But sometimes you can't use it to predict things because you don't have the ingredients in the physical model to get where you need to get. For high-frequency seismic shaking, you need to know the details of the earth structure better than we know them; you can't do it with a physical model. You can test different things and learn about the processes better than you can with an empirical model, but we've always taken the empirical approach because we're pragmatists. We've got to get there. We want to learn along the way, and we adopt what people learn from the physical models in our empirical approaches, but that's not satisfactory to everyone. Getting an answer that you don't understand but that works is not typically an academic pursuit. [Laugh] But we need an answer. Sometimes these things are pursued as shortcuts, and sometimes they're appreciated as practical solutions. There's a whole range of positions in between, and they're all reasonable opinions about where that should go.
ZIERLER: One last question looking to the future. For however long you want to maintain this schedule, to be involved in all of these areas of research, what's the frontier for you? Where do you want to see progress that's not currently available?
WALD: I'm pretty heavily invested in this updating approach. There's so much data collected after an earthquake, in all sorts of flavors, that nobody can absorb it all because it's disparate, in different formats and flavors. We're looking at ways to incorporate that. The low-hanging fruit is satellite imagery. It's going to get better with time, with NISAR as well as commercial efforts. We know we're going to have high-resolution imagery, and using that to our advantage is, I think, just a really fun problem. At the same time, people are going out there and crowdsourcing–Did You Feel It?, but also analogs in the social media and regular media worlds–where they know what happened at particular locations. Can those things inform the model right away? Not just collecting these disparate bits of information, but allowing them to feed back into models is a cool challenge, and I think it's something that takes coordination, not just scraping the web. It takes coordination of who does what, how it's collected, and what they collect.
If you go to a location and see there's a building down, I want to know what happened to the four buildings around it. Is it one in five buildings that collapsed, or is the one building that collapsed the only one there? Those tell you different things. You've got to have the denominator to know the shaking level. Somebody has to coordinate those kinds of collections in order for them to be useful, and we're working on that particular problem. That's this ground-truth updating of the real-time models. But I just love the new tools being put forth to try to improve the little bits of ingredients along the way. They're the cumbersome, mundane ones. Slow progress in science. You reduce the uncertainty a little bit, but you have to do it, and you have to do the hard work that carries forth. It's this combination of slow progress and innovation that makes things exciting. My current goal is to make sure people around me have the ability to carry these things forward–so training, mentoring, and making sure we have the resources going forward to maintain these systems. I'm not too fearful about that because I think they're in demand. But these different products span different realms of science, everything from geology, to geodesy, to seismology, to engineering, and there aren't that many people playing across those fields. It's been fun to do that, but continuing to bring in people who can do that is one of the challenges.
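That denominator point can be made concrete with a toy calculation: the same single observed collapse implies a very different collapse fraction–and therefore a very different inferred shaking level–depending on how many similar buildings were exposed. The function below is a hypothetical illustration using a flat Beta(1, 1) prior, not any USGS procedure.

```python
from scipy import stats

def collapse_fraction_interval(collapsed, exposed, conf=0.90):
    """Credible interval on the collapse fraction under a Beta(1, 1) prior."""
    posterior = stats.beta(1 + collapsed, 1 + exposed - collapsed)
    return posterior.interval(conf)

# One collapse among 5 buildings vs. one among 500: same numerator,
# very different implied collapse rates once the denominator is known.
print(collapse_fraction_interval(1, 5))    # wide interval, high fraction
print(collapse_fraction_interval(1, 500))  # fraction near zero
```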
ZIERLER: On that note, this has been a terrific conversation, a great historical record for the Caltech Seismo Lab. I'm so happy we connected. Thank you so much.
WALD: I appreciate it.
[END]