Andrew Johnson

Principal Robotics Systems Engineer, Jet Propulsion Laboratory

By David Zierler, Director of the Caltech Heritage Project
March 8, 2023


DAVID ZIERLER: This is David Zierler, Director of the Caltech Heritage Project. It is Wednesday, March 8, 2023. I am delighted to be here with Dr. Andrew Johnson of JPL. Andrew, great to be with you. Thank you for joining me today.

ANDREW JOHNSON: Very glad to be here.

ZIERLER: To start, would you please tell me your title and affiliation within JPL?

JOHNSON: I am a Principal Robotics Systems Engineer, and I am in the Guidance and Control Section, which is in the Autonomous Systems Division, on the Engineering side of Jet Propulsion Lab.

ZIERLER: If you can give me sort of a broader overview of where you fit in overall at JPL within the division as it relates to what JPL is doing, and then what you do as it relates to specific missions or some of the ongoing projects that are happening at JPL.

JOHNSON: JPL is split up into divisions, and my division is really focused on electronics, autonomy, robotics, guidance and control systems. I am a Principal Engineer in that division. There are other divisions that are related to mechanical engineering, navigation, and instruments. I am on the section staff of the Guidance and Control Section. Until recently, I was the Guidance and Control subsystem manager for Mars 2020. That subsystem is one of the subsystems that's in a spacecraft. It's in almost all spacecraft. It does attitude determination, it figures out onboard position estimates, velocity, these kinds of things. My subsystem covered cruise to Mars and also the landing on Mars, as well as the surface operations with the rover and its ability to drive autonomously.

My main technical contribution to that project is similar to all the things that I've done. It was in the landing area. At JPL, we call that entry, descent, and landing. Those are the three phases. First, there's atmospheric entry, where you slow down tremendously using a heat shield. You can do some guidance there to shrink the landing ellipse. Then, there's the parachute phase, where you're drifting on the parachute, wherever it wants to go. And then, finally, there's a propulsive, rocket-powered descent phase that we call landing. My contributions have been, on multiple missions, developing computer vision systems that operate either during landing or during the surface mission. That's my expertise, computer vision. It's a subset of robotics. I went to grad school to get a PhD in robotics, and I've been doing that type of work ever since I got here. Really, my main focus is on landing, sensors and algorithms, to basically improve the science that can be achieved with these missions as well as make them safer and more functional.

ZIERLER: The specialty of computer vision, how far back does that specific field go?

JOHNSON: Well, you need a computer, so I think that probably in my memory, computer vision started in the 60s and 70s. There were two motivations. One was to have vehicles be able to process camera images so that they could drive themselves around. At some point, now, we'll have autonomous cars. That would be the origin of that type of work. There's also the problem of being able to look at some parts that are out there in front of a robot arm, figure out which part is which, and be able to pick it up with the robot arm. Like a human, how do we use our vision system? It's the same. We're trying to get the robots to do that. It started early in the 70s. Computer vision these days takes just incredible amounts of computer processing power, which was not available back then, so the techniques were quite different back then than they are now. In particular, machine learning is a really important part of computer vision these days in its ability to do object recognition, location of objects, and things like that.

ZIERLER: Where does this expertise fit in within the larger give-and-take between science objectives and engineering possibilities? What is an example of what the scientists want to achieve and what you can make possible?

JOHNSON: I can give some specific examples. Scientists in the planetary in situ missions want to do the best science they can, which means they need to go to the places on a planetary surface that are the most scientifically compelling. They use the spacecraft, rover, or lander system to get close to those locations. Mars 2020, which landed the Perseverance Rover, is the first mission for Mars sample return. Perseverance is picking up the samples that will be returned. We don't know exactly where the samples are that are the most likely to have the best scientific content, evidence of life that existed four billion years ago, for example. We need to go to a place that has a diversity of samples, types of rocks, different layers, different ages, and we need to be able to date the samples that are collected. What that means for me as an engineer and a computer-vision person is, we need to get the science instruments to a place that has a lot of terrain, that is very rugged. Past systems were not able to do that because they landed without any vision. They just landed randomly within where the landing system dropped them off.

What we added to Mars 2020 was a vision system that could tell the lander where it was so that it could target a specific spot on the ground. And we used that specific spot to allow the lander to be targeted at a location where there were very hazardous regions, but also some safe regions embedded within the hazardous regions. By knowing where we were, we could figure out a safe place to land, and that allowed the whole mission to pick the Jezero Crater landing site, which was decided to be the best possible landing site for sample return, within the current constraints of how we do landing. But by adding this new capability, we are able to land there as opposed to somewhere very flat. Curiosity, Mars Science Lab went to a landing ellipse in Gale Crater next to Mount Sharp. It couldn't land on Mount Sharp because it's too hazardous.

The engineering supported the science in that it gave them a mission where the samples to be collected and the in situ measurements would be the best they possibly could be. And then, in that mission, there's another computer system that does autonomous driving, and it does it quickly. The past rovers didn't drive very quickly, they were pretty slow and also couldn't drive in very hazardous terrain. We built an onboard computer for the landing application that we repurposed so it could be used for autonomous rover driving. What that means is, you take images–stereo images, two cameras, like your eyes–and you can take those images simultaneously and figure out if there are any rocks sticking up or steep slopes.

Then, the software onboard can plot a path that goes around those rocks, and the computer can do it quickly so the rover never has to stop driving, and it can basically traverse much greater distances. That allows for a diversity of samples. You're not just landing in one spot and having to collect from there. You can land there, collect those samples, then drive some more and get to better places. Right now, Perseverance is driving up the delta to get to some other regions and trying to do that as quickly as possible so it can get back to collecting samples.
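
To make that concrete, here is a minimal, illustrative sketch (not the flight software, and with all thresholds invented) of the two steps described above: flagging cells of a stereo-derived elevation map that are too steep or too rough for the rover, and then planning a short grid path around the flagged cells.

```python
# Illustrative sketch only: given a local elevation map built from a stereo pair,
# flag cells that are too steep or too rough for the rover, then plan a short grid
# path around them. All thresholds are invented for the example.
import numpy as np
from collections import deque

def hazard_mask(elevation, cell_m=0.25, max_slope_deg=20.0, max_step_m=0.3):
    """Mark grid cells whose slope or local relief exceeds the rover's limits."""
    gy, gx = np.gradient(elevation, cell_m)
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    step = np.abs(elevation - np.median(elevation))   # crude "rock sticking up" test
    return (slope_deg > max_slope_deg) | (step > max_step_m)

def plan_path(hazards, start, goal):
    """Breadth-first search over hazard-free cells; returns a list of grid cells."""
    rows, cols = hazards.shape
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            break
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not hazards[nxt] and nxt not in prev):
                prev[nxt] = cell
                frontier.append(nxt)
    if goal not in prev:
        return None                                   # no hazard-free route found
    path, cell = [], goal
    while cell is not None:
        path.append(cell)
        cell = prev[cell]
    return path[::-1]
```

In the real system the map is built continuously while driving and the planner is far more sophisticated, but the flag-then-route-around structure is the same.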

ZIERLER: In the way that there's a duality for every mission–there's doing what's necessary in the here and now, and there's planning for the next one–how do you fit into the planning stages, what we're learning now, and how we might apply that for what's coming in the future?

JOHNSON: Very much so. I'm actually working on the next Mars mission, and that's called Sample Retrieval Lander. It's the one that lands and has the rocket to bring the samples into orbit. I'm also working on Sample Recovery Helicopters, which is a backup for retrieving the samples. What we're doing for the Sample Retrieval Lander is an extension of what we did for Mars 2020. Now, instead of going to isolated spots that are safe within the landing ellipse, but still spread out over many kilometers where you might possibly land, we're doing pinpoint landing. We'll be able to go to a specific targeted location. That will be the location where Perseverance is. We'll land there, and Perseverance will drive up and deposit the samples it's collected into SRL. We've had to add capabilities to do the pinpoint landing.

But one of the reasons we're able to add those capabilities is because when we built Mars 2020, we put things in there that we thought might be needed in the future. For example, our system, the Lander Vision System, takes images during descent to estimate position. That's all you needed for Mars 2020, to figure out your position relative to a map. But we also added to the system the ability to estimate velocity and altitude. These are two separate capabilities, and now on Sample Retrieval Lander, we're actually relying on the Lander Vision System to estimate those and provide them to the spacecraft, whereas for Mars 2020, they weren't required. But we proved them during the Mars 2020 landing, so now they're available at low risk for Sample Retrieval Lander to use.

ZIERLER: To turn that question inside out, when you're planning for a mission, there are all of the contingencies you can think of, but then, of course, there's the improvising once you're actually on Mars. I wonder if you can talk about how you plan for those contingencies and what things you need to make up in real time as conditions change.

JOHNSON: I'll start with landing. We can't do that much when we're landing, but we make sure that we test the system under as many possible conditions as we think might occur, and we also do something called field testing, where we take our sensor systems out and test them on Earth under conditions that are like Mars. We fly over Death Valley, Mojave National Preserve, places like that which look Mars-like. This gives us an opportunity not so much to design how we're going to use it, but just to see what happens, stress it out, try to make it fail. We do all of that first because it has to work the first time. If things go wrong during landing, we do have the ability to adjust a few things. You can try again to estimate position, you can switch to another computer on the spacecraft, those are options that you have.

For the surface mission, there are all kinds of adjustments on the fly. But those decisions are made by humans because every day on Mars, you can get an uplink and a downlink of commands. That's the planning cadence. You get the rover to do something, and then it goes and executes that. And if it couldn't execute it, it stops. It says, "What happened? I need help." Then, the people on Earth figure out what to do in that case. There are a few–maybe just one at this point–opportunistic capabilities in there that, as the rover's driving, can look at the scene and decide how to point some instruments to get an opportunistic measurement of some feature out there that it wasn't planning to image. The autonomous driving capability does have some decision-making in it in that it has to decide how to drive, what path to take. On a very local scale, that's an autonomous decision-making process, to go around those hazards in order to get to a goal.

When the rover drives, onboard decisions are happening all the time very rapidly. That is, I think, the state of the art right now. Certainly, more autonomy is our goal. We take it in steps. Right now, we're just getting the basics of having the sensor systems that can give us information to make decisions on board, having computing power that allows us to do that as well. The future missions will definitely have more autonomy, but we're sort of in that phase of just starting out to get the basics under our belt before we go into a more autonomous exploration mode with these spacecraft. And it has to be justified, too, because the science community wants to be in charge of what the rover does, where it goes, what it measures. The whole decision about what is a good science experiment still resides within the scientists, as it should, so they definitely strongly prefer to have tight control over what happens. And you could imagine missions where that's not the case, where we're on a moon, and the downlink is so minimal, the spacecraft has to have a lot of autonomy on board in order to do anything because you just cannot command it quickly enough, or you can't get enough information down to make a decision.

ZIERLER: I don't have to tell you how exciting Mars Sample Return is just in terms of the engineering triumph and what the samples might tell us about Mars. How do you see your area of expertise contributing once this happens?

JOHNSON: We're definitely contributing very much to making it happen. Obviously, on Sample Retrieval Lander, the computer-vision systems during landing are used to go to that specific target. You can't land without it, and it has to work this time, 100%. We're enabling that. Our system is low mass because we use cameras, which helps the project out because the more mass you add, the more expensive the project gets, and at some point, it just is not possible to do it. We're doing the pinpoint landing. We landed Perseverance in Jezero Crater. It's getting these great samples. There are various pieces of computer vision on the lander itself, which I don't actually work on, but they're related to figuring out where Perseverance is relative to the lander so that they can adequately dock to each other to transfer the samples. I also work on the Sample Recovery Helicopter, and this is a backup to Perseverance.

The main approach is that Perseverance will come up to the lander and deposit the samples. But if Perseverance happens to fail, then there's a depot of samples that were just recently dropped, and Sample Retrieval Lander will land there, the helicopters will come out, and they'll grab those samples. Helicopters are based on Ingenuity. They can just lift a single sample, they have a little robot arm on them, and they also have wheels. They can go out and grab a sample, which is a complicated process, lots of great steps there, and then fly it back to SRL, drop it on the ground, then a robot arm on the lander will grab it and put it in the rocket. That's a backup. There are other places computer vision is used. When the orbiting sample is in orbit, the volleyball-sized object with the samples in it, then the European Orbiter needs to detect that–that's OS, Orbiting Sample–and must do it from very far away.

It has to detect this very faint object, single-pixel-type detection when it's 1,700 kilometers away. That's a very challenging problem. I wouldn't say it's the most challenging computer-vision problem, it's just very hard because you can barely see it, and you don't know where it is. But it is a computer-vision problem. Then, finally, when it gets up close, there's a scanning LIDAR that can see the sample and basically figure out where it is more accurately so it can be rendezvoused with to be put inside the spacecraft, then the spacecraft returns to Earth. Vision has really exploded. When I started at JPL, I can't think of a computer-vision application that was on a spacecraft. Since I've been here, 25 years, it's really exploded. I've had pieces, other people have done big pieces as well. But it's really a blossoming field here.
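
The core of that faint-object problem can be illustrated with a very small sketch: decide whether any pixel stands far enough above the background noise to be a candidate detection. This covers only the single-pixel test mentioned above; a real system would also difference frames, mask stars, and track candidates over time, and the threshold here is arbitrary.

```python
# Illustrative sketch: detect a single faint, unresolved object against background
# noise by thresholding on local signal-to-noise. The 5-sigma threshold is made up.
import numpy as np

def detect_point_sources(image, k_sigma=5.0):
    """Return pixel coordinates whose brightness exceeds the background by
    k_sigma standard deviations (a crude single-pixel detection test)."""
    background = np.median(image)
    noise = 1.4826 * np.median(np.abs(image - background))  # robust sigma estimate (MAD)
    candidates = np.argwhere(image - background > k_sigma * noise)
    return [tuple(rc) for rc in candidates]
```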

ZIERLER: In terms of how much it's exploded in those 25 years, in terms of technology development, how much of this is developed or built in-house, and how much are you liaising with industry, things that are happening beyond JPL?

JOHNSON: I would say, at this point, the vision algorithms are done at JPL, all of them, for the pieces that I've worked on. The cameras, we purchase from companies. The computers are put together from pieces or parts that are not made at JPL. But we did build our own computer for Mars 2020 to do the landing problem, so that was done in-house. But we do interact with industry, and also academia and students at the cutting edge. But typically, the in-house missions, like Mars 2020 and Sample Retrieval Lander, are built mostly inside of JPL. The vision algorithms, guidance, nav, and control, are done internally to JPL. But we recently have been actually doing the opposite. We've been transferring our technology out to industry. This capability I've been talking about for estimating position is called, in the field, Terrain Relative Navigation. We have worked with two companies. With a small space startup in Pittsburgh called Astrobotic, we've built a Terrain Relative Navigation system that takes advantage of some pieces from Mars 2020, but also adds a new algorithm we developed at JPL. Then, we've also worked with Blue Origin on various tasks. But essentially, the main one that was funded was to transfer our Terrain Relative Navigation technology to them so that they could use it for landing on the moon.

ZIERLER: I want to ask a few questions in historical perspective about the development of data science and machine learning for you at JPL. When you first joined, was anybody talking about machine learning as it related to planetary exploration, or does your career sort of coincide with these developments?

JOHNSON: It coincides with it. I went to Carnegie Mellon and got my PhD there. When I was there, there was an autonomous van that used a neural network to figure out how to steer. I learned about these things in grad school, the cutting-edge, but then there was kind of a lull in machine learning, at least neural network-based learning, for a few years, until the advent of deep learning and the huge successes that it's had. Certainly, these types of learning algorithms require a lot of processing power. And the spacecraft computers we use must be very reliable and radiation-tolerant. As a result, the processing power is not as high. There are reasons for that. Part of it is, it takes a long time to design and make these computers, and then Moore's law is such that you're always behind the times because you're using technology from the past.

The typical spacecraft computer we use in our current missions is 200 megahertz-type clock speed, and that's about 1/10 of your laptop, or even less. We don't have that type of processing power on board and haven't in the past missions. What we did on Mars 2020 was, we built our own computer that had a special-purpose chip that we could reprogram that did a lot of the computer vision so that it could be done quickly. But it was a much more complicated process than just programming a laptop or a Linux computer. But we needed it, so we did that. I would say that the machine learning was at a disadvantage because of the processing required to implement those algorithms. But now, new hardware's being built that has much more processing capability and is also very reliable. There's some NASA effort in that area that appears to be working out very well.

For other projects, you can do things a little bit more risky in terms of the radiation and actually fly some kind of cell phone processor or commercial terrestrial processor to get the processing power that you need, but then also have it be not quite as radiation-tolerant. It's a different risk posture for the project, but sometimes when you need that capability, that's what you do. Ingenuity, the helicopter that was brought there by Mars 2020, has a cell phone processor in it, which is why it's able to do all that incredible flying at very high rates. You need a lot of processing power. The new helicopters that are going to go on Sample Retrieval Lander are also going to have cell phone processors. Within those, there are going to be all kinds of computer-vision algorithms to do various functions, and there's also going to be one algorithm in particular that will most likely use machine learning, and we'll be able to do it because we have the processing power.

ZIERLER: Let's go back to graduate school. Your degrees are in robotics, so do you consider yourself, from your educational training, a roboticist who does engineering? An engineer who does robotics? Or are those not really meaningful distinctions?

JOHNSON: I think, these days, robotics is a discipline in engineering. Really, the distinction we often make is, "There are scientists–hard scientists, geologists, physicists, astronomers–and then there are engineers, people who build the spacecraft for the scientists." I'm definitely an engineer. I have skills that are kind of new, related to AI and vision, these kinds of things. They're not your traditional fields. But really, it is an engineering field these days. And the training I received at Carnegie Mellon taught me about AI, neural networks, geometry, how to do estimation of different parameters for complicated cost functions, these kinds of things, aspects of robotics. There's also a piece of robotics very much related to mechanical-type work, robot arms, the vehicles, how they drive along the ground. That's not really where my specialty was. I'm in the Guidance and Control Section. There's also a Robotics Section here. I was in the Robotics Section, but there's a lot of overlap between the two.

ZIERLER: Tell me about the Robotics Department at Carnegie Mellon. What is its focus, what is it known for?

JOHNSON: Oh, my gosh. It's huge. It's part of the School of Computer Science. They, as I mentioned earlier, did a bunch of the early work on autonomous driving. Any company now that has plans to make autonomous cars, I'm sure they have people from Carnegie Mellon there. It has done a great deal on the computer-science side, more on machine learning, object recognition, on machine translation, understanding speech, so if it's not the top computer-science place, it's number two, pretty sure. It's very focused on the AI, learning, and robotics-type applications. They built a bunch of vehicles; they always compete in these DARPA Grand Challenges, where you have your vehicles driving across the desert and doing it autonomously, that kind of thing. This Astrobotic company is a spinoff of Carnegie Mellon, so it's now working on lunar landers. The thing at CMU that was strongly encouraged, and robotics was like this at the time, was investigating the intersection of things. And they do this a great deal at Caltech as well. Robotics is the intersection of many fields, computer science, mechanical design, electrical design, control systems, computing, sensors coming together to sort of create new capabilities and new innovations. It was very strong in the vision area when I was there. The vision group was about 100 people. That was a great experience.

ZIERLER: Tell me about developing your thesis research. What did you work on?

JOHNSON: My thesis was on three-dimensional object recognition. For example, when you get a picture of something, you recognize what's in the picture. Computers can do that now, there have been incredible advances there. But back then, computers couldn't do that, even with a simple object in an image. My thesis was on using three-dimensional data to solve the same problem. You have something like a stereo camera pair, it gives you a depth map, and you figure out from the shape what objects are in the scene. Or you have something called a LIDAR. It scans across, measures range each time the laser fires, gives you a cloud of 3D points, and you figure out what objects are there. My thesis was this descriptor called the spin image, which was a local descriptor created solely from the 3D data. It was basically invariant to the position of the object. The object could be anywhere in the 3D data. And then you matched that descriptor between, say, a database of objects and an image of objects, with the ability to localize them and identify which ones were in the scene. It seems like it was pretty seminal work. Almost every day, I get another email saying someone's referenced my paper. It was kind of early enough in this field that it remains relevant, even today.
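
The spin-image idea can be sketched in a few lines. This is a simplified reconstruction from the description above, not the original thesis code; the bin size and image width are arbitrary.

```python
# Illustrative sketch of the spin-image idea: for an oriented point (a 3D point
# with its surface normal), every other point maps to two pose-invariant
# coordinates -- radial distance from the normal axis (alpha) and signed height
# along the normal (beta) -- which are accumulated into a 2D histogram.
import numpy as np

def spin_image(points, basis_point, basis_normal, bin_size=0.01, image_width=20):
    """Build a spin image (2D histogram) around one oriented basis point."""
    n = basis_normal / np.linalg.norm(basis_normal)
    d = points - basis_point                    # vectors to all other surface points
    beta = d @ n                                # signed height along the normal
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta**2, 0.0))  # radial distance
    img = np.zeros((image_width, image_width))
    i = ((image_width / 2) - beta / bin_size).astype(int)   # rows: beta (signed)
    j = (alpha / bin_size).astype(int)                      # cols: alpha (>= 0)
    keep = (i >= 0) & (i < image_width) & (j >= 0) & (j < image_width)
    np.add.at(img, (i[keep], j[keep]), 1)
    return img

def spin_correlation(a, b):
    """Compare two spin images by linear correlation of their bins."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]
```

Because alpha and beta are measured relative to the point and its normal, the histogram does not change when the object is translated or rotated, which is what makes the matching between a model database and a scene possible.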

ZIERLER: What did you see as some of the broader advances, either in computation, technology, instrumentation, that allowed you to make these contributions so early on?

JOHNSON: We had a LIDAR at Carnegie Mellon, and we might've had the second LIDAR in the world, I don't know. It was built by a company in Michigan, and it was pretty special and large. Once you have that sensor, you go, "What can I do with this?" My advisor was Martial Hebert, who's now the dean of computer science there; 3D recognition was his area when he was in France. He was like, "Andrew, I want you to work on a problem where you just use the 3D data. What can you do with the data and not relying on models? Simpler descriptions. Do it for freeform objects." That was really the direction he gave me, and I eventually came up with spin images. But we also had really powerful computers there, all the students had Silicon Graphics computers, and those were really the sort of very high-end, fast computers that were also very good at doing graphics, so we could really visualize our vision algorithms easily. A bunch of libraries were available to do that quickly and easily.

We had labs with calibrated imaging setups, calibrated light sources, calibrated cameras. We had robot vehicles, people building vehicles all over the place, funding from NASA, from Department of Defense to solve different problems. What paid for part of my time was a pneumatic robot, so the joints were moved pneumatically, and its purpose was to go into radioactive zones and do work that humans can't do because it's too radioactive. My object-recognition work was to figure out where the wheel is that the robot can turn to open something up, or where the pipe is to cut it and pull it out, that sort of thing. Very fertile ground. Many roboticists out there in industry making a difference came from Carnegie Mellon, doing that type of work back then.

ZIERLER: After you defended, what were your ambitions? What were you thinking in terms of next steps? Industry, government, academia?

JOHNSON: I wasn't sure, so I interviewed at a lot of places. When I defended, I actually had already accepted the job at JPL. That was sort of at the beginning of the dot-com boom, the first one. I interviewed with Digital Equipment Corporation. They had a research lab in Cambridge, Massachusetts. I actually worked there two summers, so that was an option, which was industry. I discussed a post-doc at Columbia. Didn't really go far with that, but I think that probably could've happened. I got the interview with JPL because the person who hired me, Larry Matthies, was a former CMU graduate, so there was some joint work being done with JPL and CMU on autonomous vehicles. That was really how it came about. It was a difficult decision for me. JPL's a government lab. But honestly, I was doing robotics, not space exploration. I wasn't doing these things. But when I was a kid, I watched Cosmos with Carl Sagan religiously every Sunday with my mom. I loved that show, and my dad reminded me of that and of the incredible advances being made there: "The pictures you saw on that show were taken by spacecraft from JPL." He thought I'd be a fool to go anywhere else, so he helped me make the decision. I ended up here.

ZIERLER: Cosmos definitely planted a seed.

JOHNSON: Oh, my gosh, so much so. Yeah.

ZIERLER: Did you appreciate when you were in graduate school how important robotics is for JPL's overall mission?

JOHNSON: In grad school, no, because I wasn't thinking about JPL. I was thinking about just vision and all kinds of vehicles. Terrestrial vehicles, probably mostly. But now, obviously, these are robots. The spacecraft are robots. We don't call them that, but what is a robot? A robot interacts with its environment, it senses its environment, and it should also think some about its environment. And the thinking part is probably the weakest part at this point, but we do have a lot of capabilities on board, somewhat scripted. We know how to deal with faults on the vehicle, how to organize reactions to that on board to make the spacecraft safe. All spacecraft have to have that ability. Now, with autonomy and machine-learning techniques, we can begin to work more on the thinking side. The perceiving side is what I'm doing, Mars 2020, Terrain Relative Navigation, rover driving, interacting, that was there at the very beginning. Pathfinder had a rover, and we've had rovers ever since. And of course, spacecraft going through space, getting up close to an object, circling an asteroid like the NEAR Mission did, these are basically robots.

ZIERLER: Accepting the offer, even before you defended, does JPL recruit at Carnegie Mellon? Did they come to you?

JOHNSON: JPL definitely recruits at Carnegie Mellon, but that wasn't how it happened for me, wasn't the official route. The guy who hired me knew people, knew my advisor, and they talked, he interviewed me, and he was happy to get me. [Laugh]

ZIERLER: What year did you arrive at JPL?

JOHNSON: 1997.

ZIERLER: What was your first job? What were you working on right out of the box?

JOHNSON: When I interviewed, I was given a choice of working on vision for landing on a comet nucleus, how to figure out the position, how to map the nucleus, how to estimate the velocity, or I could work on a Department of Defense project for ground vehicles. And I thought, "I'm not going to JPL to not work on space, so I'm going to work on the comet one. It sounds more interesting, anyway." The payment I had to make was that one of the first things I did when I got here was write a proposal to DARPA, with others, called Tactical Mobile Robots. It was basically a robot that a soldier could put on their back, and it had tracks, a very capable robot. I worked a lot on not only the initial proposal concept, but also, I helped them write the proposal. I put the proposal document together.

And we won. I made a deal, though. I said, "I'm going to write the proposal, but I'm not going to work on it." Once they won, they allowed me to go work on the other task. And I've been working in that area ever since. And actually, that was really good because that's where I met Miguel San Martin, who is a fellow here, but wasn't at the time. And we've been working together ever since. The first flight project I worked on was Mars Exploration Rover, and the success I've had was because of him and an idea he had that was very high-level that I ended up implementing as my first project.

ZIERLER: I'm curious, in the late 1990s, if some of the high-profile mission failures on Mars registered with you or changed your day-to-day at all.

JOHNSON: That was pretty devastating because we assumed the MPL and MCO were going to succeed, so we were really excited for the landing in particular. I hadn't been there long enough and also wasn't involved with those projects at all, so it didn't impact me in a bad way. But one good thing that came out of that was that on MPL, they were concerned that the vehicle had failed because it had landed on a rock. Another area we're still working on is a system that will autonomously map the surface as the vehicle's landing, build a 3D terrain map, then pick a site based on the LIDAR data it's collected. In those early days, I was working on some pretty big projects that were building those sensors, and also the software and algorithms that would detect the safe landing site. I did the algorithm side. That was the outcome of MPL: the focus of the technology funding shifted, like, "Well, we'd better not let that happen again. How can we solve that problem?" I did work on that for a while, made some advances that led to some bigger technology projects. We still haven't done that in any of our missions, though, I would say. But it was a good experience for me.

ZIERLER: I'm always interested in when new directors come in at JPL. When there was the transition from Ed Stone to Charles Elachi, did that register with you? Did you see changes at JPL as a result?

JOHNSON: No, I was not high enough. We get more impacted by these giant flight projects that come in. They change things. I actually went to some presentations that Elachi gave, and I was quite impressed with him actually before he was director, so I was very happy when he became director because of those things. He did presentations about, "We need more autonomy. We need more sensing," the things I was working on, so I was happy to hear that. Not that Ed Stone was not saying those things as well, I just hadn't heard them. And we all really liked Elachi a lot. [Laugh] We still do. But I don't think the director has a big impact on stuff–I shouldn't say that. They definitely have a huge impact on us getting the work done, getting the work, dealing with those high-level problems that get up to their office, and we've had some good success in that area. It's amazing how the director gets involved right before landing. All the decisions about stuff, "Oh, we have a problem, but it's 10 days away. What should we do?" You've got to have the director involved because it's going to be so impactful if the decision goes awry. That's always really great.

ZIERLER: Moving into the early 2000s, how much of your work was specifically Mars-focused, and how much of your work was more general to what JPL needed?

JOHNSON: Early 2000s, I worked on Mars Exploration Rover, so that was a flight project. That's all Mars. Built a vision system for that that estimated velocity. They basically didn't realize that there could be a steady-state wind on Mars that's blowing on the parachute to the side. All the models were like, "It's just going like this. It's not moving to the side." They needed to be able to estimate how much it was moving to the side, and the only way they could fit that in, given the really short schedule, was to add an algorithm and a camera. That's how I got involved. They were like, "We could probably add a camera, but how are we going to estimate velocity? Maybe we should ask Andrew about that because we know him." And they did. I built the algorithm, with others, and that system was called the Descent Image Motion Estimation System, so DIMES. That worked. That was the first vision system used during planetary landing. That was up until, like, 2004.

But I was doing some other things. Another thing I was working on was an autonomous helicopter. We hired Jim Montgomery from USC, and he had built an autonomous helicopter there for flying competitions. We hired him, and he built us a copy here, and we started using it for this comet-landing project. Like, "We're going to use it. We're going to show that we can fly along, use cameras to figure out our velocity, to build a terrain map, decide where to land, and then land the helicopter in a safe location." We did that with his gasoline-powered helicopter at USC. We actually did most of the testing at USC, but we also did some testing out here at Hansen Dam. That was focused on all bodies. Mars, a comet, an asteroid, it was very generic, the work we were doing. Very much in the technology realm. Then, 20 years later, there's Ingenuity. They didn't take my code from back then and put it on Ingenuity, but did they use cameras to estimate velocity? Yes. Did I do that? Yes. These kinds of feasibility studies early on I'm sure helped make people believe they could do what they actually ended up doing.

ZIERLER: For the DIMES project, what were some of the main challenges in developing the algorithms?

JOHNSON: Well, the challenge was that no one had ever done it before. And we had very little processing power. We had to decide how to estimate motion but have the system not lie. Basically, how a lot of these things work is, you take an image, you take another image, and you digitally look for a feature from one image in the next image. That's called feature tracking. The more feature tracks you have, the more reliable the system is. But in our system, DIMES, we were only able to track four features total. We'd track two features between the first two images and two features between the second two images. That was all that would fit in the computer. That was the challenge. That, and the camera was horrible. Because of the high attitude changes on the parachute, we needed a very short exposure time, otherwise the image would be very blurred. But this camera wasn't designed for that. It was a charge-coupled device, CCD. It had a big ramp in brightness across the image that made it very difficult to use.

We had to figure out ways to use that camera. Those are peripheral things, but that's where the work was. Because for the algorithms themselves, I took pieces from the autonomous helicopter. I had the ability to do feature tracking from one image to the next. That's kind of a standard thing that was pretty mature, even back then. And you have to warp the image, and there are established ways to do that. You have to decide what to track, that's called an interest operator, and we'd used those before. Putting those pieces together for this particular problem was an innovation, but it wasn't like we had to invent a new field of computer vision to do this. We used components that were pretty mature, that we'd been using for a while, and put them together to solve the problem. And that's how it works, that's how we did the Terrain Relative Navigation as well. It's already super challenging to do this because it has to work perfectly, so don't come up with some brand-new idea that you really haven't used that much, because you're going to find it doesn't work that well and not be able to solve the problem.
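
As a rough illustration of those two mature pieces, an interest operator and correlation-based feature tracking, here is a sketch in the same spirit. The real DIMES code also warped images to remove attitude changes and ran on far less capable hardware; the window sizes and the variance-based interest score here are just for the example.

```python
# Illustrative sketch (not the DIMES flight code): pick a couple of high-texture
# "interest" points in one descent image, then find them again in the next image
# by normalized cross-correlation template matching.
import numpy as np

def interest_points(image, n=2, win=15):
    """Score patches by local intensity variance and return the n best centers."""
    h, w = image.shape
    scores = []
    for r in range(win, h - win, win):
        for c in range(win, w - win, win):
            patch = image[r - win // 2:r + win // 2 + 1, c - win // 2:c + win // 2 + 1]
            scores.append((patch.var(), (r, c)))
    return [rc for _, rc in sorted(scores, reverse=True)[:n]]

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized patches."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def track_feature(img1, img2, center, tmpl=15, search=40):
    """Find the template around `center` in img1 at its best-matching spot in img2."""
    r0, c0 = center
    t = img1[r0 - tmpl // 2:r0 + tmpl // 2 + 1, c0 - tmpl // 2:c0 + tmpl // 2 + 1]
    best, best_rc = -2.0, center
    for r in range(max(tmpl, r0 - search), min(img2.shape[0] - tmpl, r0 + search)):
        for c in range(max(tmpl, c0 - search), min(img2.shape[1] - tmpl, c0 + search)):
            w = img2[r - tmpl // 2:r + tmpl // 2 + 1, c - tmpl // 2:c + tmpl // 2 + 1]
            score = ncc(t, w)
            if score > best:
                best, best_rc = score, (r, c)
    # The pixel shift from `center` to best_rc, combined with altitude and timing,
    # is what gets converted into a horizontal velocity estimate.
    return best_rc, best
```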

ZIERLER: You mentioned the importance of the CCDs. I've come to appreciate the topic that brings us together, data science and astronomy. These really revolutionized the big sky surveys in terms of all of the data that these telescopes were then taking in, and the need for machine learning and AI to make sense of it, to figure out how to pull all the signals from the noise. Were you coming up against this, too, as a result of CCDs? Were you now dealing with an explosion of data?

JOHNSON: Well, absolutely. That's really what it is, pulling something useful out of that image, which is huge. You don't care about the individual pixels. Some information piece has to come out of that. For my application, it's velocity, position, change in attitude, these kinds of things. But that's extracting and converting the pixels into that useful information, doing it efficiently with the onboard processing that you have available, squeezing it to be just optimal, just enough information to do the problem and do it quickly enough.

ZIERLER: I wonder if you can explain from a technical level what it is about CCDs that allows for this proliferation of data.

JOHNSON: CCDs (we use CMOS now) are these arrays measuring photons, which turn into electrons. They've become very large. Even now, we typically don't process images for our applications greater than about a megapixel, 1,000 by 1,000 pixels, even though the CCDs can be many times bigger than that. There are challenges. Part of it is getting that camera image off of the camera quickly, getting it into the computer, then when you have megabytes of data, processing it by the computer quickly enough to get the information out of it. But the more resolution you add, the more accurate you can be. And I'm sure from a science perspective, the smaller the things you can see, and the fainter the objects. My application is not a science one, pulling things out of the noise where there's barely anything there, like Kepler or something like that. We see bright and dark patches in an image of the ground, and we need to match those. There can be a lot of noise on the data, meaning the brightnesses don't have to be perfect. Our algorithms still work under those conditions.
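
For a sense of the numbers involved, a quick back-of-the-envelope calculation (all assumed values, not mission figures) shows why even a megapixel-class image is a lot of data for a radiation-hard flight computer:

```python
# Back-of-the-envelope arithmetic for the data-volume point above. Assumptions:
# a 1,000 x 1,000 pixel, 8-bit image and a ~200 MHz flight processor handling
# roughly one frame per second.
pixels = 1000 * 1000                 # one "megapixel-class" image
bytes_per_image = pixels * 1         # 8-bit pixels -> about 1 MB per frame
clock_hz = 200e6                     # typical radiation-tolerant flight CPU
frame_period_s = 1.0                 # assume one image processed per second
cycles_per_pixel = clock_hz * frame_period_s / pixels
print(bytes_per_image, cycles_per_pixel)   # ~1e6 bytes, only ~200 cycles per pixel
```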

ZIERLER: Were there any shortcomings or technical limitations for DIMES that you thought may inform the next Mars mission?

JOHNSON: Yeah, there were. Let me go through them. One is, you need a camera that's designed for the problem you're trying to solve.

ZIERLER: And just to clarify, is that only known in retrospect? Did you go up thinking the camera you had was up to the task at hand? Or you could only know this in real time?

JOHNSON: I think we were only given one choice for the camera because it had to interface with the spacecraft computer, and there was only one type of interface. That was something we just had to deal with and didn't think that much about except solving the problem. But afterwards, when planning the next mission, we spent significant effort obtaining a camera that would work for what we needed. If it had been designed differently, if it had shorter exposure times, if the image could be read out more quickly, if the field of view was wider, then it would've been a much more effective sensor for us. The other was, I mentioned that there were only four features tracked between the three images. In fact, during both Spirit's landing and Opportunity's landing, one of those features was discarded. A lot of computer vision is about outlier rejection, deciding what's a good measurement and what's a bad measurement, doing that autonomously on board. We had some methods for that, and in each of the landings we decided one of the features wasn't good.

Now, we had three to do the job. Had we lost one more, it wouldn't have worked. That's how close to the edge we were. When we designed this Terrain Relative Navigation system for Mars 2020, we realized algorithmically, we need, like, 100 features. Because if you have 100 features, you can lose some of them for whatever reason and still be able to estimate position or velocity. But if you're going to process that many features, you need to have plenty of computing power. Because of that, when we did our technology development, we designed our own computer to do the job. When we do computer vision now, my feeling is, you always have to have your own computer to really implement something that's very effective. Camera, computer, and then a large number of features.
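
Here is a small sketch of why the jump from four features to roughly a hundred matters: with many matches you can take a robust consensus and simply discard the ones that disagree. This is a generic translation-only consensus scheme for illustration, not the Mars 2020 algorithm, and the tolerance is invented.

```python
# Illustrative sketch: each matched feature between the descent image and the map
# votes for a 2D shift; with ~100 votes, outliers can be rejected against a robust
# consensus before the final estimate is formed.
import numpy as np

def consensus_shift(image_pts, map_pts, inlier_tol_px=3.0):
    """Estimate one translation from many point matches, rejecting outliers.

    image_pts, map_pts: (N, 2) arrays of matched pixel coordinates.
    Returns the refined shift and a boolean inlier mask.
    """
    shifts = map_pts - image_pts                # each match votes for a shift
    median_shift = np.median(shifts, axis=0)    # robust initial consensus
    residuals = np.linalg.norm(shifts - median_shift, axis=1)
    inliers = residuals < inlier_tol_px         # discard matches that disagree
    if inliers.sum() < 3:
        raise RuntimeError("too few consistent features to trust the estimate")
    return shifts[inliers].mean(axis=0), inliers
```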

ZIERLER: What came next for you after MER?

JOHNSON: A bunch of things. One of them, though, was, immediately after MER, we started designing this Mars 2020 system. Even though we didn't know Mars 2020 existed or would ever exist, we knew someone would want to do pinpoint landing. And to do pinpoint landing, you have to estimate position from an image. DIMES was velocity. We needed something that could do position. We started prototyping those algorithms in 2004, shortly after the two MERs landed, and worked on it mostly all the time, with a few gaps, until it landed on Perseverance in 2021. That's a long time. But I also got involved with another part of NASA, human spaceflight. There was a project for building this hazard detection and avoidance system, like the LIDAR that scans the ground to see the hazards, but for a crewed lander going to the moon. And I worked on that, it was called ALHAT, Autonomous Landing and Hazard Avoidance Technology.

It was run out of Johnson Space Center, but JPL had a significant contribution. Our contributions were the hazard-detection algorithms, velocity-estimation algorithms, position-estimation algorithms, and also field testing of the system. I did algorithm development, developed a simulation to go along with it, and eventually became the lead of that technology project at JPL. It lasted quite a while. I think I was involved from 2006 to 2011. That was a major thing. We did a lot of field tests, a lot of algorithm development, mainly focused on using a LIDAR instead of a camera to do these things that we're talking about. That type of technology, although it was designed for a crewed lunar lander, could apply to robotic landers as well, and exploring comets and asteroids, so it was fairly generic.

ZIERLER: Did you see going into more of a focus on machine-vision systems as somewhat of a departure from DIMES or more of an extension?

JOHNSON: What I've always focused on has been the vision side with a camera, but also the LIDAR side, doing the same thing but with a LIDAR. It was different techniques, different algorithms, but because I used LIDAR data in grad school, it was something that I could work on, and both of them functionally solved similar problems, position estimation, velocity, hazard detection and avoidance. Different sensor, same functions, so it wasn't a departure in that I didn't feel like I couldn't do it. It was more like diversification. Maybe for some projects, this is a better choice. For example, on the moon now, the South Pole, the lighting's horrible. It's the South Pole, the sun's barely above the horizon, there are deep shadows. Possibly, the lander needs to come in from the night side of the moon and go to the landing site instead of from the lit side. That would require a LIDAR approach. You come from the lit side, you can use a camera. They have complementary use cases, but similar functions.

ZIERLER: To clarify, is the LIDAR a response to what you were saying before, some of the shortcomings of the initial camera?

JOHNSON: It's a response to cameras needing light to operate, and LIDARs don't. They shine their light out, and it comes back. That's one complementary aspect. LIDARs are very good at mapping the elevation of a landing site. For deciding where to land, measuring the slope, measuring if there's a rock there or not, they're the ideal sensor because they measure the thing that you care about, the height of the ground. Cameras don't do that very well, if at all. They're horrible at measuring slope without a big change in position. I wouldn't say we discovered that cameras couldn't be used for this, it was more obvious that LIDAR would be better at hazard detection than a camera would be, so we needed to do algorithms for LIDAR for that. And then, "Oh, we have a LIDAR on the mission. Maybe we could use it for velocity estimation and position estimation, too." I think that's how the reasoning went. It's conceptually responding to advantages and disadvantages of the two sensor types, but it wasn't well planned out. It wasn't an event where we were like, "This camera's horrible. Let's switch to LIDAR."

ZIERLER: We hear about LIDAR with electric vehicles. Are you involved at all in that or following those developments?

JOHNSON: Yeah, absolutely. I follow them. One of the LIDAR companies we work with started out doing space LIDAR but now also has a division that does automotive LIDARs. That's Advanced Scientific Concepts up in Goleta. Definitely, a lot of what we work on here in this area can be applied to autonomous cars as well as autonomous road vehicles, off-road vehicles, drones. The landing stuff in drones, there's a lot of crossover there. Amazon delivery drones, should they ever happen, might need to land where there are wires around, trees, bushes, and it's got to find the landing pad. That's very much like landing, so the crossover is really quite strong.

ZIERLER: What were some of the developments or opportunities in applying this not just to Mars but other missions, icy moons, things like that?

JOHNSON: There's been a big push at JPL to work on a Europa lander. Europa is a moon of Jupiter that's covered with ice, underneath which is a big ocean. Very compelling place to look for signs of actual life right now, not ancient life. The problem is, the ice crust is very thick. It's hard to get into the ocean. That's not my area, but in order to get into the ocean, you've got to land on the surface. And the problem with Europa is that the only reconnaissance we have currently comes from the Galileo mission, and Galileo had a failure on its high-gain antenna. The quality of data it transmitted was excellent, but there wasn't very much of it. We don't have very much imagery of Europa we could use to figure out where to land.

We're building a LIDAR that helps us with hazard detection and avoidance, position estimation, velocity estimation. It also has a camera system. That's because the system has to basically land without having a whole bunch of information about where the best place to land is, so it has to be super functional. It has a bunch of sensing, it also has very robust landing gear that actually can conform to the surface. Those two seem to make it possible to go there without having a bunch of reconnaissance, to go there even prior to Europa Clipper getting there, which will generate a bunch of reconnaissance. That doesn't seem like that's going to happen because the lander was not in the planetary decadal survey for science missions, but those technologies could very easily transfer immediately to landing on Enceladus, a moon of Saturn which also has an ocean and plumes of water vapor coming out of it.

It's very compelling to go there and grab the water. You don't have to go down into the ocean to get it. It's being shot out of these vents. But if you want to land on the surface, it's really rough. It's got a lot of terrain relief and ice blocks, pretty hazardous. We have good images from Cassini. This technology could transfer to that type of mission. And asteroids. Pretty much any planetary surface. Then, it's more where the scientists want to go. And I think they've said clearly that after they do a Uranus orbiter and probe, the next flagship they recommend is an Enceladus orbiter and lander, and we could use the technology there.

ZIERLER: When we talk about hazard avoidance, what hazards are we referring to?

JOHNSON: Hazard avoidance is during the final landing phase. These are things that you couldn't see from orbit. You might have had a satellite going around, and its pixels are only so big. For example, on Mars, there is actually a really good camera. We have the HiRISE camera there, and it has 30-centimeter pixels. But that's still not fine enough to detect rocks that could damage a lander. Say you take a legged lander. If it lands on too steep a slope, it could tip over when it lands, because it's coming down hard, so you can't have steep slopes. The other is, the legs are only so high. If there's a rock that sticks up and pokes into the lander body, it could damage the instruments, the propellant tanks blow up, whatever. You can't have that happen. Rocks that are small but stick up are hazardous. That's really what we're talking about. We're not talking about quicksand, dust bowls. The LIDAR detects things you can measure based on the shape of the surface. A big bowl of dust, which might be a big problem, we just need to not land in places where those exist because we can't detect those with the LIDAR. Maybe there are other sensors that could be used to detect those.
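
A minimal sketch of that kind of shape-based hazard test, assuming a LIDAR has already produced ground points over one candidate lander footprint: fit a plane, compare its slope to a tip-over limit, and compare the residual relief to the leg clearance. The limits are invented and the real algorithms are considerably more involved.

```python
# Illustrative sketch of LIDAR-style hazard detection for one candidate footprint.
import numpy as np

def footprint_hazard(x, y, z, max_slope_deg=15.0, leg_clearance_m=0.3):
    """x, y, z: 1D arrays of LIDAR ground points inside one lander footprint.

    Returns (is_hazard, slope_deg, roughness) for that footprint."""
    # Least-squares plane fit: z = a*x + b*y + c
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    slope_deg = np.degrees(np.arctan(np.hypot(a, b)))
    # Largest deviation from the plane: tallest rock or deepest hole under the legs
    roughness = np.max(np.abs(z - A @ np.array([a, b, c])))
    is_hazard = slope_deg > max_slope_deg or roughness > leg_clearance_m
    return is_hazard, slope_deg, roughness
```

Running a test like this over every footprint-sized patch in the scanned area produces a map of safe and unsafe sites from which the landing target can be chosen.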

ZIERLER: Is pinpoint landing part of hazard avoidance? In other words, are you choosing exactly where to land in real time as a means to avoid these hazards?

JOHNSON: Yeah, that's a very good nuance. Definitely, when you do hazard avoidance, the maneuver to avoid the hazards has to be very accurate. But typically, you've measured where those hazards are on board, and the lander needs to use the on-board measurements to get over to a particular spot. That type of maneuvering happens in a lot of space applications. The distinction with pinpoint landing is, for example, Sample Retrieval Lander, we need to land within 60 meters of a spot on Mars that we determined on Earth. We launch, and we've got to get to that place on Mars within 60 meters. That's pinpoint landing, a very specific spot. Whereas hazard avoidance, you don't care as long as you don't land on a hazard. It's not a particular one. That's the difference between pinpoint landing and hazard avoidance.

ZIERLER: From MER, did anything go more smoothly or more according to plan for Curiosity as far as you're concerned?

JOHNSON: Curiosity didn't have a vision system during landing, they had a Doppler radar. JPL designed and built, in division 33, the Radar and Comms Division, a phased radar that could have very tight beams and measure both range and velocity along those beams. And there were multiple beams. That was a power-hungry, expensive, and massive solution compared to DIMES. DIMES wasn't used because it was honestly not the best way to do it. A radar, especially if the beams are narrow, is going to be better at measuring velocity. If you're willing to sacrifice the power and mass, that's the way to go. That's the way Curiosity did it, and that's the way Mars 2020 did it. That was the big difference. Also, it had the SkyCrane lander, and the rover landed on its wheels. That removed the risky phases when the rover drives off of something to get out onto Mars, which was huge. That really was the only way it was possible to land the really huge Curiosity and Perseverance rovers, with this SkyCrane invention. I was part of that team in the beginning, but I was there just because I wanted to work on a flight project; I wasn't doing vision. And my friend Miguel San Martin, who I mentioned before, said, "This is not for you. You don't like doing this work, I can tell. You need to go do your technology work, work on vision, invent something new that we can use later, like you just did," so I left the project and went to do the technology work on ALHAT and other projects.

ZIERLER: What were your motivations in switching course at that point?

JOHNSON: Well, it's very exciting to be on a flight project, especially if they don't take very long. For the DIMES system, it was something like 19 months before launch that we started, then we landed 26 months or so later. It was a super short development of something critical for the spacecraft. Obviously, there was a lot of pressure, but it was also really exciting. Basically, this was my first flight project, and when we landed and it worked, the feeling was, "This is why I worked so hard in grade school, high school, college, grad school, JPL." All my engineering, planning, all those long hours paid off in that one moment. Because we did something no one had done before, and it was on another planet. Just an incredible feeling. That flight project feeling, you want to have it again, so you want to work on a flight project and experience it. It's hard to leave. But it's true, the work just wasn't as interesting to me. I love doing computer vision, collecting data, and doing autonomous systems, and that's going to drive me day-to-day. It wasn't too hard to make the decision to go back into technology. And that's kind of how I've been. I don't want to work on just any flight project; I want to work on the ones that have computer vision, where we're going to advance it. I have to pick and choose.

ZIERLER: Why the switch to Doppler technology? What was the thinking there?

JOHNSON: Actually, Doppler's a pretty old tech. There were wide-beam radars used on the other Mars landers, and they have their own issues. But they're not as accurate, maybe a meter-per-second accuracy in velocity. And to touch down with the SkyCrane, you need more like 10-centimeter-level accuracy, so they needed to switch to this narrow-beam radar to get that type of accuracy. The technology existed, but it was a need that was there that wasn't something you could just go buy for your mission. We had to sort of put it together for space here at JPL.

ZIERLER: Tell me about the collaboration with the University of Minnesota and USC on the autonomous helicopter work.

JOHNSON: Stergios Roumeliotis was a grad student of George Bekey, a famous roboticist. He had a post-doc at Caltech, and during that post-doc, he worked with us at JPL. His specialty was Kalman filters and navigating, basically, figuring out the state of the vehicle, position, velocity, attitude. The other connection to USC was that Jim Montgomery had this autonomous helicopter project that he had built at USC under Bekey also. We hired Jim. Stergios was a post-doc, but they knew each other. Jim was moved into my group, so we all started talking about an autonomous helicopter. Larry Matthies really wanted it, so we had to figure out how to get this thing to fly around autonomously. Stergios did the first Kalman filter to do the estimation, I did the computer vision, and Jim built the helicopter. Then, Stergios went to University of Minnesota, where he's a professor. That's how that relationship got started.

Then, when we started working on this Mars 2020 technology, Terrain Relative Navigation, we worked with Stergios and his students on the Kalman filter that's also used for position estimation. And so, that was a great collaboration. We did a bunch of sounding rocket test flights, parachute-drop tests. And eventually, that navigation filter is what we used in our Lander Vision System on Mars 2020. We were able to do that because we hired one of Stergios's students. We brought the student over, and now he works at JPL. I work with him every day, Nick Trawny, and we're doing the Sample Retrieval Lander together. That's what the interaction was with the University of Minnesota and USC.
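
For readers unfamiliar with the term, a Kalman filter blends a motion model with noisy measurements to keep a running estimate of the vehicle's state. The flight filter estimates full position, velocity, and attitude from the camera and inertial measurements; the toy version below tracks only altitude and vertical velocity from an altimeter, with made-up noise values, just to show the predict/update structure.

```python
# A minimal Kalman filter sketch: constant-velocity model for altitude, updated
# with noisy altimeter measurements. All noise parameters are invented.
import numpy as np

def kalman_altitude(measurements, dt=0.1, meas_std=2.0, accel_std=1.0):
    x = np.array([measurements[0], 0.0])        # state: [altitude, vertical velocity]
    P = np.diag([10.0, 10.0])                   # initial state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity dynamics
    Q = accel_std**2 * np.array([[dt**4 / 4, dt**3 / 2],
                                 [dt**3 / 2, dt**2]])  # process noise
    H = np.array([[1.0, 0.0]])                  # altimeter measures altitude only
    R = np.array([[meas_std**2]])
    history = []
    for z in measurements:
        # Predict: propagate state and covariance through the dynamics
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: correct with the altimeter measurement
        y = z - H @ x                           # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        history.append(x.copy())
    return np.array(history)
```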

ZIERLER: Thinking about collaborations with outside universities, what about Caltech? Has that been an asset for your work at all?

JOHNSON: Absolutely. I didn't mention that. I should've. That first comet research project I worked on, I actually spent many days at Caltech working in Pietro Perona's lab, and I worked with some of his students, Jean-Yves Bouguet, Stefano Soatto. They were experts in what we call structure from motion. And honestly, when I got here, I knew a lot about object recognition from three-dimensional data, but I didn't know a great deal about images and feature tracking. Larry Matthies said, "Go work over there, interact with these grad students, and learn about these other computer-vision techniques." That was really when I was trained up in that area by collaborating with them. Then, I would say there was a pretty long lull. But I went and visited Soon-Jo Chung's lab, the drone lab, gave a talk over there recently. He does work that's similar to mine. Maybe something will come from that.

I have a lot of Sample Recovery Helicopter work that I think will be interesting to him and vice versa. I've been so busy on the flight projects that I haven't done much of this type of interaction, but I'm definitely doing more now. I also have a collaboration with MIT on one of the machine-learning things I'm working on, which is using machine learning to deal with an issue we have when we're doing position estimation, where we want to match an image to a map. It doesn't work as well when the map image was taken under illumination conditions where the shadows and shading in the map are different from those in the image you're taking while you're landing. Those differences can mess it up. We're developing a machine-learning filter, or network, to transform those two images into a representation that is invariant to illumination. It doesn't matter where the sun was; it can still match the image to the map. There's a student and a couple of professors working on it at MIT. I'm actually going there next week. Then, there have been smaller things, but probably not as much collaboration as there should be, frankly. [Laugh]
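As a toy sketch of the general idea, and not the actual network being developed with MIT, the snippet below (assuming PyTorch) runs both the map patch and the descent-image patch through a shared encoder and trains it with a contrastive loss so that the same place imaged under different illumination produces similar descriptors; all layer sizes and names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy sketch of the general idea (not the actual network): a shared
# convolutional encoder maps both the orbital map patch and the descent-camera
# patch into a descriptor intended to be invariant to illumination, and a
# contrastive loss pulls together patches of the same place imaged under
# different sun angles while pushing apart patches of different places.

class PatchEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)   # unit-length descriptors

def contrastive_loss(map_desc, img_desc, temperature=0.07):
    """InfoNCE-style loss: row i of map_desc and img_desc are the same place."""
    logits = map_desc @ img_desc.T / temperature
    labels = torch.arange(len(map_desc))
    return F.cross_entropy(logits, labels)

# Illustrative training step on random stand-in patches (64x64 grayscale).
encoder = PatchEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
map_patches = torch.rand(8, 1, 64, 64)     # map, one illumination
img_patches = torch.rand(8, 1, 64, 64)     # descent image, another illumination
loss = contrastive_loss(encoder(map_patches), encoder(img_patches))
loss.backward(); opt.step()
```

The key design choice in an approach like this is that invariance is learned from pairs of the same terrain under different sun angles rather than being hand-coded.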

ZIERLER: Just a technical question, in thinking about future Mars rovers, visual odometry, what does that mean?

JOHNSON: Visual odometry is using camera images to figure out how you've moved. Basically, the odometry in a car tells you how far you've driven based on wheel rotations. Visual odometry estimates the change in position and orientation from camera images alone. We use this on all the rovers. On Perseverance, it's very fast. It takes a stereo camera pair, then moves and takes another pair, and it matches features between those images; from that, it can solve an estimation problem to determine how the vehicle moved and also how its pointing changed. That's what visual odometry is. They used it on Ingenuity, and it was a little different. It was a single camera that looked down. It also had a laser altimeter, and by tracking features through the images, it was able to figure out how far it moved. On Sample Retrieval Lander, we're also going to have that visual odometry function at the very end of landing. We also have a laser altimeter, and if we're moving back and forth, we'll see that in the images, which will allow us to kill our horizontal velocity and, with the altimeter, estimate the vertical velocity.
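For the curious, here is a minimal monocular visual-odometry sketch in Python using OpenCV: it matches features between two frames and recovers the relative rotation and the translation direction. It is illustrative only, not flight code, and it shows why an extra cue is needed: a single camera gives translation only up to scale, which is what a stereo pair or a laser altimeter resolves.

```python
import cv2
import numpy as np

# Minimal monocular visual-odometry sketch (illustrative, not flight code):
# track features between two frames, estimate the essential matrix, and
# recover the relative rotation and (scale-free) translation direction.

def relative_motion(img_prev, img_next, K):
    """img_prev/img_next: grayscale uint8 frames; K: 3x3 camera intrinsics."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_next, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t          # rotation and unit translation direction between frames
```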

ZIERLER: Coming back to entry, descent, and landing for Mars Science Laboratory for Mars 2020, what was different in comparing it with MER? What were some of the different challenges? What was even dramatic about it?

JOHNSON: MER's rover was encased in airbags, so there was this tetrahedron of metal, and around that were airbags. The vehicle was cut away from the parachute, and then it bounced along the ground until it stopped bouncing, then the airbags opened up, and the rover drove out. I worked on the DIMES system, which tried to keep it from bouncing too far by killing the horizontal velocity so that when it touched down, it would bounce less along the surface. That airbag system can only land a rover of a certain size. The mass of that system grows super-linearly, I don't know what it is, quadratic, cubic, what have you, but it's a really dramatic growth. Airbags can only land so big of a rover. Curiosity wanted to take many more science instruments, and heavier ones. A wet organics lab, a bunch of spectrometers, things like this. We needed a larger rover for that mission, the habitability mission.

To do that, they needed to come up with a new way of landing. Huge trade study. They ended up with the SkyCrane. Really, it was two things. There was the SkyCrane, and there was also guided entry. EDL. You're coming in, inside the back shell and heat shield, coming through the atmosphere. If you have the ability to change the lift, you can actually steer the vehicle slightly, like a plane. The atmosphere's very thin, but you're going really fast. Curiosity, MSL, implemented that by having a center of gravity that was offset from the centerline of the vehicle through the heat shield. That let them roll the vehicle, which gave them the ability to redirect the lift and guide the vehicle.
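The textbook way to describe that steering (offered as a general relation, not MSL's specific guidance law) is that the center-of-gravity offset trims the capsule at an angle of attack, giving it a fixed lift-to-drag ratio L/D, and rolling the capsule to a bank angle σ splits that lift between the vertical plane and the crossrange direction:

```latex
\[
  \left(\frac{L}{D}\right)_{\mathrm{vertical}} = \frac{L}{D}\cos\sigma,
  \qquad
  \left(\frac{L}{D}\right)_{\mathrm{crossrange}} = \frac{L}{D}\sin\sigma
\]
% \sigma is the bank (roll) angle commanded by the entry guidance; rolling the
% capsule redirects the fixed lift vector rather than changing its magnitude.
```

The guidance modulates σ during entry to fly out trajectory errors, which is what shrinks the landing ellipse.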

The result of that was, MER had a landing ellipse that was 100 kilometers long. MSL shrunk it down to more like 15 kilometers. They could stick Curiosity in next to Mount Sharp and not have this huge area that had to be free from hazards. They could put the landing ellipse in a much smaller location. SkyCrane, the radar associated with that, and then this guided entry were the two big improvements that Curiosity made over MER. Then, Mars 2020, we took the Curiosity system and added Terrain Relative Navigation to it. Now, you can put that ellipse on the hazards. It can't be fully hazardous. There have to be safe places scattered around. But you can definitely move it into much more interesting terrain for the science.

ZIERLER: Moving our conversation closer to the present, something we all dealt with, COVID and remote work. What did that mean for you and your team? How did that change things? What might've been even more productive during that time?

JOHNSON: It was a big impact. In March of 2020, the spacecraft was in Florida, had not launched yet, and we hadn't finished all of our tests on it. We had to remotely figure out how to conduct the final tests you do on the spacecraft. We had one called a phasing test, basically making sure all the instruments were pointed in the right direction, not pointed up when they should be pointed down, and that the numbers that tell you where they're pointed are correct. We had to do that remotely, so we had to instruct the few people who were there in Florida. The team that puts the spacecraft together was isolated, kept away very strictly from the rest of the world. But that meant we couldn't travel to Florida to conduct the tests. We had to instruct them on how to do it. I did that from my bedroom. We also did all of our operational readiness tests. We did launch. People at JPL, including myself, don't do much for launch. As soon as it launches, there's a lot to do, but up until then, it's more someone else's job. And in cruise to Mars, we have to check our sensors out.

Those first checkouts, we actually were able to come on lab for that. I was able to go to the mission support area, the place where there are all the people sitting at the consoles, controlling the spacecraft and monitoring it. We were able to take images during cruise, test our computer, things like that. We did that three times. Then, landing was during COVID. [Laugh] We were able to go on lab for that. There were many fewer of us. There were very strict protocols. And it wasn't the grand celebration I had on MER or that was there during Curiosity's landing. It was much more subdued because it had to be. That was a difference. Then, we started the new project, Sample Retrieval Lander. I definitely prefer working on lab and having meetings in person, especially in an early phase of a project, when you're designing things, and you don't know the answer. You really need a lot of back-and-forth to do it. Not everybody feels that way, but that's how I feel. Now that we've really started doing that, I think things are moving much faster for us.

ZIERLER: Bringing the story right up to the present, for Mars Sample Return, are you operating on a timeline where this is going to happen in the near enough future that the technology and methods available today are relevant for planning? How do you think about those things?

JOHNSON: Oh, yeah, absolutely. Typically, it takes seven months to get to Mars, but for the Mars Sample Retrieval Lander, it's going to take two years. That's because the vehicle is so heavy, it just doesn't have enough oomph to get there any faster. It takes longer, but that's atypical for getting to Mars. When you start designing something, what you're going to build is what we're capable of right now, because it takes so long to finish the design, put it together, and get it onto the spacecraft. It's just a fact of spaceflight projects. You can definitely take advantage of anything up to a point. For example, the LIDAR we want to use for the Mars Sample Return lander is something that was already demonstrated on another flight project, so we're taking advantage of that.

That wasn't true for past projects I've worked on. If we were able to wait, like, three or four years, we might be able to get a better computer, or at least a more up-to-date computer. But we don't need to do that. We have a computer that's totally adequate. The algorithms and software, we've changed. We're changing, adapting, updating them. That's not something that's obsolete. We sometimes actually have the opposite problem. We've built something before and want to build it again, but we can't because we can't find the parts to build it. We have this thing called obsolescence, so it's a different type of pressure that pushes you to move forward. You can't just build the same thing you built in the past.

ZIERLER: What are you concerned about, if anything?

JOHNSON: Technically, we have a very challenging problem to solve for Sample Retrieval Lander. We're adding another camera, and this camera is going to take images during the entry phase. It looks out the back of the capsule, and there's a door that's going to pop off. Then, it'll look out onto the ground and do the same landmark-matching job that we do later on in the timeline, but with a different camera. That has to work, and that's a new, difficult thing we have to implement. When you're higher up, which we will be, there's more dust in the atmosphere, so we have to deal with that effect. It's just something we haven't done before. We also don't have a way to measure our altitude, so we have to use the images to calculate the altitude, which is something brand new. I feel pretty good about being able to do it, but there's some development ahead of us. Also, on the final touchdown, we have to measure the velocity of the spacecraft just with images. Okay, we get to use an altimeter, too. But that's a very challenging problem, too, that we haven't done before. Those are both big risks.
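One standard way to get altitude from images alone, offered here only as an illustration of the kind of approach involved and not as the flight algorithm, is to match descent-image features to map landmarks whose 3D coordinates are known from orbital data and solve a perspective-n-point problem; the recovered camera pose then contains the altitude. A minimal OpenCV sketch, assuming at least four matched landmarks and a calibrated camera:

```python
import cv2
import numpy as np

# Illustrative sketch (not the flight algorithm): if descent-image features can
# be matched to map landmarks whose 3D coordinates are known, a perspective-
# n-point solve recovers the full camera pose, and the altitude falls out of
# the translation, with no dedicated altimeter needed.

def pose_from_landmarks(landmarks_3d, pixels_2d, K):
    """landmarks_3d: (N,3) map coordinates; pixels_2d: (N,2) image detections;
    K: 3x3 camera intrinsics. Returns rotation, camera position, altitude."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        landmarks_3d.astype(np.float32),
        pixels_2d.astype(np.float32),
        K, None)                           # None = no lens distortion assumed
    R, _ = cv2.Rodrigues(rvec)
    cam_pos = (-R.T @ tvec).ravel()        # camera position in the map frame
    altitude = float(cam_pos[2])           # height above the map datum (by convention here)
    return R, cam_pos, altitude
```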

On the Sample Recovery Helicopter, which is somewhat more tolerant of risk because there are two helicopters that are going to fly, we have a bunch of new vision capabilities. One of them, which I didn't mention but is relevant, is that we have to identify the sample tube that's on the ground. The helicopter lands next to it and needs to see it and figure out where it is so it can drive up and get this little, tiny arm to grab it. It has to be very accurate to get onto the sample tube. We're going to take an image, then use a machine-learning neural net to identify the location and the pixels that are on the sample tube. We have to train that up. We've never used machine learning in any planetary missions that I know of. We've used it for data processing on the ground, on the data we've collected. We've used it for some design work. But we haven't used it on board. This is going to be one of the first applications of ML. I'm very excited about it. But we need to prove to the skeptics that it's the right approach and that it will work.
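As a toy sketch of that general approach, and emphatically not the flight design, the Python snippet below (assuming PyTorch) defines a tiny per-pixel segmentation network and pulls a target location out of the predicted tube pixels; the architecture, sizes, and names are all illustrative.

```python
import torch
import torch.nn as nn

# Toy sketch of the general approach (not the flight design): a small
# convolutional segmentation network labels each pixel as "tube" or
# "not tube," and the centroid of the tube pixels gives the arm a target.

class TubeSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),            # per-pixel logit for "tube"
        )

    def forward(self, x):
        return self.net(x)

def tube_centroid(logits, threshold=0.5):
    """Return (row, col) centroid of predicted tube pixels, or None if absent."""
    mask = torch.sigmoid(logits)[0, 0] > threshold
    ys, xs = torch.nonzero(mask, as_tuple=True)
    if len(ys) == 0:
        return None
    return float(ys.float().mean()), float(xs.float().mean())

# Illustrative use on a random stand-in image; the real network would be
# trained on labeled imagery of tubes under Mars-like lighting and terrain.
model = TubeSegNet()
image = torch.rand(1, 1, 256, 256)
print(tube_centroid(model(image)))
```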

ZIERLER: For the last part of our talk, I'd like to ask one overall retrospective question, then we'll end looking to the future. Looking back on your career at JPL, where have you had the most fun, either in terms of impact, in terms of doing something really out of the box? What stands out in your memory?

JOHNSON: I've got to say, that first flight project, MER with the DIMES system. That was the first really big impact, and that was very satisfying because we did it so quickly, and it had never been done before. I also didn't know it at the time, but it really made my career. We believed it could be done, and we did it. That made me quite happy afterwards, of course. And then, the thing that I enjoy most at JPL is when we have some prototype system, and we're going to go out and field test it. Collect data, fly the system on a helicopter or an airplane, put it on a vertical takeoff and landing rocket, and show that it works. That's so exciting. It's like a little flight project all by itself, but it has that same element of satisfaction associated with it. I just really enjoy working here. I love the people I work with, they're brilliant and have the same drive that I do. It's a pleasure to work here every day. There are ups and downs, but right now, it's definitely really exciting. I'm working on two flight projects, Sample Recovery Helicopters and Sample Retrieval Lander. We're designing them at the very beginning. It's very exciting, making these decisions, deciding how to build the system.

ZIERLER: Finally, last question. How far out into the future can you look? What is your time scale? Is there a sense of what comes after Sample Return and what that might mean for you?

JOHNSON: Yeah, I'm still pushing on the LIDAR side, hazard detection and avoidance, using the LIDAR for position estimation, doing that for Enceladus, Europa. We're supporting the Human Landing System, which has commercial landers going to the moon. It's going to kind of happen in parallel with Sample Return, but I'm excited for that other path of NASA, which is going back to the moon, people on the moon, helping those companies out, and then seeing our efforts there, which I can't really talk about. Seeing it actually happen, seeing people land on the moon again, then exploring it and staying. I would be excited if NASA had some success on Artemis and then started talking about going to Mars with people. That's compelling to me. But the space science is really the most compelling. I want to see close-up pictures of Europa and Enceladus, go retrieve samples from a comet nucleus. There are a ton of projects that would excite me. I don't know which ones they'll be at this point because JPL is so focused on Mars Sample Return and the Europa Clipper, getting Clipper done and getting Mars Sample Return designed and launched eventually.

ZIERLER: The takeaway, though, is you're still having a great time, and there's a lot to look forward to in the future.

JOHNSON: Yeah, I think so, for sure.

ZIERLER: I want to thank you for spending this time with me. It's been a great discussion, and it'll be so great for this project I'm working on for digital astronomy. Thank you so much.

[END]