David Brady (PhD '90), Optical Physicist and Builder of the Gigapixel Camera
The starting point of David Brady's work is a challenge question: "Why haven't cameras improved as much as other information technologies?" While it is true that photography has made stupendous strides since its creation in the 19th century, these advances do not match those in computation and communication. Not satisfied that the AWARE camera improved frame rates by a factor of 100 million over Daguerre's original, Brady is forging ahead to push resolution capacity ever further, aided by the latest advances in hardware, software, artificial intelligence, and computation. The Gigapixel Camera is just the beginning.
In the discussion below, Brady reflects on his attraction to the concept of applied physics at Caltech, and the benefit he drew from Amnon Yariv's work in optical communications, John Hopfield's interest in neural networks, Carver Mead's focus on neuromorphics, and his thesis advisor Demetri Psaltis's advances in optical computing. He notes the duality of the Beckman Institute at Caltech and the University of Illinois, and he relates the origin story of the Gigapixel Camera, built with support from DARPA, and the obvious national security value of dominating the field of super-high-resolution photography. Brady discusses the vibrant startup culture at Duke and the broader Research Triangle community, and his attraction to the University of Arizona, home to the world-leading Wyant College of Optical Sciences.
Despite the advances that characterize the modern world of optics, Brady emphasizes that its research community remains relatively small, and that this is perhaps its greatest asset. For as long as he remains active, Brady is committed to testing the limits of Moore's Law, harnessing the exponentially growing power of artificial intelligence, and aligning photographic advances with other information technologies. As he muses, in retirement he will focus on photography. But because he is not impressed with the current capabilities of consumer cameras, one might easily guess where he will focus his future efforts.
Interview Transcript
DAVID ZIERLER: This is David Zierler, Director of the Caltech Heritage Project. It's Tuesday, December 19, 2023. It is my great pleasure to be here with Professor David Brady. David, it's wonderful to be with you. Thank you so much for joining me today.
DAVID BRADY: Thanks, David. It's nice to meet you.
ZIERLER: David, to start, would you please tell me your title and institutional affiliation?
BRADY: I am the J.W. and H.M. Goodman Endowed Professor of Optical Sciences at the University of Arizona.
Optics Leadership at Arizona
ZIERLER: Administratively, where does that fit? Tell me about the Wyant College of Optical Sciences.
BRADY: Arizona is one of three main optical schools in the United States, in the case of Arizona dating back to the establishment of the Kitt Peak National Observatory and all the astronomy activity that is here. It has had a program in optical sciences for over 50 years. It became a college of optical sciences I think 10 or 15 years ago. Jim Wyant was a pioneer in optical metrology. He was both on the faculty here and served as dean, and then he did well in business so he endowed the College to make it a named place. Actually when he endowed the College, there was an effort to grow the faculty. Wyant was also a graduate of the University of Rochester, which is the other main optics school, and he gave gifts both to Arizona and Rochester to grow their faculty by 50 percent. As part of that process, they created a number of endowed chairs, and I was recruited to take the chair that is named after Professor Goodman, who really was kind of the pioneer of the field that I work in.
ZIERLER: It's J.W. and H.M. Which of the two is Professor Goodman?
BRADY: Joseph Goodman is the professor and then H.M. is his wife.
ZIERLER: Tell me about Professor Goodman. What was he known for?
BRADY: He was on the faculty at Stanford. The field that I work in is optical systems, analysis of optics from a systems engineering perspective. Goodman wrote a book called Introduction to Fourier Optics, which applied harmonic analysis to the development of optical systems; it was probably the most influential book in optics in the last century. Actually, the prize in optical book writing that the Optical Society gives away is named after him. He created kind of the optical systems group at Stanford, and that led to all kinds of new ways—he was a real pioneer of holography and digital holography, and computational imaging.
ZIERLER: Is there also an affiliation with the Arizona Quantum Initiative?
BRADY: Yes. The Quantum Networks NSF center is based in the College of Optical Sciences. Saikat Guha and his group are in the same building that I am in here. The college has a group in optical physics that is more the quantum group, and there's photonics, and there's an optical engineering group. Probably well over half of all lens designers in the U.S. are graduates of Arizona. Then there's image science, which is kind of the group that I am in.
ZIERLER: You mentioned Kitt Peak as part of the origin story for how Arizona became a leading school in optical sciences. What is it about Kitt Peak? What is the connection or what are the capabilities at Kitt Peak that would help explain this emphasis on optics?
BRADY: Aden Meinel was the founder of the Institute here. He grew up in Pasadena and attended Caltech. Then I think he got his PhD at Cal. He was at Yerkes Observatory in the Midwest. I think going into World War II, strangely, a lot of the major observatories in the United States were in the Midwest or the East, which were not really well suited for astronomical observation. After the War, Meinel led a group at NSF that did a national survey of potential sites for astronomy. Of course now the major sites for astronomy tend to be in Chile and Hawaii, but at that time they did a national survey of where in the United States would be the most suited for astronomical observation. Of course you want to be on a high mountain peak, and Kitt Peak came out really well in that survey. Of course now, because of that, most mountains in Arizona have an observatory on top, because they've kind of put them everywhere.
ZIERLER: Some overall questions about your research—let's start with your expertise. All of the disciplines that you're involved in—there's optics, there's physics, there's electrical engineering, computer engineering—what is your home discipline, from either the things that are most important to you, your education, or just the central research that you do?
BRADY: Definitely optical systems, analysis of optics. My PhD at Caltech was in applied physics, but as a professor I spent 30 years on the faculty of electrical engineering. I would say I went into optics because—my undergrad was in physics, and optics is like the most applied branch of physics—I wanted to do something kind of applied—but it's the least applied branch of engineering, so I didn't ever really fit that well, I guess, as an engineer. The reason I came to Arizona was I'm most comfortable in an optical sciences group.
Optics Between Physics and Engineering
ZIERLER: Let's tease that out a little bit; it's very interesting. Optics is the most applied aspect of physics and the least applied of electrical engineering. What does that mean?
BRADY: Electrical engineering, even though it's called electrical engineering, the biggest emphasis in it is really in information science and systems, mathematical analysis of systems. I've been involved in starting probably five or six different optics companies, and typically when we have an optics company, we might employ like five or ten optical designers, and for every optical designer we'll have like 50 or 100 software people. That's what I mean: on the system analysis, mathematical side of engineering, there are unlimited things to be done, but optics is very physically oriented. People who do optics can either be in physics or engineering departments. In physics, of course, cosmology and high-energy physics are far from applications, but optics is the branch of physics where people tend to do real things. Of course with quantum information science, that's all potentially becoming revolutionary, where there's a whole new branch of where physics could play a practical role.
ZIERLER: Are you involved—is there an optical science aspect to the quantum computing revolution that we are somewhere in the middle of?
BRADY: Yeah. If you look at ion trap quantum computing or the neutral-atom kind of things, they are basically a form of optical computing, but I'm not involved. I'm a very practical person. Actually, my work at Caltech was in optical neural networks, physically how we build neural systems. Technology is very unexpected—you never know what's really going to take off—but I'm more certain that connectionist machines and neural processing is going to be a hugely revolutionary technology than I am that quantum information science is going to play that big of a role.
ZIERLER: Interesting. When you've been inspired or when you've seen an opportunity to start up a company, what has given you the confidence to pull the switch on that?
BRADY: I started a spectroscopy company 20 years ago, and that was really mostly coming from a—we do government programs, we develop stuff, and after you've developed it and demonstrated that it works, it's very irritating if it doesn't become something that people actually use. Usually it has been that we have something that has been working in the lab that we want to transition out of the lab. My main commercial interest—I should be, but I'm not a businessperson who is like, "Here's a great business opportunity, let's go do this." I'm more like just fanatical about—and actually, the work that I've done—I was involved in the launch of Evolv, which is a security screening company, and that was technology that was licensed out of Duke for security checkpoint screening. My technology led to the founding of a company called Quadridox that does x-ray diffraction tomography. Those are all things we had in the lab that just got out and it was time for them to transition from successful research projects to things that could be used in the real world. But my passion is just cameras. There is a business aspect to it, that we need to sell them to be successful, but I'm just really irritated that cameras are not as good as they should be. The business is more like a vehicle to making sure that cameras become the kind of instrument that they should be.
ZIERLER: That's why you're a professor and you're not in private business, I suppose!
BRADY: I suppose that's true!
The Centrality of Machine Learning
ZIERLER: Everyone is thinking about AI and machine learning these days. Is that relevant for your work at all?
BRADY: It's extremely relevant. People have two eyes, but we don't really ever think that we have two eyes. When you see the world, you see an integrated vision. That's because you have a visual cortex. The passion I have is to make computational imaging. We do measurements from instruments. The challenge in computational imaging is, first of all, what to measure, and then how to manage the data after you have measured it. There are four eras in how to manage the data. One is, make a measurement that looks like the object, so the first step is the photograph looks like the object. The second is make it a simple transformation, like a Fourier transform or a Radon transform. The first one was like the dawn of photography until the 1950s. Then from the 1950s to the mid-1970s, people would do simple Fourier transforms for radar, or Radon transforms for x-ray tomography. Then from like the late 1980s until the mid-2000s, people would do constrained optimization. It's like doing a linear estimator but applying some smoothness constraint.
Now with neural estimators, we can take data, and it can be unrelated. You can have like an infrared image and a visible image, you can have the time of day in a camera, you can have the knowledge that it's a horse in a camera, and you can combine those in a way that will create a better estimated image. Neural processing is the back end of computational imaging. It's what allows us to think about how we're going to manage whatever data we've measured, and it transforms what we think we need to measure: we don't need to make the image look like what we want the scene to look like; we just need the data. We need to collect data that a neural system can turn into something useful. In fact, I was just working on something this week where it had to do with 3D imaging with a kind of fringe projection. It's a very complicated mathematical problem. There's a long history of people trying to solve this problem, and then—a student was asking me how to do it this week, and it's like, well, why are you bothering with these mathematics? Just collect the data and train your neural network to solve it, and you're done. So, it changes everything about how we think about how we want to manage data.
ZIERLER: What aspects of AI make your work more efficient, and what is simply possible because of AI that wouldn't be possible without it?
BRADY: The thing that's possible is that we collect data from a bunch of cameras and we create super high—our goal is to create cameras that are mind-blowingly better than the human eye, like 100 times faster, 100 times higher resolution. That hasn't happened because people just don't know how to handle all that data. The neural processing makes it possible for us to have systems that can create those kind of images. The parts that freak me out is just—I spend a lot of time writing computer code, and the AI generators for computer code have gotten stunningly good. You're sitting there and it's predicting like the next 10 lines of code that you're going to write. As I was mentioning to you, in the optics business, whatever, the things that I make are computers that happen to be cameras. The camera part is like a veneer on the top, but the underlying technology is really a computer. The hard thing about computers is generating software, and if AI generates the software automatically, that changes the game dramatically.
ZIERLER: When you talk about computers that are x number of degrees better than a human eye, is this like biomimicry or bioinspired technology? Are you using the eye as a base point to make the technology better?
BRADY: Yeah. We would like to get there, actually. We're still pushing in that direction. The concept of a frame is a film concept. We have the concept of cameras. We're creating basically a new kind of media, right? You have photographs, there's a still image. A movie is a sequence of frames. The media we create is like you collect a lot of data, it goes into a neural cortex, and it spits out visualizations that you're interested in. The concept of a frame is—this concept of snap-snap-snap, you take a picture—the eye doesn't work that way at all. The eye has a bunch of sensors that are asynchronously collecting data and streaming it out. So we are definitely thinking on the back end that we want to create a digital version of the neural cortex that will mimic what the brain does but do it with many orders of magnitude more data than humans actually collect. At the same time, we want to make the sensor something like the eye where it's not a bunch of pixels that are sequentially reading out data; it's a very complicated sort of embedded neural processor—you could call it a neuromorphic sensor—that will sample data in a feature-based way.
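Brady's contrast between frame-based readout and asynchronous, feature-driven sampling can be sketched in a few lines. The scene data, threshold, and event format below are illustrative assumptions, not a description of any actual neuromorphic sensor:

```python
import numpy as np

# Sketch of event-based (neuromorphic-style) sampling vs. frame readout:
# emit an event only when a pixel's intensity changes by more than a
# threshold, instead of re-reading every pixel every frame.

def events_from_frames(frames: np.ndarray, threshold: float):
    """Yield (t, pixel_index, +1/-1) events from a (T, N) intensity array."""
    last = frames[0].copy()
    events = []
    for t in range(1, frames.shape[0]):
        diff = frames[t] - last
        changed = np.abs(diff) > threshold
        for idx in np.flatnonzero(changed):
            events.append((t, int(idx), int(np.sign(diff[idx]))))
            last[idx] = frames[t, idx]   # update only pixels that fired
    return events

# A mostly static 8-pixel scene where one pixel brightens at t=2.
frames = np.tile(np.linspace(0, 1, 8), (5, 1))
frames[2:, 3] += 0.5
evts = events_from_frames(frames, threshold=0.2)
print(evts)   # → [(2, 3, 1)]  one event instead of 5 x 8 pixel reads
```

For a static scene the event stream is nearly empty, which is the point: data volume scales with scene activity rather than with frame rate times pixel count.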
ZIERLER: What level of data are you working with? Are you in the petabyte level, and how do you store all of it?
BRADY: I think last I checked, like 80 percent of all internet traffic is image data. Basically we'll use however much data there is in the world. I led the team that built the world's first gigapixel camera. That system worked at like 30 frames a second. We generated certainly hundreds of terabytes a day. Commercially we are selling cameras now that generate four terabytes a day of data per camera, but we have proposals to build cameras that will capture terabytes a second. The thing that you have to remember about this is that—I was at Caltech in 1984 when Blade Runner came out, and Blade Runner has this scene where Harrison Ford talks to the computer and says, "Enhance, enhance, enhance"—most of that scene has come true. You can talk to computers now. Actually he was on a CRT; we have much better displays. He asked for a hard copy; we would never do that now. But this ability to zoom indefinitely is something that we're still working on.
When I was at Caltech, we were using IBM PCs. They weren't like PC clones; they were the actual first IBM PCs. We had floppy disks. Since that time, for the price of what a floppy disk was then, you can buy a terabyte thumb drive, so the price of memory is down by a factor of a billion. The price of communications is also down by a factor of a million or so. Computing power is up by like a factor of a million. But cameras are kind of about the same. That's what we're trying to change, to definitely make it so that—people talk about this information apocalypse where within 50 years like every atom on Earth will be necessary to store all the data that we've generated. I guess we're trying to drive that.
ZIERLER: [laughs] In thinking about these exponential advances, is Moore's Law a useful frame of reference for you? Is it still relevant? Are we still pushing the boundaries of Moore's Law?
BRADY: Definitely. When we built the gigapixel camera, one of the motivations was that people had said, "Moore's Law doesn't apply to cameras because the sensor is already at the wavelength scale." But the sensor is just a really extremely small part of a camera. In a standard compression algorithm, every pixel in a camera gets touched like 100 times after it's read, so like 99 percent of the power in a camera is not the sensing; it's the information processing after. Basically, the limitation to what cameras can do is really not optical; it's how do you manage all that data and make it easy and convenient for people to get whatever information they want. Definitely neural processing is changing everything, because the architecture—neural processing is just a continuation of the trend to make computing more and more parallel. Being able to process more data—like right now, a metric I think about a lot is what's the energy cost per pixel. Right now it's about a nanojoule, so like one nanojoule of energy is expended for every pixel detected. To get to where we would sense all the information you might want to sense, we would need to drive that down by a factor of a million or so. That will happen through continuing advances in processor architecture, which is effectively Moore's Law.
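The energy-per-pixel metric Brady describes lends itself to a quick back-of-the-envelope calculation. The ~1 nJ/pixel figure is his; the gigapixel sensor size and 30 fps video rate are illustrative assumptions, not numbers from the interview:

```python
# Back-of-the-envelope power budget for a gigapixel video camera,
# using the ~1 nJ/pixel figure quoted in the interview.

NJ_PER_PIXEL = 1e-9        # joules expended per detected pixel (today)
PIXELS_PER_FRAME = 1e9     # a gigapixel sensor (assumed)
FRAMES_PER_SECOND = 30     # video rate (assumed)

pixels_per_second = PIXELS_PER_FRAME * FRAMES_PER_SECOND
power_today_w = pixels_per_second * NJ_PER_PIXEL   # watts
power_future_w = power_today_w / 1e6               # after a 10^6 improvement

print(f"today:  {power_today_w:.0f} W")            # ~30 W for the pixel pipeline
print(f"future: {power_future_w * 1e6:.0f} uW")    # ~30 uW, a wearable-scale budget
```

The factor-of-a-million reduction he calls for takes the same data stream from tens of watts to tens of microwatts, which is what would make always-on, eye-beating sensing practical.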
End Users from the Largest to Smallest Scales
ZIERLER: Some questions about the users of the technology that you create—are you one of your own users? Do you use these cameras to conduct fundamental research on your own, or is it really for others to do that?
BRADY: I don't, really. I'm a photographer. I like to take beautiful pictures. I have a dream, like in retirement, that I would do nothing but go take pictures. But I can't retire because I don't like any of the cameras that are available right now.
ZIERLER: [laughs]
BRADY: People have different interests, and my interest is in how to make cameras and sense information. I've been to a lot of the famous telescopes in the world. When you get up there, people will start talking about all the wonderful things they're seeing in space, and I'm always like, "I just really don't care. I just want to talk about the instrument and the telescope." I think definitely in photography, obviously people who create media are artists and are going to do amazing things. I'm not one of those people. I'm just interested in the technical challenge of building—it's similar, I think, to a computer designer. You don't expect a computer designer to design all the algorithms that are going to be interesting on a computer. I'm more interested in let's make this amazing instrument that would empower other people to go take pictures with it.
ZIERLER: Let's go to the end users, from the largest scale to the smallest scale. Obviously we'll start with the universe, astronomers. Where do we see your technology in telescopes and other imaging devices?
BRADY: Basically my interest is in making super-widefield, high-information-capacity instruments. Normally with telescopes, they get bigger, they get higher resolution, but the information capacity stays about the same. Cameras all tend to be about 10 megapixels because it's possible to design lenses that can sample 10 megapixels. We've done telescope designs that are compact and super widefield, so they're not higher resolution than other telescopes but they can see the entire sky all the time. Currently I have some space debris tracking projects where we build telescope arrays that can survey the whole sky continuously.
ZIERLER: From there, what would be next on the scale? What's next down from astronomy and the universe? Atmospheric testing, planetary science?
BRADY: We do work on those kinds of things, but a lot of what I work on is military-oriented projects like Earth surveillance, and then putting cameras on drones and planes. Basically, for a drone, the cost of an aircraft is really flight time, and so if we can put cameras that will see more on an aircraft, then you can get more information per unit flight time.
ZIERLER: I imagine then that there are some topics in your work that require a clearance and you can't talk about them.
BRADY: Definitely I work with people who have a clearance, and I have had one before. But then I worked in China for quite a while, so I had to get out of that secure game. But definitely we build cameras—again, actually I don't care what the cameras look at, so I build cameras for people who use them for stuff that they don't tell me about.
ZIERLER: At the smallest level, where do we see your cameras that are relevant for molecular or atomic-level study?
BRADY: At the atomic level—I work in computational imaging, so the general methodology that I work on has been applied for x-rays, and mass spectroscopy, and all kinds of instruments. Basically we've developed a methodology about how you go about building a forward model, physical description. The core problem is the real world is continuous, and measurements are discrete. We get into the details of how you build that interface and build computation around it. That has been applied in these x-ray systems and in mass spectroscopy. Then there's stuff like—Roarke Horstmeyer was an undergraduate student in my lab at Duke, but then he got his PhD with Changhuei Yang at Caltech. He was one of the pioneers of developing Fourier ptychography which is like a gigapixel sort of microscopy system. Now he's on the faculty at Duke and has built kind of a scanning gigapixel microscope at Duke using the same kinds of technologies.
ZIERLER: Is it useful to think of your research as having both classical and quantum applications?
BRADY: The beauty of optics is—that's basically the definition of optics. It's kind of where electromagnetic theory and quantum theory collide. Definitely you need to understand quantum mechanics to understand optics. In my case, I sweep most of the quantum aspects under the rug by—we have statistical models for the field that describe most of the quantum mechanics in a kind of simple way. Definitely people in this area of optical imaging can get into quantum-limited detection. This is one of the things where I diverge from quantum computing; I'm mostly interested in the kinds of systems where the data is massive, where we're going to collect gigapixels, terapixels, that kind of thing. When you get to this massive data scale, you just don't have that kind of interface in a quantum system.
ZIERLER: Some more technical questions, some terms of art in your field—"aperture synthesis," what does that mean?
BRADY: In optical systems, resolution is limited by the aperture size of the lens. If you get a bigger lens, then basically there's a diffraction theory that says you would see more. Aperture synthesis is phasing multiple apertures to behave as a single aperture. When you see something like the Very Large Array radio telescopes in New Mexico, you would collect data coherently from those radio telescopes to combine them to get a bigger effective aperture. People do the same thing with optical telescopes, where they can build interferometers between multiple optical telescopes and interferometrically combine their data to create a larger synthetic aperture than what you have from any one of the individual instruments.
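The diffraction theory Brady refers to is captured by the Rayleigh criterion, which ties angular resolution to the size of the aperture, or of the synthetic aperture a phased array creates. A minimal sketch, with illustrative wavelength and baseline values:

```python
import math

# Rayleigh criterion: theta ~ 1.22 * lambda / D for a circular aperture.
# Aperture synthesis replaces D with the baseline between phased elements.

def rayleigh_resolution_rad(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution in radians."""
    return 1.22 * wavelength_m / aperture_m

wavelength = 550e-9  # visible light, ~550 nm (assumed)
single = rayleigh_resolution_rad(wavelength, 1.0)       # one 1 m telescope
synthetic = rayleigh_resolution_rad(wavelength, 100.0)  # 100 m synthetic baseline

print(f"1 m aperture:   {math.degrees(single) * 3600:.3f} arcsec")
print(f"100 m baseline: {math.degrees(synthetic) * 3600:.5f} arcsec")
```

Two coherently combined telescopes 100 m apart resolve roughly 100 times finer angular detail than either 1 m element alone, which is the payoff of phasing apertures together.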
ZIERLER: A larger aperture, is there a challenge there with frame speed? Is it slower the larger it is? Is that a technical challenge to overcome?
BRADY: No. Typically what people do is they make the aperture bigger but they'll make the field of view smaller, so that the frame speed would stay the same. One of the things that we have emphasized is not doing that, making the aperture bigger and keeping the field of view up, in which case the data load becomes larger and larger. Then how you are going to manage and compress all that data becomes an issue.
ZIERLER: What is the value of applying techniques from interferometry for your work?
BRADY: It allows us to create a larger effective aperture. We also use interferometry to see through turbulence, see through the atmosphere. With interferometric techniques, we can computationally process the field in ways that you can't really do with just a lens.
Math Inspired by Biology
ZIERLER: You mentioned the phrase "artificial neural network." What is its relation to a quote-unquote real or biological neural network?
BRADY: A biological neural network is a bunch of cells that are connected through axons and all kinds of complicated chemistry. An artificial neural network is a mathematical structure that is loosely based on the biological version but it's a connectionist machine, so it's a machine where there's a—classical computers are built from logic gates, but an artificial neural network, the basic computational tool is based on what's called a perceptron. It's a weighted vector matrix multiplier that combines a bunch of weighted values and puts them into a threshold. By the way, my PhD thesis at Caltech was one of the first PhD theses to use the term "artificial neural network."
ZIERLER: Oh, wow.
BRADY: The thesis was Volume Holography in Artificial Neural Networks. That was in 1990. Caltech had just started this computational neurosciences division and it looked to be good times, and then the 1990s were kind of a dark era for that. But optics was relevant to artificial neural networks because—basically, in an artificial neural network, you have a bunch of signal values, which you think of as the outputs of neurons. You need to implement a linear sum of those things and then put them into a threshold. Electronics is not very good at doing linear sums of values because you have impedance and interface issues, whereas optics is very naturally a very powerful way to do these kinds of connections. In this case, using an optical device to combine a bunch of things and get an output is extremely different from what happens in a biological brain, but mathematically it's a kind of similar thing.
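The perceptron Brady describes, a weighted sum of neuron outputs pushed through a threshold, can be written in a few lines. The weights, inputs, and bias below are arbitrary illustrative values:

```python
import numpy as np

# A perceptron: weighted linear sum of input activations followed by a
# hard threshold. Optics implements the linear-sum stage naturally;
# here it is just a dot product.

def perceptron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> int:
    """Return 1 if the weighted sum exceeds the threshold, else 0."""
    activation = np.dot(weights, inputs) + bias
    return 1 if activation > 0 else 0

x = np.array([0.9, 0.1, 0.4])       # outputs of upstream "neurons"
w = np.array([0.5, -1.0, 0.25])     # connection strengths
print(perceptron(x, w, bias=-0.2))  # → 1
```

A layer of such units is just a matrix-vector multiply followed by elementwise thresholds, which is exactly the operation a volume hologram can perform in parallel on an optical field.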
ZIERLER: You mentioned the ways that cameras can now be hundreds of times more impressive than a human eye, and then you just likened a brain for a neural network with a computer in artificial neural networks. Can we start to think about computational power as being in the same ranks as a human brain in terms of what a human brain can do?
BRADY: Yes. I think that's a really, really interesting question. In terms of raw numbers of calculations, current electronic machines are much faster than anything that happens in the human brain. Already the raw processing power is more than the human brain. I think there are people now that are debating whether ChatGPT and these platforms are sentient or not. Some of what it shows is just most of what humans do is kind of superficial. Actually, the funny thing about human intelligence is that it's a small part even of what the human brain does. It's much harder to make a machine that can walk around and pick up things and eat and do all the things we do naturally than it is to make a machine that can think, the way we think of thinking normally. ChatGPT is a massive computational system with like a trillion lines of code, but in the same sense that computation has gotten a million times faster over the course of my career, I think these neural systems are going to get a million times more efficient in the next 30 or 40 years. I think almost certainly we will have machines that are much smarter than humans.
ZIERLER: What about tomography? What role does tomography play in your work?
BRADY: Tomography just means—it's Greek for slice selection. "Graphy" is imaging and "tomo" is measuring slices. It just means multidimensional imaging. We think of imaging as not tomographic just because it came from this era where images were formed on planes. Actually almost all imaging is tomographic because the real world is not two-dimensional; it's three, four, or five-dimensional. In what I do, computational imaging, there are three major revolutions. One is recognizing that the world is multidimensional. I've kind of sequentially gone through all different forms of tomography and done what is called compressive tomography. Traditionally people think, if I have a two-dimensional measurement—the core problem is, measurements are usually on a two-dimensional plane and the object is usually higher dimension. If you look at x-ray tomography, people have these round gantries and you have to spin around the object to form a three-dimensional image. We've gone through all these different forms of tomography and developed ways to do snapshot tomography, where we don't have to give up time. We can reconstruct multidimensional objects in a single snapshot. Hyperspectral imaging is a form of tomography where you have x, y planes in color. Video is a form of tomography with x, y, and time. Computational imaging is about ways to estimate multidimensional objects from lower-dimensional measurements.
The second aspect of that is that traditionally, when you take a photograph, it's a photograph of the light field; it's not a photograph of the object. Computational imaging and really computational tomography focuses on, we don't want to image the field, we want to image the object. When we talk about what neural systems allow, they allow us to do that. We don't form an image of the light that happened to be generated by the object; we form an estimate of the actual object. The third aspect is just this aspect of information quantity, that traditionally systems have been really limited in the amount of information they can collect, and by being clever about ways that we manage the data coming off these things and implementing real-time compression in neural systems, we can increase the amount of sensed information by many orders of magnitude.
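The idea of estimating an object from lower-dimensional measurements can be illustrated with a toy compressive-sensing recovery. The problem sizes, the random measurement matrix, and the choice of ISTA as the solver are all assumptions for illustration; this is not Brady's actual reconstruction pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressive sensing: recover a sparse length-n signal from m < n
# linear measurements via ISTA (iterative soft-thresholding).
n, m, k = 100, 40, 4                 # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                             # lower-dimensional measurement

def ista(A, y, lam=0.01, iters=1000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by soft-thresholding."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative error: {rel_err:.3f}")
```

The sparsity prior stands in for the smoothness constraints of the constrained-optimization era Brady mentions; a neural estimator would replace the hand-chosen prior and solver with a trained network, but the forward-model structure is the same.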
The National Security Dimension
ZIERLER: Of course all of this technology, all of these capabilities, it must be quite expensive. What are the key sources of funding, both government and perhaps private enterprise, that make all of this possible?
BRADY: My work has historically been funded mostly by the Department of Defense and the Department of Homeland Security. We built a variety of advanced checkpoint technologies for the Department of Homeland Security. Currently we have a project to build a pilotage system for the Apache helicopter. We're building high-end military imaging systems. Of course the work on the gigapixel camera was funded by DARPA. In the space that I work in, a lot of people work in biomedical kinds of imaging and so their work is funded by NIH. I've kind of avoided learning any biology, so I tend to work on military-oriented things. I think that now of course, the major imaging companies in the world are the cell phone companies. Graduates of my group, about a third work for either Samsung or Google or Apple. Then of course Meta is investing heavily. Apple has made a big bet around the concept of spatial computing, and I think in general there is an emphasis around how these technologies change the way we interact with the world around us. Actually here at Arizona, we have a program called OASIS. It's on human augmentation and combining advanced sensing and interactive displays, so basically very much in line with like Apple's concept for spatial computing and Meta's concept for the Metaverse. That work is funded a lot by Big Tech and corporate interests.
ZIERLER: Let's go back now and establish some personal history. When you were an undergrad at Macalester, were you already thinking about optics and photography and electronics, or that comes later on at Caltech?
BRADY: As an undergraduate, I studied physics. My undergraduate thesis was on dark matter and cosmology. At Macalester, we had a tiny little particle accelerator. It was more like an ion implant machine, but we thought it was a particle accelerator. Our faculty were really tied in with the guys at the University of Chicago. The summer I was a senior, we went down and visited Fermilab. That wasn't for me. I didn't want to work in a group of like a thousand people working on these huge physics projects. I knew I wanted to do something applied. Actually the fact that Caltech had this Applied Physics program was a real attractor, because I didn't go into pure physics; I went into the Applied Physics program. I would say that optics was like the only thing I could do, because I didn't know electronics, and I did know and understand waves. When I got to Caltech, I took quantum optics, or quantum electronics, with Amnon Yariv, and he talked about how he got into optics because he likes waves, and that he was like a body surfer and stuff. That really resonates with me as well. Electrical engineers like to think in like equivalent circuits, and I never have liked that. I like to think in terms of waves.
ZIERLER: The term "applied physics," how did you understand that as an attraction point to coming to Caltech?
BRADY: That's a good question. I don't think I really knew what it was. Actually at the time when I was a senior in college, there was a Scientific American article on optical computing, and that looked interesting to me. To be honest, I was an undergraduate at Macalester in Minnesota, so number one, I had to get out of the north. I was going to go to California any way you looked at it. I was really choosing between Stanford and Caltech. My girlfriend at the time, who is now my wife, we met in undergrad, and I thought she was going to go to UCLA, and so I went to Caltech. Then she ended up going to Berkeley.
ZIERLER: [laughs] It worked!
BRADY: Everything worked out in the end.
The Foundation Point of Fourier Optics
ZIERLER: Tell me about the course of study when you got to Caltech. What were the key courses for you to take?
BRADY: The main, most influential course was taking Fourier Optics from Goodman's book but taught by Demetri Psaltis. That was the most influential course in my career. I also took the basic quantum mechanics and electromagnetic theory. I sat in a little of—Feynman at the time was teaching a course on fundamental elements of computation. Carver Mead was teaching neural processing in electronic design. Both those courses I sat in, but I didn't register for. Later in life, I was on the faculty at Illinois, where we would get a lot of Caltech undergraduates, and Caltech had a reputation for people taking esoteric stuff and not really learning the basic stuff. I think that was true of my experience as well. I took a bunch of really esoteric, wild ideas at Caltech, and we didn't spend that much time on kind of the, "Here's basic theory."
ZIERLER: Who ended up being your thesis advisor?
BRADY: Demetri Psaltis was my thesis advisor. He was a real pioneer of optical neural processing and optical computing. Knowing that I wanted to do optical computing, I went and talked to him, and he allowed me to join his group.
ZIERLER: In what ways was he a pioneer? What did he do?
BRADY: He came from Carnegie Mellon, where his advisor was this guy Casasent, who had pioneered a lot of ideas of optical processing using CCDs. Optoelectronic sensors were really kind of new then, and people were using them for analog processing of data. Psaltis and a guy Nabil Farhat were kind of the first people to recognize that if you were going to build a connectionist machine like a neural processor, building it in optics would be the most efficient way to do that. Since that time, Psaltis has done many, many amazing things. He worked on optofluidics. The neural work that I worked on ultimately became a basis for building very high capacity volume holographic memories. Neural processing is now popular again, but optical neural processing has never been successful; it is still not successful. The work I did as a PhD student—and it's good that you do this kind of thing in research—we joked that it was kind of like BS squared, because the neural stuff wasn't really working and the optical stuff wasn't really working. Now, 40 years later, the neural stuff is working and the optical stuff is not really working. I guess it is only BS to the first power now!
ZIERLER: What were some of the obvious technological limitations from your time as a graduate student, both in terms of instrumentation and computation, where you would have to look way out into the future to see how any of this stuff would come to fruition?
BRADY: That's the thing when you talk about Moore's Law. Moore's Law was a planned thing. It's an amazing thing about technological development that anybody would commit to that, because you would think that companies should rush to do the very best thing. Of course, the semiconductor manufacturing industry is off-the-charts complex and amazing. They go through these stages. The thing that changed was computers got faster. In the 1980s, computers were not powerful and fast enough to do neural processing, but people were working on it. Carver Mead was working on neural circuits then. Jewell at Bell Labs was working on these things. It was a good idea. It would have worked then, but people would have had to invest billions of dollars to make it work. We had all these little components, and even the neuro-optical stuff—if somebody came and said, "We're going to spend $10 billion to make this work," it would work. It's interesting because now, like in quantum computing, people are spending that kind of amount of money, and it's not really working, but there are much simpler computing things that you could do that would work for that kind of investment. The neural stuff needed hardware that was neural-specific, and people were designing that in the 1980s but didn't really have the investment to do it in a serious way. It didn't take off until GPUs came along and people were able to use GPUs to do this kind of processing, but GPUs were not really made for this purpose. Of course, now that people understand what can be done, there's all kinds of different companies trying to develop neuromorphic hardware.
Holograms and Algorithms
ZIERLER: Tell me about developing your thesis research and how that fit in overall with what Demetri was doing.
BRADY: I would say by the time that I was working in the group, almost everything that Demetri was working on at that time was neural processing related. My specific thesis was around details of how we control connections in volume holograms. We developed neural training algorithms that we could implement in neural processing. I made an algorithm that could use a hologram to translate the written text of my wife's name to a picture of my wife. Demetri came and saw that and he made me do one of his wife, too.
ZIERLER: [laughs] What did the holograms look like?
BRADY: They're crystals, crystalline materials. They're very thick, and they don't look like—they're holograms for processing, not really holograms for display, so they form like connections in space. Actually, at that time, the IBM PC was relatively new, and I was one of the first people that really kind of automated everything in an optics lab. The experiments, everything was controlled with computers. The lasers were water-cooled. Actually at Caltech, I got into a lot of arguments with the plumbers, because the plumbing would go out and the lasers would fail. So my first experience with video was really—we attached video cameras to all the plumbing dials so we could show the plumbers what was fluctuating that would cause the lasers to fail. I spent a lot of time automating these experiments with early PCs. Those things, compared to today, were so extremely slow and archaic.
ZIERLER: When did you know you had enough to defend? What felt like a measure of finality to your dissertation research?
BRADY: Typically research is like three or four projects. I had plenty of material, and I was more like—Demetri was like, "You gotta get outta here," because I would have stayed forever. Those were good times. I was like, "I'm just going to stay here and work in the lab," and he's like, "You should apply for a job and get out of here."
ZIERLER: Life was good at Caltech for you.
BRADY: Yeah, it was very comfortable.
ZIERLER: How big was Demetri's research group and how collaborative were you with other grad students?
BRADY: There were probably about 20 or so people in the lab. We were in Steele Hall down in the basement. Those people, they are still my friends to this day. If you ask them about me, I think—I would steal other people's equipment, and whatever it took to get stuff done sometimes—
ZIERLER: [laughs]
BRADY: —but we worked together.
ZIERLER: Besides Demetri, who else was on your thesis committee?
BRADY: Bill Bridges, who developed the original argon laser. Amnon Yariv, who had done pioneering work in early quantum electronics. John Hopfield, who developed some of the early neural models that were most popular then, was on the committee. Then Kerry Vahala.
ZIERLER: Did you interact significantly with Hopfield? Did you get a good sense of his approach to neural networks?
BRADY: I did, yeah. I took a class with him, and I had a good understanding of what he was doing.
ZIERLER: What were his ideas? How did he talk about neural networks?
BRADY: Of course the main thing that people were interested in then was how to train a neural network. Hopfield had a certain model for how you could create the weights in a neural network. He was very graphical; he used his hands a lot and explained connections.
ZIERLER: What did the job market look like at that point? Did you think about postdocs and faculty positions at the same time?
BRADY: There was a mini-recession back then. That was the Bush recession; it led the first President Bush to lose the presidential election. Actually I had talked a lot with Tom Koch, who is the dean of the school where I am now. He was the Caltech recruiter at that time for Bell Labs. I thought about going to Bell Labs, but Bell Labs had a hiring freeze on. My number-one choice was to never leave Caltech and just stay and postdoc with Demetri, but Demetri showed me an ad for the University of Illinois, and he said, "Why don't you apply here?" So I did. I didn't really think about a postdoc for a postdoc's sake, but there were two or three groups that were doing super-interesting work where I would have wanted to go learn about what they were doing.
ZIERLER: Was there a two-body problem to deal with here?
BRADY: Not too much. My wife worked at JPL. She was on the image processing team at JPL, working in Jerry Solomon's group. Whenever there was a planetary flyby and stuff, she was always part of the group that was processing images. Her research was in SETI, so she had done a lot of signal processing related to SETI. She was working at JPL, which was I guess why life was so comfortable for me at Caltech, because my wife had a good salary. Then when we went to Illinois, she got into the Computer Science PhD program at Illinois, and ultimately—she works in visualization. You might say I make reality vacuum cleaners, and she makes things that display that reality back.
The Two Beckman Institutes
ZIERLER: [laughs] What department did you join at Illinois?
BRADY: I was in the Electrical Engineering department. It was interesting, by the way, that I was in electrical engineering, but I was one of the first faculty who was hired into the Beckman Institute at Illinois. The Beckman at Caltech was opened right when I was graduating, and then I went to work for the Beckman Institute at Illinois.
ZIERLER: Tell me about the Beckman Institute and what they made possible for your research.
BRADY: Oh, Beckman was a huge influence. First of all, it was very cross-disciplinary. I was located in a group with physicists doing chaos theory and some ultrafast chemists. It was not only electrical engineering; it was different groups all mixed together. One of the biggest things about Beckman at Illinois was that—I was hired at Illinois to work on neural systems, but at that time, I didn't want to work on that. I wanted to get into something more practical. I went into a phase of just shining light on stuff and writing papers about what happened when we shine light on different materials. I thought I was kind of unstable because I kept changing research areas every three or four years. But in 1995 I was at a Gordon Conference where there was a talk on computational imaging, and by then I was so opposed to optical computing that I wanted to get optics out of imaging—I would show that we could do all-digital imaging. I had an idea for making a lensless camera. I went to the director of the Beckman at Illinois and described it, and he gave me like $75,000 to go build it. Having that kind of flexibility, an institute where people said, "Oh, that's a good idea, just go try it out"—that was a good investment for Illinois, because it led to millions of dollars of DARPA programs down the road.
ZIERLER: What's the idea behind a lensless camera? What could you do without the lens?
BRADY: This particular camera was—basically a lens is a kind of interferometer, right? It brings all the light from one point in space to focus at another point in space. But also you can think of it as a coding element. You might like to make like a phased-array camera. Like with radar, you have these flat-panel phased-array things. They don't have a focus. They don't point in any direction. They just process the field and form an image. We built a version of that for cameras, but we had to use laser light. With coherent light we can build cameras that don't have optics; it's just a flat panel that forms an image. We subsequently showed that we could do that with natural light, with what is called a rotational shearing interferometer. The most interesting part—we formed an image that didn't use any lenses, just processed the light and formed a tomographic image—it was an image of a toy dinosaur. I'm proud of it mostly because it was published in Science, and I think it's the only picture of a toy dinosaur ever published in Science.
ZIERLER: [laughs]
BRADY: Lenses are a way to do optical processing, but I would say in that process mostly I learned why lenses are so important. Since that time, I have been more focused on joint design of digital sampling and lenses.
Biophotonics at Duke
ZIERLER: Tell me about your decision to move over to Duke. What were the motivations there?
BRADY: The dean at Duke was Kristina Johnson. The optics community is not very big. When I started working on computational imaging, I was really driven by work by Tom Cathey at Colorado. Colorado had an NSF ERC in optical computing, and Kristina was ultimately the director of that ERC. She was recruited to be the dean of Engineering at Duke. I was visiting Duke and happened to show this work I was doing. Then Mike Fitzpatrick had sold his company to JDS Uniphase, so he had a tax problem: he had to get rid of a bunch of money. He ultimately gave grants to Duke and Stanford. I was part of the discussion that created this Fitzpatrick Center at Duke and ultimately became the director of the Fitzpatrick Center at Duke. It was just an opportunity to create a new approach.
At Illinois, I was in an electrical engineering group, and at Duke I had the opportunity to create a center that was focused on optics and photonics. Ultimately we created a very big biophotonics group with Joe Izatt and all kinds of great people at Duke. It was that opportunity, to build a new building, a new center. It was interesting, by the way, that Mike Fitzpatrick was good friends with Joe Goodman, and Goodman had been behind a whole bunch of startups in Silicon Valley. About that time, I met with Joe Goodman, and he explained the—he was on the board of like 20 companies—this was right in the middle of all the dot-com explosion of communications and stuff, and Goodman was involved in all kinds of companies, and the future of photonics seemed super bright, so creating this center was important.
ZIERLER: What was it about Fitzpatrick's work that was so valuable that he had this tax problem, as you put it?
BRADY: They made the thin-film filters, like for WDM communications, the core technology that you needed for making large networks with WDM communications devices.
ZIERLER: What was the founding mission of the Fitzpatrick Institute, and what would you say was its niche in the optics world?
BRADY: What we wanted to do was exactly this kind of thing that I'm—making systems-oriented optics, things that would be focused around applications of optical systems. The irony is, you never know when you—in academic centers, they can go a lot of different directions. Our focus was on ultrafast communications and the future internet, but in the end what became the Fitzpatrick Center at Duke—of course, one of the people that we hired was Jungsang Kim. I was involved in bringing him there as part of the Fitzpatrick Center. Subsequently that became the Duke Quantum Computing Center. Jungsang was the founder of IonQ, and so the explosion in quantum information science at Duke had its seeds in what we were doing then. The other aspect was Duke became a super powerhouse in biophotonic imaging, including Roarke Horstmeyer who went from Duke to Caltech and then back to Duke. The idea was to make optical systems a big center, but in the end it turned out that 20 years later, its main things were biophotonics and quantum computing.
ZIERLER: For biophotonics, what are the contributions in terms of health sciences and biotechnology? What are the capabilities that it makes possible?
BRADY: I think the most successful thing has been optical coherence tomography, which is used mostly for retinal imaging, basically imaging of the eye in a variety of formats. We also did a lot of work in advanced spectroscopy. You can do pathology—obviously the pulse oximeter has been a major feature as well. Ways to measure chemistry within the body have been a major contribution of biophotonics. Then of course on the research side, photonic imaging is transformative for a wide variety of different things in medicine.
The Origins of the Gigapixel Camera
ZIERLER: Let's trace the origin story of the gigapixel camera that came out in 2012. How did that get started? Can you explain the revolutionary nature of this? How much bigger was it than the next biggest camera when it came out?
BRADY: When I was back at the Beckman Institute at Illinois, like I said, Illinois gave me this seed grant to make a lensless camera. That led to a seed grant from DARPA that was a $300,000 program or something like that. That led to another DARPA program. In the early 2000s, there was a sequence of DARPA programs around measurement science. This was mostly led by Dennis Healy, who was a program manager at DARPA. We went from a seedling grant at the Beckman Institute to $300,000 to maybe a couple million dollars. Then we had maybe a $5 million project to make thin, compact cameras. It was called the MONTAGE program at DARPA.
As that program was coming to an end, Dennis said, "What have we learned about all this? What should be the next DARPA program?" What I had learned was that we could really push the physical limits of cameras—that the number of pixels a camera measures should be the aperture area divided by the wavelength squared. If you have a millimeter aperture, you should get a megapixel; a centimeter, you should get 100 megapixels; 10 centimeters, you should get 10 gigapixels. Physically, we were just far away from that. I went to DARPA and said, "This is what I think is the most exciting thing: cameras should get higher resolution." I went and proposed this.
Tony Tether was the director of DARPA, and Dennis had gone and proposed and said, "We're going to build a gigapixel camera." Tether said, "That's not enough. You should do 100 gigapixels." Then we settled on like 50 gigapixels. In the end we built like a 10-gigapixel camera in the project. At the time, people really had thought of lenses as being the limit of what a camera could do. We were able to show in that program that optics is not the problem, that we can build optics that will resolve however many pixels you want. The main problem is this computational problem about how do you manage all the data on the back end. That has been kind of the emphasis since.
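The back-of-the-envelope scaling Brady cites—pixel count going as aperture area over wavelength squared—can be checked numerically. A minimal sketch, assuming visible light at 500 nm and ignoring the order-one geometric factors a careful diffraction-limit calculation would include (the function name is mine, not from the interview):

```python
# Diffraction-limited pixel count: N ~ (aperture / wavelength)^2,
# i.e., aperture area divided by wavelength squared, up to constants.

def resolvable_pixels(aperture_m, wavelength_m=500e-9):
    """Rough upper bound on the number of resolvable pixels."""
    return (aperture_m / wavelength_m) ** 2

for aperture in (1e-3, 1e-2, 1e-1):  # 1 mm, 1 cm, 10 cm
    n = resolvable_pixels(aperture)
    print(f"{aperture * 1e3:6.1f} mm aperture -> ~{n:.0e} pixels")
# The three cases land at ~4e6, ~4e8, and ~4e10 pixels, matching the
# megapixel / hundred-megapixel / ten-gigapixel progression Brady describes.
```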
ZIERLER: Do you have gigapixel in mind as a limit or a threshold, as a goal, at the beginning of this project, or is more about just pushing the limits and then you realize that you're in gigapixel territory?
BRADY: No, it was about gigapixels from the start. We had this thing where people were talking about, "Well, megapixels don't matter." There was a Staples ad of Santa sprinkling megapixels around on a Christmas tree or something. Yeah, we wanted to go to gigapixels. That's the thing: still, to this day, good cameras are about 10 megapixels outside of what we did. People are making 100-megapixel sensors, but they don't really work, actually. When you see these cell phones with a 100-megapixel sensor, they don't resolve anywhere close to that real pixel count. You need to go to this kind of multiscale lens approach that we developed if you really want to be able to resolve gigapixels.
ZIERLER: Who was most excited? What were the research communities that were chomping at the bit for these capabilities?
BRADY: That's a good question. I don't think anybody really—still to this day, it's hard to use gigapixels. It's hard to use that much data. I think definitely our customer of record was the Navy. The Navy definitely has use for this, where they have a ship and you want to see in all directions all the time. Definitely there are gigapixel telescopes. There's the LSST telescope and the Pan-STARRS telescope. I think astronomers are definitely in things where—like LSST is supposed to see more supernovas in a year than were seen in all of previous human history. If you want to see something dynamic, you need this super high resolution. But I would say we're still fighting that battle of making people understand why they want gigapixels. There's a value now that if you're going to put a bunch of security cameras out, it's better to put an integrated camera out, but I don't think—we've gone and created interactive media for sports, where you can zoom and look everywhere, but the infrastructure for people to really use that and think about what it means is still in development. And the same goes for making gigapixel microscopes. Anything where there's something dynamic, you would like to have this kind of resolution. But it's still—until we develop—it's kind of like, before television, who was sitting around saying, "We just need television right now"? People play video games, right? Obviously people understand that they like to play video games. If we could make a video game except it's real-world data, that's something people are going to enjoy, but we need to keep working on making the infrastructure to where people are really going to understand how to use that.
ZIERLER: What about from a materials science perspective, so thinking about like the revolution of silicon and CCD detectors and all of the advances in pixelation there? What materials were made available that made the gigapixel camera possible?
BRADY: Definitely low-cost integrated camera modules. The sensors we used actually were the same sensors as were going into GoPro cameras. Definitely the revolution in optics has been the—the first 100 years of photography, photographers would all make their own cameras. You would get a wooden box and screw a lens to it. Then for the last 100 years, the standard has been removable-lens cameras. You buy a camera back, and you screw a lens on. Removable-lens cameras are never going to get near the diffraction limit. The mechanical alignment is not there to really get high optical performance. Cell phone cameras use integrated modules where the lens and the sensor are manufactured together in one piece. This allows you to work with much more aggressive optical design, so you get aspheric optics, you get molded optics, which first of all use new materials but are manufactured in ways that are very different from traditional lenses. Then you integrate that into a high-performance camera module. Really that's a key piece of making gigapixel cameras: applying this cell phone lens manufacturing technology to high-performance optics.
From Digital to Computational Photography
ZIERLER: To stay on the CCD detector theme, in the way that that presented a revolution from analog photography to digital photography, does the gigapixel camera still fit in the digital narrative, or is it something different than digital photography?
BRADY: It's definitely digital photography but it's more than digital photography in the sense that it's computational photography. The image is not formed—there's a model of film photography going into the digital backs where the image is still the image that's sensed by the sensor. But with these gigapixel cameras—one of the things we've done in the last 10 years is really start to dig more into a diversity of sampling. We have multiple focal lengths, multiple different color sensitivities, multiple different frame rates, so the camera is just an information collection machine. Actually one difference is—traditionally a camera is a device that forms an image, but the modern camera is really an analog-to-digital converter. It takes a massively parallel stream of optical data and turns it into digital data on the back. The first digital cameras were not really like that. They were still basically film cameras with a digital back, where the gigapixel camera is really something that is designed to transduce as much information as possible and put it out in digital form.
There's a model with traditional cameras where basically the sensed image is the same as the display image, so the camera is like a pipe; you sense an image, you show that image. With these digital cameras with a visual cortex, you're just collecting an enormous amount of data and then you're using that data in a variety of different ways, and for different users you'll create different visualizations from the same dataset.
ZIERLER: I'm just looking at the chronology. Many of the companies that you started or helped to found happened during the Duke years. Was there a strong startup culture at Duke? Did you get involved in all of that at a broader level?
BRADY: Yeah, Duke has a—the Research Triangle is a great place to do business. The short answer is yes, there is a good startup culture, and you see like now IonQ was spun out of Duke, and Evolv was a company that I was involved with that spun out of Duke. One of the reasons that I moved to Arizona is I think that the sort of information science oriented—in optical—definitely Arizona is a better place for the optics business than North Carolina. North Carolina is super strong in health sciences and biomedical technologies.
ZIERLER: What has been the development of the AWARE camera since its inception? Are you currently building better prototypes, or are you onto totally different things now?
BRADY: The AWARE camera has kind of continuously been my focus for the last 15 years. We built camera modules that came out of it. The image you see behind me is staring down the barrel of one of the AWARE cameras. The thing that became clear through that process was that the computer architecture and the software were a bigger challenge than the optics. We now have like 10 years into development of an array camera operating system, so we've built infrastructure that can manage massive amounts of data and turn it into real video in real time. I've traveled the world since then working on the supply chain for camera modules that fit in this architecture, so that we can continue to build better cameras. We've been commercially selling array cameras for seven years or so.
ZIERLER: The theme you've been developing is the importance of software. Are you using software sort of off the shelf for the AWARE camera, or does this need its own unique, bespoke software?
BRADY: We've developed an operating system that manages data from array cameras. Imaging is the most computation-intensive thing that humans do, basically. For cameras there is usually an ASIC to manage the computational load. For the AWARE cameras, we used an FPGA fabric, and we're using FPGAs in some of the current systems. By the way, we spent a lot of time on the software, but the details of exactly the electronic hardware, and of course of the lenses, are critical as well. You need that little bit of hardware that is well designed, and then you end up with infinite software challenges after that.
Hardware and the Global Supply Chain
ZIERLER: When you talk about a world tour for supply chain issues, that's all on the hardware side of things?
BRADY: Yes.
ZIERLER: Where do you have to go? Why is this a global effort?
BRADY: Optics was never really an American thing. Lenses, traditionally—the main lens companies were in Germany and Japan. This is like Nikon and Tamron and Canon, and then Leica and Zeiss in Germany. With the development of this new model of integrated camera modules, that is basically all done in China. You can go way up the chain and find out who is actually building stuff. Basically optics is almost all manufactured in Asia. It's the same as with semiconductors these days. Once you get into where you really want to manufacture in volume, you end up talking to Asian companies.
ZIERLER: In launching all of these companies, what did you learn about yourself in terms of what you were good at and what you needed to outsource to others?
BRADY: That's a really good question. First of all, outsource anything that you can. Anything that you personally don't have to do, you should get somebody else to do. I've been very fortunate to have this opportunity to work with Demetri and understand optical systems and know the leaders, so in terms of understanding what it means to make a measurement and how you manage that data, I've been very fortunate to have a very good understanding of that. In terms of bringing everything together to really create this new kind of media, I'm still trying to find the right people for that. We have a great team now, but I've been through cycles of a whole bunch of different projects, and so far the technology that we would like to see is still on the horizon. But definitely from a business point of view, if you want somebody to manage a business, you should get somebody who cares about business. By the way, for me personally, I like to be in the lab and play with this stuff. When COVID came out, I was lucky, because I had been living in China from 2019 to 2020—I was very fortunate that I had moved back to the U.S. right when COVID was coming out. Then I lived in a house on the coast in North Carolina where I didn't really talk to anybody for six months, and that was like a perfect life for me.
ZIERLER: When you were in China, was that a leave of absence? Was that a sabbatical?
BRADY: Duke has a campus in Kunshan, near Shanghai.
ZIERLER: Oh!
BRADY: I was assigned to Duke's campus in China for three years.
ZIERLER: Was that useful to you, being on the ground there?
BRADY: It was hugely useful. It was really, really exciting and interesting. I had a lab sponsored by the local government in Kunshan where we did all kinds of powerful imaging work. You actually can't see it because I've got a background up, but on my wall I have a picture of one of the first photographs of a person—it's called Boulevard du Temple—that Daguerre took in Paris. It's a picture out his window; he just happened to have his camera set up. That's the thing camera developers do: you have cameras set up taking pictures all day. So, I have like infinitely many pictures of staring out the window—my lab in Kunshan was on the sixteenth floor of a research building, and we had banks of cameras set up there testing all day.
ZIERLER: When COVID hit and you were by yourself, that was a productive time for you?
BRADY: Yeah.
ZIERLER: Was it time to just think, to get through the literature? What did you do when you couldn't be at your lab?
BRADY: We started working on some of this aperture synthesis. One of the big changes since AWARE—AWARE is basically an array of conventional cameras that see normal light—is that we've been thinking a lot about combining conventional cameras with lidar and other kinds of 3D sensing. During that time when COVID hit, I was working on a new kind of lensless camera that would do aperture synthesis again, with coherent illumination, that we could combine with other array cameras. Really, the kind of resolution we can get when we start to get these advanced algorithms and more advanced sensing is mind-blowing. Every time you think you really understand this stuff, there's more—and having that time to really dig into some of the theoretical aspects was really instrumental.
ZIERLER: In the middle of COVID, of course, you make the move over to Arizona. What was the interest there for you?
BRADY: One thing is, sometimes you feel like you don't so much work for a university as for the field that you're in. Actually, when I went to Duke, the reason to go was to create this Fitzpatrick Center for Photonics. It was a service to Duke, certainly, but it was a service to the community, to capture this capacity to have a center. The same thing here at Arizona. I think that I'm needed here, and that because of this growth in the College of Optical Sciences, I can certainly make a bigger contribution here than I would at Duke. Then really—it's a stupid, silly thing, but when you get old, stupid, silly things kind of are useful—to have a chair that is named for Professor Goodman is just a lot of fun.
ZIERLER: Why is that an honor for you? What does Goodman mean to you?
BRADY: There's an intellectual thread to this community, the way we're trying to build things, and he started that story. A lot of other people were involved, too. I've had the honor of meeting a lot of them because the community is I guess not that old. It's not the most important thing in the world, but it's a good story and a good feeling, to feel like this is a community that we're building together.
ZIERLER: Did you come with administrative responsibilities like you did at Duke, or you're focused on your research lab, being a professor?
BRADY: Here I don't have any administrative role.
ZIERLER: Was that an attraction point to you, to shed those responsibilities?
BRADY: I had already gotten rid of them at Duke. I was only director of the Fitzpatrick Center for the first five years or so. My passion is building things, so I've built a lot of these companies, but building the companies for me is not about running the company, it's about creating the technology. I don't know if it's a fortunate thing or not, but a passion for the business of academia is something I've never really had.
Pushing the Limits of Array Cameras
ZIERLER: We'll bring the story right to the present. What are you currently working on, and more broadly what is interesting to you in optics and optical systems?
BRADY: Currently we're working on array cameras, pushing the limits, building bigger systems, going from gigapixels to terapixels. The thing that is powerful about that is integrated neural processing. Really, even what it means to measure a pixel is changing. When we make optical measurements, we still don't understand the most efficient ways to do that. I've been at this for 25 years or something in computational imaging, but cameras are a bunch of crap. I've just got to tell you that they're terrible. But it's interesting, because I worked in neural processing in the 1980s, and that was terrible too. What we were doing were really toy problems compared to what people are doing now. Now, neural systems are changing life on Earth. It's up there with the invention of the automobile and the invention of the airplane. Everything about the life of the future is different because of this ability to talk to computers. So, it takes some patience.
When I started working in computational imaging, for the first 15 or 20 years, people were like, "This is clever, but it's useless. Why are you doing this?" But now, computational imaging has become central to the way cell phones form images—yet it's nowhere except cell phones. It's going to be everywhere. It will make it so that the way we interact with the world is just going to be different, because we can just ask. Like now, with ChatGPT, you can ask it anything that has ever been written down by a human or has ever been explored; but imagine you live in a world where anything that could be known is known, because the sensing capability is there.
When I was at Caltech studying what Carver Mead was doing with neuromorphic sensors, that was an idea in the 1980s. Sensors now are nothing like that; they don't use that at all. Part of my passion has been, first of all, to understand the ways that these things should be done, but also to understand why it doesn't happen—why, when we know there are better ways to do things, we don't get around to doing them. I guess in this phase of my career, I'm still hoping that we'll figure out a way to get this stuff actually operational in the real world. That's my focus: building better and bigger imaging systems and combining them with spatial computing displays. Stuff seems to be happening pretty fast in that space.
I think beyond that, somehow there's a huge disconnect in the world. I think that the James Webb Space Telescope is such an amazing thing. It's understanding life at the very beginning of the universe and everything. It's so beautiful and amazing, and you think, well, this is why humans exist on Earth—to create something so beautiful and amazing. At the same time, somebody sent me a video today of a whole community that thinks that nuclear weapons are not real. How we live in a world that simultaneously has such extreme amounts of knowledge and such extreme amounts of ignorance is just fascinating. Somehow we have to find a way to overcome that.
ZIERLER: That's right. David, we'll wrap with some retrospective questions about your career and then we'll end looking to the future. Of course what brings us together is Caltech. What has remained with you from Caltech? You have emphasized perhaps, while you were at Caltech, your ability to surround yourself with people who were really thinking about the future and what it took to get there. How have you incorporated that into your research career?
BRADY: I think that keeping it real, and keeping it kind of focused, has been a real lesson from Caltech. I grew up in Montana, kind of rural—I think the people I grew up with, if you wanted to explain to them, "Somebody will pay you money to sit around and think all day," they would not be able to understand that. When I got to Caltech, we just did stuff. Like one time—these holograms that I made used crystals that were $5,000 each. I was a young graduate student, and I sprayed freon on one, and it shattered, and I was devastated. I was like, "Oh, man, I just—" I went to Professor Psaltis and I said, "I destroyed this crystal." And he said, "Well, order another one." That kind of attitude of "let's just go do it" was amazing. The other thing was that in science, you have to stay honest. You have to believe—not overhype what you're doing. What Demetri did—I went to a conference, and somebody asked a question, kind of like, "Why are you doing this?" and I just answered it directly. And he said, "You did that right. That was right. Just tell the truth." That was a sort of core thing.
The other thing that he did was that when I was in the lab—our lab, everybody was super smart, and we were working all the time. It was really aggressive. We would argue and stuff. Then I went to a conference with Demetri, and he introduced me to some people, and he said, "This is Dave Brady. He's so smart. He's so—" I was looking at him like, "Who are you talking to?" Because when I was with him in the lab, he never said anything like that! All of a sudden you're at these conferences and he's saying, "This guy is so smart." He taught me that you have to believe in yourself. In academia, you've got to sell what you're doing and believe in it. That was an important thing for me to learn.
ZIERLER: You've mentioned that, relatively speaking, the optics community is small. How has that been an asset for you over your career?
BRADY: It's nice that everybody knows each other and everybody hangs out together. It would be better, I guess, if it was bigger. But that's the thing about Caltech in general—the people stay with you. I'm in touch with a couple people that I knew in high school. From undergrad, I'm in touch with maybe 10 people. Caltech, virtually everybody I knew at Caltech, I still know. I think it makes a big difference—even when we decided to do the gigapixel camera, that was kind of a community thing, of people coming up with ideas and saying, "What's the next big thing we could do?" The community can come together and say, "This is a way for us to work together to make this happen."
The Boundlessness of Information
ZIERLER: Finally, David, you mentioned this idea before—we'll bring it all the way out into the future—that we're going to reach a point of creating enough data where we'll need all the atoms on Earth in order to store it. First of all, how do we get that to be science and not science fiction? And even if we're able to do it, why should we do it?
BRADY: That's not going to happen, by the way. We'll save a couple atoms for something else.
ZIERLER: [laughs]
BRADY: But we won't have to give up on the information. There are ways for us to get much, much more efficient about the ways we store information, so that we will have like a full history of everything. People have different opinions of the world, but as a scientist, I believe that we don't know what's going on in the world—there are all sorts of mysteries—but this pursuit of knowledge, that's what we're here for. Actually building things like James Webb and trying to figure out the universe, that's our—it's that process. It's not like we're going to discover things and then we're going to know, but that process is what it means to be human.
ZIERLER: It's never-ending. There's no end point where we say we can wrap it up and we figured it all out.
BRADY: No, it keeps getting more interesting.
ZIERLER: On that note, this has been a wonderful, super-interesting conversation. I want to thank you so much for spending the time with me.
[END]
Interview Highlights
- Optics Leadership at Arizona
- Optics Between Physics and Engineering
- The Centrality of Machine Learning
- End Users from the Largest to Smallest Scales
- Math Inspired by Biology
- The National Security Dimension
- The Foundation Point of Fourier Optics
- Holograms and Algorithms
- The Two Beckman Institutes
- Biophotonics at Duke
- The Origins of the Gigapixel Camera
- From Digital to Computational Photography
- Hardware and the Global Supply Chain
- Pushing the Limits of Array Cameras
- The Boundlessness of Information