
David J. Anderson

Seymour Benzer Professor of Biology
Investigator, Howard Hughes Medical Institute
Director and Leadership Chair of the Tianqiao and Chrissy Chen Institute for Neuroscience

By David Zierler, Director of the Caltech Heritage Project
January 7 and 24, March 15, May 31, August 22, and December 5, 2022

DAVID ZIERLER: This is David Zierler, Director of the Caltech Heritage Project. It is Friday, January 7th, 2022. I am so happy to be here with Professor David J. Anderson. David, great to be with you. Thank you so much for joining me today.

DAVID ANDERSON: Thank you. It's a pleasure to be here.

ZIERLER: To start, I know this is going to be a complicated answer, but could you tell me your titles here at Caltech? I say titles because I know you have more than one.

ANDERSON: My favorite title is the Seymour Benzer Professor of Biology. That's a purely honorific title. Then my official titles are the Tianqiao and Chrissy Chen Leadership Chair, and the Director of the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech. In addition, I'm a Howard Hughes Medical Institute Investigator, like Elliott. That's a dual appointment. It's a little unusual in that I'm technically an employee of Howard Hughes, although I'm still a professorial faculty member at Caltech.

ZIERLER: So much to discuss just in the titles. Let's start first with Seymour Benzer. It's always interesting to see how these honorifics come about. First, do you see any intellectual connection or heritage between your research and what Benzer did?

ANDERSON: Very much so. In fact, I should say that when Caltech—I think it was actually David Baltimore—conceived the idea of applying honorific titles to Howard Hughes investigators, who don't have official named chairs at Caltech, because their salary is paid by Howard Hughes, I was originally slated to be the Roger Sperry Professor of Biology, which was of course flattering, because Roger Sperry is Caltech's only Nobel laureate in neuroscience. But I had a much deeper connection to Seymour. It was Seymour who personally recruited me to Caltech in the early 1980s, and it was Seymour's example that finally persuaded me to switch mid-career from the study of neural development in rodents to the study of behavior in fruit flies as well as in rodents. After Seymour passed away, I asked whether my title could be switched from the Roger Sperry Professor to the Seymour Benzer Professor. That was cleared with Seymour's widow, Carol Miller, and since it didn't involve any money or funding, that was fine.

I very much feel like I'm a scientist in Seymour's spirit, which is I pursue things that interest me because of my curiosity, not necessarily because I'm trying to cure a particular disease or something like that. For example, Seymour was always fascinated by the fact that—first of all, Seymour was fascinated by sex, and he was fascinated by the fact that copulation in Drosophila typically lasted about 20 minutes, which is the average duration of copulation in humans as measured by Masters and Johnson, as Seymour liked to point out. Seymour was very fascinated when he was able to identify a mutant, which he named "Stuck" for obvious reasons, whose copulation durations were longer than 20 minutes. Although I think he never really figured out what "Stuck" was, later on in my lab, when we were doing an analogous kind of screen for neurons, we discovered a neuron whose silencing caused a stuck phenotype as well, and it became clear that this neuron controlled the relationship between the duration of copulation and the timing of ejaculation in Drosophila. This is a topic that most—if William Proxmire were still giving the Golden Fleece Award for researching useless areas of science, I think the control of erection and ejaculation and copulation duration in flies would be at the top of his list.

ZIERLER: [laughs]

ANDERSON: Although Seymour by that time had passed away, I felt that it was my responsibility as one of Seymour's inheritors, even though I was never his student, to pursue this, and so we did, and it turned into a really interesting paper which has since been followed up by other labs, and it turns out that it's an extremely useful system in which to understand the control of internal motivational states, the duration of behavior, and sensory/motor feedback control. So even something that looks as superficially useless as studying the duration of copulation in fruit flies can actually tell us things about nervous system function. So, there was a conscious effort to follow in Seymour's footsteps by studying that problem.

ZIERLER: We'll certainly return more to Benzer's influence on your research, but moving down the list, as an investigator of the Howard Hughes Medical Institute, does that mean that all of your funding comes from HHMI? Does that free you up from grants that you would have to get otherwise from NSF or NIH or elsewhere?

ANDERSON: No and yes. Yes, it provides a cushion of funding that allows me to pursue whatever direction of research strikes my fancy, without having to write a grant to NIH or NSF to support the research. But no, it doesn't provide enough funding to support my research effort in the lab. It covers about 40% of the funding, and so it's a mixture of NIH funding and HHMI funding. But it has been extremely useful as basically a source of funds that I can use for any purpose I want.

ZIERLER: Being a medical institute, does the funding or affiliation with Howard Hughes come with any expectation that some of your work should have a translational or a therapeutic component to it?

ANDERSON: No, there's no expectation. Professor Meyerowitz, who works on plant genetics, is a Howard Hughes Medical Institute professor. The applications of plant genetics are primarily in agriculture, not in medicine, although clearly there are important plant natural products that have medical importance. The HHMI, Howard Hughes Medical Institute, has evolved. It was originally created as a tax dodge by Howard Hughes in the 1950s. He got a bunch of friends of his who were physicians, and he anointed them as the Howard Hughes Medical Institute and gave them money to do I don't know what, and he was able to write off some amount of money from his taxes in this way. In fact, HHMI was the first MRO—Medical Research Organization—to be recognized by the Internal Revenue Service. So, it's not a foundation. It doesn't give grants. What it does is it pays employees to perform scientific research for it. That's why all HHMI investigators have to be employees of HHMI. Even though we're situated at Caltech, we get our paychecks from HHMI, our healthcare benefits from HHMI. Our tax deductions are paid by HHMI. That relationship lasts until we get terminated or resign from HHMI, because we get renewed every five or seven years. If a Caltech professor gets terminated from HHMI and they're tenured, they revert to being a Caltech professor, and then they go back to getting their salary and their healthcare benefits and everything else from Caltech. The advantage for Caltech is that since I became an HHMI investigator in 1989, Caltech hasn't paid a dime of my salary since 1989, which is 34 of the 37 years that I've been at Caltech, so I think I've been a good deal for Caltech.

ZIERLER: From a service perspective, being essentially an employee of HHMI, does that affect what kind of committee work, divisional chair work, those kinds of things, that you would do if you were a professor?

ANDERSON: That's a really good question. In theory, it was supposed to free investigators up from all of that kind of stuff. In practice, that has not happened at Caltech, because I think the divisional leadership was always concerned about creating a sort of two-class system of people who had to do committee work and teaching and other things, and people who didn't because they were HHMI Investigators. So, it has not decreased in any quantitative way the service that I've had to do for the Institute while an investigator. I've been on many committees, divisional committees, Institute-wide committees, teaching. But there is a limit, a cap: you can't spend more than 25% of your time on non-research-related activities if you're an HHMI investigator. So although I'm the director of the Chen Institute, since that only really takes up about 10% or 15% of my time, that's not a bad thing. I probably could not be division chair because that is probably more than a 50% time commitment, and certainly not something like provost. Division chair is sort of on the borderline.

ZIERLER: To flip that question around, are there responsibilities you have to HHMI in the service realm beyond simply doing the research that's the nature of your employment with them?

ANDERSON: No, not really. Not what you would call service. There are no committees. Occasionally, I am asked to review applications for new HHMI investigators, but A, that's not obligatory, and B, I get paid an honorarium to do that. It is made clear that this is a voluntary thing that is not part of the expectation of your performance as an HHMI investigator. When you are evaluated for renewal, it is based exclusively on your publications and your research performance. There's no aspect of service that comes into play.

ZIERLER: Maybe this is a petty question, but when you receive a major award, and certainly you have, do Caltech and HHMI both want to claim you, so that they are sharing in that honor for you?

ANDERSON: I have no idea. I don't think any award that I've won is major enough that they would be quibbling about that. I think the one thing they do contractually share in is intellectual property. Technically, any patents that come out of work that I do are owned 50/50 by Hughes and Caltech, and there is some sharing arrangement if there are any royalties on how that works. But Hughes pretty much lets Caltech decide what to do in terms of licensing IP to a startup, for example, and Caltech pretty much lets professors decide what they want to do.

ZIERLER: Moving on to the Chen Institute, first, were you present at the creation when the Chens were inspired to make this incredible gift to Caltech?

ANDERSON: I was present at the signing of the documents, which was an important event that occurred in Singapore in 2016. I was there together with Tom Rosenbaum and Ed Stolper, the provost, and Steve Mayo, who was division chair at the time. But I can't claim any credit at all for attracting the Chens to Caltech or in persuading them to make the very large and generous donation that they did. I think that is largely Steve Mayo's accomplishment together with people who were in the Development office at the time [in particular Brian Lee, who was VP of DIR].

ZIERLER: Do you have any insight as to why the Chens were inspired to partner with Caltech, of all the institutes in the world?

ANDERSON: That is a really good question, and that actually came up at the signing ceremony in Singapore. They made several points that I thought were quite interesting. One was that at the time, one of their advisors had a background in engineering, so they were used to dealing with engineers, and they felt comfortable talking to and negotiating with engineers. They were also comfortable with the small and compact size of the Caltech administration. We don't have many layers of deans. We don't have medical schools, law schools, business schools. Basically the core decision-makers were the three people that were in the room—the president, the provost, and the division chair. That attracted them as a simple organization that they could deal with. Secondly, in China, Caltech enjoys a very high reputation, probably higher on average than it does in the United States. For example, most people in California, if they're not in science and I tell them I'm at Caltech, they say, "Oh, is that the same thing as Cal Poly?"

ZIERLER: [laughs]

ANDERSON: No cab driver in Massachusetts, in Boston, would ever ask that question about MIT. Caltech has succeeded in keeping a very low profile. Then the third thing that they said, which I found very interesting, is that they were impressed with the fact that Caltech had race-blind admissions. I think already at that time, in 2016, there were a number of lawsuits ongoing at Harvard and other schools about systematic discrimination against Asians in the undergraduate application process. As Asians, Tianqiao and Chrissy were impressed by the fact that Caltech accepted people purely on the basis of merit and background, and as a result, there is a high fraction of East Asian and South Asian students in the undergraduate body, and there was no discrimination. That also contributed to it [their decision to donate to Caltech].

I guess there was a fourth reason, which is that they really wanted to own neuroscience at whatever institution they made their gift to. For better or for worse, most of our other major competitor institutions—Harvard, MIT, Columbia, Stanford—had already received major donations for neuroscience institutes that had the donor's name attached to them. We had never received any such donation in the past for neuroscience at Caltech, and so this gave the Chens the opportunity to basically brand neuroscience at Caltech with their name. Of course that makes it difficult now for us to raise additional funding in neuroscience from other people, but I think that was the other reason: they saw an opportunity to make neuroscience at Caltech a Chen Institute brand.

ZIERLER: Are you aware if there is anything personal to the Chens that motivated them to fund research in neuroscience specifically?

ANDERSON: Yeah. Tianqiao has been quite open about this in interviews that he has given to publications; he suffers from panic disorder. In fact, at one point, he was the richest man in China, I think when he was in his early thirties, and he made his money from online gaming. At some point, because of his panic and anxiety, he just decided to withdraw from business altogether and to focus on philanthropy and investing. He's very interested in trying to develop new treatments for mental illness. It's still very difficult for him to get on a plane and fly anywhere because of his panic disorder. I think that's one of the things that has motivated him, number one.

The second is that he's an entrepreneur. He sees what he calls the fourth industrial revolution which is brain-machine interfaces and the ability to integrate control of human thought, consciousness, disease, treatment with software and hardware. In fact, to give credit where credit is due, the Chens were attracted to Caltech by the work of Richard Andersen—who is not related to me; S-E-N, not S-O-N. Richard has received a lot of publicity. They saw him in a BBC television special for this spectacular brain-machine interface work, where he works with patients who are paralyzed from the neck down, quadriplegics, and he implants electrodes in their brains that allow him to decode motion intention signals. He records from a part of the brain that is involved in motor planning. Basically if he asks the patient to imagine, say, reaching for a cup of coffee, as opposed to doing something else, he can train a computer algorithm to recognize the pattern of neural activity that is associated with the patient's imagining [they want to] perform a particular action like reaching for a coffee cup. He can then route that signal, that decoding, to a robot, a robotic arm, and control it to reach a coffee cup. Then the patient, using biofeedback over many months, can actually train themselves to control this robotic arm by thinking about what they're doing and then improving the performance of this decoder. This has allowed patients who basically can't do anything from the neck down to perform simple tasks like that.

Now, all of this is at very, very early stages of development, and it's nowhere close to something that is going to be deployed routinely in medicine, at least not for probably another decade or so, because it involves very cumbersome and expensive computer apparatus and robotics. Nevertheless, the Chens' imagination was captured by this technology, and they also see this as a future area for entrepreneurship. They've set up a number of institutes in China to try to expand the testing and development of this type of technology. For them, the support of neuroscience has been tightly connected with the idea of supporting the development not only of fundamental science but of intellectual property that could then be used and licensed by them if they chose to, to start companies that develop this sort of technology.

ZIERLER: Given the importance of Tianqiao's panic attacks, did that inform at a foundational level what aspects of the Institute would be directed towards basic or fundamental science and what would be informed or motivated by translational medicine?

ANDERSON: No. In fact, as the director at the outset of the founding of the Institute, I was actually quite concerned that I would need to somehow protect my faculty colleagues from pressure, whether implicit or explicit, from the founders to direct their research in particular directions. I'm happy to say that no such pressure was ever exerted or materialized, and that I think the Chens and Caltech now have established a very good working relationship of mutual trust. They trust us to choose the science that we find most interesting to do.

The one way in which it did affect the distribution of resources in the Institute is that there was a special center that was set up for Richard Andersen to study brain-machine interfaces. There are three centers within the Chen Institute. There's the Center for Systems Neuroscience, which I'm the Acting Director of. Doris Tsao was the director of that before she left and moved to Berkeley. There's the Social and Decision Neuroscience Center, which is directed by Colin Camerer, who is in the Division of Humanities and Social Sciences. He's an economist. Then there's the Brain-Machine Interface Center, which is directed by Richard Andersen. Those decisions of how to distribute the resources were made before I was recruited to be the Director of the Institute, so I played no role in that. Those were decisions made by the president, the provost, and the division chair. Presumably, the hefty investment in brain-machine interface was in recognition of the fact that it was Richard Andersen's work that first brought Caltech to the attention of the Chens.

ZIERLER: To go back to the idea that there's this mutual trust that has been built up between the Chens and Caltech, is there also a general understanding that the Chen Institute is working in an area of science that's really pushing the frontiers, and just by a matter of fact, there needs to be the basic science, the fundamental research, before anyone can even think about translations?

ANDERSON: Yes. Of course they totally get that. They have totally bought into that. I think they have evolved a broader view now of what I've called the Chen Institute ecosystem. They have their own Chen Institute, which is an umbrella organization in whose ecosystem the Chen Institute for Neuroscience at Caltech is one component and one node. They realize that if they want to be able to control the direction of research and channel it into the development of intellectual property that they can license in order to found startups to develop the technology, the university is not the right forum for that, at least not in the United States. So, they're setting up other institutes and other models. They have set up one in China that is organized more along those lines. It's kind of like a Howard Hughes Medical Institute, where the people working in that institute are employees of the Chens, and then the Chen Institute will own any intellectual property that is generated and be able to license it.

ZIERLER: To go back to your titles, on a day-to-day, you're Leadership Chair but you're also Director of the Institute. What are the distinctions there?

ANDERSON: The Leadership Chair means that there are some actual funds that are given as part of the endowment distribution to whoever is the current Tianqiao and Chrissy Chen Leadership Chair. When I step down from being Director, that will go to somebody else. Those funds don't fund my laboratory; they go largely to supporting the salary of my executive director, Mary Sikora, who is excellent and makes everything happen in the Institute, and a part-time administrative assistant. Any funds that are left over go to support other activities that the Chen Institute sponsors around campus. I can go through those. But those are the sorts of things that I do as the Director of the Institute.

ZIERLER: A solid half-hour discussion just on all of your titles and responsibilities.

ANDERSON: Sorry! It's not very interesting.

ZIERLER: No, it's wonderful. It's great insight into all of your responsibilities. Now let's turn more to the research. First, to take a wide-angle view of the course, the trajectory of your career and how it changed as a result of Seymour Benzer's influence, can we also say that Benzer himself became involved in emotion, in this kind of neural circuitry that was not part of his earlier career?

ANDERSON: Only to the extent that he was a coauthor—and a reluctant coauthor at that; I can tell you that story at some point later on—on our first paper on fly behavior that got into the area of emotion, what looked like quote unquote "fear" in fruit flies. It was based on an anecdotal observation that Seymour had made that we decided to follow up on. Seymour was a coauthor with me and my postdoctoral mentor, Richard Axel, from Columbia on that paper which was published in Nature in 2004. In fact, I think that was published just before Richard received the Nobel Prize for his work on olfactory receptors.

Seymour was not really interested in that particular topic. He was interested at that point in his career in neurodegeneration and did a lot of work on that. He had been interested for a long time in the question of neurospecificity; that is, how brains get wired together, how all the neurons know how to find their targets. Then the reason that he got into the fly in the first place is that he wanted to understand the relationship between genes and behavior, and so he was studying innate behaviors like courtship behavior or geotaxis behavior, simple behaviors—although courtship is anything but simple. But he never really identified emotion as a topic that he wanted to study in flies. It was something he took for granted. Most people, if you asked them, "Do you think that flies have fear?" they would sort of shrug and say, "I kind of doubt it. They're probably just little robots." As far as Seymour was concerned, it was a foregone conclusion. He said, "Of course flies have fear. You put them in certain situations, they have to crawl through little holes, they don't like to do that." So Seymour endlessly anthropomorphized his flies. It's very tempting to do that when you study them for a long time. They start looking like little people with more legs than we have and with wings. But that really wasn't the area that he was interested in.

ZIERLER: For you, I wonder if you can explain, first, just at a very general level, what neural crest stem cells are.

ANDERSON: Sure. For the first half of my career, basically from the time I got to Caltech in 1986 until 2005, I studied the development of the nervous system, focusing on how different types of cells are generated during the development of the nervous system. Of course it's a sub-problem of the most fundamental question, which is how does a single fertilized egg, a zygote, develop into an entire organism that has hundreds or even many tens of thousands of different types of cells, and different tissues, and different organs, not just the different cell types in the brain.

The neural crest is a part of the developing embryonic nervous system which sits atop the developing spinal cord. The developing spinal cord is a longitudinal tube in a fetus, which is actually called the neural tube. It rolls up from an elongated sheet to form a tube, and that tube will give rise to the central nervous system, including all the neurons in the spinal cord and all of the neurons in the brain. It sort of balloons out into a bunch of vesicles at the front end that become your cerebrum, your cerebellum, et cetera. But there's also an important part of your nervous system that is outside the brain and the spinal cord, which is called the peripheral nervous system. That consists of sympathetic neurons and parasympathetic neurons that control things like heart rate, respiration, blood pressure, so-called autonomic functions, and also your enteric nervous system, the neurons that control your gut and digestion, which have become a very hot topic recently because of all of the interest in the gut-brain axis. All of these peripheral neurons come from this [neural crest] precursor cell population that delaminates from the top of this rolled-up neural tube, and the [neural crest] cells migrate away from their site of origin on top of the neural tube, and they spread out through the developing fetus and home to specific locations and become different types of cells in different locations.

As somebody who was very interested in these types of differentiation processes, coming from the work that I did in Richard Axel's lab as a postdoc, I wanted to understand whether those decisions, those developmental processes, were generated in some sort of a hierarchical pattern, with some sort of "Ur" cell at the top [of the hierarchy] that was completely undifferentiated and then, in a hierarchical way, generated progeny that became more and more specialized; or whether, to begin with, the neural crest itself was very heterogeneous even though it looked very homogeneous in the microscope, and there were predefined subsets of [neural crest] cells that were going to generate different types of [differentiated] derivatives after they migrated. So [in that alternative view], a parallel rather than a hierarchical mechanism of development. Of course, everything is hierarchical when you take it back to the level of the egg, because it's just one cell to begin with.

I was very influenced by another Caltech colleague, who also subsequently passed away, named Paul Patterson. He was a neuroscientist at Caltech. Paul was very interested in parallels between the nervous system and the hematopoietic system, the system that gives rise to various blood cell types. He focused on similarities in intercellular signaling molecules that were shared by these two different systems. There are many similarities, but I became fascinated by the concept of a stem cell, which was deeply ingrained in and really evolved from the study of the hematopoietic system—that is, the idea of an Ur cell that sits at the top of this hierarchy and does two things: it spins off progeny cells with more restricted developmental options, but it also makes additional copies of itself, for example by dividing asymmetrically to produce one daughter which is another stem cell, so you always have a reserve of stem cells, and then a second daughter that starts to differentiate.

I was interested and fascinated by the question of whether the nervous system might develop according to similar rules. I thought that the neural crest might be an easier system to tackle that problem in than the central nervous system, because the central nervous system is very complicated, has many, many different cell types, and the neural crest seemed like a more manageable system in which one could study that. Indeed, many laboratories before me, like Nicole LeDouarin's in France and Marianne Bronner's—she is my colleague here at Caltech—have done lots of work mapping out the fate of various neural crest cells. Marianne has now done much more work on molecular mechanisms. Basically, a neural crest stem cell was something that we hypothesized might exist in a speculative review article that I wrote in, I think, 1988 or 1990. It was called "The Neural Crest Cell Lineage Problem: Neuropoiesis." I coined or borrowed this term from hematopoiesis, the term used to describe the process by which the hematopoietic stem cell gives rise to white blood cells, red blood cells, B cells, T cells, macrophages, and granulocytes, and raised the question of whether a similar mechanism and a similar kind of cell might exist at least in the peripheral nervous system.

Then we set out to search for such a cell, and finally in 1992 we were able to provide evidence at least from studies in petri dishes that the neural crest did contain cells that had this dual property. They could differentiate into specialized cell types—neurons and glial cells, which are the non-neuronal cells of the nervous system—but importantly, they could also self-renew. They could make more copies of cells that retained that multipotency, the ability to differentiate into different derivatives. This was actually the first example of a stem cell that was described in any part of a vertebrate nervous system. It came right at the beginning of a field that then mushroomed into this area of stem cell biology in the brain and the central nervous system, et cetera. There have been lots of other people that have worked on neural crest stem cells, and beautiful work by Marianne and others that have confirmed that these stem cells do exist at least transiently in vivo. What's not clear yet is how long they persist. Do they actually last into adulthood, like blood-forming stem cells do or skin-forming stem cells do, or are they there just for a while and then eventually they peter out? I don't know the answer to that question, but people are probably working on that. That's the long answer to, "What is a neural crest stem cell?"

ZIERLER: You are talking about neural crest stem cells somewhat divorced from the animals in which they exist. What are those animals that you were studying? Then I wonder if you can talk a little bit about the ability to extrapolate from one given species to all animal species or not.

ANDERSON: Sure. I should have said that the neural crest is a vertebrate invention. It is not something that you find in fruit flies, for example, or in mollusks or other invertebrates, but it's pretty much found in all vertebrate species. A lot of the early work on neural crest development was done in avian embryos, quail embryos and chick embryos. We focused our efforts on mouse and rat, because we wanted to bring genetic tools to bear. We wanted to understand ultimately the genes that control these developmental decisions, if you can call them decisions, by neural crest cells, and mice were a better system for using genetics to study that problem at the time than chick embryos were. That has since changed. The things that we and Marianne and others have found apply to rats, mice, chicks. Marianne does a lot of her work now in zebrafish embryos, which are another vertebrate, and they almost certainly apply to humans as well. In fact, in humans, there are a number of disorders that are due to defects of various kinds in neural crest derivatives. Craniofacial malformations such as cleft palate are due to defects in the neural crest cells that give rise to the bones of the face. So, neural crest cells are interesting because they don't only give rise to neural derivatives; they also give rise to non-neural tissue like the bones of the face, cartilage, and also outflow tracts from the heart. There are also a number of childhood tumors, neuroblastoma tumors, that come from neural crest derivatives. Then there are various kinds of neurological disorders like Guillain-Barré syndrome, which is a demyelinating disorder, that affect neural crest derivatives. So, not only do humans have neural crest cells, but neural crest cells or their derivatives are the cellular targets of a number of different kinds of diseases, developmental disorders, cancers, and neurological disorders that affect humans.

ZIERLER: Obviously it wouldn't be part of your research, but have you seen other researchers, even doctors, take what you have found and applied it in therapies or translational research?

ANDERSON: I'm trying to think about this. Not yet. I actually cofounded a company in 1994 with Irving Weissman from Stanford (he was the first person to purify the hematopoietic stem cell) and with Fred "Rusty" Gage (who is now the president of the Salk Institute), who worked on brain stem cells, to try to see if we could develop therapies based on stem cell technology. We had a lot of patents as a result of the work that we did on the neural crest stem cells that we licensed to this company, and I was very frustrated and disappointed that nothing ever came out of this, because although I felt disorders of the neural crest like demyelinating disorders were some of the lowest-hanging fruit, where one could try to get proof of principle that a stem cell therapy could actually work, they [people with these disorders] were simply not large enough markets monetarily for a biotech company to be interested. What biotech companies are interested in, what venture capitalists are interested in, are disorders that affect lots and lots of people like Parkinson's disease, Alzheimer's disease. These are all disorders of the central nervous system, which the neural crest has nothing to do with. Whatever focus there was on "neural stem cells" (writ large) in that company went into a focus on central nervous system (brain) stem cells and spinal cord stem cells.

So I think, sadly, there is as yet no clear application of neural stem cell technology, but the fact is there are very few applications of any stem cell technology that have been FDA approved and are regularly used in the clinic, even for hematopoietic stem cells, which have been around a lot longer than neural crest stem cells. Hematopoietic stem cells have been advocated by Irv Weissman as a better and purer substitute for bone marrow transplants, as a treatment not for a disease but for a side effect of another treatment for a disease: e.g., patients who get radiation therapy for cancer and who have their immune systems wiped out, because the immune system is sensitive to the radiation. So these patients' immune system has to be replaced by bone marrow transplants. But if the cancer is in the hematopoietic system, if it's a lymphoma or some kind of hematopoietic cancer, you obviously can't transplant the patient's own bone marrow, because it could give you the same mutation [cancer-forming cell] all over again. The idea is that if you purify the hematopoietic stem cell, you can get rid of any contaminating cancer cells, and you could use the patient's own hematopoietic stem cells. You wouldn't need a first-degree relative, and you could transplant that stem cell to replenish their [the patient's] hematopoietic cells. There have been promising clinical trials, but it's not an approach that is widely in use [yet], I think, in the clinic, to be honest, because it's very cumbersome and very expensive and requires a lot of machinery, and bone marrow transplants [from first-degree relatives] are just easier and cheaper to do.

The only other stem cell technology that is showing any promise right now, and it's still very early days, is the transplantation of stem cells to make pancreatic islets to treat diabetes. In fact, you may have seen just in the last two or three weeks an article reporting that one of the first human patients to receive such a transplant—this is an "n" of one, so it's an anecdote; it's not data—seems to be showing dramatic improvements in their diabetes. This is the work of Doug Melton, a Harvard professor, who started working on stem cells 30 years ago, only about five years after I started working on them. So, you can see the incredibly slow trajectory that it takes to get from the basic biology of stem cells into something that the FDA would actually let you put in a person and begin to conduct clinical trials. Most people don't understand how slow it is.

ZIERLER: I wonder if you see the relative lack of progress in the regulatory framework with stem cells as having a political dimension to it, given the controversies surrounding stem cells.

ANDERSON: Oy. Now you're bringing up all kinds of history that I prefer to avoid. But in the mid-1990s, I guess—stem cell biology has always been, as you imply correctly, a political hot-button issue, because some human stem cell research uses tissue from aborted human fetuses, and there are religious and political objections to that on the right. I actually wound up making a trip to Washington D.C. with a group of wealthy Hollywood film producers who had children who were afflicted by juvenile-onset diabetes, for which stem cell therapy was a potentially important treatment, to lobby various [moderate Republicans to convince their more conservative colleagues of the value of stem cell research]. At the time, there was [still] such a thing as a moderate Republican, and so we went through Capitol Hill. This was quite an eye-opening experience for me, because I realized how much the kind of money and recognition these Hollywood industry types have opens up the doors of power to you. We met with Orrin Hatch and Susan Collins and other moderate Republicans to try to persuade them of the importance of human stem cell research. For a while, I was very involved in that.

Then that led to the passing of this proposition in California that—what did it do? It mandated $3 billion in state spending for stem cell research over a ten-year period. That led to CIRM, the California Institute for Regenerative Medicine. It led to Irv Weissman and Rusty Gage establishing major new centers and institutes for stem cell research. That was precisely the time that I decided to get out of stem cells and to move into a different field, when the money started pouring in. I remember during one of my many sleepless nights at 3:00 in the morning looking at myself in the mirror thinking, "Are you out of your mind, that you're getting out of this field, just at the time that resources and thousands of people are pouring into it?" But this was Seymour's influence again. Seymour made the first of what you could arguably call many Nobel-worthy discoveries, although he never got a Nobel Prize, when he was working on phage, bacteriophage, to help crack the genetic code. He made some absolutely seminal contributions to that, and then just at the time when the field of molecular biology and the chance to crack the genetic code and establish the central dogma opened up and people came flooding into it, Seymour said, "That's enough for me. I'm going off to open up some new area."

Because Seymour just didn't like to work in a heavily competitive field, and neither do I. When you start something new in science and it then brings a lot of people into it, you're always faced with a decision: do you want to stay in the field and duke it out with all of the competition that you've spawned, or go off and do something else? In switching from stem cells to behavior, I was very much trying to follow in Seymour's footsteps of doing the non-obvious thing and going into a new area. Which now of course has itself become highly competitive, as did stem cells. So, you can run but you can't hide, as they say.

ZIERLER: David, of course this now gets us to neural circuits. Just by way of explaining the transition, how different of a field is this? How much of a leap? Are you hitting the textbooks again? How much are you teaching yourself? Or is there a natural bridge between the two areas?

ANDERSON: That's a really good question. There is and was a natural bridge in that I was a developmental neurobiologist. I studied the development of the nervous system. You could argue that one way to understand how the brain works is to understand how it's put together, but the promise of that has not really been realized. For the most part, it has been a very radical shift in the way I think about science and the fields. But we started off by trying to actually make that link between neural crest development and function by taking advantage of the fact that the neural crest gives rise to neurons that sense painful stimuli. We used that to bootstrap ourselves into identifying genes that marked very specific populations of pain-sensing neurons in adult animals and to begin to use those genetic tools to map the connections and the circuitry that those neurons were involved in.

In that little part of our work, which was at the very beginning of our transition from neural development into behavior and neural circuits, we did leverage our work on the development of the peripheral nervous system and started out working on pain. [There are different kinds of sensory neurons that detect either] noxious or pleasant stimulation to the skin and the body. That actually led us to an interesting discovery which was the discovery of a very rare subset of neurons—this is in mice—that send their fibers to the skin, but instead of responding to painful stimuli like a pin prick or the touch of a match, they respond to pleasurable stroking of the skin. We loosely referred to them as massage-sensing neurons, and of course these play a very important role in nature, since mothers in mammals spend an awful lot of time licking and grooming their offspring. That licking and grooming is in part, we think, detected by these massage-sensitive neurons. In fact, we got the cover of Nature for this paper in 2013, and for the photo, Nature chose a mother baboon grooming her offspring in that way. We were able to show that animals find the activation of these neurons a positive experience, because we could train the animals to return to a location where those neurons had been activated. That said that their brains detected something positive associated with that [the activation of those neurons]. [In that case], there really was a throughline that started with our neural crest work and took advantage of genes that we discovered in the course of trying to study the development of these neural crest-derived pain-sensing neurons, and took us all the way into adult behavior.

ZIERLER: Just so I understand, kind of a chicken and the egg question, were you interested in neural circuits, and this got you to emotion and behavior, or were you interested in emotion and behavior and that got you to neural circuits?

ANDERSON: I was interested first in neural circuits and behavior. I have to say I try to be pragmatic and opportunistic in the research directions that I choose, and I saw this focus on pain and, as it turned out later, pleasure as a way to leverage the investment that we made in working on [the development of the] neurons that detect painful and pleasant skin stimulation to take us into the study of circuits and behavior in the adult. In parallel to that, I have to say I was interested from the outset in understanding emotion, fear in particular. Our first paper that really tried to begin to map the brain regions that control various types of fear had nothing to do with our work on neural development or neural crests. That was published in 2003, and it was a study of how the brain processes sounds that are innately aversive to the animal. You can take a mouse that has never before been exposed to an ultrasonic tone at about 20 kilohertz, play it to them, and they will freeze or they will try to run away from the place where the sound is being generated. We mapped the brain regions that are activated by those stimuli and tried to begin to understand what determines whether the animal's brain decides to freeze or decides to run in response to that sound. For that, I really had to go back to graduate school. I had to do huge amounts of reading.

I have to say that for a long time, the wind was very much in my face instead of at my back. I had achieved a certain amount of prominence in the stem cell field to the extent that it was very difficult for me, for example, to wander around a poster session in a meeting without people approaching me—students, postdocs—wanting to pepper me with questions, and I could never read the posters that I wanted to read. I remember going to my first meeting on the amygdala, the brain region implicated in fear, and nobody knew me from a hole in the wall, and it was actually kind of refreshing to be able to wander around these posters like a first-year graduate student and be completely ignored by everybody at the meeting. In fact, I was completely ignored by that field for the first decade and a half that we worked in it, despite the fact that we published some major papers in Nature on the control of fear by the amygdala. That involved a lot of retooling and learning.

I have to say I enjoy science the best when I'm learning something new. The learning process for me is really one of the things that drives me. For example now, fast forwarding to the last three to five years, my lab has been getting more involved in taking computational approaches to understanding emotion coding in the brain. I'm not a computational person; I was never very good at math. If you saw my math SAT scores, I would never have been admitted to Caltech, and in fact I have no business being on the faculty at Caltech. But the fact is, my father was a mathematical physicist and although I didn't get the mathematics gene, I do have a pretty good intuitive understanding of mathematical abstractions, as long as I keep myself out of the weeds of doing the actual calculations and computations. By recruiting to my lab a postdoc and some students who were computational neuroscientists, I have learned a tremendous amount about how to think in computational terms about the brain, even if I am not myself a practitioner of it. That has been incredibly exciting to be able to develop my thinking in that direction.

ZIERLER: A question that has philosophical and perhaps even spiritual components: when you're talking about studying emotion, are you strictly a materialist, where you think that all of these things have some scientific explanation in the sense that perhaps a poet or a clergyman would talk about the soul, or metaphysical aspects of emotion?

ANDERSON: Yes, I'm a materialist. I'm a strict materialist. However, it is important to clarify that we use the word "emotion" for scientific purposes in a different way than it is used colloquially, just as scientists often redefine terminology for research purposes. Colloquially, emotions are used to refer to feelings, which are conscious, subjective experiences. Those can only be studied in humans because in order to know what a subject is feeling, the subject has to be able to describe it in words, and humans are the only animals that are capable of verbal report. What we focus on is an aspect of emotion which you can think of—if you think of the overall emotion as an iceberg, the conscious feeling part is the tip of the iceberg above the surface of the water, but there's a huge amount of brain function in the part of the iceberg that is below the surface, which is unconscious, that we know very little about. In fact, there is even evidence in humans for unconscious emotional reaction. So even in humans, it has been possible for researchers to show that there can be emotional responses that don't necessarily involve feeling states.

The way I think about emotions, the way we operationalize emotions, is that they are internal states of the brain that evolved as a behavior control mechanism that controls not only movements but also controls other aspects of what the body does during a particular behavior like controlling your heartrate, your blood pressure, levels of hormones. All these things change when an animal is exposed to a threatening stimulus. I think of a feeling as the brain's subjective perception of its own internal state. I'm not going to live to see an understanding of that process of the brain's subjective perception of its own internal state. My belief that that subjective, that conscious part, can be explained materially by understanding the brain is purely a belief. It's not scientific knowledge. But I don't need to incorporate that belief into the way I study emotion because I study the part of emotion that does not involve conscious feeling and conscious experience, and we develop other approaches to studying emotion that don't make reference to or require the ability to measure subjective feeling so that we can study it in animals. That's really important, because if you restrict your study of emotions to humans, we will be forever left with a purely observational and correlational description of the relationship between brain activity and emotion. You can put someone in a brain scanner, show them a scary movie, ask them if they feel afraid, they can say yes, and then you can see that there may be activity, quote-unquote "lighting up" in certain parts of the brain, like the amygdala, for example, or the prefrontal cortex. But that's just a correlation; it's not causation. It doesn't tell you whether the brain activity is causing the subjective feeling of emotion that the subject reports, although that might be the tempting interpretation that most people would draw. It's equally possible that the emotion is causing the brain activity and the brain activity is a consequence of the emotion.

A third possibility is that there's something else that you're not even measuring which is causing in parallel the brain activity and the emotion state, and that there is no direct causal relationship between the brain activity and the emotion state. It's like violent crime is highly correlated with ice cream consumption. Why? Is that because violence makes people want to eat ice cream after they have murdered somebody, or that eating ice cream gives people a sugar high and turns them into violent criminals? No, it's because both things increase during hot weather in the summertime, and so they are highly correlated. So, if you want to get past correlation and understand the causal relationships between brain activity and emotion, or any other brain function, you have to be able to delve inside the brain and perturb brain function by turning particular brain regions or neurons on, or off, and asking, what are the consequences of those perturbations for the behavior of the animal or the state of the animal? Those interventions cannot be done systematically in humans, because they involve opening up the brain and sticking things into the brain, and injecting the brain with viruses and gene modifying the brain. The only way you can do that in a human is if it's medically justified. For example, neurologists stick electrodes into the brain of epileptic patients to identify the seizure focus so that they can cut it out surgically. So people do do recordings from the human brain under those conditions, and even some stimulation experiments. In any case, they are highly constrained and restricted in what they can do. If you want to study humans, you cannot choose to systematically stick electrodes all over the brain just to see what happens when you stimulate the brain or to see how those neurons respond to different kinds of randomly chosen stimuli. It has to be medically justified.

So whether you agree with separate ethical frameworks for studying biology in animals and humans or not, right now the ethical framework that we do work in and that governs research funding says we can do interventional experiments in animal brains in a systematic way that is not justified by trying to cure a disease, that we cannot do in humans, which means if we want a causal understanding of emotion, we need a way to study emotion in animals. That's why we've been putting all of this effort into trying to operationalize emotions as internal states, and to study them as internal brain states that have certain features or properties—I can get into these features and properties later on—but to be able to recognize those states and the behaviors that express them, to distinguish them from purely automatic reflexive behaviors. That's key, because just using your intuition to identify instances of emotional expression in an animal can fool you because you're just anthropomorphizing the animal. You encounter a squirrel in the park and the squirrel freezes; your natural inclination is to assume that the squirrel is afraid because you'd be afraid if something 200 times bigger than you suddenly appeared in front of you. But you don't know that; it could just be a simple mindless reflex. So we've had to develop ways of distinguishing whether an animal's behavior is just a reflex or involves some sort of an internal state, so that we can then study how the brain encodes those internal state properties. That's really the gestalt of the work. How [did] we get into this, we were talking about computation and—?

ZIERLER: You defined emotion in a scientific context, not feelings.

ANDERSON: That's right. We're trying to see if we can identify certain computational descriptions of the brain that could explain how certain features of emotion are represented by brain networks. Again, that's something I can get into in more detail later on. The idea is that you should be able to study emotion in animals just like you can study any other brain function in animals—decision-making, learning and memory, pain, vision, olfaction. All of these processes have been studied profitably in animals, yet because we experience emotion in our everyday lives, we think we know what emotions are intuitively, and so we think of them only as subjective feelings in the psychological sense. That has actually caused some psychologists to argue recently that we should not use animals at all to study emotions, because they view emotions purely as subjective conscious experience. There have been a couple of important books that have been published on that, and I have a book coming out in March which is aimed at a lay audience, whose focus is to try to rebut that line of argumentation and to describe why it's so vitally important that we develop ways of studying at least features or aspects of emotion that are common to humans and animals that don't depend on being able to objectively measure conscious feeling or subjective experience in an animal.

ZIERLER: Ironically enough, what you're saying then, if I understand, is that animal experiments allow us to ask much deeper questions about emotion than if we're strictly limiting ourselves to humans. That would be more akin to stamp collecting.

ANDERSON: That's right. Well, I think stamp collecting is a little harsh. Since you've worked on the history of physics, you'll appreciate this. There are really two different epistemological frameworks for understanding the brain. One of them is based on purely observational experiments. I analogize that to astronomy. Astronomy is based purely on observations. The laws of planetary motion were based on observations. They were not based on perturbations [experiments]. You can't remove a planet from the solar system and see how it changes the motion of the other planets. You can't add an extra planet. You can't move a planet closer to or further from the sun or change the mass of a planet. Those are the kind of perturbation experiments that biologists do to understand cause and effect and mechanism in living systems. The epistemology of astronomy is one lens through which people study the brain, and from that perspective, putting people in brain scanners and correlating patterns of brain activity with verbal reports of emotion could be analogized to looking through telescopes at planets and trying to derive laws that explain planetary motion purely through observation and correlation. But where biology has been successful in understanding disease, cancer, basic mechanisms of development, cell function, is through this causality-based approach that involves loss-of-function and gain-of-function manipulations to ask whether certain genes or cells or proteins are necessary for a certain process to occur, or sufficient for a process to occur when activated outside of their normal time and place, or at higher levels. That's the whole causal framework through which Elliot Meyerowitz understands the development of plants, and it's the basis for using genetics to understand any kind of biological process. But that's a different epistemology, and if you want to apply that epistemology to studying brain function—that is one in which causality defined in that way is central—you need to be able to do perturbation experiments, and [therefore] you need to be able to work in animals.

I don't want to go as far as saying that human neuroscience is just stamp collecting; it isn't. But it's an epistemology that is ultimately going to limit our ability to develop new drugs and treatments for psychiatric disorders. There's a reason that there hasn't been a fundamentally new psychiatric drug in the last 50 years, even though it only took us one year to develop the COVID vaccine, and that is that most treatments in other areas of medicine arise from an understanding of what normal process goes wrong in a disorder. Once people learned that diabetes reflects an insufficiency of insulin, and that insulin is a protein that you can purify from pigs and inject into a person, it immediately suggested a treatment for diabetes—give the diabetic more insulin. There's no psychiatric drug that has been developed through that process of understanding a normal brain function and what goes wrong in a disorder like depression or schizophrenia. Every psychiatric drug that is used—Prozac, whatever—has been discovered purely by serendipity, and we have no idea how those drugs even work to exert their functions! So it's no wonder that most drug companies have given up on trying to find new psychiatric drugs, because every time they try to do it on purpose instead of by accident, they bomb out. My argument is that until we achieve an understanding—this gets back to your materialism question—until we are able to understand the brain functions that underlie processes like anxiety, fear, and anger in the way that we understand that the pancreas controls sugar levels in the bloodstream by producing the hormone insulin, which binds to a receptor that takes the glucose out of the bloodstream, until we get to that level of mechanistic understanding, we're never going to be able to develop new drugs for treating psychiatric disorders.

ZIERLER: Last question for today—by definition, the questions that you're after must take an interdisciplinary approach.

ANDERSON: Yes.

ZIERLER: Either by individual collaborators or fields that are most important to you—physiology, pharmacology, radiology—what are the most important areas of expertise that you need to help get answers to these questions?

ANDERSON: That's a really good question, because this type of neuroscience is probably the most interdisciplinary aspect of neuroscience. Within my lab, we employ molecular biology to manipulate genes. We employ optical methods to perturb neuronal activity in intact brains and to measure activity in intact brains. That means that we have to have anatomical methods for identifying the connections of the neurons in the brain regions. So, molecular biology, neuroanatomy. Behavior—we need to be able to measure behavior objectively in the animals whose neurons we are perturbing. We also use computation, because we have to be able to analyze the incredibly high-dimensional datasets that we generate from measuring animal behavior, frame by frame, in videos at 30 frames per second for hours at a time, or from recording hundreds of neurons that are firing with millisecond time resolution. So, computation, anatomy, behavior, molecular biology, physiology, electrophysiology. And genetics, which is an extremely important component of our approach as well.

I have one or two people representing each of those approaches in my lab, but I also have collaborations with other investigators at other institutions and also at Caltech to provide the expertise that I don't have. I'm at a point now—and this is kind of embarrassing to admit, but it's true—where I don't know how to do anything that the students in my lab do, except maybe cut sections through the brain and stain them with antibodies, and some of the molecular biology that we do, although even the molecular biology techniques have progressed in the last 30 years past what I was trained to do with them. But the other things—if you gave me $5 million, put me on a deserted island with no students or postdocs, and told me to reconstitute my lab in situ, "Here's all the money and construction workers you need," I couldn't do it! It all works because I'm able to recruit people who can cross-train one another, so that when people move on, new people come into the lab to carry those approaches forward.

Then being able to collaborate with people outside. Like I collaborate with Pietro Perona who is an electrical engineer at Caltech, and who is a specialist in AI, to develop methods for automatically measuring behavior in fruit flies and also in mice so that my students don't have to sit in front of a computer monitor eight hours a day, seven days a week, for months, to be able to measure all the parameters of the animals' behavior, but we can do that automatically with computer vision and machine learning. Then I collaborate with people who do things like electron microscope connectome reconstruction of all of the synapses in the fly brain and have tools that they use to analyze those connections so we can use that information to guide our experiments in tinkering with individual types of neurons in the fly brain. Of course, computational collaborators as well.

It's a five-ring circus. You have to understand what your collaborators are doing well enough to ask critical questions, and you have to develop a good intuition and radar for when things don't make sense and there might be something that is wrong. Because ultimately I have to take personal responsibility for everything that's in my paper, and even if it is a mistake that's made by a computational collaborator because they made a calculation incorrectly, I have to take responsibility for that. There is a certain amount of skating on thin ice here, but that's why I work with collaborators over many, many years that I trust and have a long relationship publishing with. I have been working with Pietro Perona since 2007 or 2008, and we have published many, many papers. In fact, we're almost like a scientific married couple at this point.

That's one of the things that makes Caltech such a great place to be, because I can reach out and find people with expertise, particularly in these mathematical areas that I'm not capable of doing myself but that are really important to try to put together an integrated picture of how the brain works. That's really the ultimate goal here—not to try to understand the brain just at one level of analysis, like at the level of populations of neuronal activity, but by focusing on specific brain regions and behavior, try to vertically integrate across scales, from the scale of genes and proteins to synapses to types of neurons to local connections and microcircuits to larger scale connections across brain regions all the way to behavior and to social interactions and social behavior. That's the admittedly very ambitious but I still think worthy goal that we're working towards, and that's why we need to take such a broad and interdisciplinary approach to the problem.

ZIERLER: That's a great place to pick up for next time, where we'll develop all of these issues further.

[End of Recording]

ZIERLER: This is David Zierler, Director of the Caltech Heritage Project. It is Monday, January 24th, 2022. I am so happy to be back with Professor David J. Anderson. David, good morning. It's great to be with you again.

ANDERSON: Good morning. It's great to be back here. Thank you for taking the time to do this.

ZIERLER: In our first conversation, we had a terrific tour of your approach to science and all of your responsibilities in and around Caltech. For today I'd like to establish some context and go back and learn about your family history. Let's start first with your parents. Tell me about them.

ANDERSON: My parents were both academics. My mother Helene was a professor of Latin American literature at NYU and Chairman of the Department of Spanish and Portuguese there for about ten years. My father was a theoretical astrophysicist at Stevens Institute of Technology in Hoboken, New Jersey. He actually just passed away at the beginning of December at age 95. Neither of them came from academic families. My mom was a first-generation daughter of Jewish immigrants from a shtetl in eastern Poland. In fact, she only spoke Yiddish until she was about four or five and didn't learn English until she started going to school.

My dad was adopted. Anderson is an adopted name. He was born in Chicago in 1926. We tracked down his parents. His mom was a Norwegian girl, and his father we found out through genetics—this is interesting, because my sister, who is not a scientist, tracked down my father's mother through genealogical records, and I tracked down his father through genetics. That was through 23andMe, which is the gene mapping company. Normally I don't pay much attention to the messages I get that "you have new relatives" because the customers of 23andMe are so disproportionately Ashkenazi Jewish that if you are Ashkenazi Jewish, or even if you are half Ashkenazi Jewish, as I thought I was at the time, practically every day you're getting a message that, "Here's somebody that is .033% related to you." One day for some reason I don't understand—I don't remember—I happened to open up one of the messages, and it was a message from somebody [a young woman] whose [DNA markers] were 12.5 percent identical to mine, which is enough to make them a first cousin or a first cousin once removed. I contacted her and she was kind enough to share her info, link her [DNA] information with me, and I used the [23andMe] software to literally line up her chromosomes and my chromosomes. I knew the phase of the chromosome from—because my parents did 23andMe also. The phase means each chromosome is a pair. One comes from your mom and one comes from your dad. I knew which chromosome was from my mom and which one was from my dad, from the information on 23andMe, so I could see that [the chromosomes of] this person [young woman] whose records I got lined up with my dad's chromosomes, which was very surprising. It was the first person that I had ever found on 23andMe who had any similarity genetically to my dad.

I asked her if her parents were still alive. Her father was still alive. She ran her father['s DNA on 23andMe], and her father turned out to be 25% identical to my dad. The simplest genealogical explanation for a 25% genetic relationship [between them] is that they are half-brothers. We knew my dad's [biological] mother, the Norwegian girl, so we figured it had to be that they had the same father. The [father of the young woman] that we contacted in Texas had their birth certificate, and [according to the certificate] he was born in Chicago just four or five years after my dad [was born in Chicago], and it [the birth certificate] had his father's name on it, and that name, my biological grandfather's name, was Morris Gold. That fit with the fact that when my [Dad] ran his DNA [through 23andMe], it came back as 50% Ashkenazi Jewish. Actually, when I ran mine, it came back as 75-80%. I said it's impossible that it's that high, because if my dad isn't Jewish [as I originally thought at the time] and my mom is [100% Ashkenazi], I should only be 50% [Ashkenazi]. The only way to explain that [I was 75 or 80% Ashkenazi] is that my dad was actually half-[Ashkenazi] Jewish. And that agrees with the identity of his [my father's biological] dad, whose name we now know, Morris Gold, and all of the other genetic statistics [that show he must have been a 100% Ashkenazi Jew who passed on 50% of his DNA to my Dad].
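
[Editor's note: the percentages in this passage follow the standard expected values for autosomal DNA sharing. A minimal sketch of that arithmetic, in Python, is given below; the numbers are textbook averages (actual sharing varies widely around them), the function is hypothetical and illustrative, and none of this reflects 23andMe's actual algorithms.]

```python
# Illustrative sketch of the back-of-the-envelope genetics described above.
# These are expected (average) values only; real DNA sharing varies around them.

# Expected fraction of autosomal DNA shared, identical by descent
EXPECTED_SHARING = {
    "parent/child": 0.50,
    "full siblings": 0.50,
    "half siblings": 0.25,        # the explanation offered for the two fathers' 25% match
    "first cousins": 0.125,       # roughly the 12.5% match that started the search
    "first cousins once removed": 0.0625,
}

def child_ancestry_fraction(mother_fraction: float, father_fraction: float) -> float:
    """Each parent contributes half of a child's autosomal genome, so the child's
    expected ancestry fraction is the average of the parents' fractions."""
    return 0.5 * (mother_fraction + father_fraction)

# Mom 100% Ashkenazi, Dad assumed 0% -> child expected ~50%
print(child_ancestry_fraction(1.0, 0.0))   # 0.5

# Mom 100% Ashkenazi, Dad actually ~50% (one Ashkenazi parent) -> child ~75%,
# consistent with the 75-80% estimate described in the interview
print(child_ancestry_fraction(1.0, 0.5))   # 0.75

print(EXPECTED_SHARING["half siblings"])   # 0.25
```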

Interestingly, in Judaism, as you know, whether you are Jewish or not is determined by matrilineal descent (even though everybody ironically assumes whether you're Jewish based on your last name), because [in ancient times the rabbis knew that you can never be sure of the] father. There is one exception to that, again as you know, and that is the Kohanim, the hereditary priesthood, which is the one example of patrilineal transmission of Jewishness. It turns out that about a decade and a half ago, there was a study published in Nature identifying a particular flavor of the Y chromosome, the male chromosome, that is highly enriched among Ashkenazi Jews whose last name is Cohen. I think more than 50% of Ashkenazi Jews named Cohen have this particular flavor of Y chromosome. The Y chromosome comes in many different flavors, called haplotypes. It [this flavor of Y chromosome] is called the Cohen Modal Haplotype. It turns out my dad's Y chromosome has the Cohen Modal Haplotype, even though his father's name was not Cohen.

ZIERLER: You might be a Kohein!

ANDERSON: That's right. That's the story of my parents' background. They were both working class. My mom's father was a house painter in Brooklyn, New York, where she grew up. My dad's adopted father was a plumber. Neither of them had any experience or exposure to academics as kids. They discovered this on their own. Especially my dad, who was raised in a house that had no books at all, so he just had a natural affinity for mathematics and science. He found his own way. They met at Syracuse University where they were graduate students, my mom studying Mexican literature, and my dad was actually working with his advisor named Peter Bergmann, who was a longtime student and assistant of Albert Einstein's. That's where he got interested in general relativity and gravitation and cosmology. He actually knew Kip Thorne particularly well. I like to joke that when you cross a physicist and a humanist, you get a biologist.

ZIERLER: [laughs] That's great.

ANDERSON: That's the genetic origin.

ZIERLER: I'm curious if your father ever felt any sort of latent Judaism that might have been the basis for him and your mom getting together in the first place?

ANDERSON: No, I think he was just attracted to her looks and to her intelligence. But he certainly did pick up a whole lot of Yiddish during the years that they were together, and he adopted my mom's family. His parents divorced when he was ten, and his adopted father died very early, and he wasn't that close to his adopted mother, so he really adopted my mom's family. I was raised Jewish even though as far as we knew at that time, my dad wasn't Jewish. Anyhow, I just wanted to add that my dad is really I would say one of the main reasons that I ended up at Caltech, for better or for worse. When I was deciding where to go after my postdoc, I was very lucky to have job offers from a number of good places—MIT, Harvard, UC San Francisco, and Caltech. I have to say that I had a little bit of trepidation about Caltech, because it was an engineering school. The other places were much better known to me for their biology, and particularly neuroscience. But when I asked my dad for advice, of course for him as a physicist, Caltech was a very famous place and head and shoulders above all of the other places, in his mind, that I had job offers. He strongly encouraged me to go to Caltech, and I took his advice.

Some days I almost regret it a little bit, because you probably—well, I don't know if you know this—there's a sort of long-running self-imagined intellectual hierarchy among scientists, where pure mathematicians think they're better than applied mathematicians, who think they're smarter than theoretical physicists, who think they're smarter than applied physicists, who think they're smarter than chemists. And biologists are down at the bottom of the heap. That sort of attitude is rampant at Caltech, where when I got here [in 1986], I had the impression that biology was viewed sort of as a humanities subject by the rest of the faculty on the campus. But I had had to deal with that kind of joshing from my dad growing up my entire life. I would show him a paper and he'd look at it and he would say, "David, where are all the equations? How can you have a scientific paper if there's no equations in it?" So I grew up well prepared to deal with this sort of constant ribbing of biologists by physicists as sort of low man on the [intellectual] totem pole at a place like Caltech.

ZIERLER: On that point, it's out of historical sequence, but I wonder, when David Baltimore became president at Caltech, did that shift the balance in any way about how biology was treated across the Institute?

ANDERSON: I can tell you that when David Baltimore, who is a very good friend of mine, a personal friend whom we spend a lot of time with, along with [his wife Dr.] Alice [Huang]—when he accepted the presidency—in fact, I was a member of the committee that vetted and recruited him, the presidential search committee that Kip Thorne [chaired]; I was the Biology [Division] representative—I felt like having the first biologist president of Caltech was like having the first Black president of South Africa.

ZIERLER: [laughs] That's amazing.

ANDERSON: Whether it changed perceptions of biology at Caltech, I don't know. I can tell you one instance that definitely did. The only Nobel Prize awarded to anyone in the Biology Division while I was at Caltech went to Ed Lewis in [1995]. He shared it with two others for their studies of Drosophila genetics [fruit fly genetics], the genetics of development. There was a reception for him [Lewis] in the Alles Courtyard on the day he received the prize. I was walking over there, and I was walking behind Ahmed Zewail, who is one of our Nobel laureates in chemistry. He was with a colleague in chemistry whom I don't remember. At one point, I overheard Ahmed say to his colleague, apropos of where we were going, "Well, I guess there are some smart biologists after all."

ZIERLER: [laughs] That's amazing.

ANDERSON: It's a good thing for a biologist to be at Caltech, because it puts you constantly around very smart, quantitative, hard-nosed people, but it's also very different than if you were in a biochemistry or neurobiology department at a medical school, where most of my colleagues in the field are. In a place like that, you're surrounded by people that take for granted the intellectual legitimacy of your enterprise and your endeavor. The fact that at Caltech you're not all of the time [around people who take seriously the intellectual content of your endeavor] I think is good, because it forces you constantly to justify what you're doing and why it's important and to justify its rigor and to explain why biology is not stamp collecting but that it actually is a source of unifying principles and mechanisms that govern the functioning of organisms across phylogeny.

In fact, Caltech has, as you know, an extremely rich history in biology. It has [had many faculty who have won] Nobel Prizes [in Physiology or Medicine]: from Thomas Hunt Morgan, for his studies of Drosophila chromosomes and genetics and mapping the chromosomes, to George Beadle, for his one gene, one enzyme hypothesis from studying the bread mold Neurospora. Both Morgan and Beadle were Biology Division chairs. Then later there was Max Delbrück, who shared the Nobel Prize with Salvador Luria for his studies of basic genetic mechanisms in bacteriophage. David Baltimore, of course, for discovering reverse transcriptase, the enzyme that retroviruses [like the HIV/AIDS virus] use to copy their RNA genome back into DNA, which violated the so-called "central dogma of molecular biology" [i.e., that information always flows unidirectionally from DNA to RNA to protein; Baltimore's work showed that it can also flow back from RNA to DNA]. Roger Sperry, who is our only neuroscience Nobel Prize winner, for his studies of split-brain patients. Many people feel, including me, that Seymour Benzer was unfairly gypped out of a Nobel Prize. The award of these prizes is very idiosyncratic. If you go into the Hayman Lounge at the Athenaeum, where they have photographs of all of the Nobel Prize winners from Caltech on the wall, there's a photo of Seymour, because Seymour received a prize called the Crafoord Prize, which is awarded by the Swedish Academy, the same group that awards the Nobel Prize, but supposedly it's awarded for topics that are not covered by the Nobel Prize. (Yet a Nobel Prize was awarded recently for "biological clock genes," based on Seymour's ground-breaking discovery of circadian rhythm mutants.)

Now, there have been plenty of Nobel Prizes awarded for studies of how the brain works and studies of behavior, which is what Seymour did, so how somebody got the idea that Seymour's research area was not relevant to the Nobel Prize topics of physiology and medicine, I have no idea.

ZIERLER: [laughs]

ANDERSON: The whole point of this is that there is a very rich and important history of biology at Caltech, especially in the area of basic genetic mechanisms, using model organisms like fruit flies and bread molds [and bacteriophages]. I think many of the people outside of the Biology Division at Caltech—in particular [students in the] physics [and] mathematics communities—are unaware of this history and legacy. Biology at Caltech is far from being a second-class citizen.

ZIERLER: To go back to your father, what kind of physics did he do? What was he known for?

ANDERSON: He worked on general relativity. He was really a mathematical physicist. He did a lot of calculations, as far as I can understand and remember, to work out mathematical [numerical] solutions to some of Einstein's equations of general relativity. I think he is best known actually for a textbook that he wrote in 1967 called Principles of General Relativity Physics. Kip Thorne told me after he [my father] passed that when he [Kip] and a colleague—I can't remember who that was—published their own book on general relativity and how its theoretical underpinnings related to his interests in experimental physics, he constantly cited chapter four of my dad's book as the best account or explanation of the theoretical aspects of general relativity. How and why my dad got interested in this topic, I have no idea. I really have no idea.

His interests also—and this is important [to my history] for reasons that will become clear—my dad's interests also extended to plasma physics and I think as a result of that, to fluid dynamics, because I think there is some connection between plasma physics and fluid dynamics, at least maybe from the mathematical standpoint. I don't know, but whatever the reason, starting in the 1950s he began attending in Woods Hole, Massachusetts, a summer course in geophysical fluid dynamics that was held by the Woods Hole Oceanographic Institution. In fact, he attended that course every summer up until he was about 93. As a result of that, he introduced my mom to Woods Hole. They went to Woods Hole every summer. I went to Woods Hole every summer of my life. They built a house there in 1963. Ironically, Woods Hole is the mirror image of Caltech when it comes to biology versus physics, in that Woods Hole, at least in the summer time, is completely dominated by biologists who work at the Marine Biological Laboratory there. Almost all of my parents' friends and all of their children who were my friends growing up were biologists, and they all had jobs in the laboratory, washing test tubes, which I could never get, because they were all hired through nepotism, and so I was like the little kid with his nose pressed up against the pane of glass in the candy store, wanting to get into the biology labs but not having the correct family connections to be able to get in, but increasingly fascinated by biology because that's what I was surrounded by every summer. I could hear people talking about it at the beach and at parties and that sort of thing.

Finally, a friend of my parents was kind enough to get me a job washing test tubes in the Marine Biological Lab, I think when I was in ninth grade. I got my foot in the door that way. To me, it kind of felt like the way people must feel in Los Angeles about breaking into the movie industry. It was a very closed, clannish, nepotistic group, and very hard to break into if you didn't have some sort of connection. Getting a job as a test tube washer was a great thing. In fact, my first quote unquote "paper," which I never published but I actually wrote, was a satire of a scientific paper called "Synthesis of Glassware In Vitro." The reason that I called it that is because I had to explain how I would come in, in the morning, and I would find these huge plastic bins full to the brim with dirty test tubes that had been left by all of the graduate students and postdocs who were working all through the night on the experiments. I'd slave all day scrubbing them and cleaning them until the bin was empty, and I'd come in the next morning and—boom!—there would be the pile [of dirty glassware] again. The premise of this paper was that these tubes basically arose by self-assembly from subunits that were in these glass bins, and the reason for that is that there was a scientist who was working on the synthesis of microtubules in vitro. Microtubules are a kind of structural protein that make sort of like a scaffolding inside the cell, and they're polymers that form from the assembly of small subunits. Since "test tubes" sounded like tubulin and microtubules, I wrote this paper in the style of a scientific paper, even though I was only in tenth grade, and I put in fake graphs and talked about the self-assembly properties [of tube-ulin, the purported subunits of test tubes]. That was my first scientific paper effort.

I think the reason that I was attracted to that is something I inherited from my mom—my mom has an unbelievable ear for languages. She spoke—speaks—many languages besides Yiddish and English. She's fluent in Spanish and she speaks most other Romance languages. She has a very good ear for cadence, intonation, and rhythm. I sort of picked up the cadence and intonation of science papers from reading them and from hearing people give talks that I would go to. That made this joke paper on glassware synthesis sound better than it would have otherwise. I actually think it helped me when I started writing grants and started writing papers, that I had that ear for the type of language that you use when you're writing a grant and you're writing a paper.

ZIERLER: Did your mom work outside the home when you were growing up?

ANDERSON: Oh, yeah. They both were at the university all day. There was a housekeeper that took care of me and my sister. We lived in Teaneck, New Jersey, about ten minutes from New York City over the George Washington Bridge.

ZIERLER: What did your mom do?

ANDERSON: She was a Professor of Spanish and Portuguese, and she taught Latin American literature. She specialized in novels and poetry dating from the period both of the original Mexican Revolution at the beginning of the 20th century but also the period of upheaval in 1968 where there were a number of student riots in Mexico City following the upheaval around the world that was triggered by the riots in the American Democratic Convention in Chicago. But that was really her interest—in poetry and in literature from Latin America. She taught. She did not do any kind of research, but she was really a beloved teacher. Unlike my dad, who retired at 65, my mom kept teaching and working at NYU until she was 82, and I think she would have kept on going if my dad hadn't convinced her to retire. She started a lot later in her career than my dad did. She took time off to have her kids. She didn't get tenure until I was like ten years old. In fact, I thought tenure had something to do with "ten years old" because I was that age.

ZIERLER: So you grew up with a very good appreciation of what academic professors did, what their lives were like, for better or for worse.

ANDERSON: That's right, I did. The other thing that was influential is that they were both very much children of the Depression. My dad's family was hit very hard during the Depression; his family, they lost everything they had. My mom was very, very poor. When I understood what that was, I understood why my mom would always say, "When the next Depression comes." Because somebody who lived through the Depression, there was never a question of "if" [there is going to be another Depression]; it was just a question of when, and [then] everybody loses their jobs. The only people who won't lose their jobs are academic people with tenure. So it's a safe thing. It's a sort of safety net. That was very important to her, because she worried a lot about her kids obviously being able to take care of themselves.

ZIERLER: What was your Jewish connection growing up? Were you members of a synagogue? Would you have Friday night dinner? What was all of that like?

ANDERSON: No, we were not religious. When my grandfather immigrated from Poland in I guess 1916 or 1917, although he had been religious before he left the country, he rejected religion and became a socialist. So the family was split between half of it that was still religious and my grandfather's side who was socialist. We did not belong to a temple or go to services. I had a secular Bar Mitzvah. I didn't have a Bar Mitzvah in a temple, but I had a big coming-of-age 13th birthday party where all my mother's relatives came, the ones who were still alive. We celebrated Passover and Hanukah and Rosh Hashanah, but not by going to temple. That maintained a connection. The other thing is when you [live in] New York City and the [greater] New York City [metropolitan] area, so many people are Jewish that being a member of a temple or a congregation is not part of many people's identity. My postdoctoral advisor, as I said, was Jewish. He was not a member of a temple. The connection was Yiddishkeit, rather than the religion.

ZIERLER: Jewishness.

ANDERSON: Jewishness, yes. That was my connection. I have a large repertoire of sort of Borscht Belt jokes and Jewish humor, many with Yiddish punchlines. I think in another life I might have become what's known as a tummler—you probably know what that is—a person in the Catskills resorts whose job it is to keep people entertained with stand-up comedy jokes. Jewish humor and just general Jewish culture and Yiddish culture was the experience that I had growing up.

ZIERLER: Did you go to public schools?

ANDERSON: Yeah, I went to public schools [from primary school up] through Teaneck High School. I strongly identified as Jewish growing up, but I didn't feel—and my mother didn't feel—that one needed to be an observant person in order to have that Jewish cultural identity. That's the identity that I grew up with which includes food and all of that kind of thing. I have to say that since I did all of my graduate and postdoctoral training in New York City, and in academia in New York, there's a very high representation of Jewish people. New York is a city of immigrants. You mentioned Jew-dar. My Jew-dar is very accurate. I remember getting off the plane when I arrived in L.A. to move to Caltech in 1986 and just seeing all of these sort of "white bread American" faces pass me in the terminal, and just immediately being hit by how different the faces were from the immigrant faces that I was used to seeing in the airport at Kennedy Airport or LaGuardia Airport. It really was an adjustment coming out to Caltech, and particularly being in Pasadena, which thanks to its proximity to San Marino, the home of the John Birch Society on the West Coast, and of eugenics organizations, has been traditionally a very anti-Semitic place.

You may know from talking to Elliot Meyerowitz that early in the history of Caltech, there was a great deal of anti-Semitism and discussions about how to minimize the number of Jews on the faculty at Caltech. I hadn't prepared myself for that. I hadn't really expected that. But I do remember looking at a paper from one of the labs at Caltech, from the lab of Lee Hood, [who] was the Biology Division Chair when I came out here, and the names of the [co-]authors on the paper—Early, Kavanaugh, Wall, whatever—they were not Jewish names. They sounded like [the names of] members of the "Hole in the Wall" gang from Western movies that I would watch, in contrast to the [Jewish] names on papers from labs at Columbia and [elsewhere] in New York. I'm sure most people who weren't Jewish wouldn't notice that, but I was sensitized to it. That was one of the hardest things to adjust to, that I no longer had this sense of being embedded in a rich culturally Jewish as well as religiously Jewish community. I once briefly considered joining a congregation, just to sort of have more contact with the Jewish community here, but I was raised as an atheist by my parents, my dad was very anti-religious, and so I decided not to do it, but I definitely noticed that. It was important for me also that David Baltimore was one of the few Jewish presidents of Caltech. He wasn't the first; Murph Goldberger was president, and he was Jewish.

ZIERLER: And Harold Brown, as well; that's a little-known fact.

ANDERSON: Oh, is that right! I didn't realize that. But even today, when you go over to the west side of L.A. which is heavily Jewish, people who live in west L.A. who are Jewish still associate Pasadena with being sort of old southern California WASP America. I remember being at a Jewish wedding at the Beverly Hills Hotel and sitting next to one of the guests who was a physician. We were talking, and it came out that I was Jewish, and he looked at me and he said, "What are you doing living in Pasadena? There are no Jews there." Between being surrounded by physicists and surrounded by goyim, if you'll pardon the expression, and generally being a creature of New York City and not of Southern California, I really felt like a fish out of water here, for I would say a very long time, easily the first 20 years or more of my time at Caltech.

ZIERLER: Wow. As a boy, did you always gravitate towards biology, even beyond Woods Hole, in your own explorations, in the way you excelled in school?

ANDERSON: I did. I enjoyed biology. I was very interested in biology. Because of my experience in Woods Hole, I wanted to learn more about it. I do not consider myself a science/engineering phenotype. I did not inherit my dad's talent for higher mathematics, although I think I did inherit his abilities in abstract thinking. But I had a lot of other interests besides biology. This is just to say it's not that I was enjoying my physics classes, enjoying my math classes. I would say my second favorite science topic after biology was chemistry. I was a biochemistry major in college. But I was very interested in things related to the humanities and literature. I used to write a lot, short stories. As I said, I finally have written my own book which is going to be published in about a month and a half. It's not fiction, but I wrote the book. I was interested in archeology for a long time. That was sort of my alternative academic career path. But I was also very actively participating in theatre and in filmmaking in high school, for sure, and then actively in theatre when I was in college. I think that was probably the most difficult decision that I had to make. At the end of college, I really had to debate, "Okay, am I going to just send my applications to Rockefeller and Stanford and MIT or whatever for a PhD program, or am I also going to slip in an application to Yale Drama School on the side, just to see what happens?" I'm ashamed to say that I chickened out and never sent the application to Yale Drama School because I was afraid I might get in, and then I would really be in deep trouble.

ZIERLER: When you got to Harvard—you mentioned biochemistry. Was that the plan from the beginning? Did you have a well-developed notion of what you wanted to study as an undergraduate?

ANDERSON: Yeah. I think by the time I was in eleventh or twelfth grade, I was pretty committed to a career in some aspect of biological research. In fact, the thing I was most interested in originally was marine biology. My freshman seminar independent research project at Harvard was a marine biology project studying how sea scallops, which are bivalve mollusks, detect and escape from starfish, which are their natural predators. Soon after that, I got sucked into basic cell biology, because I took a course in cell biology taught by a professor at Harvard named Daniel Branton, and I just became fascinated by biological membranes and membrane proteins and receptors. I rationalized the switch by thinking, "Well, if ultimately I'm interested in how chemical communication between animals controls their behavior, I have to learn about the molecules that mediate that chemical communication, and the receptors for those [molecules] are proteins that are located in biological membranes, so I need to learn about that." So I wound up doing my undergraduate research thesis in Dan Branton's lab on the structure of red blood cell membranes.

But I think it's ironic that now—I don't know how many years after that this is; I graduated in 1978, so I'm shuddering to think it's 44 years or something since then—I now am actually doing marine biology in my lab, studying jellyfish. Actually when I went to Woods Hole with my postdoc a couple of years ago and we were developing this jellyfish system, I actually repeated this experiment that I learned when I was a freshman at Harvard, where I extracted this substance from the tube feet of the starfish underneath their arm which contains this chemical that causes an escape response in the scallops, to see what it could do to the jellyfish. Indeed, it provoked an extremely robust defensive response from the jellyfish. So I really feel like although it was an extremely circuitous route that went through cell biology and developmental biology and stem cell and transcriptional regulation of gene expression and neural development, eventually I did come back to what I was originally fascinated by, which is really animal behavior and how animal behavior is controlled. It was not a linear path by any means, but it got me back to where I started.

ZIERLER: Was biochemistry its own department at Harvard at that point?

ANDERSON: Yeah, there was a Department of Biochemistry and Molecular Biology. There was also a Department of Organismal and Evolutionary Biology. Then there was a Department of Cell and Developmental Biology. There were two [undergraduate] majors. You could either major in biology or you could major in what was called biochemical sciences. Not biochemistry. I chose to major in biochemical sciences because it had a more rigorous curriculum in mathematics and physics and in chemistry. Most of the people who were biology majors were pre-meds and they took the watered-down general introductory courses in physics and biology. [To major in] Biochemical Sciences, you had to have the higher-level courses in those fields. Also, it was a smaller major, because it was known to be more difficult to get a good grade point average, and what pre-meds were most concerned about is their grade point average.

It was good to define myself from the beginning as somebody not interested in medical school, but interested in research, because as I'm sure you know, the competition between pre-meds at a place like Harvard is absolutely cut-throat. I even recall having one of my meticulous lab reports in the organic chemistry lab stolen, and somebody copied it or something like that, and then eventually returned it [to my mailbox] so that I was able to return it to the TA's mailbox. At least being committed to research kept me out of that intense competition in college. Now, little did I know ironically that fast forward 15 years to real life, it was the physicians who were going to be enjoying a life that was more or less free of competition, and it was the research biologists who were going to be involved in a highly competitive career for the rest of their professional lives. So if you're somebody that wants to avoid competition, as I was when I was in college, a research career is not the place to go.

ZIERLER: [laughs] In the summers, would you stay on for laboratory work or you went back to New Jersey?

ANDERSON: I stayed on for laboratory work for a couple of summers at Harvard. Before Harvard, in my later years of high school, I had jobs as a research assistant in laboratories in Woods Hole. I think it was the last two years at Harvard that I stayed on over the summer to finish my thesis.

ZIERLER: You were there, of course, after all of the tumult in the late 1960s and early 1970s. At the tail end of that, as an undergraduate, were you political at all? Were there opportunities to be involved if you wanted to?

ANDERSON: Yeah, that's a very interesting question. I came of age in high school and junior high school through the peak of the Vietnam War era. In fact, I actually have a draft card with a very low draft number, and I would have been drafted if Nixon had not stopped the draft the year that I got my draft card. My high school was very politically active, my public high school, and I participated in student protests there. I went to antiwar rallies with my parents. My parents were actively involved in protesting against the Vietnam War. My mother was panicked that I was going to get drafted. She was trying to figure out if she was going to send me to Canada or what she was going to do. When I got to Harvard, Saigon was abandoned in the spring of my freshman year, April of 1975, and it was like the bottom fell out of the movement. There still were some of the radical leftist groups on campus like Spartacist Youth League and a couple of those groups. I was never interested in participating in that politics at that level. But really it was a very rapid turnaround, whereas I expected [that] when I got to college that, okay, now I'm in the antiwar protest big leagues. This is going to be like Columbia University in 1968, and somebody is finally going to be paying attention to these demonstrations that I'm participating in, unlike the rinky-dink ones at Teaneck High School. And it was like "fwoosh!"—everybody suddenly went into the library, studying for their LSATs and their MCATs. The only vestige of student activism at that time was trying to pressure Harvard to divest from its investments in South Africa. I can't remember whether I participated in any of those demonstrations. It really dropped off very rapidly.

ZIERLER: Given your circuitous path in chemistry and biology, what kind of fields were you thinking about pursuing in graduate school beyond of course drama? What were the programs or even the individuals that you might have been interested in working with?

ANDERSON: That's also a very good question. My interests were very much oriented towards the structure and function of biological membranes, because of my work with Daniel Branton, and so I applied to programs that were known for having a focus on that topic. For example, I applied to Yale's Department of Cell Biology, which was run by George Palade, who was a Nobel laureate in cell biology for the discovery of ribosomes. Actually I wound up doing my PhD thesis at Rockefeller with his former postdoc, Günter Blobel, who also went on to win a Nobel Prize. But originally—again, my first choice was Rockefeller. I applied there to work with a very specific person named Norton Gilula, Bernie Gilula. He had been a postdoc of Daniel Branton's, so he was somebody that Branton had trained, and he worked on a type of intercellular junction called a gap junction, which is a unique type of communication where you basically have a water-filled channel that runs between two cells, because the outer parts of the channels lock into each other. So whatever small molecules or ionic changes happen in one of the cells immediately get communicated to the other cell. Gap junctions are widely used in the nervous system, and at the time, they were thought to be important in development.

I was interested in that because the model that had drawn me into cell biology when I took Branton's class was a new model of how biological membranes were organized. Before 1972 or 1973, people conceived of biological membranes as having this lipid core with the proteins sandwiched on the outside, like slices of bread around a thin layer of mayonnaise, because the conventional view is that proteins are water-loving, they're hydrophilic. They don't like fats; fats are hydrophobic [and membranes are basically made of fats]. Nobody could imagine proteins being inside membranes; they would be sandwiched around them. As a result of discoveries made not only by Dan Branton but by others, that view was radically revised to a view where membranes consisted basically of proteins floating like icebergs in a lipid [membrane] sea. This was called the Fluid Mosaic Model of Cell Membranes, and it was promulgated by S. John Singer I think at UC San Diego, and by Garth Nicolson. It totally changed the view of membrane structure and function, because now the idea was that proteins were integral components of the lipid bilayer that makes the [cell] membrane, that they were free to diffuse laterally in the plane of the membrane to participate in various reactions, with the exception of things like gap junctions, where you almost had a phase separation of proteins that were aggregated together in two dimensions in the plane of the membrane for a particular purpose. The same thing is true at synapses as well.

So, I became very interested in the question of, well, if this Fluid Mosaic Model of the Structure of Cell Membranes is true, and it's all protein icebergs floating in a lipid sea, how does the cell organize and maintain aggregated domains of membrane-protein functions where the icebergs are sort of all packed together and the lipids are almost excluded from that region? That's why I was attracted to work in Bernie Gilula's lab. But as always, all of my best-laid plans in science went astray. I got to Bernie's lab—and I can say this because Bernie passed away many, many decades ago—but the question I was interested in was fundamentally a biochemical question, and Bernie was not a biochemist; he was an electron microscopist. It rapidly became clear after about four or five months in his lab that he simply could not give me the advice I needed to carry out the type of biochemistry-focused project that I wanted to do. So I left his lab, much to his consternation, even though I had written and gotten an NSF predoctoral fellowship specifically to work on this problem in his lab.

Now, where was I going to go? Because when I applied to Rockefeller, I basically only applied to work in Bernie's lab. I didn't even know what the other faculty were doing at Rockefeller. This is why I always counsel undergraduates who come to me at Caltech who want advice on graduate school, and I tell them, "Make sure there is more than one person you're interested in working with, wherever you decide to go to graduate school." I basically wound up in Günter Blobel's lab because I had become friends with a student at the time from Germany named Peter Walter, who is now a Lasker Award winner and one of the most famous biochemists in the world of his generation. In fact, he's heading one of these Altos institutes that has been in the news that Yuri Milner and Jeff Bezos and others have invested billions of dollars in. Peter and I were friends. We played practical jokes on each other. Peter was at this lab in an empty room with lots of extra benches, and it was bright, shiny, and new, and there was space, and so I thought it would be interesting to work in the same lab with Peter, who I knew.

I concocted a project to propose to Gunter that would be aligned with both my interests in membrane proteins and synapses and the nervous system and Günter's interest in understanding how these icebergs get inserted into and threaded into the membrane in the first place when they are synthesized, which was his main focus and what he won the Nobel Prize for. I proposed to do this standard type of in vitro experiment to study the assembly and synthesis of the receptor for a neurotransmitter called acetylcholine, the acetylcholine receptor. That's a complicated molecule because it's made of four different subunits. It wasn't clear at that time whether each subunit came from a different RNA or whether they were all produced [from one RNA] in one chain, one long protein that was chopped into pieces [to make the subunits]. That was the problem that I proposed, and that's what I worked on, and that's what I did my PhD thesis on. The acetylcholine receptor became—not because of my work necessarily—but it became one of the most intensively studied molecules, membrane proteins, in all of neuroscience. In fact it engaged Caltech in a very direct way, in that Norman Davidson and Leroy Hood and Michael Raftery were the first to sequence the subunits of the acetylcholine receptor and to propose models for how it was structured and organized in the membrane.

One little amusing fact here regarding Caltech lineage is that Henry Lester, who you know is [also] a professor at Caltech —he is about seven or eight years older than me—Henry Lester grew up in Teaneck like me. He went to Teaneck High School like me. He went to Harvard College like me. He went to Rockefeller University like me. And for the first part of his career and even to this day, he works on the acetylcholine receptor. So when I was in the Kirchhoff and Alles buildings and Henry was down the hall from me, we had this sort of Teaneck acetylcholine receptor contingent. You always come back to your roots, at least in my experience.

ZIERLER: Was there some advantage to being in such a small and well-endowed place as Rockefeller for graduate school?

ANDERSON: I think so, because you could do whatever you wanted. You could join any lab that you wanted to. You never had to worry about [whether] your professor could afford to support you. Even back then, all graduate students were paid for from an endowment established by Brooke Astor at the university, whether it took four, five, or six years to graduate. Caltech, by the way, in Biology, we only have enough money to support our graduate students for the first nine months. After that, professors have to pick up the tab, which is quite expensive. It's like $65,000 a year now. We really had the run of the place as graduate students. What impressed me when I visited Rockefeller and talked to graduate students there was how mature and independent they were. Many of them came to Rockefeller already knowing what they wanted to do, extremely sophisticated and knowledgeable about their field, and basically found a lab where they could pursue their research. It was not the kind of place where people just showed up, went into their professor and said, "Give me a project. I have no idea what I want to work on." I would say many of the graduate students at Rockefeller were already functioning at the levels of postdocs because they had had so much research experience and so much knowledge of the field.

ZIERLER: As a graduate student, of course you probably weren't thinking along those lines, but retrospectively, what were the big questions in the field at that point, and how do you see your thesis research being responsive to some of those questions?

ANDERSON: The major question in the field at the time that I joined Günter's lab was how membrane proteins are inserted into membranes—particularly if you think of the protein not as a blob like an iceberg but as a chain of amino acids that is threaded back and forth across the membrane bilayer, how do these icebergs get into this lipid sea? This is an important question because most proteins are synthesized on ribosomes, and ribosomes are in an aqueous medium. It was known that membrane proteins and secreted proteins are synthesized on ribosomes that are closely apposed physically to the lipid bilayer [membranes of the cell], but the big question was, what is the mechanism that allows this protein, which does indeed have a lot of hydrophilic surfaces on it that don't want to go into the lipid bilayer, to insert itself through the lipid bilayer? How does that iceberg get in there, given that, if you think of the lipid bilayer as a thin layer of water, there's a part of the iceberg that is dissolved in the lipids that's hydrophobic, and then there are parts on the outside of the cell and the inside of the cell that are hydrophilic? That means that in the process of inserting this protein into the membrane, the hydrophilic part that goes inside the cell has to be shoved through the lipid bilayer, which is energetically unfavorable because it's oily and hydrophobic.

There were two schools of thought about this. I'll call them the physicist's view and the biologist's view just to exaggerate the difference to make it clear. Günter's view, the biologist's view—and this was really based on a little bit of data and a lot of his intuition—is that this process had to be catalyzed by tunnels, protein tunnels, that transiently assembled in this lipid bilayer and which had a pore through them through which the growing protein chain, the polypeptide chain, could be threaded, and then that these channels would disassemble after the hydrophilic parts had been let through the membrane, and then the protein would fold properly in the membrane, so that this was a process that needed lots of other proteins to carry out. Of course, those proteins themselves would have had to have been inserted into the membrane by similar processes in the history of the cell. That was the biologist's viewpoint. That is, this was a catalyzed process that involved biological specificity, that the proteins that need to be inserted in membranes had particular amino acid sequences in them, which Günter named signal sequences, that were recognized like a ligand by these protein tunnels in the membrane so that they could dock the ribosomes and open it up. The whole thing is like a Rube Goldberg machine.

The physicist's approach said, "This is all nonsense. You don't need to invoke all these complicated non-parsimonious Rube Goldberg protein machines to explain how a protein can get across a membrane. You can just calculate from bioenergetics that if the protein unfolds while it's being synthesized, it has some hydrophobic segments to it, and those just sort of pop into the membrane almost like a hairpin, and they're happy in the membrane. Because that free energy is favorable, it will compensate for the unfavorable energetics of dragging the hydrophilic parts across the membrane." In other words, in a nutshell, the physics explanation is, "You can explain all of this by thermodynamics. You don't need to invoke biological specificity. You don't need to invoke a machine." This was the debate that was raging. Günter turned out to be correct. The physical chemists turned out to be wrong. I guess my minor contribution to this was when my benchmate Peter Walter, who really, if they gave Nobel Prizes to graduate students, should have shared the Nobel Prize with Günter Blobel, discovered the first molecule that actually recognized one of these signal peptides and brought the ribosome to the membrane. He discovered a protein complex called signal recognition particle. He showed that it would control the complete transfer across the membrane of proteins that needed to be released and secreted from the cell. They were synthesized on the cytoplasmic side of the membrane, and then ejected completely across the membrane into this endoplasmic reticulum space. Peter and I collaborated to show that a membrane protein—in this case, one of the acetylcholine receptor subunits that I was working on—used the exact same signal recognition particle, thereby unifying this mechanism for transferring proteins [all the way] across membranes with the mechanism for embedding proteins in membranes. But just to show you how complicated and difficult the problem is, there are still faculty, including faculty at Caltech, like Assistant Professor Rebecca Voorhees, who are working on the problem of how these tunnels assemble and disassemble during the process of membrane protein translocation into membranes. So this is a problem that people have been working on since the late 1970s and early 1980s.
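
[Editor's note: the "physicist's argument" described here can be written schematically as a free-energy balance. The expression below is only an illustration of that reasoning as recounted above, not a calculation anyone published: spontaneous, uncatalyzed insertion would require the favorable burial of hydrophobic segments to outweigh the unfavorable transfer of hydrophilic segments across the bilayer.]

```latex
% Schematic free-energy balance for uncatalyzed membrane insertion (illustrative only)
\begin{equation}
  \Delta G_{\text{insert}}
    = \underbrace{\Delta G_{\text{hydrophobic burial}}}_{<\,0}
    + \underbrace{\Delta G_{\text{hydrophilic transfer}}}_{>\,0}
    < 0
\end{equation}
```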

That wasn't really the main focus of my thesis. The main focus of my thesis was that I proved that each of the four subunit proteins of the acetylcholine receptor—the alpha, beta, gamma, and delta subunits—were made from separate messenger RNAs that came from separate gene products. That meant that if you wanted to clone the gene for the acetylcholine receptor, there were four genes you needed to clone, not just one gene that you needed to clone. Originally, I was very ambitious. I wanted to follow up on the work that I did on the membrane insertion by cloning the genes for the acetylcholine receptor. Gene cloning in those days—this was the early 1980s—was just getting started, and Günter's laboratory really did not have any expertise in cloning genes. At that point, a number of very large laboratories including the ones at Caltech, [and also] a huge lab in Japan, were zeroing in on cloning the acetylcholine receptor. As somebody who has spent his career trying to avoid competition, that didn't seem like the best place to devote my effort. In fact, Günter had a famous quote that almost everybody in the world now knows, but I was the person that heard him say it originally in the lab to one of his postdocs. The quote is—I'm exaggerating Günter's German accent. You have to imagine that Günter looks like a Wagnerian character. He was this 6'4" handsome Teutonic guy with snow-white hair. He looked like Siegfried in the Wagner's Ring Cycle. Günter is saying to this postdoc, "You are one, but they"—meaning the competition—"You are one but they are many. You have to work day und night or you will be crushed like a cockroach!" So I decided that although I worked very hard, I didn't want to work day and night, and I didn't want to risk being crushed like a cockroach, especially given that Günter's lab was not the right lab to clone a gene at that time. I shifted my project and pivoted it in a different direction. That was my main contribution in terms of understanding how the acetylcholine receptor was assembled and synthesized. Again, this is a process that is still not fully understood 45 years later.

ZIERLER: In the graduate lab, what are the instruments that you're working with?

ANDERSON: I was using a system that Günter was famous for, which allowed one to actually reconstitute, in a test tube, the earliest events or stages in membrane protein synthesis and the insertion of those icebergs into the lipid membrane. He did that by combining cellular extracts, made either from red blood cells or from yeast, that had ribosomes to which you could add messenger RNA to program them to make a protein, and to which Günter then added small lipid vesicles that were prepared from the endoplasmic reticulum of dog pancreas. So these were not artificial membranes. Those vesicles had all of the machinery and tunnels and receptors necessary to accommodate the insertion of these protein icebergs into a lipid sea. At that time we didn't know the identity of those proteins. So, I would carry out these reactions. Basically my tools were micropipettors with little plastic yellow tips, and little plastic Eppendorf test tubes. I tell my wife I received my PhD for learning how to pipette vanishingly small quantities of liquid into tiny little plastic tubes. That's basically what I did every day. I would assemble these complicated reactions in little tubes, each of which had to get ten different components added to it to make the gemisch that would allow this biological process to work. I would mix them [together] to make it go, and then I would analyze the protein by a process called gel electrophoresis, where you run these mixtures of proteins through what I would describe to my non-science friends as basically a slab of bean curd. You drive them through this thin slab of bean curd using an electric field, and the proteins would then separate in their lane of the gel according to their mass and their electric charge. Then you could visualize them and see how many [different proteins] you had [synthesized]. There was radioactivity that you would put in to label them. So, this was real biochemistry. The guts of my training are biochemistry and cell biology, and it is about as far away from what I do in my lab, what we do in my lab now, as you can possibly imagine. I mean, I really have no business doing the kind of research that I'm doing now given my PhD training. Or my postdoctoral training.

ZIERLER: Was there a quantitative or even a computational approach in your graduate research?

ANDERSON: It was semi-quantitative in that we would measure the amount of newly synthesized protein on a gel by optically scanning the gel and then we would get a Gaussian curve of the scan, and you could integrate under the curve, but mostly all of the analysis was qualitative. There was very little quantitative analysis, and no computation. I didn't even use statistics in my PhD work or in my postdoctoral work.

ZIERLER: Last question for today—who was on your thesis defense, and what was the defense process like at Rockefeller?

ANDERSON: [The head of] My thesis defense [committee] was my PhD advisor Günter Blobel. There were a couple of internal Rockefeller faculty members, and then there was one external faculty member from Columbia who was an expert on the acetylcholine receptor. Basically we sat in a room and they grilled me. Actually there was a computational component for my thesis, because one of the last papers I published was trying to test a particular hypothesis for how these icebergs were assembled, and it used ultracentrifugation to measure the mass of the complexes before they were inserted in the membrane. I remember having to go to the board and write down the Stokes-Einstein equation for diffusion of proteins and to explain how one could calculate the mass of the protein from the [location and] number of fractions [that contained] the protein when you centrifuged it into this gradient. So there was a little bit of math and computation in my thesis defense but not much.
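[Editor's note: for reference only, and not part of the interview, the Stokes-Einstein relation mentioned here, together with the Svedberg equation typically used to obtain a mass from sedimentation data, take the standard forms

    D = \frac{k_B T}{6 \pi \eta r}, \qquad M = \frac{s R T}{D\,(1 - \bar{v} \rho)},

where D is the diffusion coefficient, k_B the Boltzmann constant, T the temperature, \eta the solvent viscosity, r the hydrodynamic radius, s the sedimentation coefficient, R the gas constant, \bar{v} the partial specific volume, and \rho the solvent density. Exactly which form was written on the board at the defense is not recorded in this transcript.]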

ZIERLER: Anything memorable from the conversation when you got grilled?

ANDERSON: Only the one about having to write the equations down on the board. There is a memorable thing from another grilling session at Rockefeller. There was a biochemistry course that everybody had to take, and it was known to be a really unpleasant course where you had to read these papers and come into the class, and it was run by a couple of hard-nosed biochemistry postdocs. What we knew was that if there were any chemical compounds used in the paper, we would have to go to the board and write down the chemical structure of the compound. I was called to the board to do this for a paper that was studying the carbohydrate residues that are connected to proteins that are recognized by other proteins called lectins. They wanted me to write the structure of the particular carbohydrate moiety, which is a ring structure, and I didn't remember the exact structure. I hadn't studied it and memorized it. I knew the general structure of these—six carbon rings with oxygen atoms in between—but—so I decided to be a smartass, and I remembered from my organic chemistry class that these ring structures, if you build a molecular model, they're kind of flexible, and you can bend them into shapes. There's one shape that looks like this, which is called a chair. There's another one that looks like this—imagine that my fingers are connected by a horizontal line—which is called a boat. Then there's a third one that looks like this, which is called a twist-boat. So they asked me to go to the board and draw this structure, and as a stalling tactic, I said, "Do you want the chair, the boat, or the twist-boat conformation?" They threw me out! [laughs]

ZIERLER: David, on that note, we'll pick up next time for postgraduate life.

[End of Recording]

ZIERLER: This is David Zierler, Director of the Caltech Heritage Project. It is Tuesday, March 15th, 2022. I am delighted to be back with Professor David Anderson. David, it's great to be with you again. Good to see you.

ANDERSON: Good to see you too, David.

ZIERLER: First, David, congratulations. Literally hot off the press, the publication of The Nature of the Beast. Tell me about that. Tell me how that project came about and what you hope to accomplish in reaching a broad audience.

ANDERSON: I had wanted to write a book for the general public for a long time, because I've been told I'm pretty good at making complicated concepts accessible to non-scientists. I hadn't really had an opportunity to do that, although I had written a book proposal well over a decade ago, and sample chapters. I just never got around to it. Then two things happened, really, that made this possible. The first is that Ralph Adolphs and I collaborated on what was my first book, or half-book, which is called The Neuroscience of Emotion: A New Synthesis, which was published in 2018 by Princeton University Press. That was a book aimed at an academic audience—at our colleagues, at graduate students, postdocs, maybe even some undergraduates—to try to explain to them our emerging view of how one could integrate the study of emotions in humans and in animals. That was the first exercise, and it forced me to assemble my thoughts about all of this. Then, I think in 2018, through fortuitous events, I met a literary agent who handles books for scientists. She became interested in publishing a book I would write aimed at a general audience. Given that I had an agent to represent me, she walked me through the process of writing a book proposal back and forth and then helped me find a publisher. I think without her help, there's no way that somebody would have agreed to publish a manuscript from me. Most of the publishers that she sent the book to passed on it. There were only two that were interested, and the first one was Princeton. The second one was Basic Books, which is not a mass market publisher, but they are what is known as a trade publisher. I decided to go with Basic because I thought that they would do a better job with marketing, as indeed they have, in comparison to Princeton. That's the process as it unfolded.

In terms of the content and why I felt it was an appropriate time to speak out, there has been a sort of upheaval in the emotion field, starting, I would say, about ten years ago but really peaking around 2017, with one noted emotion researcher in particular, Joseph LeDoux, who is at NYU, essentially disavowing his previous 30 years of research on the role of the amygdala in fear in rats, on which he built an international reputation and on which he published a number of popular books himself such as The Emotional Brain and Synaptic Self. Starting in 2012, LeDoux reversed position and said that what he was studying in rodents was not emotion, it was simply defensive behavior, and that we should reserve the word "emotion" for discussing subjective feelings, conscious feelings. And since humans are the only animals that we know experience conscious feelings—we have no objective way to know that in animals; it doesn't exclude the possibility, but we just have no way of knowing it—it follows from that that if we say that we're studying emotions, we should only be working on people. He was essentially saying that studies of emotion in animals are at least conceptually off limits to neuroscientists.

Then in 2017 a book was published by a psychologist at Northeastern University named Lisa Feldman Barrett called How Emotions Are Made. She had hired a publicist, and she got a lot of attention for this book. She presented this book as a new "theory" of emotion. In this book, she argued that not only are emotions a unique attribute of humans, or something we can only really study in humans, but also that there are no fixed regions of the brain that control specific emotions. Rather, emotions are something that are made up "on the fly" by our brains every time we experience an emotion, and the pattern of activity in the brain during two different episodes of what we might label anger were completely unrelated to each other, and the whole representation of emotions was very dynamic and flexible, and it was wrong to try to identify regions in the brain that control emotion. This again flew in the face of decades of work done by people including Ralph Adolphs, my colleague, showing that in humans, the amygdala is at the very least necessary for the subjective experience of fear, and lots of other data on that as well. In addition to her book, she published several influential Op-Eds in The New York Times, in the first of which she argued that activity in the amygdala has nothing to do with emotion and is not a neural signature of fear. The second was on anger, saying that anger is not a unitary state in the brain, that anger is so diverse that you can't pinpoint a pattern of activity that is relevant to anger.

This book got a lot of attention and sold a lot of copies, despite the fact that I can't really understand what she's saying, and neither can Ralph (Adolphs), and Ralph works on humans, and Ralph is a very smart guy. If Ralph can't understand what Lisa is saying, I don't see how I can understand what she's saying. Lisa and Ralph engaged in all kinds of public debates about this back and forth, and back and forth. So, I decided, since many people in the emotion field were very disturbed by LeDoux's volte-face and Lisa Feldman Barrett's position, and its implication, which is potentially far-reaching, that one can't or at least shouldn't study emotion in animals (at least defining emotion using its colloquial meaning, as feelings), that it was important to articulate an alternative point of view [in lay language]. I come at this as a biologist and as a neuroscientist, not as a psychologist. Like Darwin, I think that emotions are biological functions of the brain and therefore that they have appeared through phylogeny as a result of natural selection just like any other brain function, and that they didn't just pop up in humans. But I agree that we can't study subjective feelings in animals, because in order to determine whether an animal has subjective feelings, you need to ask it, "How do you feel?" The only measure of subjective feeling is verbal report, and humans are the only creatures capable of verbal report. It follows from that that we can't assess subjective feelings in animals that can't talk, and no animal can talk except for us.

You might think that if feelings equal emotion, and we can't study feelings in animals, there's nothing left to study about emotion in animals. That's where I think I part ways with Feldman Barrett and LeDoux, in that I see emotion as analogous to a huge iceberg, and the conscious feeling part of emotion is just the tip that you can see above the ocean, and there are a whole lot of other things going on in the brain that give rise in humans to those conscious feelings (and maybe or maybe not in animals), and that's the part of the iceberg under the surface, and that's the part that we have in common with animals, and that's the part that we should be able to study in animals. If you accept that premise, it requires two things. One is that you have to be willing to accept a scientific redefinition of emotion, not as conscious feelings but as a function that the brain performs, like learning and memory, decision-making, sensation, perception, action, et cetera. The second thing is that you need a way to determine whether a particular behavior that an animal exhibits reflects an internal emotion state.

I forgot to say the obvious as usual, which is that we conceptualize emotions as internal brain states, meaning there are patterns of electrical and chemical activity in the brain that can be triggered either by sensory stimuli or by our own memories or experiences, and that in turn influence the way we respond to other stimuli behaviorally. They are an internal processing step. It's sort of like an old-fashioned telephone switchboard with operators plugging and unplugging cables: the emotion state is the manager who directs which cables get plugged in and unplugged as information flows through the brain and gets relayed out to behavior systems. So the critical problem in animals is how to identify instances of behavior that are not simply reflexive and automatic. That's important, because if we just look at animals, we are very tempted to project our own anthropocentric or anthropomorphic feelings, and we can be easily fooled into thinking that an animal is exhibiting an emotional behavior when in fact it's just a robotic reflex. The clearest demonstration of this was by a cyberneticist named Valentino Braitenberg, who published a little book with MIT Press in the 1980s called Vehicles, which I highly recommend. He showed that you could wire up very simple vehicles that consisted only of sensors that detected things like light, wheels, and motors driving the wheels that were connected to the sensors. Depending on which wheels the sensors were connected to (in other words, if you think of this as a rectangular box, whether the left sensor at the front of the box connected to the left wheel or to the right wheel, that is, whether the wires crossed or ran parallel, and whether stimulating the sensor sped the wheel up or slowed it down), you could build robots that would be attracted to a light and then stop, robots that would avoid a light, or robots that would move toward a light and run right over it. In a playful trolling of psychologists, he labeled the robot that approached the light and stopped as showing "love", the robot that avoided the light as showing "fear", and the robot that approached the light and ran over it as showing "aggressiveness".
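[Editor's note: as an illustration of the wiring logic described above, the following is a minimal simulation sketch, written for this edition rather than taken from Braitenberg's book or from the interview; the sensor model and all numerical values are arbitrary.]

import math

def sensor_reading(sensor_xy, light_xy):
    # Light intensity falls off with squared distance (arbitrary scale).
    d2 = (sensor_xy[0] - light_xy[0])**2 + (sensor_xy[1] - light_xy[1])**2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, light_xy, wiring, dt=0.1):
    # Advance a two-sensor, two-motor differential-drive vehicle one time step.
    # wiring = (crossed, sign):
    #   crossed=False, sign=+1 -> uncrossed excitatory  (veers away: the "fear" robot)
    #   crossed=True,  sign=+1 -> crossed excitatory    (charges the light: "aggressiveness")
    #   crossed=False, sign=-1 -> uncrossed inhibitory  (approaches and stops: "love")
    crossed, sign = wiring
    offset = 0.2  # sensors mounted at the front left and front right of the body
    lx = x + math.cos(heading + 0.5) * offset
    ly = y + math.sin(heading + 0.5) * offset
    rx = x + math.cos(heading - 0.5) * offset
    ry = y + math.sin(heading - 0.5) * offset
    s_left = sensor_reading((lx, ly), light_xy)
    s_right = sensor_reading((rx, ry), light_xy)

    base = 1.0  # baseline motor drive
    if crossed:
        left_motor = base + sign * s_right
        right_motor = base + sign * s_left
    else:
        left_motor = base + sign * s_left
        right_motor = base + sign * s_right
    left_motor = max(left_motor, 0.0)
    right_motor = max(right_motor, 0.0)

    # Differential drive: mean wheel speed moves the vehicle, the difference turns it.
    speed = 0.5 * (left_motor + right_motor)
    turn = right_motor - left_motor  # wheel separation folded into the gain
    heading += turn * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

# Example: the crossed-excitatory ("aggressiveness") wiring steers toward the light and speeds up.
x, y, heading = 0.0, 0.0, 0.0
for _ in range(200):
    x, y, heading = step(x, y, heading, light_xy=(5.0, 2.0), wiring=(True, +1))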

When I first started thinking about this and [Braitenberg's] book was brought to my attention, I realized that this was a serious problem, one that prevents you from identifying instances of emotional expression [in animals] without some sort of objective criteria. Because if you watch a Star Wars movie, you can easily be fooled into thinking that a robot [e.g., R2D2], which basically looks like a tin can with a dome on top and wheels, expresses emotions. In fact, we can easily be fooled into thinking that certain people have emotions by how they behave, when they don't really have those emotions. If they're really good at that, then tens of thousands of people come to watch them do it, and we pay them millions of dollars; they're called actors. From a scientific standpoint, it becomes necessary to develop some sort of criteria to distinguish reflex behaviors from behaviors that reflect an internal emotional state. That's where Ralph and I came up with this concept of what we call "emotion primitives". These are building blocks of emotion states. They're properties both of emotional behavior and of the circuits that give rise to emotional behavior—this is a conjecture—that distinguish those behaviors from automatic reflexes.

I'll give you two examples. One is persistence. Reflex behaviors, like when the doctor taps your knee with his little hammer and you jerk your leg out, are time-locked to the stimulus onset and offset, whereas emotions often tend to outlast the stimulus that evoked them by minutes or tens of minutes. So, [imagine] you're hiking on a trail in the San Gabriels, you hear the rattle of a rattlesnake in the bushes, you jump in the air, and then even for minutes after the snake slithers off into the bushes, your heart is still pounding, your mouth is dry, your palms are sweaty, and if you see anything on the ground that looks even remotely snake-like, you're going to stop or you're going to jump in the air, and you're going to avoid it. Persistence is that long-duration tail of the behavior and of the [associated] internal state, and is one kind of emotion primitive. Another one is what we call scalability. Not in the engineering sense of being able to scale up production of something to a large extent, but in the sense of escalation. Emotion states tend to escalate in their intensity in a way that reflexes do not. Reflexes, again, are all or none; you either show them or you don't. In a state of unhappiness, by contrast, you can escalate from sniffling to sobbing to wailing; in a state of aggressiveness, you can escalate from threats to physical aggression; and in a state of fear, you can escalate from avoidance to freezing to panicked escape. This is a problem actually that Dean Mobbs, a Caltech professor in HSS, has been working on in humans.
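[Editor's note: a toy illustration, added for this edition and not drawn from the interview, of how persistence and scalability can be modeled. A reflex is treated as output locked to the stimulus, while an emotion-like state is treated as a leaky integrator whose level outlasts the stimulus and grows with sustained or repeated stimulation; time constants are arbitrary.]

def simulate(stimulus, decay=0.02, gain=1.0, dt=1.0):
    # Return (reflex_trace, state_trace) for a list of stimulus values.
    state = 0.0
    reflex_trace, state_trace = [], []
    for s in stimulus:
        reflex = s  # time-locked: present only while the stimulus is present
        state += (gain * s - decay * state) * dt  # leaky integrator: builds up, decays slowly
        reflex_trace.append(reflex)
        state_trace.append(state)
    return reflex_trace, state_trace

# A brief "rattlesnake" encounter: stimulus on for 5 time steps, then off for 200.
stimulus = [1.0] * 5 + [0.0] * 200
reflex, state = simulate(stimulus)
# The reflex trace falls to zero the moment the stimulus ends (no persistence);
# the state trace decays over roughly 1/decay time steps (persistence) and would
# climb higher with longer or repeated stimulation (scalability / escalation).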

So, there's a set of about six or seven of these emotion primitives that Ralph and I could think of that we discuss in the 2018 book. We actually proposed this originally in a Perspective piece that I wrote for Cell with Ralph in 2014. The point is that this is not a theory of emotion; it is a way of thinking about emotion and deconstructing it in a manner that allows you to study it in animals and ask questions that you wouldn't previously have thought about asking. How is a persistent emotion state encoded by the brain? Given that neurons fire on a time scale of milliseconds, how can the nervous system continue to generate activity that lasts for tens of minutes in response to a brief stimulus? [How is] scalability [encoded by the brain]? What is escalating, when a state of aggressiveness is escalating? Is it the level of activity in some neurons? Is it the number of neurons? Is it which neurons are active? Is it some chemical that is being released into the brain? We don't know anything about this!

So the concept of emotion primitives is that they are building blocks of emotion in the same way that a carburetor and a transmission and an internal combustion engine are building blocks of an automobile. That makes them useful [for understanding emotion] from an evolutionary perspective. Just as humans had to invent wheels and axles and the internal combustion engine before they could invent an automobile, evolution, we think, had to "invent" these properties, such as persistence and scalability (valence and generalizability are others), in order for animals to become more flexible in their behavior and evolve beyond simply a collection of hard-wired reflexes into behaviors that express internal states. [These internal states] give the animal much more flexibility in how it is going to respond to a particular situation. A way of testing that view, in turn, is to look in contemporary animals of different species—and we work mainly in fruit flies and mice—to see if we can see evidence of any of these emotion primitives, not only in mice, but also in fruit flies, when they exhibit behaviors that in humans are associated with internal emotion states, like fighting, or mating, or fleeing from a threat. Part of the reason for doing that is to try to understand how evolutionarily ancient these emotion primitives are, when they first appeared in evolution. Another reason is that studying a neural process in flies allows you to investigate its causal and mechanistic basis with a degree of precision that is not possible in mice for a variety of reasons which I can get into.

That's also a very important reason for trying to study these internal states in different organisms. In fact, that's why we started working on the jellyfish originally, because I reasoned that, look, if you want to see if there is an animal on the planet, a contemporary animal species that really does work like a Braitenberg vehicle—it's just a series of pre-programmed reflexes—probably your best bet would be to look at a jellyfish because they don't even have a brain. Is it possible that a jellyfish could display at least some emotion primitives like scalability and persistence in its behavior? To be clear, the fact that we can observe evidence of emotion primitives in an animal's behavior doesn't mean that we're saying that that animal has full-blown emotions in the way that we have them. It means that they have components that we can study using causal neuroscience. That's critical because if we can only study emotions in humans, then we can only study them primarily using brain scanning techniques which are non-invasive, and brain scanning techniques only reveal correlations between patterns of brain activity and emotion, and therefore they can't distinguish cause and effect. If you see a pattern of brain activity in the amygdala that is correlated with a verbal report of fear, you don't know if the brain activity is causing the fear, or the fear is causing the brain activity, or some other thing that you're not even measuring in another region of the brain is causing both the fear and the amygdala activity and they have nothing to do with each other.

The only way to distinguish cause and effect is to perturb the brain, to turn neurons on and turn neurons off, and alter their levels of activity in specific regions of the brain and at specific times, and ask how that influences the animal's response to a stimulus that elicits emotion-like behaviors or that shows emotion primitives, and we can't do those experiments in humans because they're not medically justifiable, and they are not technically possible at this point. That means that if we restrict ourselves to studying emotions in humans, we will never have a causal understanding of the relationship between the brain and emotions as we have for other brain functions like decision-making and cognition and learning and memory and perception and all of the other wonderful things our brains do. I'm just not ready to relegate emotion to a slag heap of mysterious brain functions that cannot be studied in animals and therefore are off limits to causal neuroscience. That was a long-winded answer to your question of why and what.

ZIERLER: In conveying these ideas to a popular audience, to break out of the traditional academic arguments, and in light of the fact that with a popular book, there's always the possibility that these very complex ideas get boiled down into sound bites, what are your goals in terms of the national conversation, perhaps even the public policy implications, of the ideas that you are conveying?

ANDERSON: Oy!

ZIERLER: [laughs]

ANDERSON: This is what I'm terrified of. In fact, things are starting to happen on this front. National conversations, as you put it, have started. I have an essay coming out next Sunday in The Wall Street Journal and I have an interview with NPR about this tomorrow. The goals are, first of all, to emphasize to people that emotions are biological functions of the brain, and that they evolved, which means that they are present in animals at least in some form, as well as in us; that there is more to emotion than just subjective conscious feeling; and that we have to be able to objectively identify instances of emotional expression in animals so that we can study it in animals using causal neuroscience, because if we don't do that, we're never going to develop new and improved medications for mental illnesses. Part of the reason that psychiatry has lagged so far behind other disciplines in the development of medications is that psychiatric disorders are defined by their symptoms, not by their causes, so there's nothing in our understanding of psychiatric disorders that helps us better develop a medicine for the mind. It would be like defining COVID as a runny nose and scratchy throat and a bunch of other symptoms including a loss of the sense of smell, without knowing that it is caused by a virus. That would make it in fact very difficult [to develop treatments or vaccines for COVID]. Since not everybody with COVID loses their sense of smell or taste, there could be a whole bunch of different causative agents and viruses that are lumped in that rubric of cold-like symptoms. In fact, sometimes even now it can be difficult for people who get mild COVID to know if they have COVID or a head cold.

So, if we don't know what causes a disease, a disorder, how can we rationally develop a medicine to treat it? Once we learned that diabetes was caused by an insufficiency of insulin, we realized that we could treat diabetics by giving them more insulin. That kind of causal knowledge will never emerge for the brain without studies of emotions in animals. I want people to get that and to understand it, and to let go of the idea that feelings are the sine qua non of emotions. They're a part of emotion, and they're the part that right now can only be studied in humans, but there are a lot of other parts of emotion that are not conscious feelings that we can study in animals. There's even evidence for unconscious emotion in humans in some research projects and papers that have been done. That's the issue. As I'm sure you appreciate, it treads a very thin line, because the temptation is to boil this down to a sound bite which is like "Even fruit flies have feelings." And the last thing that I want to have happen as a result of this is for all the people in my lab who work on fruit flies to have to spend hours and hours and hours writing animal protocols to the Office of Laboratory Animal Resources to allow them to do an experiment in fruit flies like they do when they want to do an experiment in a vertebrate animal like mice. Even though there is no rational objective reason for concluding that mice have feelings and flies don't—there's no evidence for that—the fact is that public policy is based on the assumption that there is a difference, that vertebrate animals have feelings and that invertebrate animals don't. If it gets distorted into that, then I've done my colleagues a huge disservice and I should be taken out and shot.

ZIERLER: [laughs] One more topical question before we go back to your personal narrative. We're all watching the unfolding horror in Ukraine right now. If we boil all of that down, your expertise, your deep thinking in the emotions at the heart of neuroscience, and we look at war basically as an emotional breakdown—hatred, tribalism, confusion—the takeaway obviously—war is so primitive, it's so animal-like. Long-term, what might your research, the research that's happening more broadly at the Chen Institute, what impact might that have on a quote-unquote "translational impact" of understanding and maybe even preventing war in the future?

ANDERSON: That's a very difficult question. The short answer is I can't say. I would love to discover some drug that we could spray out of a crop duster airplane as it flew across a battlefield and that instantly pacified all the combatants so they would throw down their weapons. I don't see that happening anytime soon. On the other hand, brains have a lot of mechanisms to keep aggression in check. There's no question about that. The more that we learn about how brains keep aggression in check, the better positioned we will be to think about how that might feed into aggression in the context of complex social interactions. But I don't want to pretend that neuroscience is the solution to all problems of violence and aggression. There are many other disciplines that are important that have to come into the discussion—psychology, social psychology, sociology, historical factors, all of those sorts of things. But the fact is that there is widespread thinking, for example, that a hormone called oxytocin increases trust. It's even popularly known as The Love Hormone. We don't really know how it works and whether and how it functions to inhibit aggression. We don't know how, when an animal is mating, it shuts down its aggressive instincts. These are things that we're trying to study in the lab, because these are powerful natural mechanisms for keeping aggression in check. That's the best I can say.

The analogy I would make is that if we want to develop cures for cancer, since cancer is a disease of uncontrolled cell division, we need to understand how cells divide. If we can do that in a yeast cell more powerfully than we can do that in a human cell, so much the better. Indeed, most of the Nobel Prizes that have been awarded for work on cell division have been awarded for work done in yeast, not in mammalian cells, let alone human cells. That's sort of the way I think about the potential translational aspects of aggression research. Even that is a really fraught topic, because there are people that feel that particularly if you are able to identify biological predispositions to violence, that that will be used to marginalize certain people and change how they are treated so that it becomes a self-fulfilling prophecy. It may be used in an inappropriate way to damp down the behavior of people who may be more spirited and volatile than others. So it's like any technology; if we learn the science and we propose to translate it, we have to be very, very careful that that knowledge is used in an ethical manner and not abused.

ZIERLER: Of course, it's an obviously intense topic of speculation right now—I assume you haven't thought about this much—but just shooting from the hip, when you look at Putin and his decision-making, what do you see that can help us explain his motivations and more importantly the underlying emotions that drive those motivations?

ANDERSON: I really can't answer that question. That's like asking a psychiatrist or a psychologist to make a diagnosis from a distance. Not only am I not a psychologist or a psychiatrist, even psychiatrists have a strong ethical code about not diagnosing psychiatric disorders in politicians just by observing their behavior. This is called the Goldwater Rule because it came out of the time of Barry Goldwater's presidential campaign, and it was something that was vigorously debated during the Trump presidency—"Is he crazy or is he not?" So I can't offer any professional interpretation of Putin's behavior. I can offer a layperson's analysis of what he is doing. If I had to categorize his behavior as a type of aggression—and this is something I do know a little bit about—there are many different types of aggression. There's offensive aggression, sometimes referred to as appetitive aggression. There's defensive aggression. And, there is predatory aggression.

The emotion states that underlie those different forms of aggression are different. Professional soldiers fight because they're paid to, not because they're mad at the enemy. I would say the type of aggression that is motivating Putin is a type of predatory aggression or territorial aggression, and a type of offensive aggression, but that's really all that I can say. And it is so deeply embedded in a cognitive calculus that he has, really any sort of effort to explain it in terms of animalistic tendencies would at best fall short of the mark, and at worst be completely misplaced. My own personal view is that he wants to try to reconstitute the original Soviet Union, and this is the first step, and it will be followed by annexation of all other former Soviet satellite countries that are not in NATO. Then once he has done that, he maybe will take a little—put his toe in the water and try to annex a small NATO country like Romania and see whether the West is willing to put up a fight or not. But that's a lay opinion; it's not informed by any knowledge of neuroscience.

ZIERLER: Let's go back to the 1980s, a question I've been looking forward to asking you since our last conversation. To close the thread on Rockefeller University, of course famously Rockefeller had no system of cultivating junior faculty. It was senior faculty and their enormous fiefdoms of laboratories. From your perspective as a graduate student, what were the advantages and disadvantages of that system?

ANDERSON: My perspective as a graduate student—

ZIERLER: To compare it to Caltech, where obviously there is a wonderful culture of promoting junior scholars.

ANDERSON: The first thing I have to say is that that's not true anymore at Rockefeller, largely as the result of efforts by David Baltimore when he was the president of Rockefeller.

ZIERLER: The thankless efforts, at the time.

ANDERSON: Yes, the thankless efforts. Rockefeller now, I see, very much operates in the same mold as Caltech does in trying to promote its independent junior faculty into tenured faculty positions. I didn't really experience any benefit of that system in the lab that I worked in, because at the time, that lab—Günter Blobel's lab—was actually run in a very horizontal manner. All he had was postdocs and graduate students. He did not have assistant professors and associate professors within his group, like Gerald Edelman, who was a Nobel laureate, and another professor at Caltech, did. I can speculate that maybe one advantage of that type of ladder structure is that it affords long-term continuity of domain knowledge and expertise and technical knowledge that can more easily be passed on to each successive generation of students and postdocs that come into the lab. If you don't have that, basically you rely on students and postdocs training each other as they come through the laboratory and you have no long-term repository of technical expertise. So there are things that my lab used to know how to do, and right now there's nobody in my lab who knows how to do them, which is really unfortunate. That sort of a pyramidal system, the Max Planck type system, can help. Also the other thing about the Max Planck system that can help with that is having long-term dedicated professional staff scientists, either at the PhD level or at the pre-PhD level, to maintain that continuity of technical competence. But I didn't really sense any benefit to me of that, being in Günter's lab as a student.

ZIERLER: What advice did you get, if any, about thinking for your next opportunity after you defended the thesis?

ANDERSON: That's a really good question, because my thesis was at the interface between cell biology and neuroscience. I was applying some of the tools that were developed in Günter's lab to understand basic cell biological functions to understanding proteins in the neurons. In fact, my advisor used to denigrate my research as quote-unquote—he said, "Neurobiology? That's just applied cell biology. One of these days, you have to do something fundamental. It sharpens your thinking." In a sense, he was right, but that put me in a position that when I defended my thesis, I really faced a fork in the road, which was, do I continue in cell biology and maybe switch to working on something more quote-unquote "fundamental," meaning a process that happens in all cells, not just in neurons, or do I take the path into neuroscience and the emerging area of using tools from cell biology and molecular biology to try to understand the brain, as opposed to just electrodes and electrophysiology and electrical engineering.

I recall a very important phone call in which I discussed this with a senior professor who was I guess at Yale or at the Salk Institute [at the time] named Charles Stevens. Chuck Stevens. He was a physicist turned neuroscientist. I had gotten to know him at various meetings that I had attended and presented my work. I was very fortunate, parenthetically, to get a lot of exposure to people in the field as a graduate student, because my advisor really didn't care about the research that I was doing, but it did have an impact in neuroscience. When we published our papers, he started getting lots of invitations to go to neuroscience meetings, and since he had utterly no interest in neuroscience, he sent me. There I was, this young pisher without even a PhD mixing with hotshots at meetings and presenting my research. Anyhow, Chuck Stevens, I asked him point-blank this question. I had an opportunity to work with somebody that was focused on the next frontier in basic cell biology or to head more into neuroscience. Chuck said, "You need to stay with neuroscience. That's where things are going to be. That's where the excitement is going to be over the next decade." He gave me that advice, and it was definitely against the advice that I got from my advisor. I have a whole book of quotations from my PhD advisor, who unfortunately passed away several years ago, but he had a fairly heavy German accent, and he was this Teutonic Wagnerian figure with a shock of white hair and a florid complexion, almost like a Siegfried type of character. He would constantly reproach me every time I mentioned going into neuroscience and say, "David, you have to come back to the Catholic Church of cell biology."

ZIERLER: [laughs]

ANDERSON: He viewed me as an apostate of cell biology. But really that was the one important piece of advice that I got, and I'm glad I followed that advice. Not that there hasn't been a lot of things important in cell biology, but I don't think I had as much to contribute personally, given the kind of scientist that I am, to cell biology, as I did potentially to contribute to neuroscience.

ZIERLER: Institutionally or what labs—what was available to you where this interdisciplinary approach was celebrated, where you would feel at home?

ANDERSON: There were very few places that were doing it at this time. One of the main places that was doing this was Columbia. Columbia had two people who would both eventually go on to win Nobel Prizes who were collaborating with each other. One was Eric Kandel, who was famous for his work on learning and memory and won a Nobel Prize for it, and who was sort of the czar of neuroscience at Columbia. The other was Richard Axel who became my postdoctoral advisor, who was a young Turk, brash, brilliant, somewhat abrasive guy from molecular biology, who thought more crisply and deeply about problems in neuroscience than I think most neuroscientists did. Richard and Eric had gotten together to collaborate in trying to apply molecular biology to neuroscience. There were other places where some of this was happening, but nowhere where the mix was as exciting as at Columbia. That's what eventually attracted me to Richard's lab.

In fact, parenthetical story—Eric was so desperate to get people in this field at Columbia, this emerging field of molecular neuroscience as they called it, that he tried to recruit me as an assistant professor right out of graduate school. His idea, since I really didn't know anything about how to do molecular biology—I was a cell biologist; I was not a gene cloner, I had never touched DNA in my entire PhD career—he wanted to get me trained as a molecular biologist, so Eric introduced me to Richard, with the idea that Richard was going to train me to be a molecular biologist, and then after that training, Eric would hire me into [an Assistant Professor position at Columbia]—Eric almost had a Max Planck-like fiefdom at Columbia. That's how I got to meet Richard. Then things just developed. I did get a job offer at Columbia, but I decided, for better or for worse, to go to Caltech instead, and so that was the trajectory.

ZIERLER: What was Richard like?

ANDERSON: Richard was—he's about nine years older than me, so when I came to his lab, that was 1983, so I was 27, and he was 36. He and his postdoc Michael Wigler had just invented a technique, called transformation, for introducing foreign DNA into animal cells, which turned out to become foundational for the biotechnology industry, and he was already famous for that. I think he was one of the youngest people ever elected to the National Academy of Sciences. As I said, he was brash, he was brilliant, he was a New York Jewish street kid from Brooklyn, and took no prisoners and had no patience for anybody with sloppy thinking, and was brutally critical, and was challenging to work for, but he really became my lifelong mentor, and he still is.

ZIERLER: How did your research slot into what the lab was doing overall at Columbia?

ANDERSON: I started out on a project that Richard suggested that was in the main line of the work that was being done in collaboration with Eric Kandel. In fact, I spent a lot of the first nine months there in Eric Kandel's lab cutting up these sea slugs that he worked on—Aplysia californica—and taking out neurons and trying to find gene markers for different kinds of neurons to see how genetically different they were from each other. This is something that I have been doing up until the present day. There's an entire field devoted to doing that, using much more sophisticated techniques than were available in 1983. Initially, that was really in the main line of what Richard was doing. But for a variety of reasons—I mean, the project succeeded, but in a way that was not very interesting, and it was in direct competition with one of Richard's former postdocs who had just gone on to start his own laboratory at Stanford, and I didn't feel like working on a project that was in head-to-head competition. So at some point, I went in to Richard and I said, "Look, I spent the first nine months here working on one of your ideas. It's not really going very well. I'd like to work on one of my ideas." That's when I started the project that became the basis of my first 20 years in research but that really was tangential to the main thrust of what was going on in Richard's lab. It was about studying the development of the nervous system and identifying stem cells and progenitor cells and molecular markers for developing neurons, using some of what I had learned in the first part of the process. I guess in both Richard's lab and in Günter's lab, my work was tangential to the main thrust of what was going on in the lab. That's why I sometimes tell people that my claim to fame is that I made little or no contribution to the work of not one but two Nobel Prize winners during my time there [in their labs].

ZIERLER: [laughs] What was the intellectual spark that set you on this 20-year path?

ANDERSON: What was the intellectual spark? I think it was my early exposure to work by a brilliant French embryologist named Nicole LeDouarin who should have won a Nobel Prize but didn't, who mapped the development of a part of the developing nervous system called the neural crest, which is something that my colleague Marianne Bronner has devoted her entire scientific career to working on. Unlike me, she didn't jump ship and switch fields halfway through her career. I was fascinated that this small population of cells that detaches from the top of the developing spinal cord and migrates through the embryo, almost like parachute troops dropping out of a transport plane as it's flying over a landscape, these cells crawl all over the embryo and they give rise to all the different neurons in the peripheral nervous system, the associated glia—sorry, that's my cat howling—as well as the bones of the face, blood vessels in the heart, and all kinds—[to cat] come on! I'm here!—[to Zierler] this is my emotional cat, Serafina, who is a star in my book. [to cat] Come on! [to Zierler] She's deaf, so she can't hear me.

Anyhow, I was just fascinated by how you could generate all this diversity from such a small initial population of precursors. I had always been interested in developmental biology, and the burning question in this field was whether these different cell types arose from subsets of neural crest cells that you couldn't tell apart by eye, but that were already intrinsically different from each other, as in the parachute analogy, where each parachutist that jumps out of the plane already knows what they're supposed to do before they hit the ground. Or, was it the case that those cells were relatively undifferentiated, and they only figured out what they were supposed to differentiate into and how they were supposed to develop after they got to where they were migrating? By analogy, that would be as if all the parachutists that jumped out of the plane were exactly the same and they had no idea what they were going to wind up doing in the war until after they hit the ground and looked around and figured out where they were.

The answer, of course, as with everything in biology, is that it's a mixture of both. My lab was the first to show that there are some cells in that population that are multipotential and self-renewing and can give rise to at least two different kinds of cells, neurons and glia, and so they are stem cells in the nervous system. That was a big deal because people were not really thinking about the developing nervous system in terms of stem cells at that time, which was in the late 1980s. They were thinking about neural development in other ways. But I was strongly influenced by work that was done on the blood system, largely from the work of my colleague at Caltech Paul Patterson, who is now deceased, who was interested in the parallels between the immune system and the nervous system, but mainly in terms of the molecules they used. I pushed that in the direction of thinking about the patterns of the developmental trajectories that cells take. On the other hand, Marianne Bronner has been at the forefront of showing that there are indeed different subpopulations of neurons in the neural crest before they even migrate that constrain what those cells are able to do. So, I would say the transport plane has several different—whatever you want to call them—squadrons or platoons of paratroopers that are supposed to do different things, and they know sort of different categories of things before they land, like one is supposed to set up artillery and the other one is supposed to do combat missions on foot, but within those squadrons or platoons, there's still a lot of room for diversification depending on where the soldiers land when they hit the ground. To push that crude analogy.

ZIERLER: When it was time to go on the job market, why not stay at Columbia? Why was that not a consideration for you?

ANDERSON: Because I felt like I would be forever overshadowed by Eric and Richard. Eric in particular had a very dominant influence on the younger faculty at Columbia including roping them into helping to write chapters for the massive textbook that he and his colleague Jimmy Schwartz started publishing back in the 1980s called Principles of Neural Science, which has been the main neuroscience textbook in medical schools for generations. I already had been roped, as a postdoc, into working on one of these chapters for this book. I knew from that experience that if I stayed at Columbia, I was going to be asked by Eric to do a lot more work on the book, and that was going to interfere with my ability to do my science. I guess I really wanted the freedom to explore and decide what I wanted to do, without Eric looking over my shoulder, and Richard nudging me.

I have to say that it was a decision that I questioned many times in retrospect. In fact, my first year or two after coming to Caltech, I don't know whether it was homesickness or just missing the style of science at Columbia, but I really wanted to go back. In fact, over the last 30 years, I've had two or three opportunities to go back to Columbia, one of which I even signed on the dotted line. This was like 1996. I really felt like I was ready to go. Then for personal reasons including what my wife was doing at the time, it just seemed like the wrong idea in retrospect to go back to New York. Maybe that was a mistake, maybe it wasn't, but it was very difficult, because the style of science at Caltech was and is very different from the style at Columbia.

ZIERLER: Was there a point of contact at Caltech? Were you recruited by somebody specifically?

ANDERSON: Yeah, I was. I was recruited by Seymour Benzer. Seymour was the chair of the search committee that recruited me, and Seymour—I remember this vividly—Seymour sort of buttonholed me at a Cold Spring Harbor meeting after I had given my talk. This was before I was applying for jobs. He very pointedly asked me, what were my plans, and was I applying for jobs, and that sort of thing. In fact, [I had a] very, very unusual experience with my job searches, in that people started asking me, even right after I had started my postdoc, before I had published anything for my postdoc, if I was interested in a job in their institution, because this was a hot area, and I was just in the right place at the right time, because I had taken some initial steps in that direction, which in retrospect were not particularly profound. Anyhow, I had people coming after me and I sort of felt like I was forced into making a decision before I wanted to. So I took a job at Caltech, but then I just continued my postdoc for another couple of years, because I wanted to get something done as a postdoc. So, Seymour really dug his claws into me, and he figured out all of the things that I was interested in. He just called me relentlessly. He was an ex-New Yorker also. He grew up in Brooklyn and he knew my trepidation about moving from New York City to the wasteland of Southern California, which I sort of viewed as just an endless desert of shimmering asphalt and gas stations with no public transportation and no culture. Seymour took me around when I came out to visit in his 1962 Dodge Dart convertible to various places in downtown L.A. to show me that there was an urban life there. He really whipped up a lot of support for hiring me at Caltech.

I think in the end, three things had a lot to do with the reason for choosing Caltech. One was the enormous amount of enthusiasm that not just Seymour but lots of his colleagues showed for recruiting me to Caltech. The second was that, as somebody who had crossed over into neuroscience from cell and molecular biology, and who was working on a developmental problem, I didn't really feel like a sort of card-carrying neuroscientist, and it was very important for me to maintain close intellectual and personal contact with people in cell and molecular biology who were not neuroscientists. For example, Richard, in whose lab I did my postdoc, was in the Department of Biochemistry and Molecular Biophysics at Columbia, not in the Department of Neuroscience. Every job offer that I had, except the Caltech job offer (and one other at UC San Francisco), was in a neuroscience department. But I felt that a job in a neuroscience department was not going to be a good idea because I would be intellectually isolated, and it would be too parochial.

Caltech didn't have a neuroscience department. All it had was the Division of Biology. Everybody who did anything biological, all the way from studying cell division and DNA replication to studying motor control in monkeys, was in the same division. I found that very attractive, that there were no barriers to interacting with people. For example, two of my closest colleagues in the early days when I was at Caltech were Barbara Wold and Ellen Rothenberg, who worked on the development of muscle and the development of the immune system. My work had much more in common with what they were doing, as well as with the late Eric Davidson, than it did with what other neuroscientists at Caltech were doing, except for Paul Patterson. That was reason number two. Then reason number three was that my father, who was a theoretical physicist, of course knew about Caltech, and from the perspective of a physicist, he knew what a famous place it was, and so he really thought I should go to Caltech. Whether that was the right advice or not—and I say this with all [due] respect to my colleagues in Biology at Caltech—I think by any objective criteria, the Caltech Biology Division [since I arrived in 1986], while it's highly competitive with its peer institutions, places like MIT and Harvard, does not enjoy the sort of singular reputation that Caltech's Physics [PMA], Chemistry [CCE], and also Geology [GPS] [Divisions] enjoy, where they're widely considered to be the best or one of the two best places to go to study those topics. Biology is different, although the history of biology at Caltech is extremely illustrious, from T.H. Morgan to George Beadle, to Max Delbrück to Seymour Benzer to Roger Sperry. It has a very illustrious history, but somehow, that hasn't translated into the kind of recognition, at least among young graduate student applicants, that I think physics, chemistry, and geology have had.

ZIERLER: Obviously Seymour Benzer had designs in his insistence in recruiting you. Did you have a sense yourself in your early interactions with him that this would precipitate a revolution in your own research agenda?

ANDERSON: No, I really didn't. Seymour worked on fruit flies. What I will say is that from the beginning of working on development, I had tremendous fly envy.

ZIERLER: [laughs]

ANDERSON: In fact, if I look critically at my choice of organisms in my career path, rather than studying development in mice and rats only, and then switching to behavior and studying behavior in fruit flies and mice and rats, I would have been smarter to study development in fruit flies and then switch to studying behavior in mice and rats, in retrospect. Because I think—and this is certainly true in hindsight—that studying genes that control development in fruit flies had much more general applicability and afforded a more direct way to break into mammalian development than studying neural circuits in fruit flies provides a way to break into studying mammalian brain function. I hadn't really thought about that before. Anyway, I had fly envy. Seymour was the god of fly neuroscience, or I think of him more as the Yoda of fly neuroscience. Eventually, I did switch, and I think in retrospect it was a good decision. Did I tell you the story about my collaboration with Seymour when I first switched into fruit fly neuroscience?

ZIERLER: I don't think so.

ANDERSON: This is great. Seymour was constantly nudging me, "When are you going to see the light? When are you going to see the light?" Meaning, "When are you going to realize that you should work on fruit flies instead?" I resisted this while I was studying development, because I was an assistant professor and the thought of taking on a whole new organism was too intimidating. But once I switched into behavior, I made such a radical change in fields that to add another organism didn't seem like such a big deal. Since I was basically on the brink of committing professional suicide anyway, who cared if I tied a weight to my ankle in addition to a rope around my neck as I jumped off the Brooklyn Bridge? So I recruited somebody to start working on fruit flies in my lab. I was fortunate enough to do that. When he got to my lab, the project was to follow up on an anecdotal observation that Seymour had told me about, which you might have interpreted as evidence of quote-unquote "fear" in flies. That is, Seymour had found that if you put flies in a tube, a confined space, and shock them, so that they ran out of that tube, and then you gave a fresh cohort of flies the opportunity to choose between the tube that had previously held the shocked flies and a fresh tube, they would always go into the fresh tube and avoid the tube that had contained the shocked flies, as if the shocked flies had left some residue or smell of fear in the tube.

I thought that was a really interesting way of starting to get at the question of fear in flies. That is, if you want to know whether flies are afraid of things, you should ask other flies, and that was a way to do that. So, this postdoc came to my lab, and I called up Seymour, and I asked him if he wanted to collaborate on this. After all, it was his observation. He had recruited me to Caltech. I finally saw the light. I was working on flies. I called up Seymour and asked him if he wanted to collaborate, expecting him to say, "Great!" He said, "No." I was just flabbergasted. I said, "What do you mean?" He said, "No!" I spoke to a former postdoc of Seymour's, Larry Zipursky, who is at UCLA and has been my very close friend and colleague for the last 40 years, and he told me, "Don't take it personally. Seymour hates to collaborate. Seymour doesn't want to be constrained in anything that he does by what anyone else thinks or is doing. He's a loner. He hates to collaborate." I was very depressed but, all right, okay, that's the way it was. Unbeknownst to me and Seymour, my postdoc, despite this rejection, went into Seymour's lab at night and collaborated with a postdoc of Seymour's, a French woman, to see if they could replicate Seymour's anecdotal observation. And, they could. It worked spectacularly. They showed Seymour the data the next morning, and they showed me the data. Then I get a call from Seymour, and he says to me, "Well, I guess we're in bed together."

ZIERLER: [laughs]

ANDERSON: Because having seen the data and seen the result, there was no way Seymour was going to let go of that. In fact, to the contrary, he basically recruited my fly postdoc into his laboratory. My fly postdoc [Greg Suh] spent most of his time as a postdoc in Seymour's lab, which was great for Seymour because I was footing the bill for the postdoc. It was a classic Seymour maneuver. That's how the first paper that had Seymour and me and also Richard Axel, who became involved in the collaboration for other reasons, came out. It was a Nature paper published in 2004 on which Richard, Seymour, and I were the three senior co-investigators. It came out just before Richard received his Nobel Prize in 2004. I was very grateful that Seymour and Richard allowed me to be the senior author on the paper, because I was at a stage in my career where that made a difference, and it didn't make a difference to Seymour and Richard because they were so well-known. But the fact is that all of us contributed the same, or rather contributed the least. At the end of the paper—in Nature, you have to have an author contribution section which says who did what. You say which postdocs did which experiment. Then it says "DA, SB, and RA—David Anderson, Seymour Benzer, and Richard Axel—made equally minimal contributions to this paper." And that's in print. Originally, believe it or not, when I sent it in, I wanted—and this was by mutual agreement with Richard and Seymour—we wanted it to say, "These three Jews made equally minimal contributions to this paper." [laughs]

ZIERLER: [laughs]

ANDERSON: Nature would not let us put that in. But it does say "DA, SB, and RA made equally minimal contributions to this paper." I figured no one would ever see it, but about five years ago, I was at a scientific meeting, and there was a talk given by a professor from UC San Francisco who was one of Seymour's first postdocs in his fly behavior phase. He put up a slide at the end of his talk, when he was giving the credit slide, and talking about his role in the project, and he put up [a slide containing just] that quote from our paper, and he said, "I think all PIs should have a statement like this at the end of their paper, that they made minimal contributions to the work described in the paper." That was very gratifying.

ZIERLER: In the way that for your father, as a physicist, Caltech obviously loomed very large, for you, before you met Seymour Benzer, coming up in biology, did you have an appreciation of Caltech Biology, even if it didn't have the same status or stature as Physics did?

ANDERSON: I was well aware of the important work that had been done in biology by people who were at Caltech. This is something you learn about even in your advanced placement biology course in high school—Thomas Hunt Morgan's work on gene mapping in Drosophila, George Beadle's work on the one-gene, one-enzyme hypothesis for which they each won the Nobel Prize. Max Delbrück's work on using bacteriophage to map genes and understand gene control. I knew of Seymour's work on fruit fly behavior. In fact, I think I still have a copy—I had a copy—of his 1967 Scientific American article where he wrote about how you could measure behavior in fruit flies, and he talked about individual fruit flies [within a population of flies] as atoms of behavior, as he called them. But what I don't recall was knowing that that work was done at Caltech. That is, it was associated in my mind with the people, but it wasn't—but I have to say in fairness, none of the basic biology that I learned about was associated with a particular institution. It was associated with the people that did the work. There were some very famous experiments done at Caltech during what's called the Golden Age of Molecular Biology, even by people that didn't win a Nobel Prize but maybe should have. The famous so-called Meselson-Stahl experiment that was done by Matthew Meselson and Frank Stahl, where they used ultracentrifugation—it was basically biophysical chemistry and heavy isotope labeling—to show that when DNA replicated, it replicated what's called semi-conservatively. That is, if you imagine the DNA double helix as having a Watson strand and a Crick strand, then instead of both original strands staying together in one daughter cell and both new copies going to the other, each strand is copied and each daughter cell ends up with one of the original template strands paired with a newly made copy. So one daughter gets the original Watson strand, and the other one gets the original Crick strand, plus a copy of the Crick strand or a copy of the Watson strand, respectively. That famous experiment was done at Caltech. As a result of that technology also, Meselson, together with François Jacob and Sydney Brenner, discovered messenger RNA at Caltech. They did the first experiment that provided solid experimental evidence of messenger RNA, even though Francis Crick had speculated for a long time that messenger RNA did exist. So, I think it was more maybe after I came to Caltech that I became more aware of its rich history in biology, and maybe that's just because I hadn't really read that much history of science in biology, other than Horace Freeland Judson's classic book The Eighth Day of Creation. I knew about the people; I just didn't know about the place.

ZIERLER: What about this joke that you told me in our first discussion that Caltech is such a quantitative place that biology is almost a humanities discipline?

ANDERSON: Yes.

ZIERLER: Was that a reputation that you appreciated before you came, or only afterwards?

ANDERSON: It was really only after I came. In fact, I actually was teaching an introductory elective biology course for non-biologists in my first seven or eight years at Caltech, and I remember I was having lunch or dinner with one of my students in the Rathskeller at the Athenaeum, and she asked me, "Why did you come to Caltech if you're interested in biology?" It's like, why would a biologist want to come to Caltech? Caltech is about physics, astrophysics, particle physics, theoretical physics, engineering, math. Why would you come to Caltech? In other words, the implication also is it has no reputation in biology. I think that showed me the amount of ignorance that even Caltech undergraduates had at the time, of the rich history of biology at Caltech. I don't know if you know this, but it wasn't until I think maybe between 15 and 20 years ago, or 10 and 15 years ago—I can't remember—that an introductory course in biology, a one-quarter course, became a core requirement for all Caltech undergraduates. At the time that I came to Caltech, every undergraduate had to take two full years of math and physics. That is six quarters of math and physics, and physics up through quantum mechanics and waves, and differential equations and linear algebra, kind of like the Greek and Latin of Caltech. But it was possible and common for a Caltech student to graduate with an undergraduate degree in science and not know that DNA was a double helix, or maybe not even know what DNA was.

It took a huge amount of fighting at the faculty level to shoehorn in even one quarter of an introductory biology course. Because it's a zero-sum game (the number of quarters of required courses), and so if you're going to add a new required course, something has got to give, and something had to give in the six-quarter required physics and math curriculum, and I forget exactly what it was. Eventually they relaxed the Institute-wide requirement for two full years of math and physics, and they reduced the number of quarters that all students have to take, but then they left it to the individual majors to determine whether the students had to go on and take quantum mechanics and waves. I guess if you're foolish enough to come to Caltech to be an undergraduate economics major, now you don't have to take quantum mechanics. I can't imagine that quantum mechanics is a required course for a degree in economics here. But who knows? It could be. I just encounter this constantly, this view that biology is an innumerate science, that it's all about description and memorization, and there are no concepts or principles, and no quantification in biology, which couldn't be further from the truth.

ZIERLER: Last question for today. When you got to Caltech, Lee Hood was at the height of his laboratory powers. His group had gotten enormous. He had fully embraced all of these engineering marvels. Famously, he got pushback for that, starting with Murph Goldberger, who admonished him to focus on "small science, because that's the kind of science that we do at Caltech."

ANDERSON: See, I didn't know that.

ZIERLER: Obviously that did not register with you at the time?

ANDERSON: No, although I was well aware, as was everybody else, how enormous Lee Hood's lab was. I think at some point it had over 100 people in it. There was a joke that went around—maybe it was apocryphal; maybe it's true—and in this story, Lee Hood is walking through his lab, and he sees somebody at a microscope that he doesn't recognize, and he walks over to the person and says, "Look, I want you to know that just because I haven't spoken with you in a while it doesn't mean that I'm not interested in your project and I'm really excited about your research and what you're doing and we need to get together and have a meeting soon" and the guy looks at him with this puzzled expression; it turns out it was a microscope repairman.

ZIERLER: [laughs] That's great. His embrace of the coming biotechnology revolution, really bringing genetics to the forefront, was that on your radar at the time? Did that seem like a promising avenue of research that would one day be relevant to you?

ANDERSON: I thought so, and certainly the background that I came from, cell biology and molecular biology, benefited from that enormously. I could see that, and that was one of the things that attracted me to Caltech, that there was all this technology being developed for microsequencing of proteins in particular, although I didn't have a reason to take advantage of it when I was in the early phases of my research. I certainly thought it was exciting, but at the same time I was intimidated and a little put off by this huge machine that Lee Hood had in his lab. In fact, not only did Lee fail to realize that this person was a microscope repairman, I remember going to a conference in Colorado within about a year after I arrived at Caltech, and seeing Lee there, and saying "hi" to him and having Lee look at me like he had no idea who I was. I know now that there's actually a neurological condition, and I think Pamela Bjorkman told me she has this too, that some people are only able to recognize the faces of people they know in certain contexts, and outside of that context, they can't recognize the person, because the memory of the face is inextricably glued to the memory of the context. So, Lee ignored and didn't recognize me at the meeting in Colorado, and two weeks later I ran into him at a meeting in the Kerckhoff Lecture Room and he was all, "Hi, how are you doing? Nice to see you." He clearly recognized me there, but he didn't recognize me at the meeting in Colorado.

ZIERLER: On that note, we'll pick up from 1986 going forward for next time.

[End of Recording]

ZIERLER: This is David Zierler, Director of the Caltech Heritage Project. It is Tuesday, May 31st, 2022. It is great to be back with Professor David J. Anderson. David, finally, great to be with you again. Thank you so much.

ANDERSON: You're welcome, and thank you for your interest.

ZIERLER: We're going to go back to 1986. You join the faculty at Caltech. We've touched on this before. It bears a little more detail at this point. How well set were you with your research agenda coming to Caltech? Did you have a solid plan in place, or did you know just by virtue of Caltech and all of its idiosyncrasies, particularly in biology, that you would be well served to have an open mind about different areas of research you might undertake at the beginning of your faculty career?

ANDERSON: That's a good question. I think it was really more the former. I came with a fairly focused view of what I wanted to do, and the area that I wanted to work in. I certainly hoped that I would have influence and input from my Caltech colleagues, but I thought that it would be a mistake to come in and just sort of drift wherever the winds blew me. That's not why I was hired here. But, it did happen over the course of my time at Caltech. It's just that when I first got here, I felt like I was under a lot of pressure, like any new assistant professor, to get my lab up and running, and to produce papers and get people in the lab. I wanted to focus on what I knew I could do, at least in the short term.

ZIERLER: Neural crest cells, was that the umbrella area through which everything else fit at that point?

ANDERSON: More broadly, I think, it was understanding how embryonic precursors to neurons and glia select their fate—what kind of neuron are they going to become, what kind of glia are they going to become. I focused on the peripheral nervous system because at the time, it was more experimentally accessible and simpler than the brain and the spinal cord, the central nervous system. The peripheral nervous system comes from the neural crest, and so that's the context in which my studies fit. But I wasn't only studying the most primitive neural crest stem cells. In fact, we didn't identify those cells until after I had been at Caltech for six years. I was studying later-stage developmental precursors that were present in embryos long after the neural crest had come and gone.

ZIERLER: This idea that at Caltech, biology is not front and center in the way that physics and chemistry is, how do you weigh that against the fact that Caltech has this tremendous history in biology going all the way back to Thomas Hunt Morgan? What were some of the dualities as you experienced them as a young faculty member?

ANDERSON: What you say is certainly true about the view of biology at Caltech. In fact, I remember I was teaching a course in introductory biology for non-biologists, and I was having lunch with an undergraduate student in my class in the Rathskeller, and she said, "Why did you come to Caltech if you're interested in biology?" That sort of just encapsulated everything for me. I think that there is this disconnect between the public perception of Caltech, which is mainly earthquake science and rocket science; Caltech's own perception of what it's famous for, which is physics and astronomy; and then the fact that some very important biological research has been done in the history of Caltech, absolutely essential. Many Nobel Prizes in Physiology or Medicine were won here over the course of the 20th century. I think it reflects the fact that within the biology community, Caltech was recognized for a long time as a place where important schools of thought and experimentation were developing. I'm talking about Thomas Hunt Morgan's fruit fly genetics approach, and later Max Delbrück's phage genetics approach which attracted Seymour Benzer here. But these were not areas that most people in physics and engineering at Caltech were, I think, interested in or paying attention to. That probably accounts for some of the disconnect. Certainly, it was reflected in the fact that the majority of undergraduates who attend Caltech don't come here to study biology; they come here to study engineering or physics and math.

ZIERLER: Tell me about setting up your laboratory. What were some of the key research questions that served as a guide for how you wanted the lab to function, what kinds of instruments were most important to you?

ANDERSON: I wanted the lab to function at two levels of biological analysis. I wanted it to function at the level of cell biology and at the level of molecular biology. Different questions were asked at those levels, and different instrumentation was required. An example of a cell biology question of the type that we would ask is, "Can we isolate precursors to a particular type of neuron or secretory cell from developing rat embryos at a certain stage of development?" If we could isolate those cells and culture them outside of the embryo in petri dishes, we could ask how committed they were to one direction of differentiation, or could they be pushed into other directions of differentiation by adding appropriate signals to the culture medium, and what were those differentiation factors. So we did a fair amount of that type of research for the first 20 years that I was here. I would say that one of the key pieces of instrumentation for that is a fluorescence-activated cell sorter, which is a workhorse tool in cellular immunology that people like Ellen Rothenberg depend on. In fact, Ellen [and her lab manager Rochelle Diamond] ran and continue to run the Cell Sorter Facility here. This is a device that allows you to pluck out rare populations of cells from a mixture of cells according to what types of proteins they have on their surface, if you have antibodies that recognize those proteins. Actually, they don't have to be proteins; they can be lipids or carbohydrates. But the idea is that you look for antibodies that detect specific cell types; those antibodies are fluorescently labeled, and then you run this mixture of cells, a suspension of cells, some of which are the ones you want, which are fluorescently labeled, and some of which are not labeled, and those are the ones that you don't want. Then basically each cell comes out in a little droplet and a laser shines on it to determine if the cell in the droplet is fluorescent or not, and then if it is, the droplet is directed to be deposited in one tube, and if it isn't, it's deposited in another tube. You can multiplex this. You can use more than one antibody to make your selection more specific. You can isolate multiple populations at the same time. That was really a workhorse instrument that we set up, and that I started working on as soon as we got here.

Then the second level was molecular biology. There, we were trying to understand the genes that act inside these cell populations to control the decision of whether the cell is going to become a neuron or a glial cell or a non-neuronal cell of some kind, a neuro-endocrine cell. That involved being able to isolate and clone pieces of genes that we thought had key information that dictated where and when they would be turned on and turned off in specific cell types, and then to use those pieces of information as a sort of fish hook to pull out proteins that interact with and recognize those regulatory sequences, which were, we anticipated, the sort of master control factors that would guide cells in one particular direction [of differentiation] or another. That required all of the instrumentation for molecular biology, high-speed centrifuges and ultracentrifuges for purifying isolated DNA, facilities for radiolabeling DNA, making [DNA] libraries, and basically that type of molecular biology.

Then both approaches relied heavily on tissue culture, that is, systems where we could grow particular cells in petri dishes where we could either manipulate them from the "outside," by adding various signals or growth factors to the culture medium and asking what it did to the cells, or manipulate them from the "inside" by genetically modifying the cells to try to push them in one direction [of differentiation] or the other. The questions were really—What are the precursor cells that are present in the developing peripheral nervous system in different parts of the embryo at different stages of development? What are their developmental capabilities by the time they've migrated there from the neural crest? What are the factors that act to push them in one direction of differentiation versus another? And what do those factors that act outside the cell do to the genetic instructions inside those cells to compel them to choose one pathway or another of differentiation?

ZIERLER: To what extent were these questions a function of sequencing based on what you had done with Richard Axel at Columbia?

ANDERSON: What do you mean by sequencing?

ZIERLER: This is the next logical thing to work on.

ANDERSON: Yeah, it was definitely sequencing based on what I had done with Richard at Columbia. It was fortunate, at least from my perspective, or maybe unfortunate, that I was working on a project with Richard that I had concocted myself and that was really outside the mainstream interests of his work, and so he didn't really care to follow up on it after I left, and I was free to continue on it. Actually I did the same thing when I was a PhD student in Günter Blobel's lab, and that's why I'm fond of saying that I made no contributions to the Nobel Prize-winning work of not one but two of my mentors.

ZIERLER: As I'm sure you've heard, junior faculty, when they come to Caltech, need to appreciate, because of Caltech's smallness, that they're not going to find five or seven likeminded faculty. There just aren't the numbers here. Generally, did you find that to be the case, and if so, how did that affect your research agenda?

ANDERSON: There were really only one or, at most, two people that I interacted closely with. One of them was the late Paul Patterson, who was my next-door neighbor in the Beckman Behavioral Biology Building. He was one of the main reasons that I came here. He was not a molecular biologist; he was a cellular neuroscientist. His work had laid the groundwork for the things that I was trying to pursue in Richard Axel's laboratory and that I was pursuing as a starting assistant professor. He was more senior than me. He had been hired as a tenured professor from Harvard Medical School a few years before I came. The other person that really turned out to be a very close colleague was Barbara Wold, who had also come from Richard Axel's lab as a postdoc, but who had arrived about two or three years before me. She and I had shared interests in regulation of gene expression but also the shared experience of having been through Richard's lab and trying to establish our independence 3,000 miles away in Southern California. She was a very important colleague.

Another important person was the late Norman Davidson, who had come to work originally in chemistry with Linus Pauling on aspects of DNA biochemistry, and who by the time I got there had drifted over more into biology and molecular biology, and was a close collaborator of Henry Lester's. Norman was the sort of paterfamilias of the Biology Division, and he took a lot of time to make junior faculty feel a little more at home, less stressed. He would organize Sunday trips to go see movies on the Westside of Los Angeles, because there weren't really any places to see movies, at least anything other than Hollywood blockbuster movies, in Pasadena at the time. So Norman was a very important influence as well. But Barbara was the person that I really had the most interactions with; not daily, but many times a week.

ZIERLER: On the personal side, were you prepared for how un-Jewish a place Pasadena and Caltech was?

ANDERSON: No. And it really didn't hit me until I got off the plane at LAX and was walking towards the exit how—if you'll forgive the expression—how "white bread" everybody looked around me, after coming from New York where, first of all, people looked much more European. They're much more heterogeneous and diverse. I was not really prepared for that. I sought out and was friendly with people that had sort of made a similar journey from New York out to California and who were Jewish, like Seymour Benzer, although he was of a generation before me, and then later Mel Simon, who was chair for a while and had come from the Bronx. But I did feel like a fish out of water and it took me quite a while to adjust to that. I don't know if I ever really adjusted to it.

ZIERLER: Did you take on graduate students right away?

ANDERSON: Yes, unfortunately I did. Many of them, I have to say, were not so good. Many of them were students that were switching laboratories, because the first lab they had joined didn't work out for them. I realized only too late that that was more often a function of the student than it was of the laboratory. This is, I would say, part of the problem with Caltech. That is, they give you so much space as an assistant professor that you feel obligated to fill it, so you are trying to recruit people into your lab probably faster than you should at that stage. Particularly when you're starting out, you're not necessarily going to get the best students and postdocs. I would say both with respect to students and postdocs that I took, the first few years were pretty rocky. Two pieces of advice I give my own postdocs when they're going to start their lab—number one, just because a student wants to work in your lab doesn't mean you have to take them in your laboratory. You're allowed to say, "No, I don't think this is a good fit for you." Secondly, no postdoc is better than a bad postdoc, because a bad postdoc can really turn into a huge sink on your time and energy. Moreover, they're really difficult to get rid of, because they can't get jobs, and so basically you have to fire them or encourage them to find another postdoc position.

ZIERLER: What did you learn from those first few years of experience in terms of screening for good graduate students and postdocs? What do you look for?

ANDERSON: That's a good question. I think it has taken me a long time, too long, to trust my gut. It turns out in retrospect that my gut is very accurate about predicting how well somebody will do in my lab. It is a subjective, unconscious reaction, and it doesn't matter how smart or articulate the person seems; if I take them against what my gut is telling me, nine times out of ten they don't work out. Being smart, being book-smart and articulate, is a necessary but not a sufficient condition. The student really has to be hungry and ambitious and want to accomplish something, and really want to be a scientist.

ZIERLER: What about creativity? Where do you rank creativity among the values you're looking for, the attributes?

ANDERSON: I rank it very highly, but it's something that's hard for me to screen. I don't know what sort of question to ask or test to give somebody to measure their creativity, and there are many different kinds of creativity. I have to say at the beginning, I was less concerned about creativity than I was about finding people who could at least execute on the vision that I had for the experiments that I thought that I wanted to do. Not that those were necessarily the best experiments, but that's how I was thinking about it at the time.

ZIERLER: On the undergraduate side, first, what were the teaching expectations prior to tenure?

ANDERSON: The teaching expectations were that I was going to teach two courses in alternating years, each of which was an undergraduate or mid-level lecture course that I was going to share with a senior professor. One of those courses was a course in developmental neuroscience, which I taught with Paul Patterson. That was a 100-level course. Then I was also tasked with teaching a course in biology for non-biology majors, because there was no requirement at that time that Caltech undergraduates take any biology as part of their core curriculum requirements, which is again a symbol of how the Caltech faculty and administration saw biology fitting into the larger Caltech mission. It's kind of like a humanities or a social science, something that you took as a distribution requirement if you were interested, but certainly not something that was fundamental that everybody needed to know.

ZIERLER: Were there senior faculty in biology who fought against that bias?

ANDERSON: I don't know if I could identify any who did. I think most of my senior colleagues basically just wanted to be left alone, and they were not interested in taking that on. I think there were some, like Ellen Rothenberg, although she was a junior faculty member at the time that I first came. But she has always been very interested in undergraduate education, and I think she was one of the people that spearheaded the drive to shoehorn in one quarter of a required biology course for all Caltech undergraduates. Just to put in perspective what that means, we're on a quarter system here. We have three terms a year. The Caltech undergraduates were required to take two full years, which I guess means six quarters, of physics and math, up through quantum mechanics and waves. We're talking six quarters of required physics and math, versus one quarter of biology. And even that one quarter of biology didn't get instituted until I think ten years or maybe 15 years ago. You should look that up; I don't know the exact date. But it took a long time. I used to joke when I was teaching this course that liberal arts schools have Physics for Poets, and we don't have any poets at Caltech; what we have is "Biology for Physicists."

ZIERLER: [laughs]

ANDERSON: I was teaching Biology for Physicists, which was kind of a thankless job. I had to find those areas of biology that met the intersection of, one, being important and fundamental in my opinion, and two, having some sort of quantitative basis or formal logic to them that the physics, math, and engineering students in the class could relate to. A lot of that turned out to be classical genetics, but I had to teach everything in that course. I taught classical genetics. I taught photosynthesis and metabolism. I taught endocrinological feedback control mechanisms because they incorporated some engineering principles and some aspects of basic cell and molecular biology. It was a mishmash.

ZIERLER: In showing that biology could be taught quantitatively, did you ever convert any physics or engineering students to be biology majors?

ANDERSON: No, I don't think so. I think at best I may have surprised some of them into realizing that biology was not just memorization of random facts as they might have been exposed to in high school, but that there actually were concepts and fundamental laws that were important to understand.

ZIERLER: Were computers important in setting up your lab?

ANDERSON: To some extent, but nowhere near as much as they are now. I'll just say right off the bat that very few of the types of experiments that we were doing required anything more sophisticated than an Excel spreadsheet or a package to do statistical analysis. Very rudimentary. Computers were mainly important at that time for—I used them for teaching. I used the MacDraw programs. I was working on one of the first—when I came to Caltech, I was still working on a PC platform and I discovered the Macintosh platform after I was here, and it was just like a revelation, that instead of just doing everything with command line entries and dot backslash filename dot star close bracket, I could actually draw on something, and draw a picture on the computer and print it out on a transparency and show that to my students. At that time, there weren't even good graphics packages available. Many of our papers had photomicrographs in them, pictures taken of cells through a microscope to illustrate the kinds of things that we were looking at qualitatively in addition to the quantitative data. We would take the pictures on film, on the camera attached to the microscope, have them developed and printed. Then when we were preparing figures, we would get a big piece of poster board and tape or glue the figures, the photos, down to the poster board, and put lettering on, rub-on letters or other sorts of letters, as best we could, and then photograph those, and they would be submitted with the manuscript. There was a transition—and I don't even remember when this was—when suddenly everything became digital, and we no longer had to take rolls of film to the photo shop to be developed, and we no longer had to bring our huge montages across campus downstairs to the basement of Spalding to be photographed by the Graphic Arts Department under a huge camera on a stand so that they could be reduced to an 8x10 or 8.5x11-sized photograph for submission.

ZIERLER: Another question that I'll ask you to compare what's happening today to when you first joined the faculty—nowadays, there's so many collaborations with the social scientists at Caltech, specifically with biology. Was that cross-pollination happening when you joined the faculty, or that was still a little too early?

ANDERSON: I think it was still a little too early. Most of that collaboration as it exists now is around the area of cognitive neuroscience and affective neuroscience, and since I was a developmental neuroscientist, there was not much reason for me personally to have interactions with people in HSS. I just can't remember off the top of my head if there were any colleagues of mine who did. I suspect not, but I'm not sure.

ZIERLER: If you'll indulge me, you have a list of awards from when you joined the faculty through the early 1990s. There's a question at the end of it. I'll just run through them to jog your memory. You come as an NSF Presidential Young Investigator. 1987, you're a Searle Scholar. 1988, you get a Sloan Fellowship. Also 1988, a Pew Fellowship. You're a Javits Investigator for NIH in 1989. Then you win the Herrick Award in 1990. Among those, which are the ones that are really a feather in your cap and helpful for building a national reputation, and which are really fundamental for allowing you to do the kind of research that you want to do?

ANDERSON: I would say all but the last one were grant awards. They were competitive awards that are available to junior faculty to apply for and which are really critical, because it's very hard for junior faculty to get NIH grants, although I was very fortunate in being able to get the first NIH grant that I submitted, which I did even before I came to Caltech. I came to Caltech when I was 30. Nowadays, the average age at which most young investigators get their first NIH grant is 40 to 42 or something like that. These things like Pew, Sloan, Presidential Young Investigator, all those other things, are just a standard set of junior faculty research awards that people apply for. They're fairly prestigious. They're highly competitive. The good thing about being at Caltech at that time is that they only really hired one assistant professor a year, or at most two, and since for most of these awards the university or the department has to put up one of its faculty, I didn't have any internal competition, because there weren't five other junior faculty members who all wanted to get these awards at any one time. I was basically the only game in town, and that allowed me to apply for all of these things. Some of them I got, and some of them I didn't get. Our best junior faculty continue to get these awards even today.

The Charles Judson Herrick Award in Comparative Neuroanatomy I thought was pretty funny, because the last thing I ever thought of myself as was a neuroanatomist. Anyhow, it wasn't a research grant; it was an actual award with a plaque or something like that. It comes from the American Society for Neuroanatomists, and I think the reason that I got it is that I was nominated by—I don't know this for a fact, but I suspect—I was nominated by Story Landis, who was one of my course instructors in Woods Hole in the summer Neurobiology course in 1979. She was a card-carrying neuroanatomist, electron microscopist, and she has been a huge source of support throughout my entire career. I think she was trying to help me by getting me this award. In the scale of awards in neuroscience, it's not a particularly important one, but it was a nice thing to get.

ZIERLER: What were the organisms you were working with in the early years of your lab?

ANDERSON: Pretty much only rats, not even mice. That shows you—we were not doing any kind of genetics at that stage. We were doing molecular biology, molecular genetics, but not allele-level classical transmission genetics. For all of the cell biology and molecular biology experiments that we were doing, rat cells work better than mouse cells. They grow better in culture. They're hardier. It wasn't until we started knocking genes out and testing DNA regulatory elements that we started working in mice, in transgenic mice. That I think kicked in starting in the late 1980s, early 1990s. Even there, the mouse work was separate from everything that we were doing in rats, in that all the mouse work was in vivo. We would construct genetically modified mice lacking a certain gene, for example, and then look in the embryos of those mice to see what aspect of neural development was perturbed and how. But the tissue culture work was all done with rat cells. It wasn't until later that we started to try to combine those things. Even then, it was hard, because primary mouse cells were and are so difficult to grow in culture.

ZIERLER: Drosophila is not part of the equation yet?

ANDERSON: No. I had great Drosophila envy because it's such a powerful system [for studying development]. The thing that was so hard about what we were doing, I realize in retrospect, is that we had no systematic approach that we could take to solve the problem that we were trying to solve. We had to guess at what genes might be important, say, based on homology to Drosophila, and test hypotheses, many of which were one-sided hypotheses. That is, if the experiment turned out to support the hypothesis, that was great, but if it didn't, we didn't really learn anything and we had to pick another hypothesis. I recall it being a very frustrating and stressful way to do science, because I was constantly having to rack my brains to think about how to take the next step, whereas when you have genetics as your guide, the genetics tells you what the next steps are, and you follow the results of the genetics. That's why it's a systematic approach and such a powerful approach. I just didn't have the confidence at that stage to start a whole new system [organism] in the lab.

ZIERLER: From those early years, building up the lab, figuring out how to get good graduate students and postdocs, is there either a paper or an experiment that you associate in your memory with having the lab really become a successful operation?

ANDERSON: Yeah, two papers in particular. One paper, published in 1992 in Cell, reported the isolation of a stem cell for neurons and glia from the neural crest. It was the first paper to report the isolation of a cell with stem cell-like properties, meaning that it was multipotential and self-renewing, from the nervous system. Such cells had previously been described in the hematopoietic system, in the skin, and in the gut, which are tissues that are known to turn over in adults, but they had not been described in the nervous system before. That was the first of a series of papers from a number of labs that opened the whole field of neural stem cell biology.

ZIERLER: You mean labs beyond Caltech?

ANDERSON: Yeah, but I think our paper was the first to report the isolation of a multi-potential neural stem cell.

ZIERLER: I wonder if you can just explain the significance of that in the history of the field.

ANDERSON: The concept of a stem cell is one that was developed for tissues that turn over [like blood and skin] and it allows you to think about the diversification of cell types during development as occurring from a primitive cell that spins off and divides to produce progeny that are more restricted in their developmental capacities, in particular ways. In the immune system, for example, you have a sort of an "Ur" stem cell which is the hematopoietic stem cell that produces two main lineages, one that produces T cells and B cells, the white blood cells in your bloodstream and your immune system, and the other lineage produces myeloerythroid cells, red blood cells, platelets, all of those other things. Then it's like a genealogical tree. At each stage, there's further narrowing of developmental options. People had been thinking that the nervous system might operate according to such rules, but they hadn't really tried to show that. The value of thinking about things that way is that it provides a conceptual framework for asking questions about how the system develops. What are the progenitor cells that are present at each stage of development? What are their developmental capacities? What are their precursors? What are their progeny? What factors act on them? How do they divide? How do they change? I think that was very important.

It was also important in terms of thinking about the nervous system in regenerative medicine, because the dogma at the time was that unlike the gut, unlike the skin, unlike the blood which turns over every 120 days, the brain doesn't turn over. That's true in 95% of the brain, but there are two areas of the brain, in rodents at least, in part of the hippocampus and the olfactory system, where cells do turn over, and there are stem cells. So I think establishing the concept of stem cells in early embryonic neural development, which other labs then extended into the adult, had a lot of potential for impacting regenerative medicine. People have been trying since that time to treat disorders like Parkinson's and Alzheimer's disease by transplantation of neural stem cells into the brains of people who have these diseases or disorders. The bottom line is it has been pretty disappointing. It hasn't really worked. But I think the stem cell concept has been useful for understanding neural development.

There were two review articles that I wrote that I saw as sort of bookending this period. Both were published in the journal Neuron. One was a review article called "The Neural Crest Cell Lineage Problem: Neuropoiesis?" I coined the term or borrowed the term "neuropoiesis" as the sort of neuronal version of hematopoiesis, which is the process by which all of the blood cells are formed from their precursors, and I basically laid out the questions that would obtain if it turned out that the nervous system and in particular the neural crest developed from a stem cell population. There was some indirect evidence for that. I was really strongly influenced by my neighbor Paul Patterson in thinking about that. That review [after it was published] was very influential. At that time, we used to get reprint request cards, when we published a paper, because there was nothing online. If I published a research paper and I got 50 or 60 reprint request cards, that would be impressive. For that review, I got 500 reprint request cards. I remember one of the administrative assistants in the mail office—Stephanie Canada—saying to me as she handed [me] a stack of these [reprint request cards], "Wow, that must have been some paper you published!" Because it was such a huge stack. Of course it was disappointing to me that I was only getting that kind of attention for what was sort of a perspective or theoretical piece, not a piece of experimental work. But it's characteristic, I think, of my career, that my ideas have always been better than the actual experiments that I've done.

Then at the back end was a second review that I wrote in 2000, after there was a lot more emphasis in other labs on studying neural development in the embryo and understanding the process of pattern formation: how the developing nervous system gets sort of molecularly carved up into a kind of Cartesian coordinate system, and how different cell types achieve their identity according to where in that coordinate system they're produced. The whole stem cell concept is not necessarily critical to that view. This paper was called something like "Pattern Formation and Cell Type Diversification: The Possible Versus the Actual," which was a nod to François Jacob, who used that phrase. By that I was referring to the school of studying development in tissue culture [like I did], which is studying the possible. It shows you what can happen if you take cells out of their normal environment in the embryo and you put them in a petri dish and start throwing things on them and studying their development, versus the actual, which is what actually does happen in the developing embryo. It's sort of fate versus potential. These cells grew up to be firemen. Well, did they have at some point the option to become a ballet dancer, or to become a school teacher? We won't know unless we take them out of their normal life trajectory and put them somewhere else and expose them to different influences and see if we can push them in different directions. That bookended that [period]. Those are two review articles that I am especially proud of.

Then the other research article that I guess put my lab on the map was a paper we published in 1990 in Nature. That was a molecular biology paper, not a stem cell paper. There, we were the first to describe and clone two genes from the rat genome that were homologous to two Drosophila genes that control neural development in the fruit fly. These genes encode transcription factors; they're part of a family called bHLH, which stands for basic helix-loop-helix transcription factors, that had been shown in flies to play an important role in controlling neural development, and in mammals, to play an important role in controlling muscle development, in work from the late Hal Weintraub. These seemed to be genes that told immature cells what kind of cell type they were going to be. What we showed in our paper is that mammals had [pro-neural] genes that were homologous to the fruit fly [pro-neural] genes, that those mammalian genes were expressed in the developing nervous system (and, as we showed later, functioned there like their fly counterparts), and that their sequence was more closely related to the fly genes that controlled neural development than to other mouse genes that controlled the development of different tissues like muscle. That indicated that across 500 million years of evolution, there was this parallel conservation of the sequence of these genes and the cell type whose development they controlled, which was a pretty astounding finding.

ZIERLER: What orthodoxies did that shake up?

ANDERSON: It shook up the orthodoxy that was popular in the neural development community that vertebrate neural development proceeded by mechanisms that were entirely different from the mechanisms that were described for neural development in invertebrates, specifically in grasshopper embryos and fruit fly embryos. The shorthand humorous distinction that people used to make was that the vertebrate nervous system developed by the American plan, and invertebrates developed by the European plan. What this meant was that if you're a cell that has not yet differentiated in an invertebrate embryo, you develop by the European plan, meaning you do what your ancestors did, whereas in a vertebrate embryo, you do what your neighbors do. And that's the American plan—you do what your neighbors do—versus the European plan; you do what your ancestors did. It reflected this view that everything in invertebrate neural development was sort of genetically pre-programmed, and in vertebrate development, it was not genetically pre-programmed; it played out according to what the environment of the cells was, and cells had this broad potential, and any cell could become anything it wanted to. The finding that the vertebrate nervous system was using the same genes to control the development of its neurons that fruit fly embryos were using to control the development of their neurons I think threw a big monkey-wrench into that false dichotomy.

ZIERLER: You mentioned the interest that this piqued for regenerative medicine. Did you yourself get involved? Did you become interested in translational science at all, or you kept an arm's length from that stuff?

ANDERSON: I did. I actually wound up cofounding one of the first—not the first, but one of the first biotech companies focused on developing neural stem cell technology. I cofounded it with Irving Weissman from Stanford who was a very famous hematopoietic stem cell biologist, and Rusty Gage—Fred Gage—from the Salk Institute, who is now the president of the Salk Institute. He was very interested in adult neurogenesis. Our hope was to translate some of this basic knowledge that we were learning about neural stem cells into therapies. That company was formed in 1994 and it lasted I think about 24 years and then it finally died, closed, without having ever made a single product, like many biotech companies do. That was a sobering experience and it sort of cured me of any deep interest in pursuing my fortune through biotechnology. I've sort of stayed away from it since then. But yeah, we did. We definitely tried to pursue that. I would say it's a dream that has yet to be realized.

Other than stem cells in the hematopoietic system, there are very few therapeutic applications of stem cell technology. In fact, just two months ago, maybe three, The New York Times had an article about a stem cell scientist at Harvard, Douglas Melton, who is one of the most famous stem cell scientists in the world—he has been on the cover of TIME magazine and he's the director of the Harvard Institute of Regenerative Medicine—and finally, after 35 years of work on the development of the pancreas and of the insulin-producing islet cells, the beta cells, they reported the first diabetic patient who received a transplantation of beta cells that were grown in petri dishes to produce insulin so that he didn't have to take it from an insulin pump. It just shows you the huge gap in time and technology between the conception of an idea, which is trivial to articulate ("Oh, yeah, let's figure out how to make beta cells from pancreatic stem cells, and we can transplant them into people to treat their diabetes"; people were talking about that in the early 1990s), and actually getting that into a patient, which took until 2022. That's the first patient, and it's not a randomized prospective clinical trial with placebo controls and all those other kinds of things. It's an N=1 anecdote.

ZIERLER: We talked in our last conversation about Lee Hood and the circumstances of him leaving Caltech. By the time you put this startup together in the early 1990s, were you still fighting that tide that he was, or the culture at Caltech had sufficiently changed where startup and entrepreneurialism among professors was encouraged, even?

ANDERSON: It was in transition. I should say that Lee Hood himself was very entrepreneurial, and he had been involved in I think advising a company that his friend and fellow Montana product, Irv Weissman, had set up to study stem cells with help from David Baltimore and also Si Ramo. Actually, I had a whole flirtation with Si Ramo before I set up Stem Cells Inc., which was the eventual name of my company. Si Ramo was interested. I guess he had heard me talk at some Caltech alum function. He was interested in trying to get me to set up a company on my own and was ready to incorporate it, and I just felt it was too much for one person to do, and so I bowed out. Maybe in retrospect, I should have done it. That's another conversation.

Anyhow, Lee was certainly very entrepreneurial, but at the time, Caltech for example didn't even have an Office of Technology Transfer. By the time that I cofounded Stem Cells Inc., there was a tech transfer program. I had patented a lot of things and licensed some of my patents to Stem Cells Inc., although none of them ever turned into anything that produced any revenue for Caltech at all. That may have had some impact on how Caltech handles technology transfer now. They're still very interested in promoting translational applications of their work, but they're much more careful about what they decide to invest university resources in patenting, as well they should be. Not that I don't think the things that I patented were worth patenting, but it was just very difficult to say, because it was so early in the game, whether they would turn out to be worth anything or not.

ZIERLER: I assume your tenure decision was a foregone conclusion and not very dramatic, but I wouldn't know unless I asked. Looking back, was there anything dramatic about it?

ANDERSON: Like any other assistant professor, or maybe more so than others, I was very anxious about it. But I thought things were going well enough that by the time it came around, I wasn't that worried about it. I may have asked to come up for tenure early but I can't remember if that happened or not.

ZIERLER: Was the culture in the Division to set up junior faculty to succeed, that you were supposed to be on a trajectory of tenure?

ANDERSON: Absolutely. It was not like it was at Harvard at the time, where they would hire three junior faculty but only have one senior slot open, with the expectation that two of the three would be denied tenure and the third would take that open slot. No, everybody here was hired and is hired with the expectation that if they do well they will get tenure, and there will be room for them. That was certainly good.

ZIERLER: This is often the time in the career stage where you get invited to consider offers elsewhere. Did you do that? Did you ever think seriously about moving on from Caltech?

ANDERSON: Yes. Many times. Many times. The one that I considered the most seriously at the time that I was coming up for tenure was moving to UC San Francisco, where there were a number of close friends and colleagues of mine from graduate school at Rockefeller, who had gone to UC San Francisco. In fact, I had two offers at UCSF at the time I was considering the offer from Caltech. For a variety of reasons, I turned them down. The thing that really made UCSF different from Caltech and very attractive is that it was an urban medical center, and so there was much more interaction between labs. Labs were more cramped and crowded together, and so the collision frequency between people in different labs and opportunities for collaboration and just scientific cross-fertilization were a lot greater. Caltech is a suburban campus. I often felt and still occasionally feel, but back then particularly, that I was really sort of isolated, because our labs were large. We didn't have to go to other labs to borrow other equipment. Each lab was sort of self-contained. There just wasn't a tradition of people just sort of dropping by into each other's offices to just schmooze like there was at Columbia and like there was at UC San Francisco. So I came very close to taking that offer. I don't remember why in the end I turned it down. It might have had to do with the amount of space that was available, which yes, at some level, I wanted that, because I wanted more intensity, but I didn't want such a big contraction of space. Anyhow, that was the first of several.

Then I had several flirtations with Columbia, where I did my postdoc, over the years, because I missed New York and was interested in going back to New York. Then the last major job opportunity that I considered and turned down was in 2009, which was the chance to be a Max Planck director in a Max Planck institute outside of Munich. That was very attractive, because when you're a Max Planck director, basically you don't have any boss. You don't have to write grants. You don't have to teach. They just give you a budget and you do whatever you want. Many people say it's the best job in science in the world. Two of my colleagues in biology, Gilles Laurent and Erin Schuman, who are married to each other, did leave to take jobs at a Max Planck institute in Frankfurt. Of course they weren't Jewish, but I sort of rationalized it to myself by considering it a form of reparations if I took that job.

ZIERLER: [laughs]

ANDERSON: That's another story. So yes, to follow the thread of your question about whether I considered other jobs: I did seriously consider moving to UC San Francisco. In retrospect, it might have been a good idea for me to move once in my career. Now it's kind of too late. But I think it's a good thing to sort of shake you up and put you in a different environment.

ZIERLER: The way that you talked about UCSF, Caltech not having a hospital—in what ways is that a strategic disadvantage for people like you at the individual level, and where is it a strategic disadvantage institutionally for Caltech?

ANDERSON: I don't think it was such a strategic disadvantage for me personally because I wasn't that translationally oriented. But I think that it was a disadvantage—well, depends who you talk to. If you talk to hard-core scientists, they will say that not having a medical school is a huge advantage, because you don't have to deal with the bureaucracy and the complexities of patient care and all of that kind of thing. That's true, but at the same time, if you are a basic science department in a biomedical academic medical center, you do get the benefit indirectly of funding via patient fees that are brought in by the clinical departments like cardiology and pediatrics and orthopedics and that sort of thing. And you do get exposure to problems in medicine that you don't find out about if you're a student at Caltech or if you're a faculty member at Caltech. For a while, I was trying to see if anyone was interested in the sort of obverse of an MD/PhD program, where instead of taking people who fundamentally want to be doctors and making them go through a PhD program for five years in the middle of their medical training, you would take people who know they want to be research scientists and you would expose them to the first two years of medical school, the classroom part of medical school so that they could get some familiarity with human disease and human physiology and sort of expand their intellectual horizons as to what problems are important. That never took off.

ZIERLER: As the Human Genome Project was ramping up, were you following that? Were you involved at all?

ANDERSON: I wasn't involved in the Human Genome Project, but I was certainly eager to take advantage of the Human Genome Project and to use GenBank, the genetics database, as a way of finding new genes. It definitely had a major impact on the way we thought about the molecular biology research that we were doing.

ZIERLER: After these two significant papers that you mentioned and the reputation—

ANDERSON: I never did anything else important again!

ZIERLER: [laughs] What came next as a result? Now that you had built up some momentum, what was happening at that point in the mid 1990s?

ANDERSON: We played out the sequelae of those discoveries. On the stem cell side, we identified signals, growth factors, that acted on the stem cells to control their differentiation into neurons or glia or different types of neurons. Importantly, we showed that they worked by instruction and not by selection, which was an important debate at the time. In the hematopoietic system, these growth factors work mostly by selection. That is, precursor cells stochastically choose one of a couple of possible fates, and then, having made that choice, they survive if the right growth factor is in their environment, or they die if it isn't. Whereas what we showed in the nervous system is that the growth factors actually act on the uncommitted cell to push it to choose a pathway of differentiation in one direction or another. It's not a stochastic choice followed by selection or stabilization; it's an instructive mechanism. That was very important.

We followed up on the finding of these genes that were homologs of the Drosophila genes and showed that they were in fact required for the development of certain classes of neurons in the neural crest, and we isolated other genes in that family, other flavors of genes, and showed that different members of that family were important for different kinds of neurons. For example, the peripheral nervous system has sensory neurons and it has autonomic neurons, like sympathetic and parasympathetic neurons, and we found one group of these [proneural] genes that controls the development of sensory neurons and one that controls the development of autonomic neurons. That again paralleled the situation in Drosophila, where there are also different types of neurons that are controlled by different flavors of these genes.

Then there was a third major discovery, or I think important discovery, that we made in the 1990s, which was the discovery and cloning of a transcriptional repressor we called the neuron-restricted silencer factor, or [more accurately] neuron-restrictive silencer factor, NRSF. It's not called that anymore; it's called REST, a name that somebody else gave to it for stupid reasons I can go into later. Anyhow, this is an interesting gene, because the prevailing view of how cells activate the particular genes that are appropriate to their cell type (that is, what turns on the globin gene in a red blood cell? What turns on the elastase gene in a pancreatic cell?) was that these are all positive-acting mechanisms. There are master genes that are specific for red blood cells or for pancreatic cells or liver, and they turn on a battery of genes that is appropriate for that particular cell type. That very much follows the logic that Eric Davidson was developing from his early studies. What we found in the nervous system was a DNA binding protein that did the opposite. It functioned to turn off neuronal genes in non-neuronal tissues. In other words, it was almost like it evolved to protect the rest of the body from the brain. That was a very unusual and, I don't want to say totally unprecedented, but surprising mechanism.

Over the years, that has turned out to be I think a pretty important gene that many, many people have studied. It exploded into an entire field. At the time that we cloned it, it was just my lab and one competitor lab who were racing with each other to find it. We published on it first, but it wound up getting the name that our competitors gave to it, because unknowingly, the acronym that we chose for it had already been used by the Human Genome Project to refer to some other gene, and a person from the nomenclature committee called me up and said, "You discovered this first, but we can't use your name. Do you want to change the name and give it a new name, or can we just use the name the other person gave to it?" Stupidly, I said, "Oh, just use the name the other person gave to it."

ZIERLER: [laughs]

ANDERSON: I hadn't yet read at that point the immortal statement from New York Times science writer Nicholas Wade, who, in the context of talking about the different names given to the HIV/AIDS virus, once said, "In science, as in primitive societies, to name an object is to own it."

ZIERLER: Yep. [laughs]

ANDERSON: I was just too idealistic and naïve to know that. Anyhow, it turned out that that gene was not as important in development as we thought. That was a real choice point for me. It had taken all this time to find this gene, which we didn't identify until 1995, and finding it was one of the first main objectives I came to Caltech from Columbia to achieve; in fact, the earliest initial work on it started back at Columbia in 1985, so it took me ten years to get to that point. Then, having discovered that it really wasn't that important for development, that it was more important for the functioning of adult neurons, I had to decide: do I want to stick with the problem and focus on development, in which case I'm just going to have to leave that NRSF gene for other people to study, because it will divert me from what I really wanted to study, or do I follow it up because it's important and I've devoted all this time to it? For better or worse, I chose the first path, which is stay with the problem, don't chase that [NRSF] if it is not going to bring you deeper into the problem that you want to study.

ZIERLER: Was that a gut decision?

ANDERSON: Yeah, I think that was a gut decision. I thought about it a fair amount, but in the end, it sort of was a gut decision.

ZIERLER: What did it come down to?

ANDERSON: I think just the idea that this gene wasn't what we had hoped it was going to be, like a master regulator of differentiation between neurons and non-neuronal cells. That really came out of the genetics. When we finally knocked the gene out or knocked it down, it didn't have the profound fate-switching phenotype that one would expect from a developmental regulatory gene. And I guess I just wasn't that interested in it. I could have made a career out of it like my competitor did. She spent the next 25 years working on this gene.

ZIERLER: When Tom Everhart stepped down, did you immediately see an opportunity for biology to have a moment at Caltech?

ANDERSON: No. Why would I have seen an opportunity for biology to have a moment at Caltech?

ZIERLER: Maybe because David Baltimore might be available.

ANDERSON: I was on the presidential search committee that led to the identification of David Baltimore as one of our lead candidates. It was chaired by Kip Thorne. There was a representative from each division, and I was the representative from the Biology Division. I assumed that the likelihood of getting a biologist president of Caltech was about the same as the likelihood of getting a Black president of South Africa at the height of apartheid. It was not on my radar screen. But to Kip's credit, David was on Kip's radar screen. I thought that David was not going to be interested in something like this. He had been going through a very difficult and challenging time with the false accusations of scientific misconduct that were leveled against him, the Congressional investigation that he went through, his ejection as the president of Rockefeller. I mean, from the outside, he looked like somebody that no university would want to touch with a ten-foot pole.

ZIERLER: Did you have special appreciation for what he was trying to accomplish at Rockefeller in promoting junior faculty, given your connections to Rockefeller?

ANDERSON: Absolutely. And I was not surprised that other faculty at Rockefeller who considered him a threat to their modus operandi took advantage of this opportunity to get rid of him. Anyway, that notwithstanding, the thing that changed my mind was that I called Irv Weissman, who was David Baltimore's closest friend and colleague in science, I would have to say. Those two are practically like brothers. Irv was at Stanford. I said, "Is there any chance that David would even be remotely interested in this?" I didn't know David that well back then. I just saw him as the quintessential East Coast scientist who had spent his entire career at MIT except for his brief interlude at Rockefeller. Why would he want to come out to Pasadena? On top of that, he was Jewish, no less. Irv said, "If you want David, you'll get David." That just really surprised me. I relayed that to Kip, and then I was responsible for digging up all of the background. I think I must have made over 120 phone calls to various people getting background and input on David.

ZIERLER: Did you know already that the Baltimore Case was all trumped up and bogus, or was that part of your background calls, to make sure?

ANDERSON: I had a strong suspicion that it was, but I had not followed it in so much detail, and it became more apparent when I went through all of these background checks and long discussions with people. Some of these discussions lasted an hour, an hour and a half. That was before—I didn't know Dan Kevles was writing his famous book on the Baltimore Case at the time. I didn't know Dan very well at all. I think it was no coincidence that the announcement of David's appointment as president coincided with the publication of Dan's book, and specifically with the publication of an excerpt from it in The New Yorker, which I read carefully, as did everybody else on the committee. That was the circumstance. But I really had assumed that we were going to have yet another goyishe Midwestern physicist or engineer. Although that's not fair, because I think Harold Brown was Jewish. I'm not sure.

ZIERLER: He was, but he didn't advertise it.

ANDERSON: Yeah, he didn't advertise it. And I think we had one other Jewish president. I can't remember—Murph Goldberger? I think he was Jewish.

ZIERLER: Yes, of course.

ANDERSON: But we also had a very long and infamous history of anti-Semitism at Caltech under Robert Millikan, who (and I think Elliot will know more about this than I will) made specific comments when he was recruiting people to the newly founded Biology Division, trying to keep Jews out of the Biology Department because he thought they wouldn't be good for academia.

ZIERLER: And there was no analog for Einstein in biology as far as he was concerned. That's where he would have made an exception.

ANDERSON: Nope. No, there wasn't.

ZIERLER: Did Biology change as a result of David Baltimore coming here, do you think?

ANDERSON: No. I don't think so. I think David was very explicit about the fact that he came here to be the president of Caltech, not the president of Biology at Caltech. At that time, Biology was continuing to be plagued with all kinds of infighting and problems. I remember David coming to one of his first Biology faculty meetings as president, and one of our more notorious disgruntled faculty members who shall remain nameless just lit into him. David left at the end of that meeting and he never came back while he was president. He said, "I am not here to fix Biology. I'm here to be president of Caltech." So anything that changed in the Biology Division I don't think was a consequence directly of things that David did. To the extent that biology became more mainstream at Caltech, David's presidency might have contributed to it, but I think it more had to do with where the overall field of biology was going and the fact that it was becoming more quantitative and more computational, and so it was something that people from backgrounds in applied math and physics and computer science could find a way to relate to, which they couldn't previously.

ZIERLER: What about from the building perspective, all of the new institutes that he was looking to create. What did that do for biology or for you specifically?

ANDERSON: He tried very hard to get a neuroscience initiative going. In fact, he recruited me and Christof Koch, who was at Caltech at the time. Christof was I would say the most visible media presence of any faculty in Biology. He worked on consciousness. He published books. He was in the newspaper a lot. David had him and me develop a proposal that he submitted to Eli Broad. This was for a $100 million-plus neuroscience institute, like the Chen Institute now. It was very frustrating. It was in the very early days of this. None of our other major competitor institutions had yet set up a neuroscience institute, and we were trying to be first movers in that area. Eli turned out just not to be interested in it, despite the fact that it went through multiple cycles of revision and resubmission. In the end, he decided to pass on it, so that was very frustrating. We did get the Broad building, and that went up when David was president, but it's not specifically for neuroscientists. I would say certainly its main impact on me was that Christof and I spent a lot of time and effort writing these proposals and didn't get anywhere with them, but we did get a new biology building, which was I think the first new one in quite a while. I think David was doing what he was supposed to do, which was to be the president of Caltech, not the president of Biology.

ZIERLER: Of course during all of this time, you're an HHMI investigator. Did that influence the kinds of research questions you took on, or that was just simply a nice way to cover costs?

ANDERSON: It definitely influenced the kind of research direction that I took, in particular my decision to switch in the late 1990s from development and stem cell biology into neural circuits and behavior, yet another decision that was too far ahead of its time. At that time, the vice president for science at Hughes was Gerald Rubin, who's a famous geneticist, who later wound up being the first director of the Hughes Janelia Research Campus. I don't know if you've heard of the Hughes Janelia Farm facility. It's their first freestanding research campus. It was a $500 million facility built in Northern Virginia that's a really state-of-the-art research facility. Anyway, Gerry was the vice president and in charge of the investigators program at HHMI. He made a very strong argument, as part of his take on the Institute, that HHMI support is not there simply to provide an easy way for you to get money to support the work that you would have gotten funded by NIH if you weren't a Hughes investigator. It's to give you the opportunity to do things that you couldn't do or wouldn't be able to do if you weren't a Hughes investigator. He implied that investigators who came up for review every five years would be judged on whether they used HHMI funding to really push themselves into new directions, or whether they were just using it to supplement their NIH funding. I took that seriously. I don't know how many other investigators took it as seriously as I did. That's when I really decided to throw caution to the winds and change not only fields, but organisms. I started out working on mice in my neuroscience work but rapidly adopted fruit flies because the mouse work was going so unbelievably slowly. Flies were a faster system, and I felt like I had already stepped off a cliff making this radical change in fields, so what was the big deal about taking on another organism at the same time?

ZIERLER: To clarify, being an investigator for HHMI, does that give you the kind of leeway to make those dramatic mid-career switches that might not otherwise be possible?

ANDERSON: If you want to, it does. But I also have to say that I had help from Caltech that I'm really grateful for. I had support from Elliot Meyerowitz, who was chair around that time, and Elliot is another great example of somebody who made a major mid-career switch, from studying Drosophila gene expression to studying plants. That was without HHMI support, although he is now an HHMI investigator. Later, I had support from Mel Simon, the next division chair. They gave me temporary surge space that I could use to set up a fly facility, the microscopes that I needed, and behavior rooms, to start to try to do the kinds of things that I was doing, because I couldn't just turn my regular lab over into that. It required really an entire change in infrastructure. The institutional support was really important.

ZIERLER: I wonder if you ever thought about just how amazing it was, going back to fruit flies—T.H. Morgan, Ed Lewis—there's still mysteries to be understood with the fruit fly.

ANDERSON: Yep, there are. When I look at the connectome of the fruit fly brain which we now have, it's so staggeringly complicated that the idea that we're going to figure out the mammalian brain or the mouse brain at that level of understanding anytime soon seems laughable. But you never know. They may be different enough that each can contribute in its own way. But I was very happy to get into fruit flies. I really enjoyed doing fruit flies, as I told you. As I think I told you in a previous conversation, I was really disappointed when I told Seymour this and asked him to collaborate and he said he wasn't interested in doing it, although he was later forced into it by the data that his postdoc got. So, that was good.

Did I tell you about the artery-vein [discovery]? I must have told you about that. That really threw a monkey wrench into this transition from neural crest and neural development into behavior: by accident, we stumbled on what in retrospect is probably the most important discovery from my lab; at least it's the paper that I think is cited more highly than any other paper [of mine]. It's the discovery that arteries and veins are made from genetically distinct cell types that are present before the heart starts to beat, which overturned 100 years of dogma in the field of vascular biology, and it was a seminal paper. It was a total accident. At that point, I felt like I should try to follow up on it for a bit, so I made the opposite decision to the one I made with NRSF, because I was in the middle of changing directions anyway, and in the area of brain science, the wind was definitely in my face, not at my back. Nobody knew who I was. I was starting from scratch. I didn't make any super splashy contributions at the beginning. Whereas this finding in vascular biology, in angiogenesis, immediately put me on the map. So there was a huge amount of positive reinforcement to continue to develop this area. For a while, in the late 1990s, I had a tripartite lab. I had one part that was finishing up the loose ends of the neural stem cell and development work. I had the part that was starting to push into neural circuits and emotional behaviors. Then I had the part of the lab that was studying vascular biology and angiogenesis.

ZIERLER: And the graduate students and the postdocs, did you have them split up, or were they doing everything?

ANDERSON: No, they were split up. I had different groups in each of these areas. That was really challenging to manage. In the end, I would say that the excursion into vascular biology is something I'm not sorry that I did, but it definitely slowed down the vigor and speed of my transition into neural circuit research. In fact, that didn't really take off until I published my first fly paper, which was in 2004 with Seymour Benzer and Richard Axel. The work that we published on mouse fear and mouse neural circuits [in 2003] was okay, but it was nothing that got anybody excited or distinguished me from the thousands of other people who were doing that kind of thing. That's just how it goes.

ZIERLER: You'll know more about the prestige than me—in 1999, the W. Alden Spencer Award in Neurobiology. Is that a more prestigious award?

ANDERSON: It is more prestigious, but I discount that award. It is given by Columbia University, and in 1999, Columbia was trying very hard to recruit me from Caltech, and I almost went. I came very close to going. I think they gave me that award to sort of sweeten the package as part of it. But who knows? I don't have the control [experiment]. I don't have myself in 1999 not being recruited by Columbia to know if I was going to get that award anyway.

ZIERLER: Neurobiology, do you take issue with that term as well, or that's okay?

ANDERSON: No, that's fine. In fact, that award was for the work I had done in developmental biology, not the stuff that I was just starting, because I hadn't done anything yet. In that respect, it was kind of nice. But, prizes are not a healthy thing to think about. It's nice if you get them, but you shouldn't do science because you want prizes.

ZIERLER: What were the key infrastructural challenges in transitioning to Drosophila in terms of the instrumentation, in terms of the facilities?

ANDERSON: I needed incubators to house my flies. They look like big refrigerators, and they're kept at a constant temperature and constant humidity to put vials of flies in. I needed a chamber, a room that was humidity-controlled, temperature-controlled, and light-tight, where I could do all my behavioral experiments so that the flies were not distracted by stray light or sound. I didn't have a separate behavioral room, but I got some sort of a big aluminum box with a door on it that could be used for that purpose. There were all kinds of behavioral gadgets that we invented and designed and had built. We got more involved in using computers to analyze our data because we would be tracking the velocity of the flies and their direction, and so we would measure that. We had fairly rudimentary programs to extract basic parameters like velocity and [walking] bout structure. We needed rooms with dissecting microscopes and sources of CO2 to anesthetize flies while setting up crosses. It's a whole different infrastructure, working with flies, compared to working with mice, one that I actually prefer, because it's much simpler. You don't have to go through the huge amount of paperwork that is required for vertebrate animal research. People can work with the flies right in the lab. They don't have to go down into the bowels of the animal facility to work with their mice and do their experiments sort of offsite. There's a more visible community in your lab when people are working with flies, and I had missed that. I liked that.

ZIERLER: Did you feel any chronological connection going all the way back to the late 1920s and the origins of the Division of Biology, working with fruit flies?

ANDERSON: Well, sure. When I moved from Beckman Behavioral Biology to Kerckhoff-Alles, I was in Ed Lewis's old space on the third floor. In fact, I rescued some of the big diagrams, chromosomal maps, and pictures of his models of how the bithorax complex worked, and I had them put on the wall under plexiglass to protect them. They're still there. I had the big chromosome map framed and put them there. I had Ed Lewis's picture hanging up. My two-photon microscope room was in the room that Ed used for his fly collection. Then the other wing of my lab in the third floor of Alles was Roger Sperry's old space, and I had his picture hanging up as well. So, absolutely. My strongest connection was with Seymour, because Seymour really was the founder of the field that I was working in, fly neuroscience.

ZIERLER: Last question for today, and it will set the stage for our next talk—just intellectually, what was the steepest learning curve? What did you have to do the most reading and learning for, in making this switch?

ANDERSON: I would say in the case of vertebrate neuroscience, just getting my head around the complex and obscure anatomy of the parts of the brain that I was interested in studying—the hypothalamus, the amygdala, interconnected structures—and just learning my way around the brain and starting to understand what was known and not known about the function of those different brain regions. I think that was really a big challenge. Then getting on top of all of the voluminous literature in that area. That was the refreshing thing about working in flies. There was so little known that you just didn't have a lot of literature that you needed to read about. That included fly neuroanatomy which even at that stage was fairly rudimentary. Whereas people have been doing experiments on behavior and conditioning in mice and rats and hamsters and guinea pigs since the 1930s and 1940s. There's a literature in psychology. There's a literature in neuroscience. So that was really a steep learning curve to get my head around that, but more in terms of a knowledge curve rather than a conceptual curve. Whereas the steep learning curve I have been on recently as we've gotten into computational neuroscience has been much more of a conceptual learning curve, and I have found it much more enjoyable than just having to cram my head with a bunch of facts about obscure Latin names of places like the substantia innominata, the substance with no name. The zona incerta, the zone of uncertainty.

ZIERLER: [laughs]

ANDERSON: And my favorite, the nucleus ambiguus, the nucleus that is ambiguous in its structure.

ZIERLER: [laughs] On that note, David, that's a great place to pick up for next time.

[End of Recording]

ZIERLER: This is David Zierler, Director of the Caltech Heritage Project. It is Monday, August 22nd, 2022. It is great to be back with Professor David Anderson. David, once again, great to be with you. Thank you so much.

ANDERSON: Good to be here. Thank you.

ZIERLER: I want to go back to something you said in a previous conversation, just learning the anatomy. In terms of the anatomy itself, how much of it was simply you being a student, learning the accepted science and the discovery, and how much were you really at the vanguard in terms of appreciating that there were limitations in our understanding of the anatomy and the functions of each different part of the brain?

ANDERSON: That's a good question. I would say it was maybe 70% to 80% the former, and 20% to 30% the latter, in the sense that there is a very well-established macro-scale anatomy of the rodent brain. Or I guess I should say mesoscale. It describes which brain region connects to which other brain region and where they receive connections from. There are roughly 830 different brain regions in the rat brain that neuroanatomists have recognized. They are defined by what are called their cytoarchitectonic properties. Often you can see boundaries around them in the microscope. Sometimes they are defined by chemical stains, histochemical stains. Those have been known for many decades, and nobody can hold in their head all 830-odd regions and what the connections are between them. What you have to do is focus on a particular brain region or circuit, and then start to try to get that anatomy at least under your belt from reading review articles and some of the original research articles. The limitation is that that level of description of anatomy does not establish the relationship between gene expression patterns and brain regions, or between cell types and brain regions, nor does it distinguish which cells are responsible for, say, the projections from brain area A to brain areas B, C, and D. It doesn't distinguish whether those are individual cells that project to all three regions (that is called collateralization, in neuroanatomy jargon) or whether there are distinct subpopulations of neurons, each of which projects to only one region, but which are intermingled within these brain structures. That is the 20% to 30%: the anatomy of the brain areas that we work on is what we have been trying to contribute to, and we are still trying to do that now.

ZIERLER: At this time, when you're first starting to learn the anatomy, are you referring to both Drosophila and mice?

ANDERSON: I started working on mice, because I was more familiar with techniques for studying mice from my developmental work. In fact, we didn't start Drosophila until about three or four years after we started the mouse work. Once we started the Drosophila work, then I had to learn something about Drosophila anatomy, although at the time, Drosophila anatomy was not described in the kind of detail that mammalian neuroanatomy (I'm talking about brain anatomy here, neuroanatomy) was. That has changed enormously in the last three to four years, with the appearance of the first electron microscope-level connectomes of the female Drosophila brain, but those weren't available at the time that we started the work.

ZIERLER: I wonder if you can paint a broader picture. When you began the mice work, what were the overarching questions? What were you most curious about at that stage, right at the beginning of the project?

ANDERSON: From the beginning, I was interested in emotions: how emotion states were represented in the brain, and the circuits that processed emotions. 1998 or 1999 was when, I think, I recruited the first postdoc in my lab to start working on behavior and circuits. As a result of conversations with colleagues in the field and reading the literature, it was clear that the most work done on any one emotion and its representation in the brain was on fear, primarily in rats, but that was a type of learned fear called conditioned fear. What is meant by conditioned fear is that the animal learns to become afraid of something that it wasn't previously afraid of, by experiencing that neutral stimulus simultaneously with an innately aversive stimulus, like a foot shock. That's a form of Pavlovian associative learning, and as a result of it, the next time the animal encounters the neutral stimulus, the stimulus will evoke a fear-like response in the animal. That is just to say what conditioned fear is, and I would say 70% of the literature was based on fear conditioning paradigms. What I was interested in, and am still interested in, were the pathways that underlie innate fear; that is, the fear of things that the animal doesn't have to be trained to be afraid of. We know that there are certain types of stimuli that are innately aversive to a mouse, in the absence of any prior experience. The overriding question initially that I wanted to know the answer to was: are the neural pathways and circuits that process innate fear the same as or different from the pathways that process learned fear? Do they all go through a final common fear node, or do they use completely different pathways? That was really the question, and it was a question that I thought could be answered.

ZIERLER: Just a nuts-and-bolts question: As you transferred your lab into this project, what did that mean in terms of instrumentation, in terms of the kinds of students who were fascinated with working for you? How big of a shift did it feel at that level?

ANDERSON: It was a huge shift, not so much in terms of instrumentation, but primarily in terms of recruiting people to the lab, because by that time, I guess I had become viewed as one of the leaders in the neural stem cell field, and I had postdoctoral applications from many highly qualified candidates who wanted to work on stem cells in my lab. I had to make a decision that if I was going to make this transition, I was going to stop recruiting postdocs to do stem cell work in the lab, and I was going to try to recruit postdocs to work in this other area, in which I had no previous work and no reputation at all, and there were many other labs with strong reputations in that field. That resulted in my losing many talented postdoctoral applicants. There were a couple of applicants who approached me initially to work on stem cells but then became interested enough in the behavior and circuits problem that they were willing to take a risk of working on that. But I also had to dig around and try to recruit people who didn't know about my lab but who were working in the fear field and who I thought might be interested in doing it. That was a major challenge.

ZIERLER: What about grant writing and just the economic environment of switching to a brand-new field at that stage of the career?

ANDERSON: Yeah. If I had had to rely on NIH grants, I could never have made this transition. Never. NIH will not give you money to move into a field in which you've never done any previous work. Fortunately, because I had funding from HHMI, the Howard Hughes Medical Institute, I could use that funding to do pretty much whatever I wanted. Some of that funding I diverted into supporting this behavior research. Then, there were some internal grants at Caltech that I was able to apply for, some of which I got. I should say that Elliot Meyerowitz was chair at the time, and he was particularly supportive of my making this change, perhaps because he had made a similar change from Drosophila developmental genetics to plant developmental genetics, after he had been here for a while. So, that's how I did it.

Eventually, I managed to get enough preliminary data that I could start to try to write some grants to the NIH where I could show, "Look, we know how to do this technique. We know how to do that technique. We have expertise to measure this, that, and the other behavior." I was able to get some funds that way. There were also some individual donors who I talked to. I'm thinking specifically of Ed Scolnick, who at the time was or just had stepped down from being the CEO of Merck, and he was very interested in treating mental disorders, psychiatric disorders, and was frustrated with the slow pace in the pharmaceutical industry. When I described to him my vision of using techniques from molecular genetics to try to identify not the genes and individual molecules that control a behavior, which is what most people had been doing at the time, but rather the cells and the connections that control behavior, he was sufficiently excited that he also gave me a chunk of money that I was able to use to fund a student or a postdoc. But, it was difficult, because it's not like one day I walked into the lab and said, "Okay, everybody that is doing stem cell research, drop what you're doing and leave." Or, "Drop what you're doing and switch to a behavior project." Because I had students and postdocs who were deeply involved in those kinds of research projects at the time, and I think I continued to publish papers in the stem cell field until 2003 or 2004, even though I started the transition to behavior in 1998. We didn't publish our first papers in that area until 2002 and 2003. So, there was a lot of overlap, and it was exacerbated by the fact that at the same time, somebody in my lab made an unexpected discovery about arteries and veins that distracted us for a while, because it turned out to be very important. And maybe embarrassingly, or maybe it was more humbling, I think it remains my most highly cited paper, although I haven't checked in a long time.

ZIERLER: Tell me about the research. What was so exciting about it?

ANDERSON: It was an accidental discovery. We were using genetic techniques to try to study the expression of a molecule and gene we had identified that we thought might control the migration or differentiation of neural crest cells. Remember that I was working on neural crest cells, and neural crest stem cells, in my previous incarnation. The student who was working on this generated a mouse in which he could both visualize easily where this molecule was expressed, and when, and also see what happened when you took it away. What he found initially was very disappointing to him, because he didn't really see it where he was hoping to find it on developing neural crest cells. But he saw it on branched structures in the brain, and he interpreted those as nerve fibers. He wasn't really interested, or we weren't interested, in studying nerve fiber growth. So he came to me very disappointed that he put all of this work, several years of work, into this project, and the molecule that he wanted to study was not expressed in the cells that he wanted to study. But I took a closer look at the branched structures in the embryos, the mouse embryos that he showed me that were expressing these molecules, and I realized that they were not nerve fibers; they were way too big to be nerve fibers. I realized that they were blood vessels.

That was my first and I think main contribution to this project: I realized they were blood vessels. And I also realized that not all blood vessels were labeled by this gene. As a neuroscientist, I didn't find that terribly surprising, because it had already been well established from the previous 20 years of work that different nerve fibers had different molecular labels on them and could be molecularly distinguished in a way that correlated with their branching patterns and where they went. I thought, "Well, if nerve fibers can be different from each other, maybe different kinds of blood vessels could be different from each other." I asked the student to look into what was different about the blood vessels that were labeled with the stain, and the ones that were not. The student came back and showed me that the stain was labeling cerebral arteries and not veins. That was interesting because, although I didn't know anything about the field of angiogenesis and blood vessel development and vascular biology, I was under the impression that arteries and veins, while they had different properties from each other, were basically made from the same kinds of cells and got different as a result of physiological influences after the heart started to beat. Arterial blood has differences in flow rate, oxygenation, blood pressure, et cetera, that exert different forces (mechanical forces, shear forces) on the cells, which can change their gene expression.

I wasn't sure how important this was, but I decided to cold-call one of the most well-known people at the time in the angiogenesis field, and that was Dr. Judah Folkman, who was a professor at Harvard Medical School. He had been championing the idea that one should try to treat cancer by developing drugs that choked off the tumors' blood supply rather than by treating the tumors themselves. Because the tumors were mutating, and so any drug you throw at them, eventually the cells are going to mutate and escape the drug's effects. But the blood vessels they recruit are not mutated, and so the idea was that if you had an effective anti-angiogenic drug, the cells in the blood vessels wouldn't become insensitive to it. Anyhow, he was a major figure. I don't know how I got his phone number, but I cold-called him. He didn't know me from a hole in the wall. I said, "Look, we have discovered that arteries differ from veins in the expression of a particular gene, and we can see that expression difference in developing blood vessels before the heart begins to beat. So the part of the primitive blood vessel network, if you will, that is destined to become arteries, is already expressing this gene, and the part that is destined to become veins is not expressing this gene."

ZIERLER: Which tells you what?

ANDERSON: Which tells you that there are genetically programmed differences in the cells that make up arteries and veins, and that those differences are there before the heart starts to beat. It implies that arteries and veins are made from different cell types, not from one generic cell type that just serves as plumbing. Judah said, "This is incredibly important. You've just overturned 100 years of dogma in the angiogenesis and vascular biology field. You have to follow this up, and you have to publish it." So, we did. At that point, my student [Hai U. Wang], who was really a brilliant kid, really took off on this, and he made another very interesting leap. What he found is that not only did the expression of this particular gene label developing arteries, but if he took the gene away, the development of the circulatory system and the blood vessel network was completely blocked, and the embryos died basically at the time of heartbeat. We had known from the beginning that the gene we were looking at encoded a protein on the surface of cells that was likely to bind to a receptor protein on other [interacting] cells; those receptor proteins were known, and there were multiple ones of them. The student went through and checked the expression of all of those putative receptor proteins and found one that showed the complementary pattern of being expressed on veins but not arteries, which was really amazing. I can't remember if we had found that before I called Judah Folkman or not. We may have discovered that before.

What that said is that arteries and veins are not only molecularly distinct before the onset of heartbeat, but moreover, they expressed an interacting ligand-receptor pair whose functional interactions were essential for remodeling and branching of the blood vessel network, implying that the arterial cells and the venous cells had to talk to each other. It was particularly interesting because the ligand and receptor that we had identified, which were called EphrinB2 and EphB4, were thought to signal bidirectionally. That is, there wasn't a sending cell and a receiving cell; each cell could act both to send a signal and to receive a signal, which is what you might want if you wanted interactions between endothelial cells, the cells that make up blood vessels, to be important for remodeling the blood vessel network. This was an important discovery. We wrote it up. I really couldn't take any ownership of it intellectually, because it was not something that I had been interested in or was trying to study; it was a complete accident. Yes, I recognized that it might be something important. I think it was Pasteur who said, "Chance favors the prepared mind." So, my mind was a little bit prepared, but not really prepared to study blood vessels.

That paper flew into Cell. It was published with the cover [image] of [that issue of] Cell. And it made me an instant star in the field of angiogenesis, through no desire of my own to be one. Suddenly I started getting invited to give keynote talks at angiogenesis meetings, to write review articles. This paper had a huge impact. Meanwhile, the behavior work was just struggling. There were no big discoveries. It was slow and grueling. The techniques that we needed were not there yet. So, this was a case where I was trying to switch into one field and I made a discovery in this other field. In the angiogenesis field, I really had the wind at my back, propelling me forward, and in the behavior field, I had the wind in my face. We pursued the angiogenesis [work] for a little bit, and at one point, I had three different subgroups in my lab. This was around 2000. I had a subgroup that was still finishing up their work on neural stem cells, I had a subgroup that was working on angiogenesis, and I had a subgroup that was trying to work on the mapping of neural circuits. Although the angiogenesis work was very successful, and I was able to recruit postdocs and students to work on it, I realized that I was going to have to make a decision: I couldn't both move from neural development into this new field of neural circuits and behavior and, at the same time, try to establish a line of research in angiogenesis. I think after several years of doing that, I decided to terminate the angiogenesis project. Let me see if I can see when the last paper was that we published in that [area]. I can't find it [right now]. It might have been in 2005, 2006. It was a nice paper that brought together the developmental neural work and the angiogenesis work by identifying important interactions that occurred between developing nerves and developing blood vessels. It turns out that in the embryonic limb, it's the arteries that are aligned with nerves, and the veins that are not. We were able to see and discover that because we now had this way of staining tissue to show us where the arteries are and where the veins are. We followed up on that study, which I really enjoyed, and that was a really good paper. But I just couldn't continue. It was as if I was getting divorced and trying to get remarried and developing a new relationship with somebody, and all of a sudden this mistress came into my life who was extremely seductive. That put me in the position of running a three-ring circus, which I think really slowed down the transition into neural circuits and behavior.

I think if I had to do it all over again, I might not have pursued the blood vessel story to the same extent that I did. But I figured that it was important enough that it deserved a little bit of follow-up work, to show that it wasn't just a one-off flash in the pan. That field has continued to develop and is a very robust part of angiogenesis. I just let go of it, which is not easy to do when you're getting a lot of reward and reinforcement from this area, and you're getting nothing [from the behavioral work]. I mean, no one was inviting me to talk at any neural circuit and behavior meetings. The first paper that we published on behavior, rather than being a cover article in Cell, got rejected from Neuron, which is already a second-division journal, a step down from Cell, and we wound up publishing it in the Journal of Neuroscience, which is a good journal, but it's a sort of academic society journal where not particularly exciting or groundbreaking neuroscience work is published. We certainly didn't enter that field, the mammalian neuroscience field, with the same kind of splash that we made in the angiogenesis field. In fact, if I remember correctly, although we published a few papers in dribs and drabs, we didn't have a major publication in a high-impact journal on our behavior work until 2010, which was basically eight to ten years after we started that project. It was a research article in Nature, and it was a big deal, but by that time, I had already made a lot of advances in Drosophila. I'm getting ahead of myself, but whenever you want me to talk about Drosophila and how and when that started, I can do that, too.

ZIERLER: We'll definitely get there. I wonder if you've ever reflected on just the meaning, the bigger takeaways, of this remarkable story of overturning a century's worth of dogma on a research topic that you kind of fell into by accident. What do you make of that, in terms of how science happens, how you develop a research agenda? What does all that mean?

ANDERSON: I think it's true that a lot of important discoveries happen by accident. It was very humbling, because it told me that I was not able to deliberately conceive of experiments that would produce results that were as important as ones that I stumbled on by accident. Obviously you can't make a career of just wandering in the dark and hoping that you're going to stumble on amazing discoveries by accident. In that respect, I think it was humbling. But it also really was one of the few bona fide "aha moments" that I've had in my entire 40 years of doing science, and you don't get a lot of those. You really don't get a lot of those. When I realized that those fibers were not nerve fibers but blood vessels, and that they were arteries, that was a big aha moment. Although it was an aha in the sense of, "Gee, that's interesting, I didn't know that before," I didn't immediately realize that I had overturned 100 years of dogma in a particular field. It took somebody who was an expert in the field to persuade me of that.

ZIERLER: Even after you left this research, have you paid attention to the afterlife of the angiogenesis work, what it has gone on to do as a result of your contributions?

ANDERSON: I haven't followed it closely, but I do know that unfortunately Judah Folkman's brilliant idea of treating cancer with anti-angiogenic drugs did not turn out as well as he had hoped. There is one example of that. It's a drug called Avastin, made by Genentech, and it acts by blocking a particular secreted protein called VEGF, vascular endothelial growth factor, that is essential for artery and vein development. Genentech was able to show in a clinical trial that treatment with that drug in certain kinds of terminal cancer patients extended life for about three months, and that's about the best that they could get out of it. It was tried in some other cancers. I think it was tried in breast cancer. I don't know how well it has performed, but certainly anti-angiogenic drugs have not had the impact on cancer that CAR T cell therapy or PD-L1 or PD-1 antibodies have.

On the other hand, that same drug, Avastin, sold by Genentech under a different brand name, has become a major frontline treatment for wet macular degeneration. In fact, my dad got this injected into one of his eyes for several years before he died, to clear up his macular degeneration. Because wet macular degeneration—and I think I'm getting this right—that is, there's wet and there's dry, and I think the drug works for wet, not for dry macular degeneration. Anyhow, this drug, against VEGF, strongly promotes recovery from and minimizes macular degeneration, because that is caused by an overgrowth of blood vessels in the eye. So, if you inject a protein that inhibits blood vessel growth, those excess blood vessels retract and recede, and clear up your vision. Now, that had nothing to do with anything that we discovered, but I think that has been one of the major clinical wins from all of the work that has been done on angiogenesis. There may be more. It is a huge field now. I can barely keep up with my own field, let alone follow another field like that.

ZIERLER: Just a point of clarification—when you were explaining why the NIH would not fund you at that stage, just to be clear, if this mouse work was something that you were pursuing straight out of your postdoc, the NIH would have been happy to fund this? It was more about where you were at your stage of your career, rather than the kinds of things, the topics that NIH was interested in funding at that point?

ANDERSON: No, it wasn't about the kind of topics; it was about the fact that I had no track record of working in this field. The grants that I was able to get when I started the lab, as you said, they were an outgrowth of work that I had done as a postdoc. As a result of that work, I had published papers in that field, and so I could point to those papers as evidence of my expertise in the techniques that were necessary to do those kinds of experiments. I didn't have a track record in the field of neural circuits and behavior, at least not at the beginning. If I had sent in a grant on that, they would have just triaged it. They would have dumped it in the circular file.

ZIERLER: Because you were able to pivot to funding from HHMI, would you say that your ability to do that is essentially Exhibit A for exactly why HHMI gives that level of freedom and flexibility? Is this exactly the kind of research that HHMI funds are meant to be deployed to?

ANDERSON: In principle, yes. In practice, not always. But around the time that I made the switch, I at least had support from the vice president for science at HHMI at the time, Gerald Rubin, who was a Drosophila researcher. I remember when he had this position, he made an announcement to the HHMI investigators in a talk that he gave at one of our meetings, saying precisely what you said: HHMI funding is not supposed to be used to support research that you could get an NIH grant to do. We are defeating our purpose if you do that. You should be using these funds to do work that is out of the mainstream and that you couldn't get NIH to fund because it's too conservative and not willing to take risks. Now, I don't think that meant that he thought people should do what many probably thought I was doing, at least in the stem cell field, which was to try to commit professional suicide by walking away from a field in which I had established a reputation over 20 years as a world leader, into an area where nobody knew me from a hole in the wall, hoping that I could succeed a second time in doing that. But, as I said, I took what Gerry Rubin said quite literally, maybe too literally, but it was part of the motivation.

ZIERLER: We've talked previously about the significance of David Baltimore as a biologist being president of Caltech. Of course, he is president while all of this is happening. Was that a factor for you at all? Did that give you more of a pivot space to do things that might not have been possible otherwise?

ANDERSON: Yes, it did. In fact, one of the things that David did to help is that he gave a special grant, a joint grant to me and Christof Koch, who was my colleague at the time. Christof was a physicist turned neuroscientist who had been working with the late Francis Crick on the problem of consciousness. He was interested to see if he could discover neural correlates of consciousness in an animal model. David Baltimore evidently thought, "Let's put together this molecular biologist who is interested in neural circuits and behavior with this theorist who is interested in consciousness, and see what we come up with." So, we did, and that certainly provided additional funding to help the overall mouse behavior project grow. We did publish a paper from that work, a joint paper, Christof and I, that was interesting. It was not a major high-impact paper, but it certainly was fun.

Unfortunately, Christof left to become the head of the Allen Institute. That happened long after I had started my interactions as a founding scientific advisory board member of the Allen Institute, which was around 2002 or 2003. That's another story, but helping to get the Allen Institute going, and persuading Paul Allen and his sister that they should choose as an inaugural project for the Institute a project to map gene expression throughout the adult mouse brain, has had, I think, the most impact of anything I have done since I moved into this field, in terms of how many labs use that database and what it prompted the Allen Institute to continue to do. It's not a paper I wrote; it came from advising and lobbying a very generous philanthropist with a lot of money to try to do neuroscience work in a way that wasn't being done at the time. Like I say, that's a whole other story.

ZIERLER: No, but it's right there in the narrative, the chronology, so let's stay on that for a few questions. First, how did you get to know Paul Allen? What was the initial point of connection?

ANDERSON: This is very, very specific. I was giving a lecture in a summer course at Cold Spring Harbor Lab, which at the time I think was still being run by James Watson, the co-discoverer of the DNA double helix with Francis Crick. Or at least if he [Watson] wasn't running it [the lab], he was hanging around and going to lectures. He came to my lecture, and in it I was talking about what we were trying to do to use molecular biology to elucidate the circuits and cells underlying innate fear and compare that to learned fear in rodents. After my talk, he motioned for me to come over and talk to him, and he said, "Are you doing—" this was, I think, in June of 2002, if I'm not mistaken, or maybe June of 2001, and he said, "Are you doing anything in late July?" I said, "No, why?" He said, "Well, there is something happening in Seattle that I think you might be interested in."

I don't know why Jim took a liking to me; that's something else I felt somewhat guilty and embarrassed about, given Jim's reputation for anti-Semitism and making all kinds of racist statements. But anyhow, he got me invited to a meeting in Seattle, which was a very small meeting. I think there were only maybe 15 to 20 scientists in attendance, half of whom were Nobel Prize winners. It was held in the conference room at Vulcan, which was, and is I think, the private, for-profit arm of Paul Allen's financial empire, and was something that he used to produce films about science. Anyhow, Paul and Jim Watson had become friends on one of Paul Allen's cruises, and Jim had convinced Paul Allen that he should really do something philanthropic in the area of neuroscience, particularly related to mental illness, because Jim has or had a son who suffered from schizophrenia, and was very committed to transforming neuroscience research, particularly research into brain disorders. So, Paul got that idea, and he had convened this meeting to get suggestions about what kind of research and what kind of institution he should try to support. Richard Axel, my postdoctoral advisor, was also at the meeting, along with a number of other famous scientists who either were already Nobel laureates or would become Nobel laureates, including Bob Horvitz from MIT.

Anyhow, it was very coincidental—somehow maybe too coincidental—but about a month before I went to give this talk at Cold Spring Harbor, I remember having dinner with my wife, Debra, at a restaurant on South Lake Avenue, and complaining that, for the work we were trying to do to identify gene markers for different cell types and brain regions, particularly in the amygdala, which is involved in fear, I and other people were having a lot of trouble accessing data that contained maps of gene expression, data that NIH had supported as a sort of large, distributed science project. Because the people who were collecting the data, even though they were supposed to make the data available to everyone, were being a little bit disingenuous about that, to say the least. That is, they were saying, "Well, you can have the data if you want, because the data we collected are funded by NIH, but the only way to get access to the data is through our software, our database. And NIH didn't fund the database; we funded that with private funds, and so we're not obligated to give you access to our database." I said to Debra, "This is just bullshit. What needs to be [done]—"

And the people who were doing the work were kind of doing it piecemeal. I said, "For a project like this, it needs to be done sort of Manhattan Project style, in a centralized facility that is dedicated to doing this in a consistent, quality-controlled, almost industrial manner, and which makes all of that data immediately available through databases to the entire community of neuroscientists." It was I think within three weeks of that that I met Jim Watson at Cold Spring Harbor, and I realized, in going to this meeting in front of Paul Allen, that here was a fairy godmother or fairy godfather who could wave a magic wand and make this vision happen. I pitched this idea at that meeting. It is recorded on tape, and Paul Allen even mentions it in his autobiography, Idea Man, and he credits me for the idea.

It didn't immediately go in that direction, because not every scientist who was there thought that was the right thing to do, to make maps [of gene expression in the brain]. Other people thought the Institute should try to solve a problem, like perception, or cognition, or consciousness. But there were follow-up discussions with Paul and his sister, which believe it or not happened on his yacht, the Tatoosh, a 350-foot yacht, anchored off of the Bahamas, that we were flown to in a private jet. It was like being on a yacht in one of those James Bond movies. It had private staterooms, it was incredibly luxurious, and you could go wherever you wanted on the yacht, but you couldn't leave. These were what they called charrettes. They were planning sessions to decide what project this nascent brain institute should pursue. There were actually two of them, with different groups of scientists, but I was at both, because I guess they wanted me there.

I realized quickly that having the Institute pursue a scientific problem, a neuroscientific problem like how does consciousness work, or how does the brain generate perception or cognition, was not going to work with the Allens, because they were clearly allergic, as a result of some previous failed investments, to the idea of just giving a bunch of money to a bunch of academics, letting them run free to do whatever they want, and hoping that something useful would come out of it several years later. They wanted a clear plan of research with timelines, milestones, and deliverables. I realized that this project, to map the expression of all 20,000 genes in the mouse genome across every one of the 823 regions of the mouse brain, was a project of that ilk. It could be tracked and monitored and project-planned. I think that's ultimately what made them go forward with that project, along with getting support from a number of other scientists as well.

I have to say that being affiliated with the Allen Institute as an advisor—I began advising them informally in 2001, and I have been an advisor since the Institute was founded in I think 2002 or 2003—has been one of the most rewarding things that I have done, and it has also, frankly, redounded to my benefit in my own research, because it has allowed me to collaborate with people in the Allen Institute, to get a first look at the technology and tools that they were developing, to figure out how I could apply them to the work that I was doing, and to do that in collaboration with the Allen Institute. That was a very positive driver of the mouse work. But it also, as I say, benefited the community, because they finally completed this gene expression map, which was published in 2007 in an article in Nature, "Genome-wide atlas of gene expression in the adult mouse brain." Not only did they collect all of these terabytes and terabytes and terabytes of data, but they made a fantastic web-accessible database for storing the data and cataloguing it, and importantly, they also developed algorithms for mining the data, searching the data, and doing queries on the data, and they made all of this available free of charge to the entire neuroscientific community. To this day, they tell me that of all of the different maps—and they proceeded to do maps of different kinds: maps of human brain gene expression, maps of axon fiber projections, maps of cell types—the original gene expression atlas remains the one that has the most usage of any of the products that they have put out.

ZIERLER: Was Paul Allen always interested in translational possibilities? Did you have to manage his expectations at all?

ANDERSON: That was the great thing about Paul. He was really interested in the basic science of how the brain works, because he was a programmer. He was a computer scientist. From his perspective, the brain was the most sophisticated computer on Earth, and he really wanted to understand what it is made of, how it is built, and how it works. That was very different from most philanthropists in neuroscience, who give money to cure a particular disease. Jim Simons, the billionaire former mathematician and hedge fund investor, has literally contributed hundreds and hundreds of millions of dollars to autism research. He has also funded basic research in parallel that isn't autism-directed, but that came later. The focus on solving autism came first. Paul was not like that. Paul wanted to know the answers. Paul would come to our scientific advisory board meetings and sit there for hours, looking at the data that people were presenting, and he would ask really smart questions and constantly push us. "Why aren't you doing this? Why can't you do that? What technology is missing that would allow you to do that? What would be game-changing here?" That type of thing. He really propelled the project along in that way, and was deeply intellectually engaged in it. That's why it was such a tragedy that he died so early, at just 67. He was only a few years older than me when he died in 2018, but he had had a history of cancer, various types of lymphomas, and it finally got him. Very, very sad.

But, like I say, that's the help that I've given the Allen Institute—I didn't do the project; I just conceived of what the project would be, and they were able to hire a fantastic team of scientists and project managers. I could never have run a large-scale project like this. I mean, this was really the first case of a sort of experimental, high-energy-physics-like approach to neuroscience, like CERN. It really put the Allen Institute on the map and made them something that people in the field depend on for data, and NIH, rather than shunning them and viewing them as competitors as it originally did when they started this project, now writes specific requests for proposals that are tailor-made to be done by the Allen Institute, because they know no one else in the world can do this kind of project. So, the Allen Institute runs not only on funding from Paul Allen—he put hundreds and hundreds of millions of dollars into it—but also on hundreds of millions of dollars of funding from NIH, from the BRAIN Initiative that was started in 2014.

ZIERLER: Beyond your own interest in getting the Institute up and running, what was valuable in terms of your own laboratory at that point?

ANDERSON: We wanted to find genes that marked cells and neurons in particular brain regions that we were interested in, like the amygdala and other areas that we had mapped out and defined as being important in innate fear. Doing that was extremely time-consuming and laborious. It would require graduate students to first identify and clone these genes, and then map their expression by hand, one by one. This is something that would take each student maybe three years to do. What the Allen atlas did is it allowed you to look it up online! In that sense, it was really analogous to the Genome Project: instead of having to sequence genes themselves and find them, graduate students could just look up the genes and their sequences in a database online. So, we were able to use that—again with help from the Allen Institute—to identify markers for the cells involved in fear, and later for the cells involved in aggression, and to use those as a starting point to get a point of entry to those circuits and develop ways of functionally perturbing those cells in order to see how that affected the animal's behavior. That's the way in which I, and I think many other people in the field, benefited from the Allen atlas.

ZIERLER: To get a sense of how mature the field was when you first started with the mouse project, were there postdocs that you could bring on that had done enough work in this area that you could lean on them as you were in a steep learning curve at this stage?

ANDERSON: There was one that I recruited from an experimental psychology lab at UCLA. He had a lot of experience in studying conditioned fear and anxiety in mouse behavior assays. It was very important to bring him in for that. We had had experience with manipulating genes in mice, as we had done to study angiogenesis and neural crest development, and we just applied those approaches to marking and mapping and manipulating specific cell populations in the adult brain that we had reason to suspect were involved in the control of innate behaviors like fear and aggression and mating behavior. The state of the field at the time was such that people just could not do that easily by themselves.

ZIERLER: Going into this brand-new area of research, was there anything that you could lean on from the neural stem cell work that would put you farther ahead than you otherwise might have been at the point when the lab was really operational?

ANDERSON: In theory, yes. It was our facility with using mouse genetic technology to label and functionally manipulate, with a high degree of specificity, particular groups of neurons in the brain. The person who was really the pioneer of that was my postdoctoral advisor, Richard Axel, who developed and used that type of approach after he had identified the olfactory receptor genes, the smell receptor genes, which is what he and Linda Buck shared the 2004 Nobel Prize in Physiology or Medicine for. He used those mouse genetic techniques to mark and map individual olfactory receptor neurons based on which receptor they expressed, because each olfactory neuron expresses only one out of the roughly one thousand receptor genes in the mouse genome. So, if you tag that gene with something that you can visualize in the mouse, a reporter of some kind, in the way that we had tagged the gene that we found was expressed in arteries, you can trace the axons of just the subset of neurons that expresses that particular receptor.

That is how he made the additional discovery that all of the neurons in the nose that express a common receptor project to the exact same spot in the first relay in the brain [the olfactory bulb], which was a really amazing and unexpected result. I and other people generalized that technique to say, "Look, if that can be done in the olfactory system, we can also do it in other parts of the brain, deep in the brain, in the fear circuits, and we can couple that with technologies to perturb the activity of these neurons, like silence them or activate them, so that it becomes more than just a look-but-don't-touch type of experiment." Not only do you generate pretty pictures that reveal the anatomy of these neurons and show where their fibers are and where their cells are located, but you can actually see what happens to the mouse when you shut these neurons specifically off.

We first did that using, I'm happy to say, a home-grown technology from Caltech that was the brainchild of Henry Lester. It involved using a drug which at the time was completely obscure, and which now, because of COVID, everyone knows about, called ivermectin (remember Trump was touting [it]); it's basically an anti-worm drug that is used in veterinary medicine. It happens to bind to a type of ion channel that, when open, inhibits neuronal activity, and it was Henry's idea to genetically plant that ivermectin receptor in specific neurons in the mouse, and then use ivermectin, which otherwise doesn't affect mice, to shut those neurons off. That was our big 2010 paper in Nature, which was really one of the first papers to genetically identify a specific set of neurons in the amygdala, characterize their connectivity, and show what the effect on behavior was when those neurons were silenced. In fact, it was a surprising result. Everybody knew that the amygdala was important in promoting learned fear, and so the natural expectation going in was that if you knocked out neurons in the amygdala, you would knock out fear, because people who had done that using traditional lesioning approaches, or by injecting non-specific toxins, had shown that if you get rid of the amygdala, you get rid of fear.

We found that when we silenced this particular subset of neurons, we increased fear in the mice, rather than decreasing fear, and revealed that there was actually a yin-yang circuit in that part of the brain that consisted of neurons that inhibited fear and neurons that promoted fear that were reciprocally, antagonistically connected with each other, and that controlled, in some sophisticated way, the amount of fear that was produced by the amygdala. So we were able to take the study of amygdala function to a finer level of granularity than had been previously accomplished. There were just a few papers at that time that had managed to do that. This was just after optogenetics. My only regret with that study is that we really could have published it three years earlier, and I wish we had, because by the time we published it, the technology that we developed with Henry Lester was being supplanted already by optogenetics, so it had a very short shelf life, as it were.

The reason we didn't publish it—classic science story—is that the first experiment that my postdoc did to test this gave a spectacular result. The animals that were injected with the drug showed more freezing, and the control animals did not. This was a complicated experiment. It involved making lines of genetically engineered mice, which we had to do, and that took us several years. It involved building virus vectors that contained the subunits of this ivermectin receptor and learning how to inject those specifically into the amygdala of transgenic mice. I mean, these were just bears of experiments. He got this beautiful result [the first time he tried the experiment]. Within that first experiment, the differences [between experimental and control groups] were highly statistically significant. I think there were like six or seven mice. These days, many people would just go ahead and publish a result like that: it has six or seven mice in it, it shows a highly statistically significant difference, so let's report the result and move on. But I insisted that my postdoc do the experiment independently, at least one more time. He did it again, and it didn't work. We then proceeded to spend the next two years troubleshooting that experiment, and in the end, I think we published data points from 50 different mice, to finally convince ourselves that, yes, across all of these mice, even taking into account the experiments that failed for what we now know in retrospect were technical reasons—the virus was no good, the injection site was no good, et cetera—there was still a statistically significant effect that came through. I felt good about it in that respect, but by the time we were ready to publish the paper, another lab was onto the same thing, although using physiology. We managed to collaborate with them and publish back-to-back papers in Nature on this, but we really had to share the credit with them for this discovery, and if I had just gone ahead and published the initial result, I would have been two or three years ahead of them.
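For a concrete sense of what it means to pool data across cohorts and still find a statistically significant effect, here is a minimal sketch of that kind of comparison; the freezing scores, group sizes, and choice of test below are invented for illustration and are not the published data or the lab's actual analysis.

    # Minimal sketch: pooling freezing scores across cohorts and asking whether
    # drug-silenced animals freeze more than controls. All values are invented.
    import numpy as np
    from scipy import stats

    silenced = np.array([62, 58, 71, 55, 66, 60, 68, 59, 64, 70])  # % time freezing (hypothetical)
    control = np.array([41, 38, 45, 50, 36, 43, 39, 47, 42, 40])   # % time freezing (hypothetical)

    # A nonparametric test is a common choice for small behavioral samples.
    u_stat, p_value = stats.mannwhitneyu(silenced, control, alternative="greater")
    print(f"Mann-Whitney U = {u_stat:.1f}, one-sided p = {p_value:.4f}")

Animals excluded for technical reasons, such as a missed injection site, would simply be left out of both arrays before the test is run.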

ZIERLER: Was there something deeper to the connection to Henry Lester, or was it just sort of casual acquaintance that got you involved in understanding what his technology could do for you?

ANDERSON: No, no. As Henry likes to say, we are clones. Henry grew up in Teaneck, New Jersey, the same town that I grew up in. He went to Teaneck High School, the same high school that I went to. He went to Harvard College, the same college that I went to. He went to Rockefeller University, the same graduate school that I went to. And, like me, he worked, for most of his career, on the same protein that I worked on for my graduate thesis, which is the nicotinic acetylcholine receptor. I guess you would call us "homeboys" in today's parlance.

ZIERLER: Did you know him before Caltech?

ANDERSON: Yeah, I knew him from Woods Hole. I had met him in Woods Hole. And a scientist who became a close friend who was a student with me in the MBL summer neurobiology course in 1979 was a postdoc in Henry Lester's lab. I went out to visit her after the course, and I met Henry there. I met him in 1979. Before that, I didn't realize that I was his clone. Now, we have been at the same university for three and a half decades. So, it was good collaboration, it was a good idea, but like many good ideas, the implementation was just a little too cumbersome for it to be widely adopted. What made optogenetics so successful is that the implementation was much easier than anything else anyone had done.

ZIERLER: I'll ask in the way that I did about your perceptions of Paul Allen; for you, in the lab, once the mouse experiments were up and running, what aspects of it were simply purely basic science, and were you thinking at least in a very theoretical sense long-term that there were some translational possibilities here?

ANDERSON: Oh, yeah. That did not require an act of genius. People who had been working on the amygdala in rats for decades before I entered the field, like Joe LeDoux at NYU, had made a big deal of the fact that if we understand how the brain controls fear, we may be able to learn how to better diagnose and treat psychiatric disorders that involve maladaptations of fear circuitry, like PTSD, phobias, anxiety disorders, et cetera, et cetera. In fact, one of the experiments that we did as a follow-up to the amygdala paper in Nature (which we never published, because it was just one experiment) showed that blocking the activity of this particular set of neurons, just in the amygdala, could prevent a commonly used anxiolytic drug, a benzodiazepine like Xanax, delivered systemically, from having its normal effect of reducing anxiety. That is, you inject the entire mouse with this drug. The drug goes all over the brain. It is likely interacting with receptors in many different brain regions. And yet, if we shut off the neurons that we were studying in the amygdala, the efficacy of that anti-anxiety drug goes away, implying that those neurons are essential nodes for the action of the anxiolytic drug. That remains a very important question in translational research, and one that is very poorly understood: when you have a drug that is used to treat a psychiatric disorder, what are the key cells and circuits through which that drug exerts its ameliorative effect? For most of the drugs that are used routinely in psychiatry, like Prozac, we don't know the answer. Or anti-psychotics; we don't know the answer. This is all in Chapters 9 and 10 of my book, in general.

So, yes, there were clearly translational implications of this. I suppose I could have thought about starting a biotech company to try to develop some of the translational applications. But I had been through that in stem cells. I cofounded one of the first stem cell companies with two other very well-respected stem cell scientists. The company was a failure. It lasted for 24 years and folded without having developed a single drug. It took a huge amount of time and energy, and I just didn't have it in me to try to go through that whole process of founding this startup, and it wasn't clear exactly what the patentable technology was that could come out of the work, at that particular stage. I think now, it's clear, and there are people that are actually studying—I found this out at a meeting I went to in Stockholm in June—they are studying more intensely the interactions of drugs with those neurons in the amygdala to try and find out if they can improve upon drugs for treating anxiety and other psychiatric disorders.

ZIERLER: Last question for today—because you were alive to the possibilities, the translational promise, even though you didn't want to fully invest in that yourself, do you think that expanded the kinds of graduate students and postdocs who might be interested in working with you, recognizing that this is clearly set up as a basic science kind of environment, but with the recognition that they might go on to careers in biotech, in industry, as the field, as the technology and the science, develops?

ANDERSON: It certainly helped me attract some good postdocs and graduate students, but it was too early in the development of the field for there to be any biotech companies or pharma companies that would have been appropriate for the kind of training they got in my lab. So I can't think of anyone who came out of my lab in that period who went into biotech—they went to academic jobs, or if not, into consulting or something, but none of them wound up, at that early time, in biotech. Eventually, one of them did, but that was not until 2016, 2017. Meanwhile, if you look at what Viviana Gradinaru is doing, which is much more about engineering and iteratively improving the virus technology for genetically manipulating cells in the brain and other tissues, she has set up multiple companies. Many of her trainees go off and work in those companies that she has set up, or in other companies. But I just couldn't see, at that stage—and remember, this is 2009, 2010—what would really attract an investor in that area. And I didn't want to just go into technology development. I really wanted to understand how the brain controls a particular innate behavior. Like I've done many times in my career, I made an important discovery, and then proceeded to walk away from it—

ZIERLER: [laughs]

ANDERSON: —and start something else, which was aggression. But I did make a promise to myself that I wasn't going to walk away from aggression. That's why we've been working on that for the last 11 or 12 years. In some ways, that is good, because there's a body of work that I can point to, that we have produced. But it is also bad, because there is now a lot of competition in that field from the students that I trained who are working in it, and from the students that they trained who are working in the field as well. So that other mode of operation, which basically I learned from Seymour Benzer—this is what Seymour loved to do. He would make a foundational discovery in one area—not that I'm saying any of my discoveries were foundational except maybe the artery/vein, and I didn't do that on purpose—but Seymour would make a foundational discovery and then walk away from it, and leave other people to pick up the pieces, and start to do something else. Because Seymour hated competition. He just hated working in a competitive field, where there's somebody breathing down your neck, and somebody with a similar paper in the pipeline, and are you going to get scooped, and blah blah blah. I don't blame him. But on the other hand, if you work that way, you don't get the satisfaction of having constructed a body of work that gradually improves your understanding of the thing that you discovered at the beginning and shows you where you were wrong, shows you where you were right, etc.

ZIERLER: Many questions next time about how and why you got involved with Drosophila at this point, but just to set the stage for that, chronologically, when did that happen? How many years were you into the mouse project before you decided to add on or switch focus to drosophila? When did that happen?

ANDERSON: I think it started around 2001, 2002, because we published our first Drosophila paper, which was a collaboration with Richard Axel and Seymour Benzer, in Nature, which was the highest-profile paper I had to show for any of the work we did on neural circuits—that was October of 2004. It came out the week before Richard won the Nobel Prize, paradoxically. I must have started it around 2001, because that paper had to have involved at least two to three years of work—maybe 2001, 2002. The reason I did that is because the amygdala work was just going so slowly, I felt like the lab was in a nosedive going down in flames in this area, and that if I didn't do something to pull it up out of the nosedive, we were going to crash and burn, and this whole attempt to shift fields was just going to wind up having been an unmitigated disaster. So, I decided to set up Drosophila because it was clear that we could do a lot of the experiments that we wanted to do ultimately in mice, in flies, but more cheaply, more easily, and more quickly, and more decisively, than we could do them in mouse. I would say for the years between 2004 and 2010, when that amygdala paper was finally published, it was really our fly work that kept me afloat in the field, rather than the mouse work.

ZIERLER: That's a great place to pick up next time. We'll see how all of this develops with Drosophila.

[End of Recording]

ZIERLER: This is David Zierler, Director of the Caltech Heritage Project. It is Monday, December 5th, 2022. It is great to be back with Professor David Anderson, at long last. David, it's great to be with you again. Thanks so much for joining me.

ANDERSON: I'm glad to be here. Thanks.

ZIERLER: What I want to do today first is pick up on something really interesting that you said in our last discussion. You explained in great detail how you came to appreciate just how much better and more efficient it would be to work with Drosophila over mice. That you could pursue the same research questions and do it in a way that was much more efficient and constructive, even from a cost-savings perspective. The question there—it's sort of a fun historical question, and I don't mean to ask it cynically—but couldn't you have just paid attention to T.H. Morgan and Sturtevant and all of those guys who came out of Columbia's fly room a century ago? Wouldn't they have been able to tell you about the value of Drosophila themselves?

ANDERSON: I didn't need T.H. Morgan; I had Seymour, asking me when I was going to see the light. So really if I understand, you're asking, why bother doing any experiments in mice? Why not just work on drosophila?

ZIERLER: Unless there's something specific about mice with your behavioral interest that you thought initially would be impossible to pursue in drosophila.

ANDERSON: It's really a technical difference. The two systems each have their strengths and weaknesses. In the early to mid 2000s, there were a couple of technological breakthroughs that actually put mice (for neural circuit research, as opposed to gene research) a bit ahead of Drosophila in some respects. I will briefly say what they were. In both systems, we were taking the approach of trying to find a point of entry to aggression circuitry by artificially stimulating specific neurons and asking whether they promoted aggression or not; if they did, what features of aggression they promoted; and whether they were also necessary for naturalistic aggression. When I say point of entry, the idea is that if you find a cell or cell population like that, you can build out the rest of the network by following the connections of those neurons. You can look upstream at their inputs, and you can look downstream at their outputs, and try to figure out what is different about what the inputs are doing, and what is different about what the outputs are doing, and do that in an iterative manner until you build up a pathway, basically. At least that was the thinking going into it.

The advantage of Drosophila is that we could identify multiple points of entry in an unbiased way, by doing what I call forward cellular genetic screens, unbiased screens, where we take thousands of flies, in each of which different neurons can be activated, and we turn on those neurons without even knowing in advance what those neurons are, then find the rare needles in the haystack where activation of those neurons caused an increase in aggression, and then try to narrow down those neurons and find their connections, and, as I say, build out the circuit. I would say we finished our first screen of that kind around 2011, 2012, which is right around the time that we discovered aggression neurons in mice. The problems, the limitations, that we ran into with Drosophila at that point were, first, a difficulty in actually mapping inputs and outputs to a particular group of neurons, in other words anatomical tracing. In rodents, there are viruses, neurotropic viruses, that you can inject that will be taken up by the terminals of inputs to a particular region and transported back. That's called retrograde mapping, and you can find input cell populations. Conversely, there are anterograde tracing methods, where you introduce a virus into the cell bodies of the neurons of interest, and it moves forward along their axons, and you can see where they project.

These things take time, but after having identified the estrogen receptor neurons in mice in 2014 as the ones that control aggression in this ventromedial hypothalamic nucleus, by 2019 we had published a complete input/output map for that population of neurons. To give you an idea of scale, we identified about 35 different inputs and 35 different outputs, and about 80% of the outputs are also inputs, so there's a huge amount of feedback in the system. We still have not done that for the aggression neurons that we discovered in flies, because the viruses that you use to trace connectivity in mice cannot be applied in flies. They don't infect the neurons, and it's too hard to inject them into a particular location. A few efforts to do this tracing in other ways were made, but they really weren't comprehensive or satisfactory. That situation changed in 2020, 2021, with the publication of the first partial EM connectome of a fly brain, where basically you could just look up in the computer and find all of the connections, inputs and outputs, to any group of neurons, and you could know the number of synapses, and you could know the sign of the synapses. In fact, you can get much more detailed information about connectivity from an electron microscope connectome than you can from the sort of viral tracing experiments that we can do in mice.

However, that connectome has only been published so far for female flies. Don't ask me why. And a lot of the neurons we discovered in flies that control male aggression turn out to be male-specific. Not by design or construction of the screen, but they're male-specific. And so we haven't really been able to use the female connectome to map their inputs and outputs. So it's now 2024, ten years after we published our first paper on—2023, sorry—nine years after we published our first paper on aggression in Drosophila—actually, I take that back; the first paper on aggression in Drosophila was 2008, so it's 15 years after that—and we still don't have a complete input/output map for the five or six different aggression neurons we've identified in Drosophila. So we were kind of running up against a brick wall there.

Then there's a second very important difference. At the time [we started working on Drosophila], in the mid 2000s—2005 is when the first optogenetics paper was published. That was first demonstrated in worms, C. elegans, which are transparent, and it was then demonstrated in mice by inserting fiberoptic cables deep into the brain at the site where the light-sensitive ion channel encoded by the channelrhodopsin-2 gene was genetically expressed. But unfortunately, Drosophila is too small to insert a fiberoptic cable into its head, because the fiberoptic cable is almost as big as the entire head. Furthermore, it's not transparent. And, it's got a cuticle. Unfortunately, the first opsin that was developed, channelrhodopsin-2, is activated with blue light, 477-nanometer light, and that light doesn't penetrate the cuticle of the fruit fly. So, basically, for the first nine years of optogenetics, [while it was revolutionizing circuit research in mice] it could not be applied in fruit flies. I actually published the first paper on optogenetics applied to an adult fruit fly, but I was only manipulating cells in the antenna, which are not covered by the same kind of cuticle that the head is. It wasn't until 2014, when we and an independent group at MIT published on opsins that could be activated by red light, in the 625- to 650-nanometer range, that we could start to use optogenetics in fruit flies for activating neurons deep in the brain. That's because the red light is not absorbed or scattered as much by the cuticle as the blue light, and so it can penetrate into the brain of an intact fly.

So, the tools that we had for activating neurons in flies before optogenetics were temperature-based, what we call thermogenetics. We had ion channels that we could turn on by raising the body temperature of the flies from 23 degrees Celsius to 30 degrees Celsius. Before [the advent of] optogenetics, that was a huge advantage over mice, because you cannot raise the body temperature of a mouse by eight or nine degrees Celsius; it's homeothermic, whereas flies are poikilothermic: their body temperature is whatever their environment is. So if the lab is 23 degrees, the flies are 23 degrees. If the lab is 32 degrees, the flies are 32. So from our first fly [neural circuit] paper, which was 2004, up to our first fly aggression neuron paper, which was 2014, [manipulating the function of the neurons] was all [done using] thermogenetics. There, flies had a big advantage over mice. But when optogenetics broke [came on the scene], flies were off limits to it. So here you had what was arguably the most powerful technology for manipulating neuronal activity (optogenetics) and the most powerful system for genetically marking and manipulating specific populations of neurons in a brain (fruit flies), and you couldn't put the two together. But after red-light-activated opsins, we could do that.

That was good, but there was still a disadvantage to studying neural circuits in flies, relative to mice, for another reason: the inability to record signals from active neurons in freely behaving flies. That's really, I would say, the third major branch of what any neuroscientist interested in how neural circuits control behavior would want to do. They would want to functionally perturb neurons [they had identified, i.e., genetically marked] to see how turning them on or turning them off affects behavior. That's the manipulation. They would want to map the connections of these neurons, their inputs and their outputs. And, they would want to measure activity in those particular genetically marked neurons. Those are what I call the four "m"s of circuit neuroscience: mark, map, manipulate, and measure the neurons.

Again, here is where the tiny size of the fly brain and of fly neurons becomes a problem. A fly brain is smaller than a grain of rice. I mean, the whole fly is about the size of a grain of rice, and the cell bodies of fly neurons are only about two microns in diameter. That makes it very difficult to record neuronal activity. Only a few rare people could do that—in fact, the first paper reporting [single-cell] electrophysiology in fly neurons came out of Caltech. Not from my lab; it was done by Rachel Wilson, who was then a postdoc in Gilles Laurent's lab, when he was at Caltech. Even then, you could only record one neuron at a time in the fly brain. What you really want to be able to do is record from multiple neurons at the same time, so you can compare the activity of different neurons in a particular brain region or in two different brain regions. That's something that we, again, had already started to do in mice by 2011. We put a bundle of electrodes in the ventromedial hypothalamus, and we were the first people to record electrical signals from neurons firing during aggression in the mouse ventromedial hypothalamus.

The second sort of critical breakthrough in technology for the mouse was the development of miniature head-mounted microscopes for doing calcium imaging through microneedle glass lenses that can be inserted into the brain in mice. That technology, again, like optogenetics, came out of Stanford, from an optical physicist named Mark Schnitzer. These are two-gram microscopes that you can put on the mouse's head, and the mouse will learn to adapt to their weight. If you genetically modify the neurons you're interested in to express a calcium-sensitive jellyfish fluorescent protein, which flashes more brightly every time the neuron fires, because calcium comes into the cell and binds the jellyfish protein and makes it flash, then you can record activity deep in the brain of a freely behaving mouse. Moreover, you can record activity from genetically targeted cells, because you can use genetic markers for cell populations to determine which neurons you are going to record from.
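As a rough illustration of how such fluorescence signals are usually quantified (a generic sketch, not the lab's actual pipeline), the standard readout is dF/F, the fractional change in fluorescence relative to a baseline estimate:

    # Generic delta-F-over-F calculation on a synthetic fluorescence trace.
    # Real miniscope data would first need motion correction and cell segmentation.
    import numpy as np

    rng = np.random.default_rng(0)
    trace = 100 + rng.normal(0, 1, 1000)   # baseline fluorescence (arbitrary units)
    trace[400:420] += 30                   # a simulated calcium transient

    f0 = np.percentile(trace, 20)          # a common low-percentile baseline estimate
    dff = (trace - f0) / f0                # fractional change in fluorescence

    print(f"peak dF/F = {dff.max():.2f}")  # the transient stands out from baseline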

We published the first application of that technology to record neuronal activity in the hypothalamus during mouse social behavior in 2017. Meanwhile, we still had no information in 2017, and still have none even at the end of 2022, ironically, about whether the aggression neurons we discovered in flies by genetic manipulations of their function are actually active normally when those animals are performing aggressive behaviors. It is possible, and we have used this technology, to do calcium imaging in flies that are head-fixed under a two-photon microscope. In fact, it's possible in principle to image activity across the entire fly brain [in such head-fixed flies]. We've published some papers on that, as have other people. We published a paper on that in 2020. But right now there is no way to do that type of two-photon microscopy to measure calcium activity in the brain of a freely moving fly. There have been a couple of techniques that people have tried and published on to do an approximation of that, but they haven't been widely adopted, and they're very complicated and difficult to set up.

So, this is a case where I would say the fly system in the early 2000s [prior to optogenetics] started off as being vastly more advantageous than mice for studying circuit neuroscience. Then, starting in about 2005, [technology for studying neural circuits in] mice started to improve, until for certain kinds of experiments mice actually became better than flies. That has been reflected in the fact that the number of people who have wanted to come to my lab, and to other labs in general, to study circuit neuroscience in fruit flies has fallen dramatically since 2017, 2018, whereas it has continued to skyrocket in mice. Basically, the way that many people look at using mice versus flies is that flies are a system that you use to do [certain] experiments if and only if you can't find a way to do them in a mouse. As soon as you can do a certain kind of experiment in a mouse that you could also do in a fly, or especially one that you can't do in a fly, the justification for using flies vastly diminishes. Because after all, mice are mammals. Their brains basically have the same structure as ours, and what everybody wants to know in the end is how our brains work, and the fly's brain is very different in its organization from the mammalian brain.

Now, other labs, using head-fixed flies and two-photon imaging, have in the meantime made some spectacular discoveries in fly systems neuroscience in very narrow, restricted areas, particularly the question of how flies navigate and sense direction. They basically have found the equivalent of a neuronal compass in the fly's brain, which is actually a set of neurons arranged in a ring, and there's a bump of electrical activity that moves to different positions around the ring as the fly faces different directions [i.e., that encodes the head direction of the fly at any given moment]. That has been sort of the epitome of fly computational neuroscience, but that's not the type of behavior that we are interested in explaining. So, it has become more difficult to justify the use of flies for studying aggression [because of the inability to record the activity in the brain of flies while they're fighting]. Now, in theory, the connectome of the fly should change that [i.e., put flies ahead of mice], because it will be easily, I think, five years and probably closer to ten before we have a complete connectome of the mouse brain. It's just orders of magnitude larger and more difficult. It's 10^8 neurons compared to 10^5 neurons in the fly brain, and it's a centimeter long versus a millimeter long (or even less than a millimeter long, in some cases) in the fly brain. But as I say, we're still in the early days of fly connectomics, where we have a female connectome, and the male connectome is being assembled now and hopefully will become available in another six months. But that [having a complete connectome of the fly brain] is really transformative. It has had the same effect on fly circuit neuroscience research that the human genome had on genetics research. That is, you no longer have to map connections between neurons that you discover, because the connections are already there in the EM connectome; they're loaded onto a database in the computer, and you can search that database starting from any particular neuron of interest and immediately identify, using various algorithms that people have developed, all of the inputs to that neuron and all of the outputs; not just the first-order inputs and outputs but the second-order, third-order, and fourth-order inputs. That's the good news.
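To illustrate what searching a connectome database from a neuron of interest amounts to computationally, here is a toy traversal over a made-up adjacency table; the neuron names and synapse counts are hypothetical, and a real analysis would query a connectome resource rather than a hand-written dictionary.

    # Toy expansion of first-, second-, and third-order inputs to a neuron of
    # interest from a connectome-style adjacency table (hypothetical data).
    inputs_to = {
        "neuronA": {"neuronB": 12, "neuronC": 8},
        "neuronB": {"neuronD": 20, "neuronE": 5},
        "neuronC": {"neuronE": 9, "neuronF": 3},
        "neuronD": {},
        "neuronE": {"neuronD": 15},
        "neuronF": {},
    }

    def upstream_by_order(target, max_order=3):
        """Return {order: set of presynaptic neurons} at each synaptic distance."""
        seen, frontier, result = {target}, {target}, {}
        for order in range(1, max_order + 1):
            nxt = set()
            for cell in frontier:
                nxt |= set(inputs_to.get(cell, {})) - seen
            if not nxt:
                break
            result[order] = nxt
            seen |= nxt
            frontier = nxt
        return result

    print(upstream_by_order("neuronA"))
    # {1: {'neuronB', 'neuronC'}, 2: {'neuronD', 'neuronE', 'neuronF'}}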

The bad news is that it has made it painfully obvious how overwhelmingly complex the connectivity of the fly brain is. There may only be about 20,000 neurons or so on either side of the central fly brain, but on average, each neuron projects to at least five and sometimes ten other neurons and may receive projections from dozens of input neurons. So trying to figure out which of those connections are relevant to the particular behavior that you're interested in studying is still a very challenging and daunting task. We have a project going on in the lab that is using the female connectome to study an aspect of aggressive behavior that you can study in females as well as males, but other than getting a sort of wiring diagram and a number of hypotheses that are suggested by the diagram, testing those hypotheses experimentally, figuring out which connections control which types of behavior and what they do, is still really daunting. And we still have the problem that we can't record activity from the neurons that we're interested in, in freely moving flies. And unfortunately, a social behavior like aggression is not amenable to being performed by a fly when it's glued underneath the objective of a two-photon microscope. It can walk on a little Styrofoam ball mounted on an air cushion, but it certainly is not going to be able to fight with another fly.

And so the fly field has moved now, like I say, to where it has been most effective: studying those behaviors and aspects of neural coding, like head direction and navigation, that can be done in a head-fixed fly under a two-photon microscope. [But for other questions about brain function] I think the mouse really still has an advantage over the fly, [at least for] now. I have to say, I'm torn. There are days when I have felt like maybe it's time to just close down the fly operation completely, because I don't have that many students or postdocs working on it anymore. I only have about three people, as opposed to ten people working on mice. And the number of problems that I can study in flies but not in mice is dwindling as the [mouse] technology improves to a greater and greater extent. That was also, I have to say, impacted by the creation in 2014 of the NIH BRAIN Initiative. This was a program that President Obama started, Brain Research through Advancing Innovative Neurotechnologies. It has continued and will run until 2025, and much of the funding in the first five years of BRAIN was focused, and is still focused, on technology development, and specifically technology development in mammals—in mice, maybe in non-human primates. Much less work is supported on fruit flies. I think where there's more technology being developed, you're going to have more opportunities to do experiments that you couldn't do before, and I think that has affected the shift as well.

As I say, young people are voting with their feet. There are a handful of labs that are doing first-rate work on computational aspects of fly direction sensing and movement. Michael Dickinson here on campus is one of them. But you can sort of count those labs on the fingers of two hands. Mine is not one of them, simply because aggression is not amenable to whole-brain imaging in Drosophila. Of course, when I started working on Drosophila, it had multiple advantages over mice, but technology changes, and so the relative benefits of the systems change. I think Drosophila will still be a good system for linking genes to behavior, which is what Seymour Benzer wanted to do, through their action on specific neural circuits. But the pendulum is still squarely on circuit-level analysis of behavioral control right now, and hasn't swung back to studying the effects of genes on behavioral control in fruit flies.

ZIERLER: A clarifying question—I think it's a really important point. You have laid out all of the various distinctions between flies and mice, whether it refers to the presence of viruses or the technology available to access a tiny fly brain. At the end of the day, with all things being equal—let's say there are no technological limitations—are flies and mice equally valuable for studying aggression? I mean that both in the fundamental science sense, but then also in the applied sense. Because as you alluded, we're sort of more interested in mice because they're mammals and their brains are more like us, but that seems more like a translational line of reasoning. I wonder if you can address all of that.

ANDERSON: Yes. To answer the first part of the question: if there were no differences in technology, if we could record activity from any neuron we wanted to in freely moving flies, I think I would choose flies over mice, for no other reason than that they are so much cheaper to work with that I wouldn't have to spend all my time writing grants to support the research. It's huge. It's orders of magnitude of difference in the amount of money it takes to run a fly lab versus a mouse lab. I think many of the general principles that have already emerged from studying neural circuits in flies will apply to mice and to humans, although their implementation may be different. Mice have a quote-unquote "neural compass" for sensing and encoding head direction, but it doesn't look like an anatomical ring or a disk in the mouse brain as it does in the fly brain. It's topologically equivalent to a ring or a disk, but it's not topographically instantiated as a disk.
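As an illustration of what a head-direction code that is "topologically a ring" looks like computationally, here is a toy population-vector readout; the number of cells, tuning shapes, and firing rates are all invented for illustration.

    # Toy population-vector decoder for head-direction cells whose preferred
    # directions tile a ring. All tuning parameters are invented.
    import numpy as np

    n_cells = 16
    preferred = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)  # preferred headings

    true_heading = np.deg2rad(75)
    # cells tuned near the current heading fire most, forming a "bump" of activity
    rates = np.clip(np.cos(preferred - true_heading), 0, None) * 20
    rates += np.random.default_rng(1).normal(0, 0.5, n_cells)

    # decode heading as the angle of the summed population vector
    decoded = np.arctan2((rates * np.sin(preferred)).sum(),
                         (rates * np.cos(preferred)).sum())
    print(f"decoded heading ~ {np.rad2deg(decoded) % 360:.1f} degrees")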

Now, in terms of translation, there is obviously a major difference when it comes to therapies that are aimed at specific cell populations and brain regions. Because, for example, in psychiatric disorders, there is a lot of evidence implicating the prefrontal cortex and the amygdala and other regions in mental illness, and many, many labs are working hard to try to identify the different cell populations in those regions and to understand what aspects of, for example, fear, anxiety, or aggression they encode, with the hope that some of that will be translatable to humans. Although even going from mice to humans is a big leap. Nevertheless, humans [and mice] have a prefrontal cortex and an amygdala. Flies don't. They just don't have those same brain structures. What they do have are genes [shared with mice and humans] that act in some of their circuits that control their behaviors. There, I think our work and that of others have provided some very strong examples of translation from flies to mice. For example, one of our first fly aggression papers, molecular circuit papers in 2014, identified a neuropeptide called tachykinin, Drosophila tachykinin, that is specifically expressed in neurons that control aggression in males. We showed that in flies the gene [encoding the neuropeptide] is necessary for aggression, as is the gene encoding the receptor that binds to this neuropeptide, which is a chemical message released from one neuron to another. We also showed that social isolation, which makes flies more aggressive, does so in part by elevating the amount of expression of this peptide in the brain. Four years later in 2018, we showed that the same thing was true in mice, that the mouse homolog of this neuropeptide gene was also strongly increased in its expression by social isolation, and we were able to show that that increase was necessary and sufficient to explain the increase in aggression caused by socially isolating mice.

Humans have this protein as well. In fact, there are drugs that block the action of this protein. I've been trying since 2018 to convince investors to invest in a biotech startup aimed at translating that finding into humans. For various reasons I can get into separately, no one has been interested. But that is an example of how a gene that was ignored in the context of aggression and social isolation [research] in mice suddenly became highly relevant as a consequence of the discoveries that we made in fruit flies. Because you can [search a genome database using] the sequence of a gene [you studied] in the fly, and you can immediately find the homologous gene in a mouse, just by doing a search on the computer, and then figure out where that gene is expressed in the mouse brain by doing another search on the computer, there is a much more direct line of translation into something that is potentially therapeutically relevant to humans.
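As a sketch of the kind of cross-species homology search described here, the snippet below uses Biopython's interface to NCBI BLAST; the sequence shown is a meaningless placeholder rather than real Drosophila tachykinin, and the database and organism filter are illustrative choices rather than a prescribed workflow.

    # Sketch: BLAST a fly protein sequence against mouse proteins to find the
    # closest homolog. The sequence below is a placeholder; substitute the real
    # protein sequence (e.g., from FlyBase or NCBI) before running.
    from Bio.Blast import NCBIWWW, NCBIXML

    fly_protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # placeholder, not real tachykinin

    result_handle = NCBIWWW.qblast(
        "blastp", "nr", fly_protein,
        entrez_query='"Mus musculus"[Organism]',      # restrict hits to mouse
    )
    record = NCBIXML.read(result_handle)

    for alignment in record.alignments[:3]:
        best_hsp = alignment.hsps[0]
        print(f"{alignment.title[:60]}...  E-value: {best_hsp.expect:.2e}")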

ZIERLER: Before we allow that tangent to fall off, why has no one been interested in that line of research?

ANDERSON: It's because of what I said before about mice being closer to humans, but not close enough. Drugs that block the action of this peptide were developed and tested in humans for their ability to treat schizophrenia and major depressive disorder, based on some experiments in rodents that were, in retrospect, very shaky and not very rigorously done, and they failed to show efficacy in the clinic. The good news is the drugs were not dangerous—they were well-tolerated—but they didn't show any efficacy. When a pharma company fails in a clinical trial with a particular molecule—and a trial costs them about $50 million to $100 million, down the drain—it's once burned, twice shy. So, pharma companies and venture capitalists have understandably not been eager to invest money in this class of drugs again based on findings in mice, because why is there any reason to think they would translate into humans [after all these other failures]? I can understand that, and we're trying to think of ways of de-risking that process of translating from mice into humans, but that's going to take some time and it's going to take some investment. We're not there yet.

ZIERLER: The idea that mice are not close enough to humans, there's a logic there that would suggest that there's a compelling factor to move onto monkey research, although I wonder if that's more of a political than even a technological or a budgetary non-starter.

ANDERSON: No. It's not a non-starter. In fact, there is a group at MIT that has made a major investment in studying marmosets, which are a small non-human primate, much smaller than the rhesus monkeys that have been the workhorse of most non-human primate neuroscience, mainly visual neuroscience, up to now. Marmosets have a shorter gestation time. They're usually born as fraternal twins. They're smaller, so it costs less to house them. And, it is believed, quote-unquote, that because they are non-human primates, their brains, particularly in areas relevant to psychiatric disorders like the medial prefrontal cortex, will be more relevant or more predictive of things in humans than is the case in mice. That remains to be seen. However, working with any non-human primate, whether it's a rhesus macaque or a marmoset, is hugely expensive, and so the number of animals that you can work on is highly constrained. For example, when we publish a paper on recording activity in mice during a behavior like aggression, for example, it's based on a minimum of six and often over a dozen individual mice, where we perform the same experiment, to make sure the results are consistent across animals. Whereas in the monkey field, it's acceptable to have one monkey—results from one monkey, sometimes two at most—be a sufficient "n" to be a statistically meaningful sample. People are more concerned with how many neurons they're recording from in that monkey than how many monkeys they are recording from. But I think the real reason is it just would be prohibitively expensive to have even six monkeys in each paper that you were doing.

Furthermore, because monkeys are so valuable and because they're so expensive, it's a real issue as to whether you're going to sacrifice the monkey at the end of an experiment to look inside their brain and see if you actually hit the spot you were aiming for with your injections of viruses or fiberoptic cables or not. We do that routinely in all of our mouse experiments, so we map precisely in each mouse where the injection site was, where the fiberoptic cable went. In many cases, that is important, because we'll do the experiment, say, on six mice, and it will work great in five mice, but in the sixth mouse, for some reason the experiment doesn't work. But then when you slice the brains up, you find that in that sixth mouse, the student who did the injection missed the target site by a couple of hundred microns. And so, you can eliminate that mouse from the analysis because you didn't perform the experiment you thought you were doing, but you have to kill the mouse to do that. People are not going to do, say, a single optogenetic experiment in a monkey, particularly one that they have spent months and months and months training to do a particular task, and then kill it and slice its brain up to see if they injected the virus where they thought they were injecting the virus.

ZIERLER: To return to this binary of translational versus fundamental research, in the way that you're explaining, a mouse is better than a fly and a monkey is better than a mouse. But does that apply in both realms? In other words, are monkeys fundamentally the best—all things being equal—technology, budget—let's say those are not considerations. Both from a fundamental research perspective about understanding aggression, and for ultimately understanding the human brain and developing technologies and therapies that could be helpful for humans, does it work in both cases? I wonder if you could elucidate a little there.

ANDERSON: I would say that it depends on the question you are asking, and the tools that are available, and also on just the size issue. Just as a fly brain is too small to inject a virus into or stick an electrode into, the monkey brain is so huge in comparison to a mouse that it's often difficult to inject just enough virus to cover the one region of the brain that you're interested in. So, size, for some studies, can be an advantage, because it allows you to separate things anatomically, but particularly for functional perturbation experiments—those using viruses or optogenetics—it can be a disadvantage. I think that even if technology were equal and cost were equal, you're not going to change the size difference between a monkey brain, even a marmoset brain, and a mouse brain. That is one of the impediments, as is the generation time. The generation time of a mouse is about three months. For a marmoset, I think it's at least a year, between the time they go through the gestation, grow up, reach sexual maturity, and are able to produce another generation. So, the [genetic] experiments are a lot slower in monkeys than in mice.

Now, if I were just doing translational research and cost was no object and there was no technology difference, maybe I would favor marmosets slightly. But I think there just isn't enough data particularly on their behavior yet to say whether all of the other sacrifices that you would have to make [by not working on mice]—and of course, money really is a constraining factor. As is access to animals. I mean, we have large repositories of mice in the country, like the Jackson Labs and Charles River, which make mice essentially available in unlimited numbers, whereas the number of monkeys is highly limited, just because they are expensive to breed and maintain, and there aren't facilities that maintain large numbers of them for distribution. Even the group at MIT, which has sort of cornered the market on marmosets, has a limited number of animals that they can maintain at any one time. I think it's on the order of 50 to 100 animals, certainly not enough to supply the world marmoset community if everybody decided to drop mice and rats and work on marmosets.

ZIERLER: Beyond the cost, is there a political factor there, a squeamishness that makes marmoset research simply less attractive?

ANDERSON: That depends on the individual. For me, personally, since we study emotions that are unpleasant and behaviors that can be dangerous like aggression or fear or panic, I think I would feel uncomfortable performing those sorts of experiments on monkeys unless I could be convinced that the thing I wanted to understand absolutely could not be understood in a mouse. That's pure anthropocentrism, and there's no sort of rational justification. The mice may feel just as bad as the monkeys do when I give them a foot shock. They may be as freaked out as monkeys would be when they are fighting. I have no way of telling. But monkeys look more like us, and I identify more with them. I wouldn't do those kinds of experiments on a cat, because I have a pet cat. So, the non-scientific part of my brain sets some limits on what I would be willing to do.

ZIERLER: Something that was seemingly counterintuitive—you emphasized the importance of technological advances in being able to shift from flies to mice. But then you also explained how difficult it is, for example, to access a fly brain with the available fiberoptic cables, which are almost as big as the brain itself. Wouldn't it follow, then, since the mouse is simply the larger organism, that it would go the other way, that you would need the technology to advance in order to get from mice to flies?

ANDERSON: Yeah, it would. It's not going to be by fiberoptic cable. It's got to be by imaging. And there still is a problem in imaging activity of multiple neurons in a freely moving animal. The two systems where that has been done the most successfully to date are in the nematode worm C. elegans, which is transparent. It's only about a millimeter long. That's what Paul Sternberg works on. It has only got 302 neurons. Then the zebrafish larvae, which are considerably longer—I think they're maybe the length of half a thumbnail, so maybe half a centimeter or something like that, and they have 100,000 neurons. People have succeeded in imaging activity in all 100,000 neurons simultaneously, but not in a freely swimming zebrafish. They usually have the head and central brain immobilized in a block of agar, and then the body and the tail are free to move around, so that they can see if the fish is trying to swim and in what direction it is trying to swim. But trying to keep track of hundreds of thousands of neurons that are being optically identified in a tiny animal as it is flitting around in a tank of water and moving at different depths is really challenging.

Flies, unfortunately, are not transparent. As I said, there are some technologies where people have made glass windows in the cuticle of a fly, and they have expressed the fluorescent calcium indicator GCaMP so specifically that you don't really need a microscope to tell which neurons are active. You can just use a photomultiplier to count photons coming through the hole in the fly's brain, and you know that those photons—you know which neurons they're coming from, because there's only two or three neurons in the whole brain that are expressing the genetically encoded calcium sensor. But the number of experiments like that that you can do are few and far between, and flies don't like to do things like fight after you've made a hole in their head and glued a plate of glass on top of them. It has been interesting to watch these trajectories develop. I think that in fly neuroscience research, the circuit part will continue to thrive for behaviors that can be studied in head-fixed animals. For some behaviors [in both flies and mice], it will tend to go deep. That is, from the circuit level down to the cell and to the molecule level. Whereas in freely moving mice and in monkeys [and also in head-fixed flies, using whole-brain imaging] it will start to go broad, which is to try to measure activity in as many different regions of the brain as you can simultaneously. For example, people like Richard Andersen and Doris Tsao when she was here, already used fMRI on rhesus monkeys to identify the regions of the cortex that respond to faces. This is in an awake-behaving monkey. You can't really do fMRI on an awake-behaving mouse, because it's too small, it will squirm around and move too much, and it will also be freaked out by all of the noise in the magnet. So, there isn't yet an fMRI equivalent in the mouse. But there are other approaches people use [such as volumetric focused ultrasound imaging].

ZIERLER: In the way, as you said, students are voting with their feet, where the number of fly researchers is dwindling and the number of mouse researchers is growing, are there technological limits to advances? Is there a limit, some Moore's Law of nanotechnology, that makes further advances in fly research basically impossible, because there's no technology on the horizon?

ANDERSON: No, I would never say that. If people are able to generate nanoscale silicon probes such that you could implant a silicon probe array in the brain of a fly and record from hundreds of neurons in a freely moving fly, that would be a game-changer, and I'm sure that there are people that are [working on] that. It's just that the community is not that large. There are maybe 2,000 or 3,000 people studying all aspects of fly neuroscience, including developmental neuroscience, molecular neuroscience, not just behavior and circuits. There are 35,000 people studying mice, rats, and monkeys of various kinds. So there's just a much greater demand for new technology advancement.

ZIERLER: To zoom out on your overall research agenda, a quip you made in our last discussion, where you said the story of your research career, at least up until 2010, was make a significant discovery and then promptly walk away from the field—it begs the question, what was it about aggression, circa 2010, that compelled you to stay on it? Even if you were interested more broadly in emotions, why stick with aggression and not go on to fear, for example? What was so compelling about aggression?

ANDERSON: The main reason was that I wanted to align the work that was going on in flies and mice in my lab. There is no question in anyone's mind that flies fight with each other. When you watch them in a movie, they even look like little boxers. They stand up on their hind legs, tussle with each other. They lunge at each other. Sometimes they throw each other in the air. No one would disagree with you if you said, "These flies are fighting." The same thing is true of fighting mice or fighting rats. Fear, ironically, turns out to be more difficult. The reason comes down in part to language and definition. Notice when I talk about aggression, I'm using a word that describes a behavior. When I talk about fear, I am using a word that describes an emotion. That's really not the way to talk about it. If you're going to compare those two sorts of domains, you should either talk about fear versus anger, or freezing and running versus fighting. There, the problem with studying defensive behavior, particularly predator defensive behavior, in fruit flies, is, first of all, that they fly away. [laughs] So if you were to approach them with a spider and try to study their responses, they're going to fly away. Now, yes, you could put them under a two-photon microscope and head-fix them and record activity in their brains, and people do that for mice [as well]. But the vast majority of work on defensive behavior in mice and rats is performed on freely moving animals, because they behave more naturalistically when they are not locked into a head stage under a microscope. And it is hard to do that in flies.

The other thing, and this is sort of where the rubber hits the road in the things that I've written about in my book and papers, is that in the case of defensive behavior in flies, it's more difficult to make a distinction between whether you're studying a reflex or a state-driven behavior than in the case of aggression. Most people, if you ask them whether the fly is afraid when it jumps off the kitchen counter while you're trying to swat at it with a fly swatter, would say, "It's just a jump reflex." Indeed, we know that there are jump reflexes in the fly brain that control these rapid reflexive escape responses. Now, we think, and we have some evidence, that there are also aspects of defensive behavior in flies that are driven by internal states, in that at least the behaviors show properties like scalability, persistence, valence, generalization, and other features that are characteristic of emotion-state-driven behaviors in mice. We've measured them and published that in a paper in 2015, but it's very difficult to do [those measurements in a head-fixed fly].

So [in the case of studying fear in flies] you have this sort of [disadvantage]: the face validity of fear in flies—that is, what it looks like on its face and by its nature—is less compelling than the face validity of aggression in flies. And, there's the additional question of whether defensive behaviors that you can study [in flies] are just stimulus-response reflexes or state-driven behaviors. In fact, I think before we published our 2015 paper, it wasn't even clear whether flies show freezing behavior in response to a threat, whereas freezing has been the main behavior that has been used for the last 30 or 40 or 50 years to study fear and defensive behaviors in rats and mice. We were able to document [in our 2015 paper] some rare, short-lived cases of freezing in flies in response to a moving overhead shadow, but they were nowhere near as robust as what you see in mice. That again gets back to the issue of face validity. Like I said, mice fight, and it looks superficially like flies are doing the same thing, although they don't bite each other, because they don't have teeth, but they lunge at each other and tussle with each other. So you can make a one-to-one face validity link between aggression in flies and mice. It's much harder to do that for freezing behavior. You can do it for escape [jumping] behavior, but there's paradoxically less work done on escape behavior in rodents than on freezing behavior, because the animal runs away!

ZIERLER: In the way that you are making a distinction between reflexes and emotions, are they both fundamentally related to survival? In other words, where I'm going with this is, for example in higher-order animals where it seems as if elephants can engage in mourning practices, or dolphins can exhibit empathy, which seemingly have no basis in survival, at least in terms of fight or flight, can you connect those higher-order emotions with the kinds of things you're studying, or is that essentially universes away?

ANDERSON: You're raising two issues that are somewhat conflated here. One is the distinction between the evolution of emotion per se as a type of behavioral control mechanism distinct from stimulus-response reflexes, versus the evolution of different kinds of emotion. I think most people in the emotion field would agree that fear is a more evolutionarily ancient and primal emotion than, for example, shame, or guilt [or mourning or empathy], which are really characteristic of more highly developed social vertebrates, in particular mammals. So, yes, there is a universe of difference, if you want to study emotions of different types, types that are found in higher mammals. But that's different from the case in studying [emotions associated with] survival behaviors [such as fighting, freezing and mating].

The second conflated issue is—to sort of paraphrase what you said, if I understood correctly—whether every emotional display [necessarily] has a survival value. Darwin certainly argued that it did, and his entire 1872 monograph is based on explaining why certain emotions are expressed behaviorally in the way that they are, and what the survival or selective advantage is of having that particular expression.

[Ed. Note. Anderson added the following, subsequent to this discussion]: Darwin also noted that certain emotional behaviors can be expressed in contexts where it is not obvious that they have any survival value – such as a cat that kneads a soft blanket with its paws. He attributed that to his Principle of Serviceable Associated Habits. That is, if a behavior that expresses a particular emotion does have a survival value in some contexts, then (because the emotion and its behavioral expression are so tightly tied to each other) according to this principle, any other context that evokes that emotion will trigger that behavior as well – whether it has utility in that context or not. In this example, the survival value of kneading something soft is that a kitten needs to do that to stimulate the flow of milk from its mother's teats so it can consume the milk and survive. But that behavior may express, or be associated with, an emotion (e.g., contentment, social affiliation) that is also evoked by a soft blanket – in which case it will evoke the useless kneading behavior. Of course, you can also explain the blanket-kneading without invoking any emotion at all, by saying that if the cat's feet touch anything soft and warm and pliable it will start kneading it, like a reflex. But Darwin anticipated that criticism and said that he had seen examples of cats kneading thin air with their paws – so the behavior is not dependent on contact with something soft and warm. But that's an anecdote, not data.

It's obvious in the case of defensive behavior—freezing to avoid detection by a predator, flight to avoid entrapment and capture by a predator. It's obvious in the case of threat behaviors and aggression behaviors. It's harder if you talk about grieving and empathy—and parenthetically, I would dispute whether it has been rigorously and objectively shown that elephants are capable of grieving and dolphins of empathy. These are based on observations of animals in the wild, not experimental manipulations of any kind, and they are laden with anthropomorphism and anthropocentrism. That's something that in my research I've tried to avoid by developing these objective criteria for what I call emotion primitives—features of emotional behaviors that distinguish them from reflex behaviors, like scalability, persistence, valence, generalizability, etc., so I don't have to first guess at what kind of emotion the animal is showing and whether it corresponds to an emotion that I have as a human or understand as a human. I believe that any emotional display in any animal ultimately has survival value, if not for the individual then maybe for the group of animals, but asking me to explain that for dolphin empathy and elephant grief would just be asking me to make up just-so stories. I don't know. I could just invent answers if I thought about it for a while.

[Ed. Note. Anderson added the following, subsequent to this discussion]: Many researchers have argued that empathy has survival value in a social species because it promotes altruistic behavior. In the case of grief, it's less obvious what the survival value is. One possibility is that it has none, but that grief is inevitably evoked by the loss of an object of affiliative love (e.g., an offspring). In other words, there is no grief without prior love. Affiliative love clearly has a survival value in promoting social bonding and behaviors that an animal uses to care for its offspring (like nursing in mammals). So if love was selected for because it has survival value, grief will automatically "come along for the ride" in evolution, even if it itself has no direct survival value. It's sort of like a "side-effect" of a survival emotion, if it is inextricably linked to love in some antithetical manner, like a see-saw. If one side goes down, the other side automatically goes up. If that concept is true, it would be interesting to understand how it works in the brain. On the other hand, it could be that once an animal has had the experience of grief – e.g., if a mother's offspring died because she didn't care for it adequately – then in future the anticipation of grief will drive the animal to behave in a way that avoids the re-experience of grief – in other words by taking better care of its next offspring. In that sense, grief could have a survival value, as a negative emotion that the animal will behave in a certain way to avoid (analogous to terror, pain or hunger) – that's called "negative reinforcement" by psychologists. But you still need to argue that the first experience of grief is automatically evoked by the loss of an object of love, because of their see-saw relationship in the brain. But that's a just-so story.

ZIERLER: But would it follow, then, from your suspicion about whether elephants truly exhibit mourning or dolphins exhibit empathy, that humans are capable of these things, and that they are essential for our survival? What is the continuum here? What does that look like for you?

ANDERSON: I don't need elephants and dolphins to tell me that humans have empathy and grief. I know that from my own subjective experience and from asking other humans about their own subjective experiences. You're asking the question of how continuous is this, and is there a discontinuity in type of emotional expression, and if so, where do you draw the line. That is a very difficult question to answer. It's not as easy as something like language or music or mathematical ability, which we can clearly say no other animal that we know of on the planet has these abilities. I mean, there are some arguments about whether certain animals can count, and whether they can learn a quote-unquote "language" if they're trained to do it in the laboratory. But the bottom line is there are certain brain regions that have been identified through fMRI scanning in humans that are critical for music and critical for language, by people like Nancy Kanwisher at MIT, and you don't find those regions in non-human primates, at least not in rhesus macaques. Now, they may exist in a chimpanzee or a gorilla, but you're not allowed to put a chimpanzee or a gorilla in a brain scanner for ethical reasons, so we'll never know the answer to that question.

But it is a hard question in the case of emotion, and particularly for social emotions like shame, guilt, embarrassment, and that kind of thing. It's very hard to think of ways of identifying behaviors that display those emotions in a non-human primate, even, without resorting to grossly anthropomorphizing the animals. There's an interesting article in National Geographic about animal emotions in the last issue. I'm briefly mentioned in it as the sort of example of the bad guy who is too hard-nosed about anthropomorphizing, and why don't we just forget about that and enjoy all the things we can deduce about animals without worrying about anthropomorphizing. But it does admit that anthropomorphizing can be ambiguous, and you can be fooled, and there's a great photograph showing what looks like a jungle explorer up to his waist in a water hole, close to the bank but not close enough to easily climb out, and there's an orangutan on the bank hanging to a tree with his hand outstretched to the person. Okay? Your natural interpretation, your anthropomorphic interpretation, from looking at that image, is that he's offering help to the human who is stuck in the mud hole. But it could just be that he's asking for food from the human! Because he has been around enough humans to know that they carry treats and rewards with them, and what he is really thinking is, "Yeah, you're going to be dead in a few hours or so, because you're not going to get out of that hole, so while you're at it, why don't you give me all your bubble gum and candy, and I'll see you in another life?"

ZIERLER: [laughs]

ANDERSON: You have to be so careful about what we attribute to animals based simply on observing their behavior and pushing ourselves onto their existence and saying, "Well, here's what we would be doing if we looked like that and sounded like that and acted like that, therefore an animal that looks, sounds, and acts that way must be doing the same thing as we would, and therefore we can infer their internal state." Even with a great ape, in a natural situation, it's ambiguous.

ZIERLER: Perhaps a less fraught question—you explained wonderfully the complexity of these maps, and the years that it will take to get there. I think immediately of all of the phenomenal advances in computational power even over the last 20 years. Are the things you're talking about so complex that even supercomputers are not up to the task?

ANDERSON: Very good question. The field of connectomics involves taking a brain, slicing it up in one way or another into very, very thin sections, imaging each section in an electron microscope at tens of nanometer resolution, and then stitching those sections back together again to reconstruct all of the nerve fibers and axons and cell bodies. Because you can't visualize those in just a slice of brain; you have to fix the brain and stain it with a contrast agent that allows you to delineate the perimeter of individual cells and their axons and fibers and collect that data. There have been two technological roadblocks in connectomics. The first has been simply the process of sectioning brains at scale and scanning and collecting the EM data in a way that doesn't damage the specimens or lose sections. You collect thousands of sections from a brain, you drop a couple of those on the benchtop, and you're done; you've got to throw that brain out because you have a gap in your reconstruction. But those problems, thankfully, have largely been solved, by very clever experimental physicists and engineers.

The second problem has been the segmentation and reconstruction problem. By segmentation, what I mean is this: imagine a group of neurons whose cell bodies are next to each other and that are sending out axons, and now imagine slicing that block of tissue into sections perpendicular to the long axis of the nerve fibers. In cross-section, each nerve fiber is going to look like a donut. Now, if all neurons had rigid axons that just pointed straight ahead with no deviation, the problem of assigning the donut in each section to the originating cell body of the neuron whose axon has been sliced through in that section would be pretty easy. But axons are not straight, rigid tubes. They weave back and forth through the brain. They branch highly. And so the process of segmenting, literally drawing outlines around individual cells and separating the perimeter of one cell from the perimeters of all of the other cells that are packed up against it in the tissue, and then doing that for every section, and then stitching those sections together, has really been the rate-limiting step for a long time. Many people in the early stages of connectomics, I would say even up to 2015 or 2016, were doing this by crowdsourcing or by having armies of graduate students sit at computer screens and monitors, drawing things by hand. Obviously you could only reconstruct very tiny volumes of tissue in that way.

The transformation has been computing. It has been that people have figured out how to use machine learning and machine vision to train—a lot of this work has gone on at Google—to basically train a machine learning algorithm to do the segmentation and the reconstruction, stitching-together process. It's not perfect. It still has to be proofread, spot-proofread in different places by humans, but it has turned the process of reconstruction of EM-sectioned brains from something that was basically impossible to do on all but the teeniest little bit of tissue into something that now can potentially be scaled up, certainly to an entire fruit fly brain with its 100,000 neurons in it. That has been done, and is being done. It will be done in the larval zebrafish, and as I said, probably in the next maybe five years it will get done in mice, at least in part of the mouse brain, probably the cortex, which is in my opinion the least interesting part of the mouse brain, but it's the one that people are able to raise the most money for, because we do all of our thinking and talking and math and everything with our cortex, even though the mouse doesn't. But that's where it will probably get done, first.
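[Ed. Note: The following is a toy sketch added in editing, not from the interview and not the production machine-learning pipelines Anderson refers to. It only illustrates the two ideas described above: labeling ("segmenting") the cell profiles in each EM section, and linking profiles across adjacent sections by overlap. The threshold and image conventions are assumptions made for the example.]

```python
# Toy illustration of segmentation and section-to-section stitching.
import numpy as np
from scipy import ndimage

def segment_section(section, membrane_threshold=0.5):
    """Label connected cell-interior regions in one 2-D EM section.

    Assumes the section is a float array where stained membranes are
    near 1 and cell interiors near 0; pixels below the threshold are
    treated as cell interior.
    """
    interior = section < membrane_threshold
    labels, n_regions = ndimage.label(interior)
    return labels

def link_sections(labels_a, labels_b, min_overlap=20):
    """Match each labeled profile in section A to the profile in the
    adjacent section B that it overlaps the most, mimicking the
    stitching step described above."""
    links = {}
    for region_id in np.unique(labels_a):
        if region_id == 0:  # 0 is background
            continue
        mask = labels_a == region_id
        overlapping = labels_b[mask]
        overlapping = overlapping[overlapping != 0]
        if overlapping.size >= min_overlap:
            # Link to the most frequently overlapping label in section B.
            links[region_id] = int(np.bincount(overlapping).argmax())
    return links
```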

So, yes, computational tools have been critical in that respect. Computational tools that have been critical in turning the measurement of behavior from an incredibly subjective qualitative and highly investigator-idiosyncratic process into a rigorous quantitative consistent methodology have also been based on machine vision and machine learning. That's something that Pietro Perona, who is a professor of electrical engineering here and who has been my on-and-off collaborator on this since 2008, has spent a lot of time doing in flies and in mice. We wrote what I think was a pretty influential review on this called "Computational Ethology" in Neuron in 2014. Michael Dickinson has done a lot of this as well, also in collaboration with Pietro, and that has now exploded into a huge field where people are using supervised classifiers, and unsupervised classifiers, to dissect the behavior of animals at various levels of complexity, from the kinematic level to a whole-animal level. That's another area where computation has been critical.
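[Ed. Note: The following is a minimal sketch added in editing to illustrate the supervised-classifier approach to behavior scoring described above. It is a generic scikit-learn example on synthetic data, not the specific classifiers used in the Anderson and Perona work; it assumes per-frame pose features have already been extracted by a tracker and that some frames carry hand annotations.]

```python
# Train a frame-by-frame behavior classifier from pose-derived features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_frames = 5000
X = rng.normal(size=(n_frames, 12))      # 12 pose features per video frame (synthetic)
y = rng.integers(0, 2, size=n_frames)    # hand annotations: 1 = "attack", 0 = other (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Apply the trained classifier to held-out frames and report performance.
print(classification_report(y_test, clf.predict(X_test)))
```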

A third area is in the analysis of very large, high-dimensional sets of neural activity data. When you are recording from hundreds or now even thousands of neurons in a single imaging plane at 30 frames per second over a period of 20 or 30 minutes, as we routinely do, you generate vast amounts of data, upwards of many terabytes from a single experiment. Machine learning and machine vision, as well as other statistical tools and mathematical approaches, have been critical for people to try to make sense of that. We can't think in four or five dimensions, let alone in a hundred or a thousand dimensions, so there are various dimensionality reduction techniques that have been developed for trying to extract signals from neural activity data. The same kinds of approaches have been used at a finer scale of investigation, at the level of gene expression in individual cells, where you're measuring the activity of 10,000 genes in each of 30,000 cells simultaneously and trying to use that information to classify the cells into similar groups, which you call cell types. Again, there was both a technological advance that spurred that and now advances in computational methods for analyzing those data. So, at many scales of biological organization, from molecular and cellular biology to neural circuit activity to behavior to the reconstruction of neural circuits, neuroscience has depended critically on massively parallel and high-speed computation. I think we probably still don't have as much computational power as people would like to have, particularly for the connectomics problems.
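[Ed. Note: A minimal sketch, added in editing, of the dimensionality-reduction step described above: projecting a neurons-by-time activity matrix onto a few principal components so a population trajectory can be examined. The data here are synthetic placeholders.]

```python
# Reduce a neurons-by-time activity matrix to a low-dimensional trajectory with PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_neurons, n_timepoints = 500, 30 * 60 * 20            # e.g., 20 minutes imaged at 30 frames/s
activity = rng.normal(size=(n_neurons, n_timepoints))  # stand-in for dF/F calcium traces

# PCA expects samples (timepoints) in rows and features (neurons) in columns.
pca = PCA(n_components=3)
trajectory = pca.fit_transform(activity.T)              # shape: (n_timepoints, 3)

print("Variance explained by the first three components:", pca.explained_variance_ratio_)
```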

ZIERLER: But you're hopeful that computation is up to the task in the coming years?

ANDERSON: Yeah. I think so, particularly if Moore's Law continues to accelerate. We're involved in our first connectomics program and collaboration with a lab in Germany, in a Max Planck Institute. There, believe it or not, the most difficult problem to solve is how to send the specimens, the brains, back and forth from the U.S. to Germany, and to get them through customs without having them be held up so long that they deteriorate and are no longer good enough to be used for EM connectomics. I think once we solve the customs and transport problem, the connectomics—and we're just talking about the ventromedial hypothalamic nucleus that we have been studying for the last 10 or 11 years—our collaborator thinks we should get it segmented and imaged in about four to six months.

ZIERLER: In our very first conversation, we talked in general terms about the Chen Institute, but now that we're in the narrative of the 2010s, do you have a clear memory of when you first heard about Tianqiao and Chrissy and their ideas of partnering with Caltech?

ANDERSON: I think I actually first heard about them from friends at UCLA, where they had been visiting and nosing around and asking about supporting neuroscience at UCLA. At that time I had no idea who they were. The next time I heard about them was after they and Richard Andersen had agreed to engage in funding his research. Our then-division chair, Steve Mayo, then expanded on that by pitching an idea that developed into much broader support for neuroscience research at Caltech than just Richard Andersen's research. Although funding for Richard Andersen's research is still a major part of the Chen gift. It's a little less than 30% of the endowment that is left for Caltech, aside from the money that went into the Chen building. So there's strong support for Richard Andersen. That's when I first heard about them. I only met them when I had to fly to Singapore with President Rosenbaum and Ed Stolper, who was provost at the time, and Steve Mayo, for the actual signing of the agreement, because at that time, the Chens were living in Singapore and they didn't know me from a hole in the wall, and I didn't know who they were either. Fortunately, that has changed.

ZIERLER: Were you presented as the person who was going to be leading the Institute, or was that part of the discussions that the Chens were in?

ANDERSON: By that time, Steve had asked me if I would be the director of the Institute, and with certain provisions and negotiations, I said that I would do it. The reason I was brought to Singapore was because I had at that point agreed to be the director, or at least the inaugural director, of the Chen Institute.

ZIERLER: As you well know, the Chens are very interested in translational medicine and applications and things like that. Given that you're so fundamental in your interests—and even if you weren't, the timelines that you're talking about, these are still decades in the making—was there any challenge in squaring that circle and getting them to understand the connections between the fundamental research and whatever managing of expectations in terms of timelines, or they got it from the beginning?

ANDERSON: I think that their main expectation of a relatively short-term translational payoff is from Richard Andersen's work in the area of brain-machine interfaces. That's really what Tianqiao is interested in. I've talked to him about this. He is not interested in drugs, or drug development. He says that they are too risky, it takes too long to get their approval by the FDA, which is true. The path for approval of medical devices, implantable devices like a pacemaker or an insulin pump—[is] much, much shorter through the FDA than the path for drugs. That said, I think the Chens have realized and accepted the fact that Caltech neuroscience is not going to be a cash cow of neuroscience intellectual property. Although there is an intellectual property agreement related to the Chen Institute and the disposition of IP that comes out of it. It's so complicated that you would have to ask somebody in the Office of Technology Transfer to explain it to you. They originally wanted basically all IP that was developed with Chen funding, in exchange for the funding—basically a sponsored research agreement—and Steve Mayo and the provost said, "No."

Part of me thinks that perhaps—this may sound a little cynical—but perhaps they've sort of written off Caltech as a mistake in the sense that it doesn't really fit what I think their original model was of providing new technology and IP to fuel Tianqiao's entrepreneurial instincts and interests in the area of remediating neurological and psychiatric disease. They are continuing to do that in a big way, especially in China. So I think we here at Caltech are extremely lucky that Steve and I guess also the director of what's now called AAR—I forget his name; he is now at Harvard [Brian Lee]—succeeded in getting the Chens to commit to a gift of the size that they ultimately made, which was $115 million. It was one of the largest gifts I think that Caltech has received from someone who was neither an alumnus nor a trustee. Most of our big givers are very close within the Caltech family.

I know that the Chens' original intent was to make gifts of that size approximately every year or every other year, to other academic institutions, and since 2016, when the Caltech Chen Institute was inaugurated, they have made exactly zero such gifts, I think in large part due to political issues. Tianqiao was strongly criticized publicly on social media in China for having devoted such a large amount of money to supporting science in the United States rather than in China, which is where he made his money. He was also concerned about the growing scrutiny by the U.S. government of U.S. scientists with ties to China, and people from China who were funding U.S. science. So they just basically decided to step back from philanthropy for neuroscience at the level that they funded it at Caltech. So we really, here at Caltech, just squeaked under the wire. I had nothing to do with that part of it, at all. That all I think can be credited to—I guess Brian Lee was the director of Development at the time, and Steve Mayo, and I guess to some extent also Tom Rosenbaum and Ed Stolper.

ZIERLER: There are two things going on here with regard to the Chens and their expectations about translational breakthroughs. One is just cultural, perhaps that Caltech is just too fundamental a kind of institution to operate on anything close to what the Chens might have hoped for. The other, of course, is that the timeline is just reflective of the complexity of the science, and it doesn't matter if it's Stanford or the University of Shanghai; this is just really difficult stuff to bring to market. I wonder if you could comment in both regards.

ANDERSON: I think you're right about both points. I mean, Caltech doesn't have a medical school. It is very difficult to have a vibrant program of translational research without a medical school. MIT doesn't have one, but they are very close by physically to Harvard, and there are a lot of joint programs. Now, we do have the Merkin Institute for Translational Research at Caltech, but that's a relatively small-scale operation and certainly doesn't compensate for the lack of a medical school. But I think most people associated with Caltech, particularly those in the administration and those like me who are focused on fundamental research, are glad we don't have a medical school, because of all of the additional headaches and costs and regulatory compliance issues that come with it. But the flip side is that we can't raise as much money, particularly in the biosciences, because most people who give money in the biosciences are giving it because they have a sick relative, and they want a disease cured.

As far as it being a long road and that it's going to take as long in Shanghai as it would in Stanford or Pasadena, I think that's probably true, although I think in the area of brain-machine interfaces and neuroprosthetics, it will probably move faster, particularly with the involvement of machine learning. Because if machine learning has taught us nothing else, it has taught us that you don't have to understand a complex system in order to be able to make accurate predictions about how it is going to behave under certain circumstances or in response to certain stimuli. That is a really tough thing to swallow, especially for those of us in the biosciences, who were raised with the idea that if you want to cure diseases, you have to understand them and understand the normal biology. We couldn't cure diabetes until we understood that the pancreas made something called insulin and we understood what insulin was and what it did, and that diabetes was a deficiency of insulin. We still haven't really cured the cause of diabetes, because it's complicated. It's autoimmune, or it has to do with what you eat, your metabolism, but we can at least treat it.

But machine learning now—as I say, you can make predictions, without understanding how something works, if you have enough measurements of various things that the computer can use to construct a feature space in which it can classify whatever you want it to classify. For example, one of the reasons that drugs such as cancer drugs and also psychiatric drugs are notorious for failing in clinical trials is patient heterogeneity, that humans are widely different from each other. When these drugs are tested in animals, they are tested in genetically identical, inbred mice. After you run a clinical trial, in retrospect you invariably find subpopulations that showed very robust responses, and others that didn't respond at all, but when you average them all together, there's no statistically significant improvement over the placebo control group, and the FDA says, "Thumbs down." It's not adequate to go back and say, "Well, look, this group here responded, so there must be something good about this." What machine learning has the potential to do is, by being fed lots of high-dimensional datasets collected from patient populations that have been run in preliminary clinical trials—genomics data, metabolomics data, physiological data, lipidomics data, et cetera—and then being given training data on which patients appeared to show a response to a drug and which patients didn't, it may be able to train itself to prospectively predict, from a larger population of prospective patients for a clinical trial, the patients that are likely to respond to the drug, based on the profile of all of these high-dimensional metadata that you can collect. That's without knowing anything about how the drug works, why the drug works, et cetera. So it's both miraculous and very exciting, but at some level a little discouraging for us basic scientists.
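[Ed. Note: The following is a minimal sketch, added in editing and using entirely synthetic data, of the idea described above: training a model on high-dimensional patient profiles and earlier-trial outcomes so that likely responders can be flagged prospectively. The feature counts and names are placeholders, not from any real trial.]

```python
# Predict likely drug responders from high-dimensional patient profiles.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_patients, n_features = 300, 1000               # e.g., multi-omic measurements per patient (synthetic)
X = rng.normal(size=(n_patients, n_features))
responded = rng.integers(0, 2, size=n_patients)  # 1 = responded in the earlier trial (synthetic)

model = make_pipeline(StandardScaler(), LogisticRegression(penalty="l2", max_iter=1000))

# Cross-validated estimate of how well responders can be predicted from the profiles.
scores = cross_val_score(model, X, responded, cv=5, scoring="roc_auc")
print("Mean cross-validated AUC:", scores.mean())

# Prospective use: fit on the trial data, then score new candidate patients.
model.fit(X, responded)
new_patients = rng.normal(size=(10, n_features))
print("Predicted probability of response:", model.predict_proba(new_patients)[:, 1])
```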

I have always felt very defensive about the idea that the only justification for biology is that it is a technology in the service of medicine. I mean, yes, we would all love our discoveries to be able to cure some disease. My father died a horrible, slow death from Parkinson's disease about a year ago, and believe me, if I could think of a good experiment to do to cure Parkinson's disease, I would be doing it now. I would have been doing it eight months ago. But it's too complicated, and I can't, and anything I can think of has been thought of by 50 other people. Yet, no one has that expectation of astronomy or astrophysics. Nobody is asking LIGO to cure a human disease. No one is asking what LIGO or what the James Webb Space Telescope is going to do to improve the human condition. It's sort of like art; you look at it and you're in awe of the beauty and the complexity of what you're seeing. I think that biological systems and in particular neural systems are every bit as beautiful and complicated as something out in the cosmos, and basic discoveries about how they work and what they mean should be given as much value as what they can do for the betterment of the human condition, at least in the short term.

ZIERLER: Well said!

ANDERSON: But most of the world doesn't agree with me!

ZIERLER: Beyond the obvious burdens on your time, in directing the Chen Institute, has it changed your research at all? Has it given you a wider vantage point that has prompted new approaches or at least new questions to things that you were already working on?

ANDERSON: I wouldn't say that it has done that, at this point. My research has broadened to incorporate more computational approaches. We have a paper coming out in Cell early in January describing the first observation of a neural circuit that shows line attractor dynamics, which is a property of dynamical systems, in the hypothalamus region of the brain, where nobody expected to see such kinds of fancy dynamic control of neural coding. But that happened because I was fortunate enough to recruit a theory postdoc to my lab, a brilliant CNS student who is a computational person, not as a result of having been the director of the Chen Institute. I think what I've gotten from the Chen Institute is really more just shepping naches [Yiddish: deriving pride and joy]. It's feeling good about giving money out to people. We just announced today the latest round of Chen grants, and we gave out over half a million dollars to 10 or 11 laboratories on campus. Most of the grants are about $50,000 each. Since the inception of the Chen grant program, we've given out $2.5 million just in these grants. That doesn't count the money that the Chen Institute has spent on supporting graduate fellowships, retreats, symposia, workshops, socials, and other things. So to be able to be part of something that I think and hope is having such a positive impact on neuroscience research and the neuroscience community at Caltech is enough of a reward for me.

If something that comes out of that over the longer term changes my research approach, maybe—it certainly has been great that Pietro Perona is now two offices away from me in the same building on the same floor and not across campus in the Moore Building while I was over in Church and Alles, because I can walk over and hakn a tshaynik [lit., bang on a teakettle; figuratively, "bother" to "talk shop"] whenever I want to, whenever he is on campus, and talk to him. And the people in our labs which are next door to each other interact with each other spontaneously, and that is something that would not have happened without the Chen building. Ralph Adolphs is next to me, as well, and that has also been good. So I think the building has certainly had a positive influence in that respect.

But I think it's Caltech and the experience of being at Caltech that has driven me in this more computational direction, even though I don't pretend to be a computational person. If I had taken a job at UC San Francisco in the medical school, I would never have been working in an area like this, and it's because of exposure to people like Pietro and others in computational neuroscience, Doris Tsao when she was here, that I have been pushed in that direction. I would say that's my most recent reinvention, if you want to call it a reinvention, although it is not quite fair since I am really totally dependent on other people to do the heavy lifting there. I have a good intuition and heuristic understanding of what we are doing, but I couldn't sit down at the computer and go through the code and calculate all the results and explain to you every single equation. We rely on our collaborators for that. But that's from being at Caltech, where in the end, I think most people just care about numbers.

ZIERLER: In light of all of these advances, both tangible and intangible, can you conceive of a follow-on conversation with the Chens and making the case, "Look at all we've done. Would you consider coming back for a second round of support?"

ANDERSON: At this point, no. I think at this point, certainly not for a major gift. I think they've made the most generous gift of any gift that they've made to neuroscience philanthropy in the United States, so far. They are happy with us. They indicated that by a gift of $1.5 million last year, on the occasion of the 5th year anniversary of the Chen Institute, and that was specifically to start a new program, a sort of workshop or boot camp, in computational methods in neuroscience of the type we have been discussing, for people who come from a non-computational background in cellular and molecular biology, anatomy, and other fields that are critical to neuroscience. We managed to pull off the first case of that, or the first course, this past summer, at the Chens' property in San Marino, which they very generously let us use for that purpose. So I think small gifts, maybe, from time to time, but I think something major, they will either do at another university or somewhere in China, is my bet.

ZIERLER: Tell me about the book, The Neuroscience of Emotion, and how you got to work with Ralph Adolphs.

ANDERSON: How I got to work with Ralph Adolphs is that Ralph and I had known each other for a long time. Ralph was a student here when I first joined the faculty. Ralph and I taught at one point a 100-level course for undergraduates and graduates on the neuroscience of emotion. We had discussed this idea of emotion primitives, and I had coauthored with Ralph a perspective piece that was published in Cell, I think in 2014. 2014 was really my annus mirabilis in terms of important publications. That set the stage for the content of The Neuroscience of Emotion, except that it focused more on what animal models can tell us about emotion than it did on humans. Then Ralph approached me and asked me if I would coauthor this book with him on the neuroscience of emotion. I really didn't want to do it, but I sort of felt a moral obligation to do it, because Ralph had been really helpful with the Cell perspective piece that we wrote, and I know that in Ralph's field, people publish books as one of their main types of academic output, which is not something that is characteristic in cell and molecular and developmental biology. People don't build their reputations on publishing books; they build them on publishing papers in Nature, Science, and Cell, and other high-profile journals. It was a daunting task, and I certainly regretted many times, while I was in the middle of it, having agreed to do it, but I was pretty happy with the outcome, and that was largely due to Ralph. Ralph drove the entire process. He negotiated with the publisher. He was the one that ran interference with the editor. All I had to do was write chapters and read his chapters and do edits, and it was pretty straightforward. It was nowhere near as burdensome and ultimately as unsatisfying as the second book that I wrote, which I wrote solo. But, you live and learn.

ZIERLER: You enjoyed, though, the science communication aspect, writing for a broader audience?

ANDERSON: I did enjoy it. I don't think I reached the audience that I wanted to reach, even though I thought I was doing my best to make something accessible. But I really misjudged what people are interested in when it comes to neuroscience, which is that they want to know what the prospects are for curing Alzheimer's and depression. I talk some about curing depression in the last chapter of the book, but mostly it's about basic science. The other thing is that people want self-help advice. Just to make clear how painfully obvious the distinction is, there's a colleague of mine at Stanford, a neuroscientist named Andrew Huberman, who started doing a podcast in 2021 during the pandemic, on which he interviewed neuroscientists in extended interviews, an hour and a half each, and put them up on the web. He interviewed me about six months ago. He has almost 1.9 million followers on the web, on YouTube or on Twitter. If you scroll through what the subjects of the podcasts are, yes, there are some basic science ones like mine, but there are a lot that are about self-help, and the podcast is increasingly oriented toward self-help. And he sells supplements on his podcasts. So, 1.9 million followers, okay? If you go on Amazon and you look at how many reviews I have of my book—23. Okay?

ZIERLER: [laughs]

ANDERSON: I rest my case.

ZIERLER: [laughs] Don't sell out, David. You have to continue being you.

ANDERSON: Well, I didn't write the book to make money, for sure, but I did hope I would reach a broader audience. I mean, even Vroman's doesn't carry it. I think they had two copies and they sold two copies and that's it. But that's pretty depressing, when your local bookstore doesn't even carry copies of your book. But, it is what it is. It was an important exercise for me. It was humbling and informative, as all good learning experiences should be. And I don't know whether I'll ever write another book again, but I certainly won't underestimate the difficulty of writing a book by yourself, or of what it takes to really reach a broad audience.

ZIERLER: To get back to your comfort zone, a year later, you were a coauthor on the paper "Computational Neuroethology: A Call to Action." It's relatively recent, but are you seeing reverberations? The call to action that you called for, are things moving in the right direction for neuroethology?

ANDERSON: The precursor to that article was the article on computational ethology that I wrote with Pietro in Neuron in 2014. That's why Pietro and I were invited to be coauthors; we were the graybeards on that paper. That field has exploded. I wish I could say that Caltech has maintained the leadership in that field that it started on initially, but as often happens, it's the students that we trained here who have gone off and made major contributions as postdocs or graduate students and now in their own labs. That has become I think a thriving and vibrant field. We're not talking the scale of cancer research or single-cell RNA-Seq—it's still a sort of niche area of neuroscience—but I do think that the early efforts, particularly the ones in Michael Dickinson's lab, the work that his trainees have done, the work that Pietro's trainees have done in other institutions, and some of our joint trainees, has really paid off.

ZIERLER: Moving closer to the present, when COVID hit, people had to work remotely. Both in leading Chen and for your own lab, what were some of the major challenges, just to keep things going where there is a physical presence that is required?

ANDERSON: The first and biggest challenge, particularly during the shutdown, was to try to stop my students and postdocs from going into a deep depression. Because many of them live alone, and they don't have families, and they were locked in their apartments for three and a half months. They couldn't come in to the lab. So I instituted coffee meetings, initially three times a week on Monday, Wednesday, and Friday mornings, to just talk about the pandemic, to talk about science. We continued to have lab meetings as people sort of got into the groove of analyzing data that they had already collected before the pandemic and presenting that to talk about. I think mentoring rotation students and new postdocs coming to the lab was a real difficulty when they showed up at the time that the lockdown occurred and through the ensuing year. I lost one very good postdoc and one graduate student, who eventually left Caltech to go to Columbia and Harvard, respectively, just because they could not build up enough momentum to get their projects going. The first six months of a postdoc or of graduate school are really critical in getting people on their flight path and getting them launched into a program. That was really quite difficult.

It was also challenging in the Chen Institute to maintain functions of various kinds. Mary Sikora, my executive director, and Helen O'Connor, her assistant, did a fantastic job. We had a virtual retreat one year, where they used software that gives you a little avatar that you can move with your mouse to visit various posters, and then you're in a little chat room where you can hear one student's poster, and then you can go visit another one, and then you can meet with somebody else in another chat room to talk. It's not the same as being in person, but it certainly helped with that aspect. I think Caltech, like every institution, lost a lot of its sense of community during the COVID pandemic, and we're still not fully recovered. Already, COVID cases are starting to spike again. That's really too bad, and I have to say it hasn't really recovered in my lab yet. That in part I think reflects the habit that people got into of having to work in shifts. When we did go back to the lab, we could only have so many people in the lab at a particular time, to maintain a certain low density and minimize contact. So, people got in the habit of coming in, doing their work for four hours, and then going home. If they had data to analyze, they would do that at home. That's not what makes science fun. What makes science fun and what makes running a lab fun is having a place where people hang out, where they don't work 9-to-5 schedules and just go home, so that the only time they're around they're focused on their experiments and you can't talk to them. It's a place where you can have informal discussions. That's where the creative juices really start to flow. That certainly has not returned to its pre-pandemic level in my lab. I don't know if the same is true for other labs. But I think that has been a casualty of the pandemic as it affects science.

The other major challenge was trying to recruit people to my lab. Basically for the first year or so of the pandemic, or year and a half, I had no postdoc applicants. I used to get multiple postdoc applicants every week, particularly when I was working in stem cells. It was a hot area. I thought, "Okay, maybe this is it. Maybe I'm going to have to close my lab down. No one thinks what I'm doing is interesting anymore." Until I learned from other colleagues that nationally, there is a postdoc crisis, which has been engendered in large part by the pandemic. People have reassessed their priorities. Just as we had the Great Resignation among people working outside of science, a lot of people who were getting PhDs have decided it wasn't worth it to spend another six or seven years working for a relatively small salary in the hope of applying for an academic job that had 350 other people applying for it. So, the postdoc market has dried up. Fortunately, beginning last March, for me it picked up again, and I have five new postdocs coming to my lab, which is good, although my lab is still much smaller than it has been in the past. Which I actually don't mind, because in an effort to increase postdoctoral recruitment and salaries in general for young people, the state of California mandated an increase in the minimum starting postdoc salary to about $65,000 a year, at a time when even at NIH—and I think this is still true—the minimum starting salary is $54,000. When I was a postdoc, I got paid $17,000 a year, which is equivalent to about $35,000 a year in 2022 dollars. What that means is, my grants have not gone up—no one's grants have gone up—by 50%, so we simply can't afford as many postdocs as we once could. Which I think is good. There are fewer projects to think about, and you can think more deeply about each project. Maybe that's a better way to go. Although I will have, by this summer, ten postdocs and five students or so in my lab, assuming I don't take any more students, which is plenty.

ZIERLER: The cultural impact of COVID, of just not being in the lab, in person, where the magic happens: are you concerned that this is a long-term effect? That whenever COVID leaves, the culture of labs will retain this remote feel?

ANDERSON: I think that will depend on the culture of each individual lab, and also the culture of the labs that are surrounding them. One of the things that has always been a challenge for me at Caltech compared to Columbia, for example, is that Caltech has what I call a suburban culture, and Columbia medical school has an urban culture, in the labs. Meaning the labs are large, people are spread out at Caltech, so the interaction frequency, the spontaneous frequency of bumping into somebody, is lower at Caltech, than it is at a place like Columbia where people used to be at least jammed on top of each other at a much higher density. That's true for individual labs. It's also true for labs that are in the same building. They tend to stay separated from each other. So even before the pandemic, I think Caltech or certain parts of Caltech, at least the parts where my lab was situated, didn't have the same level of energy and human interaction that a more densely crowded environment would have. I think when it's in that state where it's tenuous to begin with, it makes it even harder to come back from something like COVID. Maybe at places like UCSF where it's different and more crowded, an urban lab, maybe there it will come back.

But I have to say, in talking to my colleagues, it's an almost universal perception that the postdocs and graduate students have really changed in the way that they view what it is now fashionable to call work-life balance. When I was in graduate school, I had no work-life balance; I had work. Maybe I went out on a date once in a while. The same thing was true when I was a postdoc, until I got married. Part of what made the atmosphere in the lab was that people were there all the time. People were there 12, 14 hours a day. Now, there are plenty of people in my lab who are here for eight or nine hours a day. Some work harder, but it's not the critical mass that you need to produce that sort of hopping atmosphere. I think students and postdocs are having children, families, earlier, much earlier than in my generation. Once you have kids, then you get involved in picking them up from daycare and school, and that also cuts down on the amount of time spent in the laboratory. So, I think things have already been trending in that direction, the lab being a place that you work rather than a place where you live and happen to work as well. I don't see that recovering, getting any better, after COVID, for what it's worth. I miss that, to some extent, but I know a lot of people don't share my old-fashioned perspective on this.

Maybe it is healthier for students and postdocs to have a better work-life balance, but science remains very competitive, and the people that have done the best in my lab are the people who have worked the longest hours in the lab, no question about it. It's an experimental science, it's not theoretical physics, and you just have to be there doing the experiments, in order to make sure that eventually enough experiments work that your project succeeds. That means being there a long time, and that means having time to bullshit with people while your experiments are running or you're waiting for samples to come out of a measurement device. That's what created this generative atmosphere that I miss, and which may exist in other places. Maybe it does exist in other labs on the Caltech campus; I don't know what your experience is from talking to people here. But I think it's not just me that is noticing this.

ZIERLER: It's about the grind; it's not the genius. That's what really gets the work done.

ANDERSON: Yes. As they say, it's 1% inspiration and 99% perspiration.

ZIERLER: To bring the story right up to the present on a scientific note, tell me about some of your recent work on neuropeptides, and what might be patentable about this, what the IP angle is there.

ANDERSON: Basically, I mentioned that we discovered that this family of neuropeptides, the tachykinins, is elevated during social isolation in mice, and that remarkably, all of the long-lasting adverse effects of social isolation in mice, which include increased aggression, increased fear, and increased anxiety, can be eliminated by treatment with a drug that blocks the action of one of these tachykinins. It's called tachykinin 2 or neurokinin B, in mice. It has a slightly different name in humans. Moreover, we can actually mimic the effect of social isolation in a non-isolated mouse by forcing its tachykinin neurons to fire more and to release more tachykinin peptide when they do fire. We do that by genetically engineering them. That says that the increase in tachykinin release is really causal for the behavioral sequelae of social isolation. We filed two patents on that. One patent, which I think has been issued, mainly proposes a method for modifying neuropeptide release in a subject by combining these two manipulations that we performed in mice, both an increase in the electrical activity of the neuron and an increase in the amount of the neuropeptide. It's like a water balloon: you have to fill the balloon with more water, but that won't help if you don't open up the nozzle to let the extra water out. You have to open the nozzle more, and that's what the increased activity does. Now, you might not want to do that for a neuropeptide like tachykinin that promotes a negatively valenced internal state, but you might want to do that for neuropeptides like endorphins and enkephalins that produce positively valenced states and that have an analgesic action. Again, this is very long-term, and it will require combining this with the type of technology that Viviana Gradinaru and Paul Patterson developed at Caltech for getting viruses that carry these genetic payloads, the ones that let you fill up the water balloon and open the nozzle wider, across the blood-brain barrier, so that you don't have to drill holes in the skull and stick needles into the brain, but can just give somebody an injection. So that's one patent.

The second patent, which I think is close to being issued, but I'm not sure, because we've had a lot more trouble with it, is a use patent for tachykinin receptor inhibitors to treat the stress, anxiety, fear, and aggressiveness brought on by social isolation. The importance of social interaction and the detrimental effects of social isolation were of course very apparent during the COVID pandemic. Tachykinin antagonists are being tested for other uses; for example, they are being tested for their ability to mitigate hot flashes in perimenopausal women, which is a totally unrelated field of use. So this would be a field-of-use patent. But I and some of my young colleagues who were involved in that story have been actively thinking about ways in which we could persuade investors to fund some sort of a startup that would allow us to develop these applications. One idea that we've been pursuing, which I came up with, is, rather than trying to persuade somebody to go right into humans, what about developing veterinary medicine applications of this? Because after all, it does work in mice, and we know that during the COVID pandemic, again, there was a huge increase in the acquisition of domestic dogs and cats. I think something like 23 million dogs and cats, or maybe it's just dogs, were purchased over the pandemic. Now, as people are going back to work and leaving their pets at home all day, there is a huge epidemic of separation anxiety among cats and dogs, because they've grown attached to their owners, who have been home all day and all night, every day during the pandemic, and now they are suddenly left alone. The idea is that maybe the bar for testing these drugs in domestic pets should be lower than for going into humans, because, after all, cats and dogs are evolutionarily closer to mice than they are to humans. In fact, I have even been approached by somebody who wants to test these drugs in domestic pigs, because apparently social isolation-induced aggression is a serious problem in pig farming, for the females in particular. So, one idea is that if we were able to get somebody to test these drugs in animals and they were successful, that could be a stepping stone for convincing investors: "Look how well it works not just in mice, but in cats and dogs, and maybe even in pigs." Pigs are so close to people physiologically that pig hearts, pig heart valves, and other things are being transplanted into humans all the time. Maybe that would convince them to fund another shot at testing these drugs for their efficacy in psychiatric disorders.

Those are the patents that we have. It's nothing like the raft of patents that my lab produced in the 1990s and the 2000s, when we had many, many patents related to composition of matter for stem cells and angiogenesis of arteries and veins, none of which, I have to say, amounted to anything in terms of being turned into a product. They helped get biotech companies started in which I participated, but none of those biotech companies were successful, ultimately. Patents are a double-edged sword. They can be useful, but they cost a lot of money to prosecute. They cost the university a lot of money. The university has gotten much more conservative about which inventions it encourages patenting. And my lab is a basic research lab; we're not a technology development lab. There is an additional patent disclosure we're putting together on a method we did develop to measure the release of neuropeptides from neurons in real time, which is a method that has been lacking and could be used to develop a drug-screening platform to find new drugs that affect the release of various peptides. So I certainly remain convinced that therapeutics based on neuropeptides, particularly for psychiatric disorders and pain, are an untapped resource, that the death of that approach around 2010 was premature, and that there is room both to develop new drugs and to rescue abandoned drugs that are sitting on the slag heap of failed pharmaceuticals that didn't make it through clinical trials but which might be repurposed for indications that were not obvious at the time that the drugs were originally developed and tested. But, who knows.

ZIERLER: Now that we've worked right up to the present, for the last part of our talk, to wrap up this excellent series of conversations, a few retrospective questions about your career, and then we'll end looking to the future. To go back to that sharp point you made, that nobody asks an astrophysicist how the beauty they have uncovered in the universe might improve the human condition: reflecting on all of your work in biology, broadly conceived, where do you derive the most satisfaction on exactly that point, in elucidating that beauty, in seeing something deeper, totally divorced from the translational pressures that society might impose on biology?

ANDERSON: I think the pleasure of figuring out how something works, which is I think a paraphrase of something Feynman once said, and also in biology, of seeing how things are organized and built, particularly when there is a relationship between structure and function. I'm thinking of our work discovering that arteries and veins are genetically distinct from before heartbeat, and that discovery making something that was previously invisible suddenly visible. Making a latent type of organization or specificity patent is part of what I find beautiful in biology. I think that has been harder in neuroscience, although the fact that we can trigger or inhibit aggression or fear or mating in mice, or in flies, literally with the push of a button on a laser, is pretty remarkable, I think. Also to me it has beauty in that we understand the brain at least enough to be able to manipulate very basic survival instinctive behaviors in a very precise and reproducible way.

Those are, I think, some of the places where there is beauty. I tried to communicate some of that in my book. But it's when you've got an experiment, a result, that really carves nature at its joints (I forget who was responsible for saying that), and suddenly something that was fuzzy and ill-defined becomes crisply defined. Like, how do arteries and veins get different from each other? How does what seems to be the same group of cells in the female hypothalamus tell the female mouse whether, when a male comes into her room, she should attack it or have sex with it? We've gotten very clear and beautiful answers to those questions. That, and I think a lot of the fly work—again, the fact that we have identified a single neuron in the brain of a fly, one neuron that is present in both males and females, and activating that neuron is sufficient to trigger aggression in both sexes, and that's the first example of a cell that's common to both sexes that controls aggression. Even though male and female flies fight completely differently, this says that there is some common underpinning to aggression across sexes. I think those things stand out in my mind as being particularly beautiful.

ZIERLER: A counterfactual question, reflecting on your life at Caltech. The dramatic pivot in your research at the turn of the century, how inherently is that a Caltech story? In other words, in a parallel universe, if you were at a Harvard or a Stanford, would you have embraced a pivot like the one you made at Caltech, do you think?

ANDERSON: I think probably not. My two biggest inspirations for that were Seymour, who switched in the 1960s from working on cracking the genetic code, which was the problem of the moment in biology, and to which he made fundamental contributions, to trying to discover a model organism that could be used to understand how genes control behavior. That was a major shift, and Seymour has long been a hero for doing that. Then more recently, Elliot Meyerowitz's pivot from working on the control of glue gene expression in fruit fly salivary glands to basically creating the field of plant developmental genetics and understanding pattern formation and differentiation in flowering plants—those are really my two inspirations. On the non-scientific side, another inspiration is Bob Dylan. As an acoustic guitarist, I understand why so many people were pissed at Bob Dylan when he went electric. But he's somebody that has constantly reinvented himself. I'm not trying to compare myself in science to Bob Dylan, but it's people that are not afraid to leave their comfort zone and try something different and reinvent themselves that are the inspiration for me.

ZIERLER: That fantastic joke that you shared, that when you got to Caltech biology was understood to be one of the humanities because of its lack of equations: has that culture changed at all at Caltech, or is it still more or less what it was?

ANDERSON: I think the culture still exists in pockets, but people are less ready to say it, in much the same way that sexism, racism, and anti-Semitism still exist in pockets at Caltech but people are less willing to say so. But I do think that there has been a huge diffusion into the rest of the Institute of biological problems as worthy of study by people in other Divisions, largely because of the impact of big data and the need for computational tools and approaches to understand big data and make sense of the overwhelming complexity of biology. I think there is some respect, maybe finally, for the fact that biological problems take a long time to solve, not because they are being worked on by biologists rather than physicists, and biologists are inherently less smart than physicists, but because the problems are a lot more complicated than many of the problems that have been solved by physics over the last several centuries.

In physics—and you can correct me, because you're a physicist—the solutions to problems that turn out to be right are often the simplest ones, and they have symmetry and beauty to them. In biology, with the exception of the few rare cases I mentioned a moment ago, solutions tend to be baroque, unnecessarily complicated, Rube Goldberg-like machines that no engineer interested in efficiency would ever have dreamed of putting together. That's because biological systems on our planet evolved, and everything is a consequence of the sequence in which things evolved. Just as I gave the example of how the experimental system, fruit flies, which at one point in the evolution of my science seemed like the bee's knees and the best system to approach the problem, ten years later was no longer the best approach for certain kinds of questions, so [by analogy] in evolution, some of the first strategies that evolved to allow organisms to cope were no longer the most efficient ways to do things as life evolved, new challenges were faced by organisms, and new functions had to evolve or be adapted. They were all constrained by what came before.

It's great that there is a systems biology, and that people like Michael Elowitz are thinking about these issues in a sort of top-down way, to try to simplify them, but the fact is that biological systems are hard to solve, because they are so complicated. That's why we've been banging away at cancer for decades and we still don't have a universal cure for cancer, although we've made great strides. We've been banging away at the brain for a century, and we still don't have even a reliable diagnosis for Alzheimer's, let alone a treatment or a cure. The same thing for Parkinson's. And we have drugs for depression, but we have no idea how they work and why they work, and why they work for some people and not others. It's not because biologists are just not smart enough and quantitative enough to be able to solve these problems, which I think was a popular trope among physicists certainly when I was growing up. And I say that having grown up as the son of a theoretical astrophysicist. But I think that has changed, and I hope that among the physical scientists who dominate the Caltech environment intellectually, administratively, and numerically, there is at least an increasing respect for biology and biological problems as challenging, important problems that are worthy of being solved, and perhaps also a grudging increase in respect for biologists as people who are not doing biology because it's easier than physics, but because they want to solve hard, important, and complicated problems.

ZIERLER: We'll end looking to the future. First, just a prosaic question: do you have a sense of when you might step down from leading the Chen Institute, or is it more, if you're having fun, if it's intellectually stimulating, you'll just keep at it for as long as you want?

ANDERSON: No, I think I should probably step down after my next term. I can't remember; I think it's a five-year term. I think it's important to have a different person with a different set of interests and views running the Chen Institute. It may be that that is also around the time I decide to retire or at least to close my lab. I've started to think about that. An interesting generational thing is that when I got to Caltech, there were people here who were running their labs full blast well into their mid-eighties, people like Seymour Benzer, Norman Davidson, Ed Lewis. It was not uncommon to have octogenarians here running full-blown laboratories. I'm not an octogenarian yet, but what I am seeing is people in my generation already having closed their labs—people at Stanford, people at Berkeley, people at Harvard—for whatever reason. I think part of the reason is that running a lab, particularly a big lab, in this day and age, is a stressful occupation. It is very stressful. The older you get, the more that stress wears on you, and stress is of course counterproductive to anything creative that you want to do. Maybe a little stress is good, but not too much, and there's more than there should be in science. That is certainly a different aspect of it. I've forgotten what your question was.

ZIERLER: Just about the plan of when you might step down.

ANDERSON: I would say certainly five years to step down from the head of the Chen Institute, if not sooner. I have to say, I'm enjoying being the head of the Chen Institute, A, because I have Mary Sikora and Helen O'Connor who are just absolutely fantastic, doing all of the heavy lifting and organization, and I just have to come up with ideas or bring them other people's ideas, and they implement, and they get it done. Mary is very rare in her abilities to do that. And I enjoy giving people money. It's certainly true that no good deed goes unpunished, and I have taken, and will continue to take, a certain amount of shit for the way that I run the Chen Institute, but in the end, I think it's a net benefit. I mean, half a million dollars a year in grant support. If you consider that it would take a million-dollar endowment, minimum, to support a graduate student for one year, and we give out six or seven fellowships every year, that's another six or seven million dollars going into support for graduate students. I would like to think it would have an impact on the neuroscience community, but that's a challenge, because the neuroscience community here, as it is everywhere, is very diverse, and people have very strong opinions about how they think the brain should and shouldn't be studied. Hopefully we can get them all together at our retreats to argue with each other and debate these issues and maybe something good will come out of that. But certainly it [Caltech] is a place that does not have a neuroscience department, unlike many of our competitor institutions.

MIT has a Department of Brain and Cognitive Sciences. Stanford has a Department of Neurosciences. UC San Diego, Columbia, many of our top competitors, especially those with medical schools, have departments of neuroscience. But that's just the way Caltech is set up. So I hope we can fill some of that gap in community-building here. Until I feel like it's really interfering with my ability to do science and do the rest of my job, I will continue to keep doing it as long as they will have me. I'm learning something about being in an administrative position without having to deal with the headaches that come along for the ride with being a division chair, which is something I would never want to do. I certainly learned that.

ZIERLER: Finally, we'll end on the science. Particularly in light of the possibility that you might co-time stepping down from leading the Chen Institute with winding down your lab, for whatever that chronology looks like—five, six, seven years, or whatever it is—what are the most important things for you to focus on during that period?

ANDERSON: That has actually become pretty well clarified in my mind recently, at least for now, as a result of the computational approach that we have taken and this discovery of line attractor dynamics in the hypothalamus. It raises all sorts of questions that are general to line attractors that have remained unanswered, like, does the brain really use them to perform important behaviors? What determines the cells that contribute to them? Are they influenced by experience, learning and memory, hormones, physiological state? All of those questions I think can be answered over the next five years using the hypothalamus as a test bed. Beyond that, I think there is an even more ambitious goal, which I think we may be able to get to in the hypothalamus, and that is to understand how genes, cell types, neural circuits, and the emergent properties of neural circuits, like attractor dynamics, interact with changes in [physiological] factors like hormones, to cause changes in behavior. I think some of our recent studies, particularly in female mice where we've discovered what looks like evidence that a line attractor forms and disappears in the same region of the brain, as female mice go through the estrus cycle and transition from being sexually receptive to non-receptive, are a place where we might be able to make that happen. Because we know some of the genes that are affected by sex hormones like estrogen. We've identified the cell types and changes in the cell types in that region that occur during the cycle. We're poised to try to figure out which of those cell types contribute to line attractor dynamics, and how they generate the slow dynamics that are required to generate line attractors, whether it's chemical, whether it's via recurrent circuitry. And I think we might actually be able to start to achieve this vertical integration across levels of understanding and abstraction from genes to cells to circuits and synapses, to population-scale emergent properties, all the way to behavior. If that's something that I could make a dent in, in the next five or six years, I would be pretty happy. I can sort of see a way to that, and I'm really excited about that right now. So I feel like we're full-speed ahead. Of course we're going to have competitors, and people are going to swarm into this field, but I think we're in a very good position to learn some very, very important things. That's one of the hardest problems—solving the vertical integration problem in neuroscience.

ZIERLER: I'll just state editorially how heartwarming it is to hear the level of excitement. I mean, you kind of sound like a postdoc right now, thinking about the next five years.

ANDERSON: Yeah. Really the only thing I want to do when I come in to the office is work on our next paper right now, and go through the figures, and make sure everything is right, and get it written up, and submit it, and then start working on the next one. Fortunately, there's a next one coming down the line, and there's a next one after that. As long as the discoveries and the papers keep coming and I have something to tell the world about, and I have great students that I can watch grow up and make these contributions, I'll continue to do this.

ZIERLER: David, this has been a terrific series of conversations. Thank you so much—I know how busy you are—for spending all this time with me. It's a treasure for Caltech. It's a treasure for the history of biology. I really appreciate it.

ANDERSON: Thank you for your interest, David.

[END]