Pratyush Tiwary (PhD '12), Thermodynamicist and Research Leader in Translational Computational Modeling
Pratyush Tiwary has built a research career that fearlessly traverses disciplinary boundaries and toggles between the pursuit of the most difficult, fundamental questions and the translation of their answers into profound societal benefit, all with an incisive grasp of the utility and revolutionary promise of artificial intelligence. Taken together, Tiwary's science is undeniably modern and cutting-edge, and because of this, it is surprising and even disarming to behold his deep appreciation for the history of science, and his ability to see the real-world challenges that birthed 19th century thermodynamics as a guiding light for solving the great problems of 21st century physics, chemistry, and biology.
There is a verve to Tiwary's approach that helps explain how much he has accomplished at such a young age, and for this, his graduate experience at Caltech plays a leading role. As he jokes, reflecting on his undergraduate days in India, students there had to work extremely hard; in discovering the natural environs of southern California, Tiwary embraced rock climbing and running, which provided an indispensable outlet beyond science, and he learned how to bring that energy to his scientific thinking. From a foundation in materials science, and from the stellar mentorship and deep friendships he forged in graduate school, Tiwary then embarked on a data science approach to chemistry and then biology in his successive postdoctoral appointments at ETH in Switzerland and Columbia University in New York. Undeterred by his lack of formal training in either discipline, Tiwary grasped that the most pressing problems at the interface of the fundamental sciences could be probed with theoretical and computational modeling.
Upon joining the faculty at the University of Maryland, Tiwary immediately recognized a familiar and quite promising research culture: across the College Park campus, he became involved in interdisciplinary research projects that reminded him of his Caltech days, where academic boundaries were destined to be traversed in the pursuit of both curiosity-driven research and relevant applications. From his time as a junior faculty member to his named professorship today, Tiwary has made major and diverse contributions to the creation of chemical intelligence, to the elucidation of biological phenomena once considered too rare to model, and to the building of abstract languages that augment machine learning capabilities. The results demonstrate that one need not choose between studying the brilliant complexity of nature for its own sake and doggedly pursuing discovery that can both increase access to health care and achieve clinical breakthroughs to treat severe disease.
Through it all, as Tiwary has learned, life never unfolds according to some logical, preordained plan. Success is sustained through ingenuity and hard work, but it is often sparked by serendipity, and it is made meaningful by forging friendships and by continuing the transmission of scientific wisdom from one generation to the next, in a spirit of kindness and support. And this, above all else, fuels Tiwary's excitement and passion for the future of science.
Interview Transcript
DAVID ZIERLER: This is David Zierler, Director of the Caltech Heritage Project. It is Wednesday, December 6, 2023. It is great to be here with Professor Pratyush Tiwary. It's wonderful to be with you. Thank you so much for joining me today.
PRATYUSH TIWARY: It is my pleasure.
ZIERLER: To start, would you please tell me your title and institutional affiliation?
TIWARY: I am currently the Millard and Lee Alexander Professor in Chemical Physics at the University of Maryland in College Park.
ZIERLER: You were recently named the Alexander Professor. Have you met Millard and Lee? Is there any connection with your research?
TIWARY: Yeah! Millard is a very renowned and respected theoretical chemist in our department who is emeritus and still seems to come to the department even more since he has gone emeritus! When I was hired, I met him, and I have met him very regularly. Lee Alexander is his wife, and I have met her as well. They are both great supporters of our department.
ZIERLER: It's in your title, and given all of the things that you work on, it bears asking right at the beginning—for you, there's materials science, there's chemistry, there's physics, there's biology and pharmacology, there's data science and machine learning, there's theory, there's experiment, there's translation, there's fundamental! The question is, do you have a home discipline from which all other interests flow, or is your work so interdisciplinary that those academic distinctions and fields aren't terribly important for the kinds of questions you're after?
TIWARY: That's a wonderful question. I definitely have a home discipline and that goes back to Caltech, or even what got me to Caltech, and that home discipline is something known as statistical mechanics. That is where my heart is, and staying close to this core of statistical mechanics has allowed me to jump across so many disciplines. I would say it's so fundamental to so many things. As the name says, it's a way of looking at a very large number of entities talking to each other. You can use statistical mechanics to describe neurons in the brain. You can use it to describe water patterns. You can use it to describe whatever you want. That's the thing at heart. You can say, in a sense, I was a theoretical chemist even before I was doing chemistry, because even though my tenure home is chemistry, I don't have any formal degrees in chemistry. I get that from Caltech. It's easy to jump disciplines, number one, and yet you have to stay close to something that you're good at, and that was stat mech, as we call statistical mechanics.
The Enduring Utility of Thermodynamics
ZIERLER: Let's do some Stat Mech 101 in historical perspective. Statistical mechanics is a very old discipline. It goes back even to the nineteenth century. Some of the things that you're working on are so modern, they're so futuristic. What is the timelessness in statistical mechanics that makes it so valuable even for the things that you're working on today?
TIWARY: Here I have to take one step even further back and kind of correct my answer to your last question. What's common to my research areas is something that precedes statistical mechanics. It is known as thermodynamics. That's really, really old. That's older than statistical mechanics. Thermodynamics is—there is a quotation attributed to Einstein, I forget the exact words—something like, "Relativity might disappear. Quantum mechanics might disappear. I am sure thermodynamics is going to stay." Thermodynamics came around 200 or 300 years ago. People were interested in studying how steam engines work, and things like that. It was a French engineer named Carnot who really started thinking about this, and it's exactly 200 years ago—next year is the 200th anniversary of his main result. He, for the first time, started thinking that you cannot convert heat into work at more than a certain efficiency. You cannot become 100 percent efficient. He established that limit. This was useful in designing better engines. This continued for 100 years.
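For readers who want the formula Tiwary is alluding to, Carnot's bound on the efficiency of any heat engine operating between a hot and a cold reservoir (temperatures in kelvin) is usually written as:

```latex
\eta_{\max} = 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}}
```

No engine converting heat into work can do better, and reaching 100 percent efficiency would require a cold reservoir at absolute zero.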
Statistical mechanics came right around the cusp of the twentieth century. That was someone known as Boltzmann, in Austria. Boltzmann said, "Thermodynamics is all nice, but let's try to develop more tools to look at it." The boldest idea—just like machine learning is bold right now, or quantum computing is bold—around the turn of the twentieth century, the idea that was bold was, maybe things are made up of atoms. No one really believed it back then. The electron had not been discovered, and things like that. Boltzmann's model was inspired by a microscopic picture. He took thermodynamics and started thinking about it microscopically. It did not go well. People did not respect Boltzmann. People did not accept his ideas. He committed suicide. There were other reasons also, probably, but he was just not respected. Then Einstein came right around then, and Einstein used Boltzmann's ideas for his seminal papers, and that's when things really started shaping up.
These ideas have stayed. In modern machine learning, the best advances are happening on the basis of statistical mechanics. When you're training a machine learning model, you have these neurons that are talking to each other, and it's again the same thing. You have these neurons in a neural network, going on and off like neurons in the brain. You want to study their collective behavior, and statistical mechanics allows you to do that. So, it's really timeless. It's as timeless as anything. I want to say that the Second Law of Thermodynamics, which is at the heart of all this, just says—it's as simple or as profound as you want to think about it—that time only moves forward. That's the most fundamental thing that none of us can doubt. That was thermodynamics, and from it we have been developing statistical mechanics. What I do is equilibrium statistical mechanics. A lot of people also think about non-equilibrium statistical mechanics, and that gets a bit more technical.
ZIERLER: I asked about research fields. What about research questions? What are some of the fundamental questions that you're after that serve as a connecting point for whatever given project you're working on?
TIWARY: Feynman said, around the 1960s or 1970s, that everything can be explained by the jiggling and wiggling of atoms. I think that's all there is! Be it a crystal, be it a protein, be it DNA, in the end it is just atoms talking to each other. Or maybe electrons talking to each other. Yet you have these emergent behaviors which are very complicated. You behave very differently from my dog here in the Zoom virtual background. Even though the basic biology is in some sense the same, you have very different emergent behaviors. How does emergence happen? How do you start from first principles, and can you explain emergent behavior? Some people might argue that it is hopeless. Even though everything can be explained by the jiggling and wiggling of atoms, the computational cost of doing that might take you longer than the age of the universe—let's think about it that way—so it's not going to work.
People like me think, no, that is not true. We can explain complicated behavior in biological and material systems by simulating things atom by atom, and understand things such as: when does a drug molecule leave a protein? How does a protein fold and misfold? Why is it that RNAs are so much floppier than DNA, and how can we use that to design better drugs? All these questions, in the end, come down to the movement of atoms.
ZIERLER: Are you operating strictly in a classical Newtonian world, or does some of your—
TIWARY: [laughs]
ZIERLER: —research have quantum foundations or even quantum applications?
TIWARY: My first two papers in grad school at Caltech were quantum mechanical. I found it very hard, and I realized that even classical systems have so many open questions, so I just went back to classical systems and stayed with it. So, these days? No. These days, only classical systems. I know there is a lot of interest in quantum systems, but I think there is a lot of richness in classical systems also.
Fundamental Questions and Translational Possibilities
ZIERLER: What about the divide of basic science or fundamental research and translational applications? Are you operating in both? Are you specifically motivated more by the potentialities of translational research these days?
TIWARY: That's a wonderful question. I think that's kind of why, even though I am doing biology and materials science now, more of my group is doing biology than materials science, and part of the reason is that in biology the translational impact seems more immediate. It's so nice to think that you can design a drug which can help someone live longer. It's not that I will singlehandedly design a drug, but I will help with the process. With translation, the impact is always there. I think all good translational science happens on the basis of rigorous pen-and-paper chemistry, pen-and-paper math. The basic science has to be there. But sometimes basic science is comforting, right? We stay with simple systems. We know the exact results. We stay with it and we never leave it. That's a risk for all scientists and applied physicists. Because real biological systems are very messy. You don't know what's going on. Yet you have to do basic science, think about real systems starting from that point, and then extrapolate from there.
ZIERLER: Being at UMD, being in College Park, the Greater Washington area, is that an asset for you, to be near the federal government and all that it offers?
TIWARY: Absolutely! I was at NIH just on Friday meeting my collaborators. We are working on immunotherapy and we are doing stuff there. I can't tell you what, but we have some discoveries there which we will be patenting soon—translational work, again, which we think is already doing better than things out in clinical trials. It's just a 30-minute drive from here. It's kind of like Caltech and JPL. Now, think—you don't have only JPL; you have NIST, you have NCI, you have NIH. You have everything in the same area. So, absolutely.
ZIERLER: You're doing some of the most exciting research these days in artificial intelligence and machine learning. How far back does that go in your research? When did you embrace the importance of AI as a research tool?
TIWARY: Oh, I remember the exact moment. It was a breakfast. I had started here, and it was a breakfast with my old grad school friend named Steve Demers. You should check Steve out. Steve joined my advisor Axel van de Walle's group. I was the first student in the group at Caltech, and Steve was the second student or so. My advisor told me, "We have this student coming over. He has a couple of Oscars." Like, "What? What did you just say?" It turns out Steve Demers does have an IMDb profile, and he did have Oscars. He worked on Lord of the Rings and The Matrix. He was involved in designing the skin of Gollum and things like that. Steve was my friend from grad school. I remember when I started here in Maryland—Steve and I had been in touch ever since 2008. It was 2017, I was working here, and I was thinking about dimensionality reduction. Biological systems or chemical systems are very, very high-dimensional. They have so many atoms talking at the same time. Yet, for example, when I try to grab this pencil in my hand, I know already that I don't need to think about the trajectory of every single atom, right? I can just think about the center of mass of the pencil and the rotation and get things done. Basically I had started thinking about dimensionality reduction very carefully, and Steve pointed out parallels with a machine learning method, autoencoders, which have now become extremely, extremely hot. In 2017, it was just picking up. Steve had the intuition to guide me in that direction. That's when it started. Then of course we went deeper and deeper into it. That's when I started discovering that statistical mechanics can help us design better machine learning algorithms. That's the exact moment when I got into it.
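To make the autoencoder idea concrete, here is a minimal, illustrative sketch of autoencoder-based dimensionality reduction in PyTorch. The 300-dimensional input and the network sizes are made-up assumptions for the example; this is not the specific architecture Tiwary's group uses.

```python
# Illustrative sketch: compress high-dimensional configurations
# (e.g., flattened atomic coordinates) down to a few latent coordinates.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_input: int, n_latent: int = 2):
        super().__init__()
        # Encoder: high-dimensional input -> low-dimensional summary
        self.encoder = nn.Sequential(
            nn.Linear(n_input, 64), nn.ReLU(),
            nn.Linear(64, n_latent),
        )
        # Decoder: reconstruct the original input from the summary
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(),
            nn.Linear(64, n_input),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Hypothetical data: 1000 configurations, each a 300-dimensional vector
x = torch.randn(1000, 300)
model = AutoEncoder(n_input=300, n_latent=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    x_hat, z = model(x)
    loss = nn.functional.mse_loss(x_hat, x)  # reconstruction error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# z now holds a two-dimensional, reaction-coordinate-like summary
# of each high-dimensional configuration.
```

The point of the sketch is the bottleneck: the network is forced to squeeze every configuration through a few latent variables, which is the dimensionality-reduction step Tiwary describes.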
Hype and Reality in the Current AI Revolution
ZIERLER: Nowadays there's so much discussion about ChatGPT. Where is there the hype? Where are people more excited, more exuberant than the research capabilities actually warrant at this stage in history?
TIWARY: That's a wonderful question. When you say where are people more excited, we have to start thinking about who the people are that we are talking about. Are we talking about experts, or are we talking about non-experts? Non-experts, [laughs] there is ChatGPT. Experts, there are many other models which are more generalizable than ChatGPT. ChatGPT is what is known as a large language model, which can basically write down text and sentences. What is becoming clear is, while it is incredible at committing the whole of the internet to its memory and giving you answers, and it's definitely reducing the time spent on day-to-day operations—as humans, we have to start thinking that if we are being asked to do something that ChatGPT can do, why are we being asked to do it? Clearly it is creating a divide between tasks that can be done with ChatGPT versus tasks that cannot be done with ChatGPT. That is revolutionary.
However, most of the tasks it can do, a human could have done with some more training, maybe ten times more time, something like that. It's an efficiency tool. That's how I look at it. Then there are these other machine learning models, such as something my group uses a lot, known as diffusion models, which again very much come from stat mech—these are the models where it feels to me as if there's something non-trivial happening. The diffusion models have also caught the public eye. They are used in a very popular tool known as DALL-E. I don't know if you have used it. For example, I could upload your picture and say, "Draw me a photo of David in an Impressionist style," and it would do that. That uses diffusion models.
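For context, the textbook forward step of a diffusion model—a generic formulation, not the specific models Tiwary's group builds—gradually corrupts data with Gaussian noise:

```latex
x_t = \sqrt{1-\beta_t}\, x_{t-1} + \sqrt{\beta_t}\, \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, I)
```

Generation then amounts to learning to run this noising process in reverse, denoising step by step. Formally that reverse process is close to reverse-time Brownian (Langevin) dynamics, which is the statistical-mechanics connection Tiwary points to.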
There is a lot of hype. I do think we are witnessing a revolution or a breakthrough. There is no doubt in my head about it. It has really made me think about—everything! Like, what does it mean to be conscious? What does it mean to be intelligent? I think humanity will have to address those questions much sooner than we thought. I wouldn't say we have reached artificial general intelligence. AGI has not been attained yet. This reminds me—maybe you also recall this—in the 1990s there was a lot of talk about Stephen Hawking and combining general relativity with quantum mechanics. That's where AGI is currently. Has it happened? Have we really created consciousness? We might be closer than you think! That's what I feel. I'm very excited about it.
ZIERLER: The first part of your answer emphasized the efficiency, liberating us from some of the drudgery of research that large language models can do on their own. What about AI achieving a level now where it's not just about efficiency but it's about achieving research that simply wasn't possible before? What are you seeing in that regard?
TIWARY: It's facilitating research like that, but if you take a closer look, it's not 100 percent clear. There are good test cases where it has done research, but there's always a human in the loop. I don't think it can replace human intuition completely. I think the best research will happen when it makes the best of human intuition and AI. That's where the most is happening. I am not convinced that a purely AI tool is really—revolutionary. It's combined with the human.
ZIERLER: Questions of consciousness and a fundamental reckoning with what it means to be human, which is coming from this exponential growth in AI, this understandably has caused a lot of hand-wringing and wondering about the political implications of AI and perhaps the need for governmental or international guardrails to rein in this technology. Do you see yourself as part of those policy discussions that will need to happen probably sooner than later?
TIWARY: I think so, I think so. Everyone has to be involved, especially people who are using it, people who are working with it. We really need to look at test cases across different domains and understand how people feel about it and what's their experience. Absolutely.
ZIERLER: What would those guardrails look like? How can we put systems in place to make sure that AI remains a force for good?
TIWARY: First of all, let me give an example. There is a website known as Stack Overflow where you can go and post a programming question and get an answer. Stack Overflow is really good, and the number one reason it is good is the up-votes and down-votes provided by other humans, so you have a very good metric for when to trust information or not. Normally if something has 150 up-votes and zero down-votes, it's going to be perfect. You will see a clear split—trust this; don't trust this. We need to have similar things for AI-suggested actions. When we start using these actions, especially when we start creating feedback loops, we have to have very good metrics of whether something should be trusted or not.
Now, trust—what is the basis of trust? In human relations, trust comes from experience. You trust someone when you have spent some time with them. Same thing with AI. That creates a fundamental problem. If you're going to apply it to problems where we already have a lot of experience, you might have the ability to say whether we should trust it or not, but then it's not going to be revolutionary, because we already have a lot of experience. If you throw it at things where you have no experience, then how do you know whether to trust it or not? A big part of my group is, again, looking at ways to interpret AI methods and really establish when to trust them and when not to. I think that's going to be an important part of the guardrails.
ZIERLER: Your research group, given the breadth of everything that you're working on, how does a graduate student know that you're the right thesis mentor for them? Are they expected to be as wide-ranging in their interests as you are in yours? Do you segment out your research group according to what your grad students are interested in? How do you work that out?
TIWARY: That's a very good question. I have PhD students from chemistry, physics, applied math, biophysics, chemical physics, and maybe one or two departments that I'm forgetting—which is again a very Caltech style. It was so easy to take classes everywhere. Everyone has a core strength that they bring to the table, yet they should be able to have a conversation with others and be able to understand them and explain things to them. Everyone comes with a core strength, and through the course of time they pick up a second strength. Some people pick up a third strength, but normally people stop at two. Then they work around this core. Again, we start with something which everyone knows. That's our common language. Speak in that language. Everyone also tends to be very good at programming and at math. Stat mech, math, and programming—if these three things are there, then the world is open.
ZIERLER: Just by a rough ratio, how many of your grad students are motivated to go into academia, and where is industry really exciting for the kind of research that comes out of your group?
TIWARY: A lot of them come to grad school thinking they will become professors, and maybe I turn them off or something goes wrong, or they think that the probability of getting a tenured position is low, which is perhaps true, for a variety of reasons, and they start leaning more and more towards industry. That's the number one reason. The barrier to entry is lower. They can get in. There are clearer promotion metrics. You can do better. At the end of the year, you have a bonus. Academia is fuzzy, right? It takes time to see whether something we have done is correct or not. Secondly, when you become a professor, the hardest thing to do is research. There are committees, there is teaching, there is travel, all these things. They see that, also. Thirdly, the pay structure in industry, generally speaking, outperforms academia.
Given all these factors, a lot of students are leaning towards industry. I think that's a wonderful thing. Because think about it—if you were a professor of management, should your students who are doing an MBA or a PhD with you also become professors of management? No, they should go out into the world and do better things with it. So I don't think of it as quitting academia; if anything, I ask them, "Why would you not want to go into industry?" The number of companies which are doing good research, as opposed to just minor product development, is really changing. It's not as good as—you know, 40 years ago, there used to be Bell Labs, which was really doing cutting-edge research. I don't think we have a Bell Labs right now. Many would like to think they are, but I don't think we do. [laughs]
ZIERLER: Pratyush, what about for you? Have you gotten involved in entrepreneurialism or the startup space?
TIWARY: Yeah, my main industry interaction is to serve on the scientific advisory board of a big company known as Schrödinger. It was actually started out of Caltech. Bill Goddard was one of its founders, together with Richard Friesner, back then at UT Austin. I am on their eight-member scientific advisory board. This gives me a very good perspective into industry. I really enjoy talking to them and advising them on what they should do, what they should not do, and how I see the field going. But I absolutely love being a professor, so I'm very, very happy being here.
ZIERLER: Pratyush, I wonder if you can survey the state of interdisciplinarity at the University of Maryland. What are the kinds of research centers or joint projects that help break down those administrative boundaries between departments that's relevant for what you do?
TIWARY: When I was hired here, one of the main attractions was that my tenure home is in Chemistry, but I'm two-thirds in an institute known as the Institute for Physical Science and Technology, IPST. My neighbor right across my office here is someone named Ellen Williams, who is also a Caltech PhD. She used to be the director of ARPA-E and the chief scientist of British Petroleum. She once told me that right after joining British Petroleum, the oil spill happened, so her job was not how to do better science, but why do science at all when you have bigger problems. This place, IPST, reminds me of Caltech every single day. It's that same environment where you have people just walking around, discussing science. You probably also know my colleague, Nicole Yunger Halpern. She is also in IPST. It's a very Caltech atmosphere, where it's like sitting outside the Red Door Café. People are talking to each other, and you overhear things from biology to physics. The Institute really facilitates things like that.
And, there are more institutes. I have just taken up a position as the director of a new center for therapeutic discovery at a new institute called the Institute for Health Computing. Now, this institute is really special because it is not just breaking barriers between expertise in the College Park campus, but it is breaking barriers in joining people across the Baltimore campus or the Baltimore County campus and things like that. This is a brand-new institute which has been funded by Montgomery County. This is a role I am just taking up which will make these things even easier.
Mathematical Foundations in India
ZIERLER: Let's go back now and establish some personal history. Let's go back to India. Tell me about the IIT system, why it's so special, and how you chose the particular one at Banaras Hindu University.
TIWARY: [laughs] You are invoking some good memories and some, let's say, childhood trauma. [laughs] The reason I say trauma is because the way it works in a country like India—there are a lot of Indians, right? We are an overpopulated country by any metric. So, if you want to go into engineering—and when I was growing up, it was like you either become an engineer or you become a doctor, or you're doing something wrong.
ZIERLER: [laughs]
TIWARY: I was good at math. So I tried to be a mathematician first. I went to the Indian Statistical Institute, which is a very special institute where every year 20 or 30 students are admitted out of 100,000. I got in, but then I flunked. It was terrible. The math was way too abstract for me, very quickly, and I had to drop out! You won't find many professors who were first-year college dropouts. I am one of those, because I dropped out! But then I took the IIT exam. I knew I liked math. I saw that pure math, even though I thought I could do number theory, was not my cup of tea. I knew I wanted to do something applied. I took the IIT exam thinking I would get into applied physics, but the way it works in this IIT entrance exam is that a million students, I think—or maybe half a million, but I think closer to a million—take an exam, and you get an all-India rank, and that decides everything. My rank, I still remember, was 3,352. [laughs] So it's super competitive, as competitive as it can be. It was not good enough for me to get applied physics or engineering physics. But a friend of my father, Prof. S.S. Major, who was a professor at another IIT, said, "You should go study something known as metallurgical engineering at IIT BHU." Back then it was just IT BHU; it got official IIT status shortly after that.
He said, "If you go there, first of all you will see metallurgy is not quite what you think it is." The stereotype of metallurgy is that metallurgy is an ancient subject. We named our ages. You're a historian, right? Bronze Age, Iron Age. Metallurgy is older than anything. Metallurgy is what has defined us. One thinks of metallurgy normally as extractive metallurgy. How do you get iron, and how do you convert iron into steel? That is a part of it, but a big part of metallurgy is something known as physical metallurgy, which is how do you study the physical properties of metal. This friend of my father was wise enough that he said, "If you go to BHU, number one, their metallurgy program is awesome. Number two, you will find as much physics as you want to do in it." That was the best decision anyone ever gave me. I went there, and it just so turns out that Varanasi is a very special city. It's the holiest Hindu city. If you are a Hindu—I am not a practicing Hindu; I am spiritual but I am not religious—you go to die in Varanasi. It's so special. People think of it as the Jerusalem of Hinduism. I was lucky enough to go there, and then I loved the city, I loved everything, and that's where I started doing stat mech.
ZIERLER: When did you first hear about Caltech?
TIWARY: Oh, the Feynman Lectures on Physics, seventh grade.
ZIERLER: Oh, wow.
TIWARY: Maybe when I was 12 years old, my father handed me all three volumes and said, "You should read these." In fact, the first time I tried to get into an IIT I could not get in, because my father had said, "All you need to do is to read the Feynman Lectures." That was not good advice, because it's a competitive exam, so what you really need to do is to solve problems, not read Feynman lectures. [laughs]
ZIERLER: [laughs]
TIWARY: In Varanasi, in my undergrad dorm, believe it or not—it's so funny—I had wanted to go to Caltech ever since I was a freshman, since Feynman and everything. So in my undergrad room, a small room, I had a big poster saying, "Caltech Beckons." I used that to motivate myself. Lo and behold, I actually ended up there!
ZIERLER: When it was time to think about graduate school, you were specifically motivated to come to Caltech?
TIWARY: Yeah, I was definitely motivated. I applied to a bunch of places. Some places admitted me; some places did not admit me. Caltech thankfully admitted me, so it was a slam dunk. [laughs]
Shift From Metallurgy to Materials Science
ZIERLER: How did you choose a program of study for Caltech?
TIWARY: U.S. schools don't do metallurgical engineering. Materials science is what metallurgical engineering has become, so I came to Materials Science.
ZIERLER: What year did you arrive in Pasadena?
TIWARY: 2007.
ZIERLER: What do you remember? What are your early impressions when you got here?
TIWARY: I remember big parking lots, and the international student program, which is still very active, was very helpful. They had arranged Indian students to pick me up, take me to a restaurant. I lived in the Catalinas, and it felt like a resort to me. I was like, "Wow, it's so green and it's so nice." You walk over to Lake Avenue and there is a poster of Feynman on the wall, and it's like, "Okay. [laughs]. This is it. This is it."
ZIERLER: Had you ever been to the United States before? Do you have any family here?
TIWARY: I had never left India. I had family here, but I had never left. It was my first time in another country.
ZIERLER: Tell me about the Materials Science program as a graduate student. What was emphasized?
TIWARY: The Materials Science program is very parallel to the Applied Physics program. There is a Department of Applied Physics and Materials Science, and they have the best logo ever. If you go to their webpage you will see a bra-ket, with applied physics on one side, materials science on the other, and Planck's constant, the "h," in the middle, as in <Ap|h|MS>. That was it. So, I got here. Axel van de Walle was my advisor; he is no longer at Caltech. He had a two-body situation, so he left Caltech. His wife was a professor at Chicago, and they both moved to Brown. I joined his group. I considered joining other groups in theoretical chemistry—Tom Miller or Bill Goddard—but I had my own reasons for not joining them. The nice thing about Axel was that since he was mostly in Chicago, I could do whatever I wanted during the week. That was great. I would do a lot of rock climbing during the week! [laughs]
ZIERLER: [laughs]
TIWARY: I made good friends in Materials Science. Then I quickly realized that I was actually quite decent in the core materials science courses due to my metallurgy education, so I was able to branch out. I took condensed matter physics from Kitaev and courses like that. It was really awesome that I could just branch out and take other courses.
ZIERLER: How did you go about developing your dissertation topic?
TIWARY: I started with the first topic, which, as I told you, was some quantum mechanics. It was doing something known as fitting force fields. That was a lot of fitting. I did not enjoy it very much. I passed my qualifying exams, and then I told my advisor that I was getting bored and frustrated, I didn't like this, so I was going to go to India and go climbing for a year. My advisor said, "You can't go for a year. How about you go for three months?" I said, "Sure." So I went to India for three months, to the Himalayas, I climbed up to like 20,000, 21,000 feet a couple of times, came back, and then I was like, "Okay, so what do we do now?" He gave me a list of ten topics. He had some grants which allowed him to go after generic ideas. These grants are hard to obtain. I am very lucky that I also have similar grants, so you are not tied to one project. You can truly chase curiosity.
He gave me a menu of ten problems, and the one that really stuck with me was this problem of rare events. How do we simulate things which are very, very slow? How can we simulate them on a computer? That's what I'm doing these days, also. For example, an earthquake is a rare event, so you have to wait very long if you want to simulate it. Most of the time will be spent waiting. Same thing with a chemical reaction; most of the time is spent waiting. When something happens, it happens very, very quickly. What you want to do is to reduce the waiting time and then focus on the event of interest. That's a simple idea; how to do it in an automated and accurate way is a very hard problem. That's how I started working on it, and I stayed with the problem.
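A rough way to see why the waiting dominates—this is the standard rate-theory estimate, not a formula from Tiwary's own papers—is that the average waiting time before a barrier-crossing event grows exponentially with the free-energy barrier separating the two states:

```latex
\tau_{\text{wait}} \approx \tau_{0}\, e^{\Delta G^{\ddagger}/k_{B}T}
```

Here \(\tau_0\) is a fast molecular timescale (picoseconds), so even a barrier of a few tens of \(k_B T\) pushes the waiting time to milliseconds or beyond—far past what brute-force molecular dynamics can reach.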
ZIERLER: What is the threshold for a rare event? How do you define rare?
TIWARY: It depends on who you ask. My wife, whom I met at Caltech, Megan Newcombe—she is a professor in geology—if you ask Megan, she will tell you, "Anything that happens faster than a million years is not a rare event." [laughs]
ZIERLER: [laughs]
TIWARY: If you ask a quantum chemist, they will tell you, "Anything that is slower than a few picoseconds is a rare event." So, it's discipline-dependent.
Simulation and Scientific Reality
ZIERLER: What is the value of simulation? How do you understand or how are you more prepared for rare events with computational simulation?
TIWARY: The advantage of computational simulation is that not only do you know when the rare event happens, you can study everything atom by atom. That's the beauty of it. In experiments, rare events also happen. But it's very hard to draw insight into what really drove the rare event of interest. How can we make it happen less rarely, for example, or more rarely? You might want to control it, right? Computer simulations can really show you what drove the process, and that understanding comes from the simulations, if they are done correctly.
ZIERLER: This is as much a philosophical question as a scientific question—is simulation always in the service of experiment, or is simulation sometimes an end in and of itself, where you can learn something about nature simply by simulating it regardless of applying that to a deductive experiment?
TIWARY: You're asking me a very good question which I think about a lot. I often find myself fighting with editors of prestigious journals who reject my purely theory papers or purely simulation papers because there is no experimental collaborator. I love my experimental collaborators. I have at least three of them. I have many papers with them. But I really believe simulation for simulation's sake, or more broadly speaking theory for theory's sake, is very important. A good computer program is like a living thing. You can see it's very optimal. It has got its own beauty. It has almost got its own life. A lot of people don't appreciate that, perhaps because there is so much bad simulation out there. Unless we do theory for theory's sake, we cannot expect it to be predictive. Often it's dangerous to just do simulations and publish them, because they retrospectively validate what you saw in the experiment. How do you know? It has to be prospective, first of all, right? And it's hard to do prospective simulations. There are some examples, but it's hard. So, yes, I firmly believe in simulation for simulation's sake, because it's just so beautiful. It's like playing a video game, right? You can really enjoy it! It's awesome!
ZIERLER: Maybe this is an even deeper question—how do we even know that simulation or theory for theory's sake is true? What's the baseline?
TIWARY: That's a very good question. This becomes a bit context-specific. You can come up with limiting cases where the math becomes useful. You can come up with theorems. You know that in certain limits it should show certain behaviors. That's where statistical mechanics becomes useful. One of the nicest things in statistical mechanics is this notion of the signal-to-noise, or noise-to-signal, ratio. Statistical mechanics tells you that as you increase the number of players in a system, the noise goes down as one over the square root of N relative to the signal. For example, if I am talking about the temperature of the human body, I don't have to say 38.4 plus or minus something—it is a certain temperature—but if you're trying to measure the temperature of a system made up of only 100,000 atoms, the fluctuations will be larger. Statistical mechanics gives us very good guidelines on how things should look, and we can easily check whether those are being followed or not. A simple one is conservation of energy. If you run a simulation long enough, and if it is an isolated simulation which is not interacting with the rest of the universe, then the total energy should be constant. If it is not constant, then it's a wrong simulation. These are so-called necessary but not sufficient guidelines to trust a simulation, but you should always test them. We follow guidelines coming from physics and math.
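The one-over-square-root-of-N scaling is easy to check numerically. Here is a small, self-contained sketch—a toy model with made-up per-particle values, not one of the group's simulation codes—showing that the relative fluctuation of an averaged quantity shrinks as the number of particles grows:

```python
# Toy demonstration of the 1/sqrt(N) scaling of relative fluctuations.
import numpy as np

rng = np.random.default_rng(0)
trials = 500  # number of independent "measurements"

for n in [100, 10_000, 1_000_000]:
    # Each trial averages n hypothetical per-particle values (mean 1.0, spread 0.5).
    means = np.array([rng.normal(1.0, 0.5, size=n).mean() for _ in range(trials)])
    rel_fluct = means.std() / means.mean()
    print(f"N = {n:>9,}: relative fluctuation ~ {rel_fluct:.1e} "
          f"(1/sqrt(N) prediction ~ {0.5 / np.sqrt(n):.1e})")
```

With 100 particles the averaged quantity wanders by several percent; with a million particles the fluctuations are a thousand times smaller, which is the guideline Tiwary describes using to sanity-check a simulation.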
ZIERLER: Is your starting point when an experiment is impossible to conduct and therefore that's when you go to simulations, or is your starting point simply simulation from the beginning precisely because of its inherent value as you see it?
TIWARY: It's all of those. My group primarily develops simulation methods that others can also use. We write open-source software. Most of these methods we test on systems where there is no real need for a simulation, because the experiments are already there, so we can benchmark against experiments or against much longer simulations. Then we start working on the first class of problems you mentioned, where it's either very difficult or very expensive to do an experiment, so that simulations can help in a meaningful way.
ZIERLER: I wonder if there are some happy surprises where you do a simulation for its own sake but then as you mentioned, this is an enabling technology that other people then use to take it into research directions that you didn't even see.
TIWARY: That's happening all the time. The only catch there is that people also start changing your method, and at some point you don't even know whether they are using your method or someone else's method. That happens very frequently. At that point I just remind myself that imitation is the best form of flattery, and I live with that.
ZIERLER: [laughs]
TIWARY: Many of the methods we have published get changed into other forms, and others are using those too, and we can see that the general ideas that motivated us are clearly helping facilitate a lot of different research. That's very exciting.
ZIERLER: Let's now go back to Caltech. You mentioned rare events can be either on a geologic timescale, or it could be picoseconds, femtoseconds, attoseconds. For you, for your dissertation research, what was the timescale that was most relevant?
TIWARY: Anything slower than a microsecond.
ZIERLER: What was the research? What were you working on?
TIWARY: I was very excited by my neighbor, Julia Greer. Julia had just started there. I was very fond of Julia and still am. If you have seen Julia, she is always roller-skating across the campus—
ZIERLER: That's right!
TIWARY: —sometimes with two kids. Julia was doing these nanopillar experiments. She was compressing nanopillars. When you compress a nanopillar, defects form in it. She would compress them at a certain strain rate, which is how much you are compressing them every second. In simulations, you have to compress them much faster. I wanted to reduce the gap. That was the first motivation. But actually, I just remembered, the original motivation was due to a professor who was perhaps the most inspiring professor of my life. I think he's emeritus now. His name is Bill Johnson. When I first met him at Caltech, I took his thermodynamics and statistical mechanics course. He used to teach it one quarter, and then Brent Fultz, who was also very inspiring, used to teach it another quarter. Bill was and is one of my role models. He just blew my mind away. He would go off on tangents, and I would fall asleep in the class, because it was at 1:00 p.m., and I would wake up, and I wouldn't have missed anything because he had just gone off on a tangent! That was great! And he would tell all these stories!
ZIERLER: [laughs]
TIWARY: For him the problem was—Bill was one of the people who had discovered a whole class of materials known as bulk metallic glasses. As you know, we see glass all the time. It is on the screen that I am using to talk to you. But metals don't form glasses; metals form crystals, and that's why they are very ductile and malleable and things like that. They're very strong. Bill and, really, his PhD advisor at Caltech back in the 1960s, Pol Duwez, discovered that if you cool a metal fast enough it will form a glass. Bill opened up a whole area with this, which has led to many spinoffs and things like that. That was a problem I wanted to study. I have never gotten back to it, but that was my original inspiration, I think. Because when Bill would do his experiments, even though he was cooling very fast, it was still way slower than what you could do in a computer simulation. Bill's experiments would cool as fast as a million degrees Celsius per second, which is incredibly fast, but in computer simulations, you will be cooling at a million multiplied by a million degrees Celsius per second. I thought, if I can do that, that would be awesome! I've never gotten around to doing it because glasses are very complicated. That's one of the problems I want to get back to soon.
ZIERLER: What was the laboratory environment like at Caltech? What did it look like? Where did you work?
TIWARY: Oh, outside Red Door Café, most of the time! When I go back there, the ladies who work there, they still remember me, which is a lot of fun. It's Southern California, so you can work outside all the time!
ZIERLER: You mean you're mostly working on the computer?
TIWARY: Yeah, because I am a theorist, so I had that flexibility. Or I would go climbing to Joshua Tree and just take my computer and do some work from there.
ZIERLER: Were you always inclined toward outdoor recreation or this is something you picked up in California?
TIWARY: No, absolutely California. In India, you had to study hard. You had to do your math. I was very unhealthy. I was even a chain smoker at some point. I got to Caltech and I was like, well, I'm not going to stand out just by virtue of being smart. Everyone is very smart here. There was this desire to distinguish myself, which everyone had. I was like, science alone is not going to do that. I should do something else. Then I saw these people, including my friend Aron Varga, who is a very good friend—he was also in Materials Science—Aron told me that people run marathons, and he had just run a marathon in under 3 hours or something like that. Okay, that sounds very cool. So, I started doing that. Then I saw some posters from the Caltech Alpine Club about climbing and things like that, and I started getting into it. With time, I started getting fitter and started enjoying things more and more. Now, in my job, it's so important—I don't climb anymore because I had some accidents, but I run a lot, and running is critical for just staying in this job. [laughs]
ZIERLER: Do you think that exercise and having that outlet made you a better scientist?
TIWARY: Oh, one hundred percent. There is no question about it. I have seen my work productivity increase as a monotonic function of my marathon pace. [laughs]
ZIERLER: Are you thinking about science while you're running, or is the value that you're not thinking about science while you're running?
TIWARY: I'm rarely thinking about science while running. That said, often big blocks rearrange themselves. Often it is an HR problem—I don't know how to deal with a certain situation. Or often I'm taking the wrong approach to a problem altogether: instead of using method A, I should be using method B. I run something like 10 hours a week, which is a fair amount of time. Over those 10 hours, definitely on a regular basis, your head clears up—that's the first thing, you think better—but also during the run, big blocks just move. You're like, "Oh, I had not thought of this." You get moments of clarity.
ZIERLER: How did you know your research was done? When were you ready to defend? What felt complete about your work?
TIWARY: There was a certain number of papers. I could have kept doing it more, but there was a clean story. It was not complete and it's still not complete, in a sense, because I am still thinking about the relevant problem, and I don't think it will ever be complete, because there will always be new things to think about, but clearly it was the end of a chapter. Or the end of a season; let's put it that way. It was time to move. Axel had also moved to Brown University, so it was clear I had to move on.
ZIERLER: Where was computational power relevant to what you wanted to simulate? Were you ahead of it? Were you catching up to it? What did that look like as a grad student?
TIWARY: Computational power was never an issue. I made the best of what I had. We had ample computing power. There was always the question of whether you should be using even more specialized computers to do more, but since what I was doing was developing algorithms which could speed up the process exponentially, the computing power became somewhat irrelevant. You don't want it to be too slow, but a bigger computer would make things maybe five times faster, ten times faster, twenty times faster; I am developing methods which are going to make things a billion times faster, a billion times a billion times faster. A factor of ten or twenty still matters, but it's not as significant as you might think. We are always one step shy of using the most powerful computers, even now, for the simple reason I just explained, and for a second reason: for the most powerful computers, there is always too much of a rush. I don't want to deal with that.
ZIERLER: Were you thinking about machine learning and AI at Caltech?
TIWARY: I was thinking a lot about applied mathematics techniques, which are related to machine learning. I was interested in things like compressive sensing and things like that. So, not in its direct form.
ZIERLER: Was JPL an asset for you at all? Did you ever spend time at the Lab?
TIWARY: I did not spend time at JPL. My wife used to interact with them a lot, but not me.
ZIERLER: Who was on your thesis committee?
TIWARY: It was Bill Johnson, Bill Goddard, Julia Greer, Brent Fultz, and Axel.
ZIERLER: That must have been a very interesting discussion at your defense!
TIWARY: Yeah, it definitely was. Bill Goddard was late to my defense so we had to call him. Now that I'm on the other side, I see that happening to myself every once in a while, so I know how it goes. [laughs] It was great. It was highly enjoyable.
ZIERLER: Any memorable questions or things that challenged you?
TIWARY: No, I don't remember that, but I do remember that like three hours after my defense I got a call from Los Alamos National Lab that I was not going to get a postdoc position there. [laughs] That's what I remember most. Apart from that, the questions were good questions about what it would take to get these metals to form glasses, the role of entropy, things like that. Now it has been a while, but mostly it was a fun experience. A PhD defense is like a celebration. Most of the work is done. I can't think of many people who don't pass their PhD at the defense stage. You fail your PhD at the qualifying exam stage, but if you make it to the defense, people are getting there to celebrate what you have done. It's just celebratory, which was nice.
Chemistry after Caltech
ZIERLER: At this point, what kind of scientist did you consider yourself? What kind of postdoc programs did you want to join that seemed most appropriate for you?
TIWARY: I wanted to get into chemistry. I knew that materials science is more applied, and I wanted to do more fundamental work. I opened my textbook on molecular simulations, the most famous textbook, which is called Understanding Molecular Simulation, and the most important person in that textbook was a person by the name of Michele Parrinello, who was in Switzerland. I wrote Michele an email, he offered me a Skype interview back then, and I got the position. I packed up my bags and went there. It was very hard because my then-girlfriend, now-wife was still a PhD student at Caltech, so we did long distance for two years, Southern California to Southern Switzerland. But it was clear to me that this was the guy I wanted to learn from.
ZIERLER: Tell me about Michele's research. What is he known for?
TIWARY: Michele did so many things. He has one of the top ten most important PRL papers, which is known as Car-Parrinello molecular dynamics. We all know Born-Oppenheimer; Car-Parrinello is a similar framework where you can combine molecular dynamics with density functional theory. He was the first person to do that, and that was a super hit. Not just that, before that he did something known as the Parrinello-Rahman algorithm, which is how to do constant-pressure simulations. He was the first person to show that. So, here are these methods which are textbook methods now. They are so important that people even forget to cite them. They have reached that level. He was that guy. When I joined his group, he was in his late sixties, and I had a great time learning how to develop methods from him. Just like me, he was doing biological applications but had no background in biology. I thought that was a good sign. If he can do it, I can do it, too.
ZIERLER: Tell me about the ETH. What was that like for you? What was it like living in Switzerland?
TIWARY: Switzerland was great. We were employed by ETH, but Michele managed to keep his group in a little resort town in Southern Switzerland known as Lugano, which is on the border of Switzerland and Italy. That's where we lived. It's the warmest city in Switzerland. It's super nice. On the other side of the lake is another city known as Como, so it's in that area. Como is very famous because George Clooney has a house there and things like that. Lugano was awesome. We got ETH salaries but we lived in Lugano, so it was cheaper. I spent exactly two years there. I had a great time. By this time I had stopped climbing. I was not climbing that much, but I would still go hiking on the weekends. I also learned—the biggest thing there—I think Europeans have better work-life balance than Americans, I would like to say, especially in science. No one came to the lab on weekends. No one worked in the labs after 6:00 p.m. Every once in a while you would see this, but people would stop things, do other things. So on the weekends I would not work; I would go hiking, things like that. Of course there would be a moment of inspiration where you have to go into the lab on a Saturday and get something done because you really want to do something.
ZIERLER: Did you continue in the rare events aspect of your research?
TIWARY: Absolutely, yeah.
ZIERLER: How did that expand during your time at ETH?
TIWARY: I started thinking about it more deeply. I started thinking about metrics which don't just apply to materials but also to biological systems. And, I really started addressing the core problems. During my PhD, it was clear that I had made progress, but there were some core problems I was not addressing, so during my postdoc I started addressing that. I did two postdocs. I did one postdoc at ETH, and the other postdoc at Columbia University. Both were two years each. That's where I really shaped the way of thinking I have now.
ZIERLER: At Columbia, this was where you were working with Bruce Berne?
TIWARY: Bruce Berne, yeah.
ZIERLER: Tell me about Bruce Berne.
TIWARY: Bruce Berne is amazing! Bruce became a professor at Columbia in 1962 or 1963, something like that. I was his last postdoc. I had a great time, because I would just get to talk to him a lot. He gave me a lot of freedom. He really let me think about things on my own, which was super nice. The other thing that really happened for me at Columbia was that just down the hall was Rich Friesner, one of the founders of Schrödinger. I also collaborated with Schrödinger back then, and that's a relationship I have continued to maintain. I loved living in New York City.
ZIERLER: What aspects of your research at that point were relevant for Schrödinger?
TIWARY: Schrödinger sells a lot of software. One of their main things is how drug molecules bind to proteins, or where they bind. This is the docking problem, which is something every pharma company uses. The problem in docking is that a docking method will give you 10 structures, and it's likely that out of those 10, or out of the top 20, some structure is the correct structure, but most of them will be garbage. What I was able to show is that by using these rare event methods for speeding up the process of drug dissociation, I could easily rule out most of those structures. They packaged it into a method that is now used heavily in drug discovery.
Identifying the Hardest Problems in Drug Discovery
ZIERLER: It was at Columbia that your research really became more biological?
TIWARY: Yeah, it started with Parrinello, but at Columbia I really went heavily into that.
ZIERLER: What did you see as your point of entrée into biology? What could you offer?
TIWARY: I just found the hardest problem that no one could solve, and I thought I would go at it. The hardest problem was this: everyone was thinking about what is known as the free energy of a drug. You have a drug which is bound, and a drug which is unbound, to a protein. You calculate the relative free energy difference between the two. This is known as the binding constant, or the thermodynamic affinity. Most drug design paradigms optimize this thing, the idea being that once you get there, the bound state will be very, very stable. What they ignore is the barrier between the two states. What if you get there but the barrier to get out is very small? Then you will not stay there. On average you will spend a lot of time there, but you will be shuttling back and forth. This is known as the residence time of a drug. One of the papers I wrote with Parrinello was one of the first to show that all-atom simulations can be used to calculate residence times of generic drugs. We did that when I was in Michele's group, and then with Bruce I realized there was still a lot of work to be done on this, and I started working on it, and it's something I am still working on—how to do this.
This is something which is actually quite hard in experiments. When you measure residence time experimentally, the error bars can be huge. Secondly, you don't really see what it is that contributes to the residence time being longer. Why does the residence time matter? Think about this. If you take a certain drug, and the drug on average spends a millisecond in the protein of interest—let's say it's a pill for a headache—if it only spends a millisecond, then you will be popping that pill every millisecond. That's not going to be practical. You want the residence time to be on the order of hours or days. You might say, why not extend it to weeks or months? That's also bad, because any drug in the end is toxic to your body. It's going to cause harm, so you want it to get there, stop the bad thing from happening, and then leave. There is a whole range of tuning that needs to be done. You can call it pharmacokinetics. That's what my research, starting with Michele, more with Bruce, and now here independently, has been able to address computationally.
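A back-of-the-envelope sketch of the kinetics point made above: residence time is the inverse of the dissociation rate, and an Arrhenius-style estimate makes it exponentially sensitive to the exit barrier rather than to the binding free energy alone. The prefactor and barrier values below are illustrative orders of magnitude, not numbers from any specific study.

```python
import math

kB_T = 0.593        # kcal/mol at roughly 298 K
prefactor = 1e12    # attempt frequency in 1/s, a typical order of magnitude

def residence_time_seconds(exit_barrier_kcal_per_mol):
    """Arrhenius-style estimate: tau = 1 / (A * exp(-dG_barrier / kB_T))."""
    k_off = prefactor * math.exp(-exit_barrier_kcal_per_mol / kB_T)
    return 1.0 / k_off

# A few kcal/mol of extra exit barrier changes the residence time by orders of magnitude.
for barrier in (10.0, 15.0, 20.0):
    print(f"exit barrier {barrier:4.1f} kcal/mol -> residence time ~ {residence_time_seconds(barrier):.3g} s")
```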
ZIERLER: Are there specific classes of human ailments that you are focused on, or is this a universality of drug delivery and drug discovery?
TIWARY: It's quite universal, but these days I have become more interested in cancer, just due to my collaborators being interested in cancer.
ZIERLER: What kind of cancer and what are the frontiers of knowledge there?
TIWARY: Mostly we are looking at things involving different forms of leukemia. This work that I'm doing on T-cells, which is immunotherapy, that's for patients where chemo does not work. You cannot blast anymore with chemo. Immunotherapy is the newest frontier in cancer treatment, so that's one thing we are looking at. The second thing we are looking at is these protein kinases, which is more traditional chemo, and how to improve drugs over there. And also to predict. Here is the question. If you take a drug and you bind a protein with that drug, evolution is happening at all length scales and all time scales. The protein is going to say, "You are trying to block me from doing what I like to do. I'm going to change so you can no longer block me." This is known as a mutation. For people who undergo chemo, the cancer goes away—and I'm sure many of us have had family members in this situation—and then there's a relapse; the cancer comes back. There are many reasons, but one of the reasons is that in the protein you were trying to block, one little residue will mutate to something else, and the drug becomes ineffective. We showed that, using computer simulations, we can start predicting which mutations will render a given drug ineffective. It's a frontier area. We are not quite there yet, but we are starting to see that we can do this. Most of it so far has been retrospective; we haven't really done prospective prediction. There are some cases where we were able to show that if you make this mutation, the drug will not work, and indeed it does not work.
ZIERLER: Where is the data coming from? Are you working with clinical data from patient populations?
TIWARY: Yeah, we get clinical data. We also get in vitro data, in vivo data in mice, but also clinical information.
ZIERLER: You're doing simulations with this data?
TIWARY: Yeah, we do simulations on systems that are implicated in the data, and then we compare results with the data. That's the idea.
Rare Events at the Biological Scale
ZIERLER: What's the rare event as it is applied in this biological context?
TIWARY: That's a wonderful question. There are two types of rare events. Just as you mentioned ChatGPT for the general population, in the sciences the equivalent of ChatGPT is AlphaFold. That's what everyone is very excited about. It's wonderful, but it has got its limitations, so I am up against a barrier where I have to convince a lot of people, "Hold on, take it with a grain of salt." So, what is the rare event of interest here? What has AlphaFold done? AlphaFold takes a sequence and tells you the structure that the protein is going to form. But the protein is not one static object. It's constantly changing shapes. These shapes are known as conformations. It's adopting different conformations. So, the protein folding problem, which AlphaFold claims to have solved, is not just about predicting the most stable structure; it's about predicting an ensemble of structures. You cannot do that with AlphaFold. There is no notion of thermodynamics. You cannot talk about the relative free energies of these different conformations. If you were to do molecular dynamics simulations, solving Newton's equations of motion, you could sample movements between these conformations, but that's a rare event, because this is a millisecond-long process.
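To see why a millisecond-scale conformational change is a rare event for brute-force molecular dynamics, it helps to count integration steps; the timestep and throughput below are typical ballpark values, not specific to any one study.

```python
timestep_fs = 2.0        # typical MD integration timestep, in femtoseconds
target_time_ms = 1.0     # a millisecond-scale conformational transition

steps_needed = (target_time_ms * 1e-3) / (timestep_fs * 1e-15)
print(f"integration steps needed: {steps_needed:.1e}")     # about 5e11 steps

ns_per_day = 100.0       # optimistic throughput for a large solvated protein
days_needed = (target_time_ms * 1e6) / ns_per_day          # 1 ms = 1e6 ns
print(f"wall-clock at {ns_per_day:.0f} ns/day: ~{days_needed:.0f} days (~{days_needed / 365:.0f} years)")
```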
My methods make it possible to take a sequence and not just predict structure but predict what we call Boltzmann ranked structures. We have a method which we call AlphaFold-RAVE. RAVE is a method that I developed in my group when I started here. It stands for Reweighted Autoencoded Variational Bayes for Enhanced Sampling. That's a long one. But [laughs] there's a story behind it. We just call it AlphaFold-RAVE. With this, we can sample different conformations. That's one rare event of interest. The other one is, once you have a conformation of interest, how does a drug bind to it? Have you read Anna Karenina?
ZIERLER: Of course!
TIWARY: Do you remember the first line? "All happy families are similar. Every unhappy family is unhappy in its own way."
ZIERLER: Right, right!
TIWARY: I can draw an analogy from this: the most stable form of a protein is the active form of the protein. I like to say all active proteins, all active kinases, are similar; every inactive kinase is inactive in its own way. So, if you can predict what inactive conformations a particular protein, a kinase or some other protein, would take, and you design a drug to bind one of those, it will be less toxic—just going back to Anna Karenina—because you're no longer trying to do something which works on every family; you're doing something that is focused. That's the big picture that we are looking at.
ZIERLER: From Columbia, were you thinking about biotechnology? Did you ever consider joining a pharmacological organization?
TIWARY: Yeah, when I went on the faculty market, I thought if I didn't get a job, maybe I would consider that, but it was never at the top of my mind.
ZIERLER: The University of Maryland, did that resolve the two-body problem for you finally?
TIWARY: Megan and I were both at Columbia. I got the job here first. I was able to delay it a little bit so I could spend more time in New York, but even then I started here before Megan. Then, we were again nervous. College Park to New York, as you might recall, is not that far. The Amtrak actually works here. Then Megan went on the job market, and she is clearly more talented than I am, and she had many, many offers; Maryland was one of them. Thankfully she was genuinely excited about Maryland even apart from me being here, so neither of us had to make a compromise. She came to Maryland for her own scientific growth, and that resolved the problem.
The Heart of Statistical Mechanics at UMD
ZIERLER: What was attractive to you about joining the faculty at the University of Maryland?
TIWARY: This institute, IPST, that I told you about. This is the heart of statistical mechanics. When I joined, at that point in my little building there were actually four National Academy of Sciences members. It was the highest concentration of National Academy members on the Maryland campus. So, all these people! One of my colleagues, the director of the Institute back then, was Chris Jarzynski, who is very famous for generalizing the Second Law of Thermodynamics. We used to talk about Jarzynski's relation all the time, and he was my colleague! I still pinch myself that, wow, I am working with these people—or John Weeks, who has now gone emeritus—some of the deepest thinkers in thermodynamics and statistical mechanics, and that I could rub shoulders with them. Or Millard Alexander, who I mentioned earlier, and also Garegin Papoian. So, I had all these colleagues. It's the people. That's your short answer. The people and the heritage.
ZIERLER: Did you get a sense of the origin story of IPST, how it started, why it started?
TIWARY: Yeah, it was started in the 1940s or 1950s. Again, a Caltech connection! Professor Shih-I Pai, who got his PhD from GALCIT, aeronautics at Caltech, started this institute. Back then, I think it had some other name. Institutes at universities come and go, and this institute has survived many recessions, [laughs] many, many other things, and it has stayed over the years. It's an old one.
ZIERLER: The term "chemical intelligence," did you start in this area once you got to Maryland or does this have a backstory?
TIWARY: No, it's Maryland. We really started to think about not just AI methods to improve chemistry but chemistry to improve AI. That's the idea, so that's where the term comes from.
ZIERLER: I wonder if you can elaborate on that. Chemistry to improve AI, what does that mean? What does it look like?
TIWARY: Good chemistry often happens when we have limited data and we have to make predictions about what's going to happen next. This is the heart of the problem in machine learning also. For difficult problems there is limited data, and you have to fit models, you have to make predictions about what's going to happen. That's one angle: how can methods in chemistry, theoretical chemistry and also physical chemistry, improve the design of AI algorithms? I'm not talking about AI algorithms only for chemistry applications, but, generally speaking, AI algorithms.
I mentioned to you this diffusion model which is used in DALL-E. The original paper on diffusion models came out of Stanford in 2015. If you read this paper, you will see it's called "Deep Unsupervised Learning using Nonequilibrium Thermodynamics." The paper was barely cited. For years there were like two citations, three citations. Then DALL-E came, and it just shot up. This year it has already been cited 2,000 times. If you get into the math of this paper, you will see how all it's doing is taking ideas which are very familiar to theoretical chemists and converting them into the language of machine learning. That's the first idea behind chemical intelligence.
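The core of that connection can be illustrated with the forward, or noising, half of a diffusion model: a data distribution is gradually diffused toward an equilibrium Gaussian, a relaxation process familiar from non-equilibrium statistical mechanics, and the generative model is trained to reverse it. The toy bimodal data and linear noise schedule below are arbitrary choices for illustration, not the setup of the 2015 paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy bimodal "data" distribution with two well-separated peaks.
x0 = np.concatenate([rng.normal(-2.0, 0.3, 5000), rng.normal(2.0, 0.3, 5000)])

T = 200
betas = np.linspace(1e-4, 0.05, T)       # noise schedule
alphas_bar = np.cumprod(1.0 - betas)     # cumulative fraction of signal retained

for t in (0, 50, 199):
    # Closed form of the forward process: x_t = sqrt(a_bar)*x_0 + sqrt(1 - a_bar)*noise
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * rng.normal(size=x0.shape)
    print(f"t={t:3d}  mean={xt.mean():+.3f}  std={xt.std():.3f}")

# As t grows, the two-peaked distribution relaxes toward a unit Gaussian;
# a diffusion model learns to run this relaxation in reverse to generate samples.
```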
The second one, which is more common, is that molecules in chemistry need their own representations. You cannot treat chemicals the same way as you treat people interacting on Facebook. Even though I say that, ironically the same frameworks are applicable. In the end you think about everything as a graph. You have a network of friends talking to each other, or you have a network of molecules talking to each other. How do you represent chemical structures? That's yet another angle, which is different from the application of AI. It's the network aspect that is common, but not all problems have the same representation.
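A toy illustration of the molecules-as-graphs point: atoms are nodes and bonds are edges, structurally the same object as people and friendships in a social network. The hand-written ethanol fragment below (heavy atoms only) is just an example of the representation; real pipelines would build such graphs with a cheminformatics toolkit rather than by hand.

```python
# Ethanol heavy atoms: C-C-O, written directly as a graph.
atoms = {0: "C", 1: "C", 2: "O"}    # node index -> element label
bonds = [(0, 1), (1, 2)]            # undirected edges (single bonds)

# Adjacency list, the same structure one would use for a friendship network.
adjacency = {i: [] for i in atoms}
for a, b in bonds:
    adjacency[a].append(b)
    adjacency[b].append(a)

for i, neighbors in adjacency.items():
    print(f"{atoms[i]}{i} is bonded to", [f"{atoms[j]}{j}" for j in neighbors])
```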
ZIERLER: Tell me about building up your research group, attracting graduate students, getting funding. How did you get it all started?
TIWARY: Surprisingly, none of it has been as difficult as I thought it would be. With graduate students, I was very careful in hiring—my first two students were amazing. They were really a good fit, personality-wise and scientifically. One of them, Yihang Wang, is now an Eric and Wendy Schmidt AI in Science Postdoctoral Fellow at the University of Chicago. He was this person who I have never seen not smile, and that was critical. Even when I got low, he was always smiling. The other one, Sun-Ting Tsai, would smile a bit less, but he was really, really smart. He's now a postdoc with Sharon Glotzer at the University of Michigan. These two were very solid, very hardworking, very gung-ho. They got everything going. Once you have that nucleus—I have never had to go out and recruit. I always get a good pool of students here. They do their rotation, and we find students who are a good fit with us, and they stay with us. Finding postdocs is a bit harder because we don't have a rotation system, but we are getting good postdocs now also.
Your second question was funding. Funding was very hard initially. That's how I resumed running. Every time a grant proposal would get rejected, I would drink a bottle of beer. I thought, "Okay, this is not healthy." I said, "How about going for a run instead?" since that also generates endorphins, something like that. I started running, and then I remembered that my first marathon, the L.A. Marathon back when I was in Pasadena, was five hours, 59 minutes, so that was terribly slow. I sat on the sidewalk and cried. I thought, if I can cut it down by half by the time I get tenure, that would be cool. So, I started running. Every time a grant proposal would get rejected, I said I would run a half mile just on my own. I started doing that, started getting fitter, and at some point proposals started going through, and I kept running. [laughs]
ZIERLER: I wonder if part of the difficulty was because of the conservative nature of grant applications. In other words, maybe committees didn't know what to do with you. Are you a chemist doing AI? Are you a data scientist doing chemistry?
TIWARY: That's absolutely true! Currently I am funded, thankfully, by three federal agencies: NSF, NIH, and DOE. NSF is very open to ambitious ideas. I really don't think I had much difficulty there. They were very warm, and it took me two tries, and I got it. DOE also is remarkably receptive to ideas that can really do fundamental theory for the energy sciences. That was also not too hard. NIH, on the other hand, was the hardest. It is the biggest part of my funding. The reason why NIH was hard is that the traditional model at NIH tends to be hypothesis-driven. You have a hypothesis, and you are going to do experiments that will either validate or invalidate the hypothesis. I am not hypothesis-driven. I am curiosity-driven. "This looks good. Let's go pursue this." At NIH, the traditional grant mechanism is something known as an R01, which is a hypothesis-driven mechanism. I submitted my proposal. It was given a score of "not discussed," which means it was so bad that the panelists did not even discuss it.
ZIERLER: [laughs]
TIWARY: But then they had a new program called R35, which is not a fund-the-project program but a fund-the-PI program. There, it scored close to the best possible score. It's a reverse scale: the worst you can score is 90, the best you can score is 10. It got a 14, because that type of panel really liked what I was doing. I was like, well, these are the big-picture ideas. It's the sort of proposal where you give the big picture of what you are going to do, and they loved it. Here, it was a question of finding a good fit. It worked out in the end.
Abstract Language and Protein Molecules
ZIERLER: At the time you started interacting with NIH, institutionally had they already embraced AI as a research tool?
TIWARY: Yeah, they had, and at this point proposals that use AI or machine learning are no longer novel. There are too many. Some proposals that use AI and machine learning in a novel way are still novel, if you know what I mean. Most of the proposals are just like, "There is data and I am going to use machine learning." That's normally not a good strategy, because the data, as I said, is limited, so you have to really think hard about why machine learning will help, or why you even want to use machine learning. NIH had embraced it at this point. It's no longer novel, to be honest. The community is starting to look at AI proposals with some skepticism because everyone is doing it. Why do you want to do it? This is a question people have to answer.
ZIERLER: Your work on predicting the behavior of protein molecules, when did that start? Did that start right when you got to Maryland? Was that in your first year?
TIWARY: Yeah, this is where it started, because early on I was thinking more about predicting when drugs dissociate from protein molecules, but I was not thinking so much about how the protein molecule itself changes. Then I realized that this is a problem which is extremely hard, and I guess I already had this intuition—I saw AI coming in 2017, and I got a sense that this is a problem that AI alone is not going to be able to beat. That was good intuition, so I stayed with that. It's going to be fundamentally data-sparse. That's the nature of the problem. For a variety of reasons, you're not going to have good data on this problem. That's when I started.
ZIERLER: In creating this abstract language, what aspects of it are valuable, going back to this idea of fundamental research, just understanding how proteins work, regardless of what we do with that discovery?
TIWARY: It's absolutely important. It gives insight into evolution. Why are we the way we are? I think the information I produce is useful for drug discovery but also for just fundamental understanding of why are we the way we are.
ZIERLER: What about the value of this abstract language for drug discovery, for learning what proteins do and then learning what drugs to develop as a result?
TIWARY: It just makes the computation easier. That is the whole point. Because we can simulate things, but simulations—there is no free lunch, right? Simulations are insightful. They make you look at everything in all-atom resolution. But it's a lot of data. For example, let's consider—I don't write Mandarin, I can't read Mandarin, but when I look at it, it seems complicated, it seems expressive. Let's use Mandarin as an example and go one step from there. Let's think of a language where every character is actually multidimensional. It's going to be very complicated. You cannot write it on a piece of paper. You have to have projections in different directions. If you can compress it into a clean language, that allows you to communicate. That is the whole point of a language, right? It allows communication. Why are we talking? Because we both can speak English. If you could not speak English, it would probably have been a bit complicated. Although AI would have helped us! So, that's the thing. It allows us to compress information, to communicate it, and then to act upon it in a clean way.
ZIERLER: Have we seen this research progress to the point of clinical trials? Has the FDA gotten involved yet?
TIWARY: The most ambitious ideas—my group, we have more ideas than we can keep pursuing. Some ideas come. Some ideas keep going. I wouldn't say it's one idea which is going towards clinical trial or which is going towards even in vivo validation. It's a combination of things which are helping our experimentalist collaborators. It's always a combination of our methods.
ZIERLER: When COVID hit, what did that mean for your research group? Doing models, working on computers, was it relatively easy to switch to a remote learning environment?
TIWARY: It was relatively easy, also because my group had already become big users of Slack well before COVID. I keep an open door, but I travel a lot, so I can always communicate with my group using Slack. The fact that we were very proficient Slack users helped us a lot. We kept doing computations, but there is no substitute for sitting down and just scribbling ideas on a blackboard. That was definitely affected. It was hard to keep group morale up, things like that. I'm glad that it ended, for so many reasons, and that we could go back to meeting each other. The students really started missing the office. Even I started missing coming to the office. It gives a structure to life, right? To go to a place and do something.
ZIERLER: Yeah! Did you ever think about getting involved in COVID research, applying machine learning to virology?
TIWARY: No. I had a lot of friends who were doing it. I was not so sure I would be able to contribute to that for many reasons. Firstly, it was quite crowded already. A lot of people were working on it. I felt like my methods were not quite there yet. If something like this happens now—I hope not—then I might be more persuaded to do that. Back then, I was still testing my ideas. I still am, but back then definitely more so.
ZIERLER: Heaven forbid there's another pandemic; how might you be prepared for it that time around? What have you learned as a result?
TIWARY: I have seen the type of research people did for COVID, and I know how I can do that better; let's just put it that way. I think I could get into it pretty quickly. We are ready for that if it happens.
ZIERLER: Tell me about receiving the Faculty Early Career Development, the CAREER Award from the NSF. What did that mean? What did it allow you to do?
TIWARY: Oh, it's a great award! It's five years of funding, and it's relatively flexible in what you do with the money. That's great. It's not just project-oriented, so that's really nice. It led to many other awards. That happened, and now it seems like a long time ago. The other nice thing about this award is outreach. You don't get the CAREER Award just for doing science. You also have to have the research integrated with education, an outreach component. That's something I am quite passionate about. In my group, I have people who often come with no machine learning background, and they learn how to do machine learning. I've seen this process. Supported by this CAREER Award, every summer I do outreach camps, mostly online, where I have students from local HBCUs, state universities, and elsewhere who come and attend these classes. They have never ever programmed, and by the end of these 12 hours of Zoom instruction, they have some sense of what it's all about. When I first started it, I did all the instruction myself. Last year, I tweeted about this and said, "Would any experts like to come and teach?" It just went viral. I had so many people who were willing to come and teach! I had people from big pharma companies, people from MIT, from Stanford, everywhere, who were like, "I want to teach some of your class." So, I'm continuing this, and it is supported by the CAREER program. It's not just the research but also the outreach which makes it very, very special.
ZIERLER: When it was time for you to come up to tenure, is there an opportunity to step back and think about what it all means and make the case for yourself? How does that process work at Maryland?
TIWARY: I feel like I did everything in a rush. I went up for tenure one year early. Normally people file at the end of the fifth year. I filed at the end of the fourth year, so I guess I was feeling confident enough. My mentors told me, "You are a good case." I did that, and then it's a long process. When you file for tenure, they go out for letters, then the department votes, then the college votes, then the president signs off, and by the end of that year, you're still waiting. You know you will get it, because the department vote was good, and everyone tells you you're going to get it, but you never know. What I do remember is getting the letter from the president. That, and the day I got my PhD at Caltech, when they address you for the first time as "Doctor," those are very defining moments. You know that you have jumped to a new quantum level which cannot be taken back. It's an irreversible process. I guess tenure can be taken back, but you know what I mean.
ZIERLER: Either by looking in the mirror or if there's a tenure talk, how did you think about your contributions at that point? How did you package it all together for yourself?
TIWARY: I made a clear case that I'm doing machine learning combined with statistical mechanics for practical problems, which not many other people are doing. People are doing parts of it, but I am doing things which are fundamentally exciting from a chemistry perspective and are also going to have long-term repercussions. Then my department was kind enough to put me up for full professor at basically the same time I was officially made an associate professor, so it all happened in kind of a rush. My tenure was not even official and I was already filing for full professorship. That's why I said it seems that it went too fast. [laughs]
Predicting RNA Structure
ZIERLER: There are many perspectives on this, as I'm sure you can appreciate. Getting tenure, some people find it liberating, that they can take on new avenues of research without being concerned about tenure, while others are on their path and that's just what they continue on. I know this is a recent development for you. How have you responded to tenure? What has it made you do?
TIWARY: Absolutely, I remember very clearly. I did two things. Number one, I got more into RNA. I was talking about proteins, but RNA is the final frontier, I think, because I don't know if you know this, but in the human body there is at least 10 times more RNA than protein. And they are so crazy. There is a very famous book, The Double Helix, by Watson. Did you know that RNA can form a triple helix? It's crazy!
ZIERLER: Linus Pauling thought about that many years ago.
TIWARY: Yeah! But it does! So, we have this chemical structure that forms a triple helix, and RNA has evolved that way. DNA is designed to maintain a double helix structure, so it preserves that. RNAs, on the other hand, for their role in biology, are messengers, right? The messenger RNA. It has to carry information to do something, so it has to be able to zip out and zip back and take different forms very, very quickly. This is a problem I have become really fascinated by—how to predict RNA structures. My collaborator at the National Cancer Institute, Jay Schneekloth, can validate our predictions. I have two students, Shams Mehdi and Lukas Herron, who are working on this RNA structure prediction problem. Our goal is to come up with compounds that bind to RNA in ways you would never have expected, because you would never have thought that this is a structure the RNA would adopt. That's one thing I started doing. The second thing was something basic. It's very easy in science, as you progress, to become an administrator, to become a manager. And, it's important. I have come to appreciate how hard it is to be a good manager—to motivate people, to raise funds, to go out giving talks. I used to think of scientists who I thought were managers almost in a bad way—I would think, oh, he or she is just a manager. Now I realize it's really, really hard to do that.
I did not want to become only a manager, even though I think it's really important and it directly contributes to my group being happy and successful. So one thing I did after tenure was to start doing more programming myself. In my calendar for the week, I mark that I have to do eight hours of programming. I mark out blocks in the calendar as meeting time. I call it—actually my wife Megan calls it "meeting with myself." I don't take any meetings in that period and I just code. There is a channel on my group Slack where I ask the naivest, dumbest questions about programming and my students help me answer them. This has been extremely useful. My hope is to write a first-author paper next year with my students, not just papers as the senior author, but more than that, this allows me to help my students more. Because there is a joke among chemists—what's the difference between machine learning and AI? If it's a senior PI they call it artificial intelligence, and if it's a younger person they call it machine learning.
ZIERLER: [laughs]
TIWARY: If you see what I mean. I am really starting to appreciate the nitty-gritties of the algorithms by coding myself. That's where ChatGPT has been liberating. A lot of the work that would have been very hard, now due to ChatGPT—like if I want to make a plot to analyze my data, I would have spent one hour trying to answer that question, and I can do it in five minutes thanks to ChatGPT. So it's helping me do some research.
ZIERLER: The irony here, of course, is that they say coding is becoming a dying art because it's going to be taken over by AI completely. What's the value for you in walling yourself off and doing the coding yourself?
TIWARY: Translating ideas, translating intuition into AI is still very, very hard. One of the most important models in physics, and I'm sure you know of it, is the Ising model. You can go to ChatGPT and ask it to write an Ising model code to calculate the energy. You will notice—at least I noticed—that it was getting the periodic boundary conditions completely wrong. It was doubling it. That's interesting. You can double-count. Your energy will be double. It won't affect second derivatives of the energy, because if something is doubled, the first derivative will also be doubled but the second derivative will not be doubled. In order to debug something like this, you really need to know the physics. So, I think it's a helper tool, and these helper tools will always be there, but good programs, good algorithms are flashes of inspiration. There is something else. They are inspiration mixed with experience. Really it's like what I told you about running—all of a sudden, a block moves, that type of thing. It's hard to say 50 years from now how things are going to look, or even 40 years from now. Or even perhaps 20 years from now! Or even 10 years from now! I think it's very hard to predict.
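The double-counting pitfall described above is easy to see in a correct reference implementation: with periodic boundary conditions, each nearest-neighbor pair should be counted exactly once, for example by summing only over each spin's right and down neighbors. A minimal sketch, assuming ferromagnetic coupling J and no external field:

```python
import numpy as np

def ising_energy(spins, J=1.0):
    """E = -J * sum over nearest-neighbor pairs of s_i * s_j, periodic boundaries."""
    right = np.roll(spins, -1, axis=1)   # neighbor to the right, wrapping around
    down = np.roll(spins, -1, axis=0)    # neighbor below, wrapping around
    # Each bond appears exactly once, so there is no factor-of-two double counting.
    return -J * np.sum(spins * right + spins * down)

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(8, 8))
print("random configuration energy:", ising_energy(spins))
print("all-up configuration energy:", ising_energy(np.ones((8, 8), dtype=int)))
# An 8x8 periodic lattice has 2 * 64 = 128 bonds, so all-up gives -128*J, not -256*J.
```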
ZIERLER: It's hard to predict, meaning that coding would be something that would be valuable to somebody like you?
TIWARY: Yeah, how much of it will become truly automated. It will always be valuable, but do I need my students to know any programming, or can AI tools do it completely? These types of things. What the impact of helper tools like this is going to be, that's not clear to me. I think coding will stick. Just to give you an example, we have calculators. That does not mean multiplication tables are not useful. They are extremely useful. In fact, the students who did well in my courses, chemical physics or physics, are the ones who are very familiar with basic integration, and they don't have to use a calculator for it. They can see the big picture in a much easier manner. That's what gives me hope, just like calculators have not replaced calculus.
The Local and Global Promise of Health Computing
ZIERLER: We'll move our conversation closer to the present. Your more recent venture, the Institute for Health Computing and within that, Therapeutic Drug Discovery, first tell me, what is the Institute for Health Computing? How did it get started?
TIWARY: It's very, very interesting. The story goes back to our dean, Amitabh Varshney, getting involved with Montgomery County. The county executives were very interested. Their big vision is to make Montgomery County the Silicon Valley of the East, really looking forward. Which is ambitious; we know there's a lot of pharma based in the Boston area and things like that. They want to bring a lot of that activity over here. Amitabh, our dean, got involved in it. Then it became bigger and bigger. They recruited Brad Maron, who is the co-director of this Institute, from Harvard, Broad Institute. Brad came down here to lead it. Then they were looking for people who can do computing, who can do drug discovery, and they contacted me. It sounded very cool to me. I was going to go on a sabbatical this year, but then this happened, so I thought, well, let's just do this. To some extent, it's an open slate. We are being given a lot of freedom in how to shape things, what to do. The whole idea is to use computing for health problems in a way that can benefit everyone, broadly speaking, but also specifically Montgomery County: make it a hub where we can have startups coming out of here and really do stuff. My dream is, for example, to have an AlphaFold for RNA come out of the Institute for Health Computing, something like that.
ZIERLER: Where administratively and even scientifically does the Therapeutic Drug Discovery initiative fit within the Institute for Health Computing?
TIWARY: It's a part of it. They have other things going on as well. They are very interested in understanding equity in drug discovery. For example, when we design drugs, the trials generally happen on some block of people. Are those same drugs going to work across income backgrounds? Are they going to work across racial backgrounds? It's not clear. Can we think about things like this? These are open questions that have not been addressed. Since we will have access to clinical data—I think, I might be wrong here—from hospitals in the University of Maryland system, we can look at these things in a very, very careful manner. Also, it really builds on, as we discussed in the very beginning, NIH and NCI and institutions like that being in the area. We can really collaborate with them and do things.
ZIERLER: The patients, the data is coming from Montgomery County?
TIWARY: And, broadly speaking, Baltimore and elsewhere. There are lots of hospitals in the area. They are still trying to figure that out because it's patient data, so it's sensitive information. You have to be careful.
ZIERLER: Because this is so new, what do you hope this affiliation, this initiative, will do for your research?
TIWARY: It will really reduce the time to get to translational research. That's the number one thing I am hoping, while staying true to my core, which is artificial chemical intelligence, which is statistical mechanics and theoretical chemistry and machine learning. I want to stay close to these things but also do things which can be validated very, very quickly.
ZIERLER: What's the timeline? When you talk about translational breakthroughs, what are we talking about? Years, decades, in your lifetime? What does that look like?
TIWARY: Most drugs, if you look at drugs that are currently in Phase 2 or even Phase 3 trials, were first proposed around 10 years ago. It takes time for things to go through, for valid reasons. There are a couple of things we would like to do here. First of all, come up with algorithms that not just I but also others can use to do drug discovery, which is something I have been doing already. But to have clear, tangible candidates that are validated and ready to go toward Phase 1 by the end of three years—that would be our target, for example.
ZIERLER: From ten years to three years, I can't help but ask, is artificial intelligence itself creating efficiencies in the pipeline that might make these drug discoveries, these translational breakthroughs faster?
TIWARY: That is exactly what we are hoping.
ZIERLER: I wonder if you could walk me through that. What are the efficiencies that might be yielded as a result?
TIWARY: The first one will be the speed at which we can screen through protein conformations, which is what I mentioned to you, going back to the Anna Karenina quote. The second is the speed at which we can go through libraries of compounds. There are libraries out there with a billion compounds, and if you were to do it experimentally you would have to test every single compound. With AI, what you hope to do is to get a sense that these billion compounds are not actually a billion distinct compounds; there are classes that exist for a particular purpose, so you can really speed up the process for a given application. It's really speeding up every aspect of the process.
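One way to picture the idea that a billion compounds are really a handful of classes is to cluster compound feature vectors and screen one representative per cluster first. The random placeholder fingerprints and the choice of five clusters below are made up for illustration; this is not a real screening protocol.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_compounds, n_features = 10_000, 64
fingerprints = rng.random((n_compounds, n_features))   # stand-in for real molecular descriptors

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(fingerprints)

# For each cluster, pick the compound closest to the cluster center as its representative.
representatives = []
for c in range(kmeans.n_clusters):
    members = np.where(kmeans.labels_ == c)[0]
    dists = np.linalg.norm(fingerprints[members] - kmeans.cluster_centers_[c], axis=1)
    representatives.append(int(members[np.argmin(dists)]))

print("screen these representative compounds first:", representatives)
```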
ZIERLER: One complication here might be that one of the things that you're working on is looking at human health challenges really at the individual patient-specific level. What aspects of this research are just broadly relevant, and where might this research be relevant for one person but not another? How do you develop drugs on that basis?
TIWARY: That's a wonderful question. I think the research is always applicable across a range of patients, with the possibility of moving toward precision medicine for a particular patient. It's general, with the possibility of becoming specific. That's the way I look at it.
ZIERLER: Of all of the health conditions out there, either from personal experience or however you're inspired, is there one that's closest to you, that's most meaningful to you, that you really hope to be a part of solving?
TIWARY: I would not like to die. The world is so exciting. But I don't think I want to get into aging, because there are too many people in it. I would like to keep working on cancer. Then secondly, I am starting to get quite interested in diabetes. One of my collaborators in this Institute is Rozalina McCoy. She is an expert in diabetes. She came here from the Mayo Clinic. We are already talking about it—she has a deep understanding of diabetes, of how it happens across different patients, and I want to work with her on diabetes. In my family—I come from India, it's a giant family—if you name a disease, someone has it. I've seen all of it. Anything we can make any dent in is going to be very, very useful.
Now that you say it, I also would like to point out there are some really hard diseases which have parallels to materials science. For example, the neurodegenerative diseases like Alzheimer's: effectively a lot of them involve fibril formation in the brain, and that's like crystallization. Once a crystal forms, it keeps growing. You cannot bring it back. That fascinates me, if I could tackle that problem, because my DOE work is exactly about thinking through crystallization. That's for energy applications, but I think there are parallels, and I would like to look at this problem of crystallization and see if we can use it to predict, for example, early-onset Alzheimer's. The problem with Alzheimer's is that when it happens, you have so many fibrils that it's done. If you can predict it, even the slightest advance warning is going to be very useful.
ZIERLER: A theme of our discussions had been finding efficiencies as a result of AI. In the background of all of this health science research of course is the enormous, skyrocketing cost of healthcare. Where might AI play a foundational role in making healthcare cheaper and even more accessible not just in Montgomery County, but the United States and across the world?
TIWARY: That's a beautiful question. I wish I knew the answer. As someone who has spent, as of now, 24 years in India and 16 years in the United States, there are so many things we could be doing better, and it makes me sad to say there are so many things that even India is doing better in its healthcare system. If you have a simple ailment, it's easy to go to the doctor and get it taken care of. Here, it's not. It's a mess. There are policy-related issues where we definitely need to do better, and I don't know how that will happen.
As far as the drug development paradigm goes, just as in computing we have Moore's law—that the computing power you get per dollar keeps doubling as chips become more compact—in drug discovery there is Eroom's law, which is Moore spelled backwards. Eroom's law says that the inflation-adjusted dollar cost of developing a new drug keeps doubling, roughly every nine years. There are two reasons for that. Number one, diseases are also becoming more complicated; we have already dealt with many of the more tractable ones. Second, it's just expensive to do drug discovery. It is not a cheap endeavor, if you look at the amount of manpower, testing, everything that goes into it.
I think AI will help with so many aspects of this, and hopefully it will bring down the cost. That's something we are very interested in. For example, one of the drugs that I work on, Gleevec, is a leukemia drug that came out in 2001. It was even featured on the cover of Time magazine. This drug brought down the patient death rate from something like 90 percent to 20 percent. Back then, a lot of patients relapsed; they had mutations. This is somewhere I am working with my collaborator Markus Seeliger at Stony Brook, who has patient-specific information showing which mutations make Gleevec ineffective. If I can reproduce that just through simulation, and do it for other drugs, maybe I can start proposing therapies. I can look at patients and say—because why do mutations happen? They happen due to the drug, but there are also evolutionary reasons, there are societal reasons, and things like that—so I could start predicting that this class of patients is going to have this mutation which is going to make Gleevec ineffective, so maybe you should start thinking about a cocktail of drugs to target that, things like that. That would make the whole process cheaper. You don't have to just rely on Gleevec. You don't have to go back to it. You are actually solving the problem.
The other aspect is cost: a year's course of Gleevec in India costs around $100,000. Here, I don't know the price; I just remember the number for India for some reason. That is super expensive, right? That connects to the residence time problem. Gleevec has a residence time of around a few hours. If I can design drugs which have a residence time of not two hours but twenty hours, then the dosage you have to take goes down by a factor of ten. That will also have an effect on the price. The savings will come in different ways, and I'm sure it will help with the price problem.
ZIERLER: We'll bring the story right up to the present. I know you can't talk much in specifics about your collaboration at the NIH, but I wonder more broadly if you can explain the enduring challenges of immunotherapy. We're old enough to remember when immunotherapy came out. It was supposed to solve cancer. It was supposed to be the magic solution to cancer. Why is it not? What is so difficult? Why is there still promise in immunotherapy?
TIWARY: People are still optimizing things. This is work that has been done by my collaborators Grégoire Altan-Bonnet and Naomi Taylor. When I talk to them, my attitude towards immunotherapy is much more optimistic. Compared to chemo, it is doing better. The way I understand it, there is still a lot of engineering that needs to be done, and there are conditions—like I was just reading a report last week—patients with immunotherapy are developing new types of cancer. That's very scary, right?
ZIERLER: As a result of the immunotherapy?
TIWARY: Yeah, yeah, yeah. It's not clear yet that this is happening. This is a news item from last week, so I don't even know. I will draw an analogy. A computational method that I developed, if no one uses it, everything is good because no one is going to criticize it.
ZIERLER: [laughs]
TIWARY: As a critical mass of people start using it, that's when the criticism starts. There is good news too, but some people will be like, "Well, it didn't work here." This will happen with any technique or any therapy. You will find its limitations. But overall, I am optimistic. This is probably due to my collaborators, who are giving me a lot of optimism from what they are seeing in their own research. NIH has a big hospital with some of the biggest datasets that people can look at, and the results are encouraging.
Reflecting on a Nonlinear Life Trajectory
ZIERLER: Now that we've worked right up to the present, for the last part of our talk I'd like to ask a few retrospective questions about your career, and then we'll end looking to the future. I wonder, if you could look back, do you see your career progression as a scientist, as a scholar, as an intellectual—what aspects do you see in linear terms, one thing very obviously and logically followed the next, and what aspects are nonlinear, where you met somebody or a new technology and it bounced you in a totally different direction? I wonder if you could reflect in both of those ways.
TIWARY: I don't think there is anything linear. There are a lot of failures. There is a lot of nonlinearity. There are a lot of things happening at the right time and somehow my being able to capitalize on them. Like getting to Varanasi. I think of Varanasi as my spiritual home. Next month I am going there just to spend four days in the city doing nothing. No Zoom meetings, nothing, just sitting and reading papers on the banks of the Ganges. That was something incredible. There I met my first professor, Dhananjai Pandey, who told me, "You should think about Ising models. They're really cool. You should do simulations." I started doing that. That led to me publishing three papers during undergrad, three first-author papers. That got me to Caltech. At Caltech, I don't know if I ever would have gotten in if Axel was not there. Axel was there only for five years, and I was his first student, and it just so happened that Axel was very interested in what I had done, and he went to bat for me. I'm sure other people did also—Julia was probably on the admissions committee and things like that. Yet another event. Then going to Michele Parrinello. It's a lot of nonlinear events which have switched directions. It's just being open to change. I think I have never been shy about taking risks. Sometimes it brings me trouble, sometimes it doesn't. Now I have come to understand how to take better risks and also how to stay hedged when a risk does not pay off. This is something I have learned with time. Not the easiest way, but I have learned with time. [laughs]
ZIERLER: In seeing your career in nonlinear terms, earlier in our conversation you mentioned you consider yourself spiritual but not religious. Do you think there's a role for such non-scientific concepts as luck or spirituality or even destiny in how one's life, one's scientific life, unfolds?
TIWARY: Yeah, I'm starting to think more and more about it. My wife, Megan, is tenure track, and when we were both tenure track, tenure was this thing that we were working towards, and it gave us purpose. It's like everything was clear because we were working towards tenure. When I went through two promotions in two years, there was a huge vacuum. What am I working towards? What am I doing now? Because I could just stay here and, you know, be okay. Where does the drive come from? It has to be very internal. It made me question a lot of things. The biggest thing that became clear at this point was, I want to help society. I have found that to be such a push—it almost sounds trivial, but it's hard to practice. Any time I feel tired or doubt myself about anything, I just think, how is this going to help society? If I get a clear answer, somehow things work out. It's like it just clicks, and everything finds a way to work out. That has been incredible in the last two, three years, especially since tenure, just focusing, as a professor, on how I can help society. First through teaching. I teach my students and I take great passion in it. I use the blackboard to teach classical thermodynamics and get them excited about it, and that's a very, very good feeling. Sometimes I teach physical chemistry to a hundred students in the same class. That's a way to help society. By mentoring my students, that's another way to help society. Through science, by introducing new ways of thinking, that's yet another way to help society.
Eventually, as I rise through society and become financially more stable—I didn't tell you this, but I come from the poorest part of India, from a state called Bihar. If you ranked Bihar as a country by GDP, it would be the fifth or sixth poorest country in the world. It would be poorer than Sudan. So I am trying to help—we had servants while growing up. It's something I'm very ashamed of, but that's how the state is. So, I'm going back there. I'm helping my servant's family. I say "servant" with so much sadness. That's exactly how Bihar is. This guy, Deva, who I grew up with—while I went to Caltech, Deva is completely illiterate. He cannot write his own name. But I am trying to put Deva's daughter Divya through college, so she can go to college and get a job.
I guess that's my spirituality, if I can help society, everything works. Can science explain something like this? Maybe science can explain that helping others will help us survive in the long run, but—you kind of lose the fun! [laughs] If you try to explain it. It's easy to just think it's good to be nice to others and help everyone enjoy science. Science has given you and me and so many others so much joy. I think everyone should have the chance to do science up to a certain level and appreciate it. Then, whatever they do, they will go with a scientific mindset. This is something I want to keep doing over the coming decades.
ZIERLER: You mentioned ideas that you learned or considered at Caltech that you want to come back to, that you never got around to finishing. For all of the research that you've done—of course the work is never complete, there's always more to investigate, there's always more to discover—but what stands out in your mind as a problem that you really resolved, that has a finality to it, that is understood now in a way that wasn't before you got involved in it?
TIWARY: This is my most recent paper, which I'm so happy with. This is my proudest piece of work, I think, so far. This is a method called thermodynamic maps. The idea is very simple. Let's say you do an experiment at a temperature of 300 Kelvin and you do the same experiment at 500 Kelvin, and you make a bunch of observations at these two temperatures. Now I ask you the question, "How is it going to look at some temperature in the middle?" Can you answer this? Why should you be able to answer this? There is a reason, and the reason goes back to statistical mechanics. If you can write down the partition function for a system—and I'm going to get very shamelessly technical here, but this is Caltech, so we are allowed to be technical [laughs]—if you can calculate the energy eigenvalues E_i for a system, which do not depend on temperature, you can write down the partition function, which is the sum over states of exp(-E_i/k_BT). This partition function contains all thermodynamic information across all temperatures.
With this method, the thermodynamic map, we have been able to show, using artificial intelligence, specifically diffusion models, a way to take information at two temperatures and learn a model that can extrapolate across any temperature. This is super powerful. For example, let's go to the Ising model. You study the Ising model deep in the magnetic phase, and you study it deep in the non-magnetic phase. You have two sets of observations at two temperatures. I ask you, where is the critical temperature? What is the heat capacity going to look like at the critical temperature? Using this method, just from these two pieces of information, we can calculate all of this. This is the result I am most proud of, maybe only because it's recent—there is a recency bias—but this is something which is a big, big step forward. I don't know if it has finality, but it is something that not many people have even thought of doing, and we are able to do it. That really shows that we can use AI, using these diffusion models, to learn properties of the partition function which were simply not accessible with any other method. That makes me very, very happy!
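The textbook statement behind this, that the energy levels alone determine thermodynamic behavior at every temperature, can be checked directly on a toy system. The sketch below enumerates a tiny 3x3 periodic Ising lattice (with k_B = 1 and J = 1) and evaluates the average energy and heat capacity at several temperatures from the same set of levels; it illustrates only the partition-function idea, not the thermodynamic-maps method itself.

```python
import itertools
import numpy as np

L, J = 3, 1.0

def energy(spins):
    s = np.array(spins).reshape(L, L)
    # Nearest-neighbor Ising energy with periodic boundaries, each bond counted once.
    return -J * np.sum(s * np.roll(s, -1, 0) + s * np.roll(s, -1, 1))

# Brute-force enumeration of all 2^9 = 512 configurations gives the energy levels.
levels = np.array([energy(c) for c in itertools.product([-1, 1], repeat=L * L)], dtype=float)

def thermodynamics(T):
    w = np.exp(-(levels - levels.min()) / T)   # Boltzmann weights, shifted for stability
    Z = w.sum()
    E_avg = (levels * w).sum() / Z
    E2_avg = (levels ** 2 * w).sum() / Z
    C = (E2_avg - E_avg ** 2) / T ** 2         # heat capacity from energy fluctuations
    return E_avg, C

for T in (1.0, 2.27, 5.0):                     # 2.27 is near the infinite-lattice critical temperature
    E_avg, C = thermodynamics(T)
    print(f"T={T:4.2f}  <E>={E_avg:7.2f}  C={C:6.2f}")
```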
ZIERLER: I'll flip the question around. We have a very useful metaphor from physics—dark matter—something that we know is there, but we have no idea what it is. What is the dark matter in your research world?
TIWARY: That goes to the biological side of the problems. These are the disordered proteins, and RNAs are kind of like that. These are things that are not structured. There is no crystal structure, and no information about them, so we don't know what they are. That's where we have to be very, very careful. That's also why AlphaFold won't work on things like this, because there is nothing to train on. We know they exist, but what do they look like? We don't quite know.
The Preciousness of Caltech Friendships
ZIERLER: As a Caltech alum, what has stayed with you from your grad school days? What have you learned about science and about working with colleagues and about asking the right questions that continues to inform your career as a scientist?
TIWARY: The number one thing that has stayed is the friends. The friendships from Caltech have been incredible. When Megan and I got married in New York City, we got married in Central Park, and it was like a Caltech wedding. There were 60 people, and 45 of them had Caltech affiliations. Megan's advisor, Ed Stolper, used to be the provost of Caltech when we were there. Ed came out with his wife Lauren and son Daniel, who is now a professor at Berkeley. All my Alpine Club friends were there: Chirru, Hamik, Nick, Patrick. So, it's really the friendships. And these are people who are trained to think very, very rigorously. They don't take anything for granted. They calculate, all the time. They are accustomed to calculating, in very quick ways. That has stayed.
The other thing that has stayed is—I think of Rob Phillips, who is a professor of biology. Rob was so inspiring. I audited Rob's class, and he had a photo of himself climbing Mount Rainier. Interdisciplinary skills. Or he would wake up at 4:00 a.m. to go surfing. That aspect of Rob's personality has stayed with me. I still like to run a lot. I do different things. And, finally, just being completely bold. When I got into Caltech, my uncle, Kamalesh Kumar, who is a professor at the University of Michigan, told me, "Son, you are going to a school which does not just answer the questions that matter to society now. You are answering the questions that will matter to society—they might matter now, but they will really matter 20 years from now."
ZIERLER: Wow.
TIWARY: That's the mindset I have kept. He was right when he said that.
ZIERLER: We'll end looking to the future. If you survey the motivations and interests of your grad students, what window does that give us into where the field—where the fields—are headed?
TIWARY: The students are very smart, and perhaps they are getting smarter. They are very creative. I think they are all smarter than me. It's such a pleasure to work with them. They are awesome! They are very driven. I find in my group they have become better at having work-life balance. They are starting to appreciate the whole process. They work very, very hard, and they are ambitious. They are very rigorous, but they are also ambitious, which gives me great hope for the types of methods we are going to develop. I also think students are perhaps getting better at truly interdisciplinary training. They can wear all these hats. Just as I am trying to wear many hats, for my students this comes naturally. It's just like looking at a kid, right? We look at kids who are looking at a phone and are still able to hold a conversation with you. Let's go back to Feynman. Feynman in one of his books talked about how he would do experiments where he would drive a car and try to calculate numbers in his head. He was like, okay, I can do two things at the same time. Then he would try to do—Feynman used to do all sorts of terrible things—he would basically do a third thing, and then be like, "Oh, that's when I see I cannot do it." I feel like we are evolving. We are genuinely becoming able to handle so many things at the same time. That's how I see it in my students.
ZIERLER: Of all of the things that you've worked on, all of the technologies you've embraced, all the fundamental questions you're asking, what haven't you accomplished? What aspects of science and AI, chemistry, materials, what do you want to get involved in in the future at some point that you haven't yet?
TIWARY: That's a good one. I can't think of any! I think anything that I've wanted to do so far, I have gotten into. I would have said maybe quantum simulations; I don't know. I'm very happy with classical. With quantum computing, I don't feel inclined towards it. I think it's very exciting, but I want to stay with what I am doing and just do a better job of it. It's really fascinating. I am starting to see the impact of consistency. I just turned 40 this year. I have maybe 30 years left of active research, something like that. I want to make a real, solid impact that helps towards better drugs and algorithms that really go out there and help people have better lives, or really look at crystal nucleation and start predicting when crystal nucleation will happen, how it will happen, things like this. I just want to do what I'm doing, better, which sounds like a boring answer, but not quite, because the problems are so hard, and the methods will have to be so general, because with the same hammer I want to come up with methods that can deal with proteins, with DNA, with RNA, with disordered proteins, with crystals, with—everything.
ZIERLER: Most importantly, that's what excites you, and that's what continues to energize you.
TIWARY: Yes! Absolutely. [laughs]
ZIERLER: This has been a phenomenal conversation and a treasure for Caltech. I want to thank you so much for spending this time.
TIWARY: Thank you so much.
[END]
Interview Highlights
- The Enduring Utility of Thermodynamics
- Fundamental Questions and Translational Possibilities
- Hype and Reality in the Current AI Revolution
- Mathematical Foundations in India
- Shift From Metallurgy to Materials Science
- Simulation and Scientific Reality
- Chemistry after Caltech
- Identifying the Hardest Problems in Drug Discovery
- Rare Events at the Biological Scale
- The Heart of Statistical Mechanics at UMD
- Abstract Language and Protein Molecules
- Predicting RNA Structure
- The Local and Global Promise of Health Computing
- Reflecting on a Nonlinear Life Trajectory
- The Preciousness of Caltech Friendships