
Ashish Mahabal

Lead Computational and Data Scientist, Center for Data-Driven Discovery, and Machine Learning Lead, Zwicky Transient Facility, Caltech

By David Zierler, Director of the Caltech Heritage Project

January 25 and February 1, 2023


DAVID ZIERLER: This is David Zierler, Director of the Caltech Heritage Project. It is Wednesday, January 25th, 2023. I am very happy to be here with Dr. Ashish Mahabal. Ashish, it is great to be with you. Thanks for joining me today.

ASHISH MAHABAL: Thank you. I'm happy to be with you as well.

ZIERLER: Ashish, to start, would you please tell me your title and affiliations here at Caltech?

MAHABAL: I am the Lead Computational and Data Scientist with the Center for Data-Driven Discovery. I'm also the Machine Learning Lead for the Zwicky Transient Facility.

ZIERLER: Tell me a little bit about what that means for a work week for you. How much of your time is spent between the two projects and what level of overlap is there between these jobs?

MAHABAL: These aren't two different projects as such. The title of Lead Computational and Data Scientist, because it sits under the Center for Data-Driven Discovery, means that we liaise with various groups and departments, and that includes JPL as well. A good fraction of my time is spent on projects related to JPL and associated things. It's, I won't say open-ended, but it's a role where I can reach out to people, or we as a group reach out to people, and see what their needs and interests may be.

The Zwicky Transient Facility is a specific survey, and I got involved in that because I have been involved in many sky surveys before. It is a very large survey which covers the entire northern sky every couple of nights, and so there are lots and lots of data. That is why humans cannot look at all the data, and we need to deploy machine learning methodologies. But in general, even other departments within Caltech who may need some assistance with machine learning are fair game for liaisons with the Center for Data-Driven Discovery. We have had discussions with different groups. For example, there was a project related to water in the rivers, and one on the gut microbiome. Some of these can be very unrelated to astronomy, which was my original area, but because I got into Big Data and methodology, methodology transfer to other cross-cutting fields is an area I have been in for the last few years. Seeing where similar methodology can apply, that is what I have been looking at.

ZIERLER: Ashish, how long have you been at Caltech?

MAHABAL: I came here on August 31st, 1999. It's 23-and-something years.

ZIERLER: Oh, wow. You've spent your entire postgraduate career at Caltech?

MAHABAL: Except for one year when I was a postdoc in India.

ZIERLER: Did you come with the same appointments that you have now, or these are more recent developments?

MAHABAL: I came as a postdoc. The original appointment was a one-year postdoc appointment. I had even thought that, okay, after the one year is done, maybe I should go back to India. Didn't happen.

ZIERLER: And was the postdoc with George from the beginning?

MAHABAL: Yes, it was.

ZIERLER: What was George working on at that point?

MAHABAL: The postdoc appointment was related to the Digitized Palomar Observatory Sky Survey. This was the last survey with photographic plates from Palomar. The Zwicky Transient Facility I mentioned uses exactly the same telescope, but instead of the CCDs we have now, photographic plates were used then. The survey was already done when I came on board, but I was using the data. What was happening at the time was that those data were being digitized. The entire northern sky can be covered in 1,000 plates, each about 6.5 degrees x 6.5 degrees. Three emulsions were used to cover three parts of the visible electromagnetic spectrum. Each exposure was roughly 1 hour, and each digitized plate came to about 1 gigabyte of data.

We are into terabytes per night now. We had gigabytes then: a total of three terabytes of data for the entire sky survey at the time. We were converting the data to be more useful. The linearity of response is very important in astronomy. You should be able to look at faint sources and you should be able to look at bright sources. The linearity of recent detectors like charge-coupled devices is really good: when you have N photons, you get N times some fixed number of electrons. Irrespective of how small or big N is, within certain bounds of course, the linearity is very good, but with photographic plates that was not true. We used to get response curves that were quite different. To be able to make sense of the data digitally, we wanted to convert them into linearized values. That was one of the things we were doing with the digitization process. That was a very interesting project, and I would say that it was my first introduction to Big Data, because 3 terabytes 23 years ago was really big. Not anymore.
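The linearization he describes amounts to inverting the plate's measured characteristic curve. A minimal sketch of that idea, assuming a monotonic density-versus-log-intensity calibration; all numbers and names here are illustrative, not DPOSS pipeline values:

```python
import numpy as np

# Hypothetical calibration points relating plate density (what the scanner
# measures) to relative log intensity (what we want). On real plates these
# come from sensitometric calibration spots; the values here are made up.
density = np.array([0.1, 0.4, 0.9, 1.6, 2.2, 2.6])
log_intensity = np.array([-2.0, -1.5, -1.0, -0.5, 0.0, 0.5])

def linearize(measured_density):
    """Map scanned densities to linear intensities by inverting the
    (monotonic) characteristic curve via interpolation."""
    return 10.0 ** np.interp(measured_density, density, log_intensity)

raw_pixels = np.array([0.3, 1.2, 2.0])   # scanned plate densities
print(linearize(raw_pixels))             # linearized pixel intensities
```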

ZIERLER: Ashish, by the time you arrived at Caltech, were the terms Big Data and Data-Driven Astronomy already in use? Were you hearing them on a daily basis?

MAHABAL: Not really. Not as terms. Sure, some projects had more data than others, but we were still in the domain where individuals or groups of individuals could look at much of the data being taken. The concept of, oh, we can't look at all of this, so let's whittle it down to smaller subsets without fully looking, but in a meaningful way so that we are still able to do the science; that was not really happening yet.

ZIERLER: When did the terms machine learning, deep learning, artificial intelligence start having an impact in astronomy?

MAHABAL: I don't think I can quote a year off the top of my head, but it was around that time. See, what has happened is that many of these machine learning algorithms have existed for a long time. It is mainly that the industrialized use we see today was not in vogue then, because other developments were yet to happen. We did not have really fast processors that were easily available. We didn't have really large datasets with enough labels that anyone could take a labeled dataset and run a training on it. Machine learning was already happening in some sense; supervised machine learning existed. We should also remember that these days, most people, when they talk about deep learning or machine learning, mean deep learning, where there are lots of layers and a lot of computation. But there is also classical machine learning, and classical machine learning has been happening for a long time. For instance, in DPOSS, the Digitized Palomar Observatory Sky Survey, we were using machine learning for star-galaxy separation. Steve Odewahn was one of the postdocs here at the time, and Usama Fayyad was working at JPL and with Caltech. They were involved in applying decision trees at that time. A decision tree is also machine learning, because you're asking very basic, straightforward questions. With deep learning it is different. One of the famous problems through which the deep learning paradigm got people's attention was separating cats and dogs. You have pictures of cats and dogs, and distinguishing them is a simple question to ask. I think it took the fancy of people because as a human being you can easily tell whether something is a cat or a dog. But how do you teach it to a machine?

What is it that the machine is looking at? It's looking at, for instance, the ratio of the length of the face to the breadth of the face, or the size of the ears compared to the size of the eyes, or a large number of features of these kinds in a 2D image. In deep learning, the algorithms really go to million-dimensional spaces. They can take ratios and lengths of all kinds. Each pixel is a dimension to start with, but then they can build lots and lots of mathematical entities. In this very high-dimensional space, they try to figure out: what is the surface that can be drawn that will separate these two classes? What we did at that time was classical machine learning. There, to classify stars and galaxies, you could look at how concentrated or spread out the light is; that is one simple feature, along with the brightness of the object. Then using those two, you can ask questions like: is the number of pixels covered less than six or more than six? That is a decision point. In some cases, that would be sufficient to tell you whether something is a star or a galaxy. But then there would be distant galaxies so faint that they would look like stars in terms of the pixels they occupy, yet they would be fuzzy compared to the stars. How the light spreads is different in the case of a star. Spreads in the sense that all light that comes to our detectors comes through the Earth's atmosphere, so even a point source is not seen as a point, but spreads around.

But in the case of a star, given its very large distance, because it is a point source for all practical purposes, it is only the Gaussian spreading caused by the atmosphere that leads to the spread. Whereas a galaxy, even if it is far, far away, is made of a very large number of stars. By definition, even when its light reaches the top of the Earth's atmosphere, you already have an extended source there. The extendedness on the CCD or the photographic plate is larger for a galaxy. That becomes another parameter. These two parameters on their own are sufficient to separate the two types. That kind of machine learning already existed at that time too. It is just that when we had more data and wanted to get into more kinds of things, we had to ask: what other features will be required? Do we need to get into the images themselves to do that, or are the features engineered by our domain knowledge going to be sufficient? That is how the progression started. So, yes, machine learning existed, but it was mostly classical machine learning at the time, and slowly we started getting into this deep learning kind of thing.
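A minimal sketch of the classical star-galaxy decision tree described here, assuming scikit-learn; the two features (extent in pixels and magnitude) match the discussion, but the data and thresholds are synthetic, not DPOSS values:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for the two features above: how many pixels the
# source covers (extent) and how bright it is (magnitude).
star_extent = rng.normal(4, 1, n)        # point sources: compact
galaxy_extent = rng.normal(9, 2, n)      # galaxies: extended
star_mag = rng.uniform(14, 21, n)
galaxy_mag = rng.uniform(16, 21, n)

X = np.column_stack([np.concatenate([star_extent, galaxy_extent]),
                     np.concatenate([star_mag, galaxy_mag])])
y = np.array([0] * n + [1] * n)          # 0 = star, 1 = galaxy

# A shallow tree asks exactly the kind of question in the text,
# e.g. "is the extent less than ~6 pixels?"
clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(clf.predict([[5, 18], [12, 18]]))  # expected: [0 1] (star, galaxy)
```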

ZIERLER: Ashish, you delineated between machine learning and deep learning. I wonder if you can explain a little further what that means in this context?

MAHABAL: They're not different; deep learning is one type of machine learning. Deep learning as it is practiced today is mostly about images, though we can also use sequences with deep learning. The number of layers used in deep learning is generally large, and typically you hear about convolutional neural networks most often. You start with an image, let's say a 512 x 512 image, and then you're asking: what is it that I see in it? In fully connected networks, each pixel from each layer is connected to each pixel in the next layer. The second layer could be smaller, say 256 x 256. The connections have weights on them. Something propagates from the first layer to the second layer, then the third layer, and so on. The total number of weights between all the layers is going to be very large, and that is the very high dimensionality I was talking about earlier. In that very high-dimensional space, the decision gets made by the network about the nature of the objects, from the clusters of pixels and the intensities of the pixels it sees. Going back to the earlier picture, like a cat-dog separator, in a trained deep learning algorithm, if you are shown a picture of a cat, it goes from one layer to another layer.

In the first layer, all that it may be looking at are two-pixel correlations or some kinds of linear features only, whereas by the time you reach the third layer, it's starting to look at combinations of those: this kind of one-angled feature and another-angled feature, so you're seeing 2D things. Then when you go to the next layer, it could be even more complex features, and so on. That is how it builds up a very large library of features on its own. There is no training involved in terms of features, but the training that is still involved is in terms of labels. An example goes through all these weights, and at the end there is a classifier that simply puts it on the left-hand side or the right-hand side. Let's assume that by convention cats are on the left-hand side and dogs are on the right-hand side. Suppose, when shown a picture of a cat, because of whatever random weights were there initially, the network calls it a cat. That's great. We know that it is a cat and it is calling it a cat, so we are happy. We tell the network it is doing well. But if it was a dog and it called it a cat, or vice versa, then we have to tell the network that what it is doing is not right and it needs to adjust its weights. Of course, the weights are not adjusted after every single example, because otherwise they would just keep bouncing. Many times batching is done, and so all those complicated hyperparameters come into the picture. What is the batch size? What is the learning rate? When you adjust the weights, do you tell the network to change completely to the opposite side because it was wrong, or do you rather change a little bit, because there will be other examples where things may be slightly different? If you keep changing fully, it is likely to bounce a lot, and there may be overfitting, where it tunes specifically to the features of the one cat or the one dog it has been shown rather than to dog-ness or cat-ness. By changing slowly, and by doing it in batches, the network can slowly settle onto weights that separate cats and dogs correctly. Learning with a deep network of multiple layers is deep learning.
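A minimal sketch of one training step of the kind just described, assuming PyTorch; the layer sizes, batch size, and learning rate are placeholders, and random tensors stand in for the cat/dog images:

```python
import torch
import torch.nn as nn

# A toy convolutional network for two classes (cat vs. dog); layer
# sizes are illustrative, not any particular published architecture.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 2),             # two outputs: cat, dog
)

# The hyperparameters mentioned above: batch size and learning rate.
batch_size, lr = 32, 1e-3
opt = torch.optim.SGD(model.parameters(), lr=lr)
loss_fn = nn.CrossEntropyLoss()

# Random stand-in data (32x32 grayscale "images" with 0/1 labels).
images = torch.randn(batch_size, 1, 32, 32)
labels = torch.randint(0, 2, (batch_size,))

logits = model(images)                    # forward pass through the layers
loss = loss_fn(logits, labels)            # how wrong were we on this batch?
loss.backward()                           # gradient for every weight
opt.step()                                # nudge weights a little (the lr),
opt.zero_grad()                           # rather than flipping them fully
```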

ZIERLER: Ashish, to go a little broader, there's so many subfields in astronomy. What kind of astronomer would you call yourself?

MAHABAL: I started as an observational astronomer. My thesis was on radio galaxies, actually optical and near-infrared observations of radio galaxies. There was already a multi-wavelength aspect there. But of course, when you do a set of observations, you do want to theorize a bit about what you are seeing, or you have certain hypotheses in mind, and that is why you are taking the observations. That is how I started. But I would say that in the last several years I've moved more towards being a data scientist specializing in different aspects of astronomy. Astro-statistician, astro-informaticist, you might say.

ZIERLER: What about in terms of techniques? There's so many different kinds of telescopes to work with. What would you say your home technique is in your training or your interests?

MAHABAL: I have been concentrating mostly on data that comes from optical telescopes, but I have used data from all other kinds of telescopes as well. For instance, as I said, I did take data in the infrared, and I have made use of HST data and X-ray data. In fact, again from a machine learning perspective, I have also done some work with gravitational wave data. I don't think there is one particular thing, but clearly optical data is what I have worked with most. In particular, for the last several years I have been concentrating on time-domain data: multiple epochs of the same part of the sky, and how one can look at the variability of those objects.

ZIERLER: Ashish, for a broad audience who are not experts of course in these fields, I wonder if you can explain why large-scale sky surveys are so data intensive. What exactly is going on that creates so much data?

MAHABAL: A sky survey by definition would be something that looks at all of the sky visible from the location of the observatory. For instance, if you are north of the equator, there is some part of the southern hemisphere that you cannot see. In the sky there are tens of thousands of square degrees in total. When you want to observe all of that, and you want it in enough detail, you get large datasets. At Palomar, every now and then you get better than 1 arcsecond resolution, but I would say 1 arcsecond is a reasonably good resolution to have, and better is of course desired. If you think of that, then 1 square degree is 3,600 arcseconds x 3,600 arcseconds, and if we are storing something like 2-byte values, then you have 2 bytes for each of those points. You immediately see that you're multiplying on the order of 10,000 square degrees by 3,600 x 3,600 and then by at least 2 bytes. That would be a single scan of the entire sky. That is why, when we converted DPOSS to the digitized versions, we got 3 terabytes for a single scan of the sky in 3 filters. The sky surveys that we do today involve hundreds of scans of the sky, and this is just the raw data. When you analyze the data, you multiply by a factor of a few.
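This arithmetic reconciles with the earlier numbers (about 1 gigabyte per digitized plate, roughly 3 terabytes for the survey). A back-of-envelope check in Python, using only figures quoted in the interview:

```python
# Back-of-envelope version of the data-volume estimate above.
arcsec_per_deg = 3_600
bytes_per_pixel = 2                      # 2-byte values, as in the text

# One DPOSS plate: 6.5 x 6.5 degrees at ~1 arcsec per pixel.
pixels_per_plate = (6.5 * arcsec_per_deg) ** 2
gb_per_plate = pixels_per_plate * bytes_per_pixel / 1e9
print(f"{gb_per_plate:.1f} GB per digitized plate")   # ~1.1 GB

# 1,000 plates cover the northern sky, in each of 3 emulsions.
tb_total = gb_per_plate * 1_000 * 3 / 1e3
print(f"{tb_total:.1f} TB for the whole survey")      # ~3.3 TB
```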

ZIERLER: Can you explain the role of large-scale sky surveys as they relate to other observational projects in astronomy? For example, how does a sky survey work with a flagship such as Keck or Hubble, either land-based or space-based? How do surveys work with larger instruments that operate in a more targeted fashion?

MAHABAL: Most of these larger instruments, larger telescopes, have smaller fields of view. They can look deeper, but they are concentrating on a smaller part of the sky. You cannot cover the entire sky with something like HST or Keck unless you are ready to spend years on it. With ZTF, you can cover the northern sky in two nights. The smaller telescopes, like the 1.2-meter Oschin telescope at Palomar, on which ZTF currently sits and where the POSS plates were taken before, have very large fields of view. Currently the field of view of ZTF is 47 square degrees; for comparison, the moon is 1/2 degree by 1/2 degree, which means that about 200 full moons would fit into a single ZTF image. You get that much data at roughly 1 arcsecond resolution. Then using those data you can figure out which objects are interesting. Once you figure out which objects are interesting, you can go to these larger telescopes to do more specific observations of them.

At Palomar, for instance, we have three telescopes: the 1.2-meter Oschin telescope, which has the 47 square degree field, and the 1.5-meter telescope, which has an automated spectrograph. Every night, several targets from ZTF automatically get sent to the spectrograph, and we do a quick vetting. One of the results published recently was that we completely autonomously classified about 1,000 supernovae of Type Ia using that 1.5-meter telescope. Then, as we take the spectra, which get reduced in this fashion, there may be some even more interesting objects, which then get queued on the 5-meter Palomar telescope or the 10-meter Keck telescope. These have smaller fields of view, but they can go deeper, so they can collect more photons; that is their important aspect. With HST, for instance, because it is in space, unmarred by the vagaries of the atmospheric layers, the resolution we can get is really much higher. Point-like stars look like points there. There you can have pixels as small as 0.1 arcsecond, for instance, rather than 1 arcsecond.

ZIERLER: Ashish, it might sound like an ignorant question, but to the naked eye, of course the sky does not appear all that dynamic from night to night. What do we really see by pointing the survey telescopes at the sky night after night that makes it worth it to detect all the new things that happen on any given night?

MAHABAL: Even at the darkest location on Earth, only a few thousand stars are visible to the naked eye. Stars have lives that are really, really long: billions of years. Within a night, or even within a human lifetime, not much is going to change in any given star, typically. But if you think of millions of years, then things are going to change, because stars are born, they go from having just hydrogen and a few other elements to fusing hydrogen into helium, helium into higher elements, and so on. If you can observe millions of stars, then you can see these variations in some of them. You are essentially taking a statistical view.

Rather than viewing a single star for a million years, observe a million stars for one year. Then it depends on which stars you're looking at, because if all of them were born at the same time and change in the same way, none of them is going to do anything different in that one year. But because star formation happens at different rates in different galaxies, and even within our galaxy, we can learn a lot. The band of the Milky Way that we see is towards the center of the galaxy, because the Sun, along with its planets, is near one edge of the galaxy. That is what we are trying to do: look at a very large number of objects again and again, to be able to catch the small number of objects that we can see changing. There are objects that change even within a single 30-second observation of ZTF or of the Catalina Survey, but catching them with sporadic pointings would be rare. If I go back to the same location tomorrow, or a week later, then the probability of finding changes increases, and that is what we are keeping an eye on. That is why some of these surveys go on for a few years. We now have the ability to connect not just one survey's data, but also to go a few years earlier and connect earlier surveys, so we can build larger and larger time baselines to look at the changes. That is where the dynamism really comes in. There are changes that happen on all kinds of timeframes, and with these different cadences we hope to capture different ones, to understand the scope of things we are looking at: different kinds of stars, different kinds of galaxies, active galactic nuclei, quasars, and all that.

ZIERLER: I wonder if you can explain on a technical level when machine learning will help detect something interesting, when it sees a signal in the noise, what does that look like? How do you know what the machine learning is telling you?

MAHABAL: One of the things that I mentioned earlier was training samples. Machine learning includes both unsupervised ways and supervised ways. The unsupervised ways are where you don't know what you're looking at. Let's say that I observe 1 million objects tonight and then observe them regularly for a year. I've got, say, 100 observations of 1 million objects. Then what I can ask these unsupervised algorithms to do is cluster them in different ways. The clustering may involve things like magnitude, which is the brightness of the objects, and their color, if I'm using multiple filters: how the difference between the G and the R bands looks. Those would be two obvious ones. But once I start looking at the variability, there will be a majority of these objects which within a year are not going to change. If I were to plot the brightness on the Y-axis and time on the X-axis, they will be more or less flat within some noise. Whereas there will be a small number of objects which suddenly show some extreme activity. Think of solar storms: we see some every now and then, when the sun brightens up a little bit. There are many stars that brighten up a million times more than the sun does. Those, even if they're far off, we can catch. We can see them either when that has happened or as they fade. Then there are many other stars which are pulsating. There are changes happening in them which we can catch as brightness changes. If you have been looking at objects in this manner, you're going to catch that.

Once you have these time series, you can derive features from them. What is the amplitude of the object? Is it a periodic source? If it is periodic, how is the period changing? You can derive tens, hundreds of such characteristics. Then the clustering algorithm can use these as inputs to form various clusters. You can see that, okay, these are pulsators, these are fast rotators, these are flaring objects, these are completely constant objects, and so on. This is a very straightforward thing. You can see the big clusters where you have most of the objects, but then you can always find a few objects that do not seem to belong to any of these clusters, or some objects that seem to lie between the clusters in some kind of 2D diagram that you can draw using the various techniques available to us. Those are the ones where you can say that, with the help of machine learning, you've discovered something new. That is the unsupervised case. The same goes for supervised. With supervised, you start with labels and train the network to identify all your labeled classes; then at some point it is going to find objects that do not belong to any of those. What these classifiers do is that once trained, if you input an object of class A, they give you a high confidence for class A, say 0.7, 0.8, 0.9 out of 1. For the other classes, they give a small confidence: 0.1, 0.2, 0.01, whatever. But suppose there is an object that doesn't belong to any of those classes. Then one of two things will happen. Either for all classes the classifier gives a very low confidence, indicating that the object does not belong to any of them, or it gives all of them some intermediate confidence, say around 0.3, where none is above 0.5. It may be an intermediate class. Those are two very basic ways in which you can use very simple machine learning algorithms to figure out objects of kinds that have not been seen before.
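A minimal sketch of the unsupervised side of this, assuming scikit-learn; the light-curve features (amplitude, period) and all values are synthetic stand-ins, and distance from the nearest cluster center is used as a crude "does not belong anywhere" score:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical per-object features derived from light curves, as in the
# text: amplitude of variation and (normalized) period. All synthetic.
flat = np.column_stack([rng.normal(0.02, 0.01, 300),   # non-variables
                        rng.uniform(0, 1, 300)])
pulsators = np.column_stack([rng.normal(0.5, 0.1, 50),
                             rng.normal(0.3, 0.05, 50)])
flares = np.column_stack([rng.normal(2.0, 0.3, 10),
                          rng.uniform(0, 1, 10)])
X = np.vstack([flat, pulsators, flares])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Objects far from every cluster center are the interesting oddballs
# that may not belong to any known class.
dists = km.transform(X).min(axis=1)     # distance to nearest center
outliers = np.argsort(dists)[-5:]
print("candidate oddballs:", outliers)
```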

ZIERLER: Ashish, what is the mode of transporting the signal that you're getting from the surveys and the machine learning, and feeding it into one of the larger, more targeted telescopes? What does that look like?

MAHABAL: You've got this large survey, you've got lots of objects coming out of it, and you're looking at the time series and keeping an eye on what is likely to be changing. You can have automated methods, you can have graduate students looking at that, and so on. But most of the PIs who work with these larger telescopes would have well-defined criteria. What is it that they're looking for? Say, I am looking only for objects that change by 3 magnitudes within a certain period. When that is detected, a TOO can be generated, a Target of Opportunity. What that means is that allocations already exist on those larger telescopes for certain kinds of objects, and certain PIs have access to those allocations. Whenever something of this kind is seen, the PIs or the team can be alerted, and they can further alert the telescopes to start doing the observations. This is the TOO mode. But otherwise, there are also standard modes, where one catalogs these objects and then, every few months when the observing cycles come up, puts in a proposal saying: we have accumulated so many objects of these types that are of interest because of such and such; please give us time to observe.
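A minimal sketch of the kind of TOO trigger criterion just described (a change of 3 magnitudes within a set window); the alert records and object names are invented, and real brokers operate on far richer alert packets:

```python
from datetime import datetime, timedelta

# Hypothetical alert records: (object id, time, magnitude).
alerts = [
    ("ZTF-obj-1", datetime(2023, 1, 1, 4, 0), 18.9),
    ("ZTF-obj-1", datetime(2023, 1, 3, 4, 0), 15.6),
    ("ZTF-obj-2", datetime(2023, 1, 1, 5, 0), 17.2),
    ("ZTF-obj-2", datetime(2023, 1, 3, 5, 0), 17.1),
]

def too_candidates(alerts, delta_mag=3.0, window=timedelta(days=7)):
    """Flag objects whose brightness changed by >= delta_mag magnitudes
    within the window -- the kind of pre-agreed criterion a PI might set."""
    by_obj = {}
    for name, t, mag in alerts:
        by_obj.setdefault(name, []).append((t, mag))
    flagged = []
    for name, obs in by_obj.items():
        obs.sort()
        for t1, m1 in obs:
            for t2, m2 in obs:
                if t1 < t2 <= t1 + window and abs(m2 - m1) >= delta_mag:
                    flagged.append(name)
    return sorted(set(flagged))

print(too_candidates(alerts))    # -> ['ZTF-obj-1']
```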

There are also services like TNS, the Transient Name Server, where people can send out the detections they're finding. With ZTF, for instance (and this started earlier with CRTS as well), we can find so many interesting objects in a given night that we as a team cannot follow them all up. If we don't announce them to the world, they're going to be wasted; if anyone can observe them, that's a great thing. We put out the alerts, and people are listening to the alerts. There are brokers that try to classify these alerts, because if I simply put out an alert saying, "Something has happened," not many people will be interested; you don't know what it is. Most people will treat a flaring star as an ordinary star. Why go after them? We know so many of them. What is one more? That is why these brokers take the data, classify it, and try to attach a confidence: this is an alert, and it belongs not to a flaring star but maybe to something much rarer, like a kilonova. When gravitational waves go off and LIGO announces a large area, many telescopes go after that area and try to find all the sources that may be possible counterparts. Suppose you find 10 different variables in that area; if you quickly classify 7 of them as ordinary types, then the interest narrows down to the 3 remaining objects. Then there can be more follow-up observations of those 3 objects, trying to figure out: is one of them the optical counterpart of the gravitational wave event? That's how the different workflows for many of these small to large telescopes function.

ZIERLER: Ashish, is the survey technique strictly land-based? Is there such a thing as a space-based large-scale survey?

MAHABAL: Yes, there have been sky surveys from space. For instance, IRAS in the infrared, and Roman will be launched soon. Most sky surveys have been from Earth because it is easier to do things like getting larger apertures and instruments with moving parts. But the desire has been to do more from space as well. There have been some in X-rays, for instance, where you can look at large parts of the sky, and there the techniques are somewhat different, because for X-rays every photon is time-tagged in some sense. It's not like optical astronomy.

ZIERLER: And when the transmission from the survey to one of the higher-powered telescopes is automated, when is it appropriate to send it to a space-based telescope, and when to a land-based telescope?

MAHABAL: For the follow-up?

ZIERLER: Yeah.

MAHABAL: Well, it depends, right? What is it that you want? Do you want resolution, or do you want very specific things? The number of instruments that we have on space-based telescopes is limited. If you want a very specific kind of spectrograph that is available on one of the ground-based telescopes, then that is what you would use. By the way, one survey that I forgot to mention is TESS, which has been looking for exoplanets and is still going on. It's about to complete its fourth year, I think. It has been covering the entire sky with a very large field of view, and it has found many exoplanet candidates.

ZIERLER: I guess what I was getting at with that question is, what are space-based telescopes good at versus what land-based telescopes are good at?

MAHABAL: Space-based telescopes are great because they bypass the Earth's atmosphere. You can get very high-resolution observations with them. With HST, again, as I said, you can go to 0.1 arcsecond very easily. If you're looking at imaging details of a galaxy, say the center of a galaxy, or one of the spiral arms where something is happening, or H II regions in some area, then the resolution of space-based telescopes is far better. Whereas on Earth you have much larger telescopes; the apertures are much larger. What you're going to get is very good spectral resolution, because the number of photons you're collecting with your spectrograph is much larger. Then consider something like Gaia, which is another space-based survey. There, the astrometric resolution is fantastic; it's the best astrometric survey. What Gaia has been great at, then, is looking at the proper motions of stars, mapping how our galaxy itself is changing. You were asking about the static versus the variable sky. Another kind of variation is actual motion. Asteroids move, but so do stars, right? We don't see the movement within one night, but if you keep looking with something like Gaia, which is high precision, you can more easily note even the small motions that accumulate over a few years.

ZIERLER: Ashish, to clarify, now in astronomy writ large, is there any subfield that really has not yet been touched by the advent of Big Data and machine learning?

MAHABAL: I don't think so. We have large sets of data for many, many subfields, and in the cases where we don't, it is because we don't have large numbers of objects. For instance, think of the detections from LIGO and Virgo. How many do we have now? Something in the hundreds only. That's not large enough to do machine learning with directly. But then we have a large number of simulations. We know how the physics should be, so we simulate the data and use that to figure out how the objects should look.

ZIERLER: And to reverse engineer that question, to go back to the turn of the century when Big Data astronomy was really just getting started, what subfields in astronomy were really first in?

MAHABAL: First in for machine learning, kind of?

ZIERLER: First to recognize the value of machine learning.

MAHABAL: The variability, definitely, was one: using time series to understand how we can classify these different objects. That was in terms of the direct science. But in terms of improving astronomy, I think something like star-galaxy classification would be a good example; it is not science in itself, but it enables science. There were some interesting offshoots that came about because of that. Quasars, which are really the centers of distant galaxies, outshine the entire galaxy because they're really bright, and they look like stars in the sense that they're point-like rather than extended like galaxies. When you do star-galaxy classification, not all objects that look stellar are stars; some of them are quasars. In fact, one of the projects I worked on early on with DPOSS was taking all the stellar-like objects and trying to separate out the quasars. We had a longstanding program with DPOSS looking for what were then high-redshift quasars. At that time, redshift 3 or 4 was considered high. There were not many examples of that.

ZIERLER: Would you say that Caltech was one of the early adopters of data-driven astronomy? Was it a leading research center for this approach?

MAHABAL: Oh, definitely. It was not the only one, but it was definitely one of the leading centers. The DPOSS dataset, and the star-galaxy separations that we were doing here, were two early examples.

ZIERLER: Would you say that these technological developments were among the things that kept you at Caltech after the postdoc was completed?

MAHABAL: I would say so. At Caltech there are so many different observatories. Being able to get data easily, being able to work with the latest data, being able to interface with various visitors and people, that was definitely very attractive.

ZIERLER: When did you get involved with your current appointments? When does that happen in the chronology, right after the postdoc, or later on?

MAHABAL: It's been a gradual thing. After the postdoc, which was mainly DPOSS, we worked on another survey called the Palomar-Quest Survey, which was on the same 1.2-meter Oschin telescope. There the focal plane had 112 CCDs of size 2400 x 600, and we had four simultaneous filters on them. We talked a little bit about color earlier. A star has some intrinsic color because of its brightness, its age, its composition, and so on. There are also stars that rotate, that pulsate. The color can change for intrinsic or extrinsic reasons. If you observe the same star quickly in two different filters, then you can say the color that you see is the intrinsic color at that time. Whereas if you took an R observation now and a G observation an hour from now, the color that you derive would be partly the intrinsic color, but partly due to rotation or whatever else had changed. With Palomar-Quest, we could take observations such that we got that color within two minutes, which was very important for getting intrinsic colors for these objects. The way it was done is that the 112 CCDs were organized in four fingers. The camera was used in drift-scan mode, so the sky would go by, and then in software we would separate out the signals. You would get a G-band observation, an R-band observation, an I-band observation, and a Z-band observation in rapid succession, and then we would know the intrinsic color at that time.

But the CCDs in the camera were not the greatest, so we could not get to all the detailed high-redshift objects that we wanted to get to. But that is when we set up a lot of our algorithms for getting into even more surveys. Then we got into the Catalina Real-Time Transient Survey. The changes that were happening, the newer surveys, being able to work with them: that was an interesting aspect of staying here and continuing to work on those. While that was happening, I was also getting involved with other international organizations. I was of course an AAS member, but the NVO, the National Virtual Observatory, was also founded fairly early on. In that, we were talking about how we could connect a large number of datasets together and bring in more of the computational aspects the world is looking at, so we can get more out of all the astronomy datasets we currently have by combining them with other datasets. All those developments were of interest, and I got even more involved later on with some other organizations.

ZIERLER: Tell me about the development of the Center for Data-Driven Discovery. First of all, were you present at the creation? Are you one of the founding members?

MAHABAL: Yes, I'm one of the founding members, and George Djorgovski is the founding director. The way it was formed is that we realized that what we can do goes beyond what can be done in astronomy. There were a few other members: Ciro Donalek, Matthew Graham, Julian Bunn, and Santiago Lombeyda, with whom I have continued to work on some VR aspects; I think there were eight or nine of us. Then we were starting to talk more with JPL about various data science projects. It was not only astronomy, but also related things like early detection of cancer. I liaised with JPL's Data Science group. They have associations with large cancer consortiums. I bring some of the methodology of astronomy to bear on that, on how one can look for early detection of cancer. Similarly, I try to look at methods they may be using that we can bring back to astronomy, and that's where the methodology-transfer part comes in.

ZIERLER: You mentioned JPL, that's something I definitely wanted to talk about. If you can explain just at a broad level how having JPL is really an asset for what you do?

MAHABAL: It's really good, but I laugh because I want to tell a funny story. Fairly early on when I came here, we had this collaboration with JPL. I'm a US citizen now, but at that time, I didn't even have a Green Card, so I was an alien. If an alien wants to go to JPL, then the rules are that you are going to be escorted by someone all the time. You go there only after specific permissions, you go there escorted by someone, stay there for specific times and get out and so on. I said, I don't want to bother with that, let's not do that. Our collaborators kept on coming to Caltech and we worked here. But then a few years later I realized, oh, that's not really a good thing, JPL is such a big resource, I am missing out on that. That's when I started getting more involved in that and so on. Then it has been great because going there—I don't go there every day. I typically would go there now once or twice a month, but you have meetings there and I have been involved in projects with many people there. That's been really good because it's such a great resource for all kinds of things. First of all, even in astronomy, they bring interesting aspects and interesting angles, other aspects of methodology and technology and all that. It's been great.

ZIERLER: What are the resources that JPL offers that might not be available just being a Caltech researcher?

MAHABAL: Resources in what sense?

ZIERLER: Every sense, computation, observation, you name it.

MAHABAL: I think the most important resource is the connectivity to NASA. They do have some computational facilities, but I've not really explored those directly, because we have resources we can use here as we need. But they do have the datasets and databases from NASA, and it is those datasets that we can use with JPL. Not just in astronomy, but also Earth science datasets, all the observations of the Earth that the satellites have been taking; those are available. That is where, again, in terms of methodology, we can look at what advancements are happening. We had, for instance, a poster session for data science at JPL, and there were a small number of astronomy things, but there are so many other things you can see, and those connections become quite important: the Mars rover, for instance, or the helicopter that was developed there, automated machine learning for the images on Mars, or dust devils on Mars.

Those are very analogous to what we do in astronomy, but some of the datasets are completely different, like their planetary science datasets. I've been involved in some Keck Institute for Space Studies work with people at JPL. For example, if we want to send out a spacecraft, what kinds of questions can we ask in terms of longevity, in terms of bringing the data back, in terms of telemetry, in terms of being able to do machine learning on the spacecraft? We want it to be autonomous, but we also don't want it taking decisions where human intervention may be required early on, until we understand what is happening on that other body. For instance, if we were to take a training set on Earth and assume that a foreign body has similar textures, what kinds of mistakes are we making? And if we don't want to make those mistakes, what do we need to take into consideration? Those kinds of cross-cutting things would come only because of JPL. I wouldn't be working on things like that at Caltech alone. Very related things, but, yeah, that's what I would say.

ZIERLER: Ashish, thinking about your involvement with the Early Detection Research Network for cancer, I wonder just intellectually if you can explain how you and your colleagues figured out that data-driven astronomy, the methodologies there might be of value for research fields that have nothing to do with astronomy.

MAHABAL: First of all, I should say that when you're working with something like cancer, it's a complete paradigm shift. In astronomy, if I classify a thousand objects wrongly, nothing is going to go wrong, in some sense. But with cancer, I don't want to be doing that. One of the things I'm very interested in when using machine learning is interpretability and explainability, because many machine learning techniques are like a black box. When you give them data, they're always going to give you an answer. How much should you believe that answer? The deep learning techniques especially can be rather non-transparent. The way JPL got involved in some of these things is that JPL organizes its planetary data really well, and there were talks given by some JPL people to the National Cancer Institute. Dan Crichton, with whom I work at JPL, showed them how the planetary science data were being organized, and they said, "Oh, this is fantastic. Can you help us organize our data?" That is how some of the earlier things started. If you look at images of tissue or CT scan images, they're not really too different from the images that we look at. When I do star-galaxy classification, I'm looking at fuzzy things and things that are not fuzzy. When you're looking at tumors or other cells, there are things you can relate to directly. But one thing I would definitely emphasize is that it is very critical that the domain experts stay in the loop.

It is easy to say, oh, I can take those images and do fantastic things with them, but if I don't have domain experts in the loop, there is a good chance I'll make mistakes I won't realize, for lack of that domain knowledge. We saw it ourselves in the early days, when we asked statisticians who didn't know about astronomy to help us with some things, and they would come up with really trivial correlations that we would dismiss outright: oh, you don't even need to consider that. We didn't want to be in the same boat when working with cancer. When we started looking at these images, finding these correlations, and working with the domain experts, they gave us feedback which was quite useful, both about the things we should look at more carefully or ignore, and about the fact that our techniques were actually already starting to help them see things they had not thought about. Bringing in this perspective, I think, is important.

One thing I could say is that in astronomy it is very common to use the different filters I mentioned before. You not only get brightnesses, but you also get colors of objects. I said, why can't we do the same thing with cancer images taken in different modalities? People had the modalities; they used them independently, but they had not really tried to combine the modalities to do something. We're using something of that kind right now with prostate cancer data. There are also interesting back-and-forths. When cancer grows, you can see it growing along arteries, and in large-scale structure in cosmology you have clusters of galaxies with thin strands connecting those clusters. You can see something similar in breast cancer, for instance, and there has been some back and forth between astronomy and breast cancer biology where methodologies have been exchanged to look at these density fields in different ways. Both sides recognize that methodology transfer of this kind can be helpful. I think that's how it has continued.

ZIERLER: Ashish, maybe it's as much a philosophical question as a scientific question, but what can we make of the fact that there are similarities between cancers and galaxies?

MAHABAL: As a philosophical question, I don't think there's too much I can say about that. I won't say, for instance, that we are a cancer to our galaxy. But what I would say is that certain shapes and sizes are things you will see occurring again and again at different scales. In some sense, in cancer what you have is normal tissue being taken over by cancer cells, and you see them coagulating differently, whereas in galaxies, individual stars don't really do that; because we are looking from such a great distance, it's the 2D projection in which we see it. If we were really inside a galaxy, as we are inside our own, it would look very different. I think it's just a perspective that is helping us in this case, and that is why it is superficial. I would not take the analogy much farther than the level of, okay, we can look for structures of that kind, because finally, in 2D data, all you're looking at are segmented versions, 2D structures. The shapes, the similarity of shapes, that's what one is going after.

ZIERLER: On the question of early detection, tell me about some of your work with asteroid research.

MAHABAL: Early detection of asteroids, yeah, that's a good question. After Palomar-Quest, when we had developed these technologies, and because Palomar-Quest didn't work out as well as we had hoped in terms of data quality, we started to work with the Catalina Sky Survey, which is done from Arizona. There were three telescopes involved at the time, two in Arizona and one in Australia: CSS, MLS, and SSS. We said we could take the data and look for transients in them, because they were primarily interested in looking for asteroids. The cadence of the survey imaging is that you take 4 images within 30 minutes: thirty-second exposures separated by 10 minutes each. You take an image, go somewhere else, come back in 10 minutes, go somewhere else, come back in 10 minutes, and then again a fourth time.

Because asteroids move, what you want to do is look at, say, the first three images and identify moving objects, and then, to confirm that it is indeed a moving object of the kind you think it is, predict its position in the fourth image and see whether you actually find it there. That's how these asteroids are found. They've been doing great work with that, but we said, "Because you have this fantastic cadence, why don't we look for other kinds of variability too, where objects are not moving? That's your domain, but why don't we find all kinds of variable objects in the data?" That's where we started using those data and looking for quasars and variable stars. That was a fantastic time-series resource. We put together that dataset, and we were announcing our discoveries in real time to the entire world. Various pieces of software were developed at that time.
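A minimal sketch of the confirm-the-mover step, assuming simple linear motion over the 30-minute sequence; the times and positions below are invented for illustration:

```python
import numpy as np

# Hypothetical detections of one candidate in the first three images:
# times in minutes and sky offsets in arcsec relative to the first frame.
t = np.array([0.0, 10.0, 20.0])
ra = np.array([0.0, 12.1, 24.0])       # drift in RA
dec = np.array([0.0, -3.9, -8.1])      # drift in Dec

# Fit linear motion (asteroids move essentially uniformly over 30 min)...
ra_fit = np.polyfit(t, ra, 1)
dec_fit = np.polyfit(t, dec, 1)

# ...and predict where the object should be in the fourth image.
t4 = 30.0
pred_ra, pred_dec = np.polyval(ra_fit, t4), np.polyval(dec_fit, t4)
print(f"expected in image 4: ({pred_ra:.1f}, {pred_dec:.1f}) arcsec")

# If a detection in image 4 lies within a small tolerance of this
# prediction, the mover is confirmed; if not, it was likely spurious.
```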

ZIERLER: Do you see this approach in asteroid research as really contributing toward planetary defense?

MAHABAL: Yes, definitely. What CSS and other groups like Pan-STARRS and ATLAS have contributed is a larger and larger number of asteroids that we did not know before. Even ZTF has been helping with its Twilight Surveys. What needs to be done is to find all the large asteroids and try to go down to as small a size as possible, so that we can know their orbits. If any one of them is likely to veer towards Earth, we can know about it and get ready to do whatever needs to be done.

ZIERLER: How big of a threat are asteroids to Earth? What do we know now that we might not have known 20 or 30 years ago?

MAHABAL: I think what we did not know then is not very different from what we don't know now, in the sense that they can be a big threat, but we cannot really predict when they're going to be a threat. We have found at least a few objects before they crashed into Earth, but only a day or two in advance. Luckily, these were not very big objects. One of the first was found by CSS: an asteroid that fell over Sudan. It was predicted before it fell, so people could go out into the desert and collect pieces of the fallen asteroid. That was one of the cheapest missions to collect material from an asteroid. We have had a few more examples of that kind since then. It's clear that the threat exists, but because we don't have a full catalog, we cannot really say how much of a threat there is. But given that we have been cataloging things better and better, we have a better idea and are better prepared.

With something like LSST, the Large Synoptic Survey Telescope, which will go much fainter, we'll be able to catalog objects that are fainter and so be more complete. Because most asteroids are outside of the Earth's orbit, it takes them three to four years to complete one orbit, and with the relative positions of the Earth and the asteroids changing, it takes several years before we have a good idea of what a given asteroid is doing. Twenty years is a long enough time, but because we have not been looking all the time at the entire part of the sky where we expect to find asteroids, we haven't really managed to catch all of them. Also, connecting detections into links so that we can define the orbits is getting better and better, but it is not fully done either. There are many observations sitting with the Minor Planet Center which are not long enough to be converted into orbits. These are portions of orbits, and with better data that keeps improving.

ZIERLER: Ashish, what are astronomical transients, and why have they been a focus of your research?

MAHABAL: Astronomical transients are objects that change in brightness in a relatively short amount of time. Most astronomical objects don't change much for millions of years, but then there are those which may be closer to their birth, or closer to their death, or closer to some transition because of the nearby presence of other objects, or which have some catastrophic events happening on them. Studying and classifying those is what caught my interest, because they're rare, and that is one way to apply Big Data science methodology to the astronomical data that was accumulating. There's a lot of data out there, and not even a big fraction of it has really been used yet. We can do so much more with the existing datasets. I have not been doing active observations for a very long time. Sporadically, yes, here and there, but when I started at Caltech, every month I would go to Palomar and spend a few nights looking at various kinds of objects, and that gradually reduced a few years later. There was a time when staff scientists were not allowed to put in proposals to Palomar, for whatever reason, and at that time I realized something: with all the existing data, do I really need to observe? I think that was really a transition, when I started looking at even more datasets that were around and started combining those datasets.

ZIERLER: Tell me about how the Zwicky Transient Facility got started, and in what ways specifically it improved upon the Palomar Transient Factory?

MAHABAL: Members of the Palomar Transient Factory were mostly interested in explosive events, looking for objects that change in brightness in very explosive ways. The term "factory" was specifically for that: let's be like a factory, let's try to find as many as possible and be very good at that. And PTF was good at that. But with ZTF, what happened was that Caltech decided to get into a wider partnership. This is how the thinking went: yes, we have found all these supernovae and done some good work on that, but there were other aspects which we could have done better, and we did not because we didn't have enough human power; let's try to expand on that. One of the things that happened was that PTF had 8 CCDs, whereas with ZTF we went to a much larger area of the sky per image.

Remember I mentioned that with DPOSS the photographic plates were 6.5 degrees by 6.5 degrees, so they filled almost the entire focal plane. PTF was not using all of that. With ZTF, we decided to go even bigger than what DPOSS was doing by putting in large CCDs. We covered the entire focal plane with CCDs, and the CCDs were better. Then we said we would change the cadence and try to do more varied science with it: not just the explosive events but all kinds of variability. ZTF in fact even added a couple of other modes recently. For example, we have a mode called read-while-expose. The way CCDs are normally read out is that there is a frame transfer that happens: you expose for 30 seconds, and the charge that has accumulated because of the photons is shifted, and then the image is formed. But in this particular mode, every row is written out as the charge is transferred continuously, so we can get a time resolution of 6 milliseconds. Instead of waiting 30 seconds before we write out the data, we get really high time-resolution data. But it's expensive, of course, because if one is looking at 47 square degrees and writing it all out every 6 milliseconds, the data volume really becomes large. And because we are exposing for a short time at a time, even though it is continuous, we can only go down to about 14th magnitude or so, rather than the 20th magnitude we can reach with the 30-second exposure. I don't know if I answered your original question, but yes, with the Zwicky Transient Facility we got into more kinds of science. One other thing that was happening is that the LIGO connection also became very critical; Caltech is big into gravitational waves. With ZTF, the large area allowed us to go after the possible LIGO detections, covering the error circles where the counterpart may be and looking at those. The new thing that will be happening: I think it's in May that O4 will start for LIGO.
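To get a feel for why writing out every row at 6 milliseconds "really becomes large", here is a rough data-rate estimate. The camera geometry assumed below (16 CCDs with roughly 6,000-pixel rows, 2 bytes per pixel) is an illustrative guess, not an official ZTF specification:

```python
# Rough data-rate estimate for the read-while-expose mode described
# above. All camera numbers here are assumptions for illustration.
ccds = 16
pixels_per_row = 6_000 * ccds
bytes_per_pixel = 2
row_time_s = 0.006                       # one row every 6 ms

rate = pixels_per_row * bytes_per_pixel / row_time_s
hours = 10                               # one observing night
print(f"{rate / 1e6:.0f} MB/s, ~{rate * hours * 3600 / 1e12:.1f} TB/night")
```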

ZIERLER: I wonder if you see a specific connection to the research of Fritz Zwicky, or this was just a nice opportunity to honor his legacy?

MAHABAL: He was very interested in different kinds of odd objects, and he worked on something called morphological classification, and I think there's a very good connection between the kind of machine learning that we are doing and what he would have thought to be a very good way of doing things. I don't know whether he would have liked the methodology, though, especially the black-boxiness. Maybe that is the part that he wouldn't have liked. But that is also why we want to make sure that we don't stay in black boxes, that we are able to explain things as they are. So, yes, there is a connection, I would say.

ZIERLER: Now given Zwicky's focus on the northern hemisphere, is there a parallel project that's doing the same in the southern hemisphere?

MAHABAL: Actually, there is one called BlackGEM that is doing a little bit. There is another one that's going to start soon. But the big elephant in the room really is the Large Synoptic Survey Telescope, or Rubin Observatory, which should start in 2025. It's a very large telescope with a field of view of, I think, 8 square degrees. Not as big as ZTF's, but it'll go much deeper because of the larger aperture. It'll use six filters: u, g, r, i, z, y. The thing is, though, it'll not have as many epochs as ZTF has. We already have 1,000 epochs in 5 years for most of the northern sky that we are looking at. They will have 1,000 epochs over 10 years, together in all those filters. They'll go much deeper.

There will be very interesting things that come up, and that is why machine learning becomes even more critical for something like that. What happens with something like Rubin is that they will go much deeper. We talked earlier about how we identify objects with something like ZTF and then go to Keck to get observations. There are some extremely large telescopes coming up, but there are only a few of them. Second, they can go after only so many objects, because of how few they are and because there are other projects doing other science. Also, even they can take spectra only down to a certain depth, whereas LSST will find objects that are much fainter. We won't have any direct way of classifying those objects by spectroscopy. That is where machine learning will become even more critical. The better we are at assigning explainability to our models, the more satisfied we can be that the machine learning is giving us the right answers without having to do spectroscopy. One of the uses of machine learning is not just doing these classifications, but doing the classifications in a way where we don't need to do spectroscopy, essentially being able to do population studies. In the old times, what would happen is that people would define certain samples and go and try to observe them. But these samples would always be small sets, whereas when you are doing an all-sky survey, you can say that it is possible for you to get all objects of a kind, well, more or less all, modulo what you can see, what is hidden by dust or not, and so on. Going into those population studies is what is going to be enabled by the sky surveys. Again, sorry, the answer to your southern hemisphere question is that, yes, there are a couple of survey telescopes, but the bigger one is going to be LSST.

ZIERLER: You mentioned of course LSST. What do you see as ZTF's contributions that have allowed or will allow LSST to get up and running? And then once it is operational, what does that mean for ZTF?

MAHABAL: ZTF is in Phase II right now, and Phase II is supposed to end in a few months' time, and we are thinking of ZTF III, which may go on for some more time. We hope that there will be an overlap between ZTF and LSST. But even now, for the reason I gave earlier, much of LSST's classifications will have to be done by machine learning rather than by spectroscopy, and you need data to classify these things. The way training datasets are usually generated is that you observe with your own telescope for about a year, create reference images with that, and use those reference images to start identifying interesting objects later on. But you don't really have to wait one year with your own data. You can do something called transfer learning to start learning from other telescopes. That is where something like ZTF and other southern hemisphere surveys can be very useful, but in particular ZTF, because of the large field that ZTF covers. What's happening is that the brokers that are going to work with LSST are already using ZTF data. There's ALeRCE in Chile, for instance, or Fink, or ANTARES from NOAO. They are using ZTF data to classify astronomical objects, ZTF objects, because the methodology needs to be perfected as well. In that sense, ZTF has been very, very useful to LSST, and LSST is the first to acknowledge that; they've been saying that it's been really good too.
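
The transfer learning described above can be sketched in a few lines. The following toy example is only an illustration of the idea, not any broker's actual code: a small network is pretrained on plentiful labeled data standing in for one survey, then everything except its final layer is frozen and that layer is retuned on a much smaller sample standing in for a new survey. The network shape, feature counts, and data are all invented.

```python
# A minimal transfer-learning sketch: pretrain on one survey's light-curve
# features (random stand-ins for ZTF-like data), then freeze the learned
# layers and fine-tune only the final layer on a small "new survey" sample.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_FEATURES, N_CLASSES = 20, 5   # e.g., light-curve summary statistics

model = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_CLASSES),
)

def train(model, x, y, params, epochs=50):
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# 1) Pretrain on plentiful source-survey data (stand-in for ZTF).
x_src = torch.randn(5000, N_FEATURES)
y_src = torch.randint(0, N_CLASSES, (5000,))
train(model, x_src, y_src, model.parameters())

# 2) Freeze everything except the final layer...
for p in model.parameters():
    p.requires_grad = False
head = model[-1]
for p in head.parameters():
    p.requires_grad = True

# 3) ...and fine-tune on a small target-survey sample (stand-in for early
#    data from a new survey such as LSST).
x_tgt = torch.randn(200, N_FEATURES)
y_tgt = torch.randint(0, N_CLASSES, (200,))
train(model, x_tgt, y_tgt, head.parameters())
```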

ZIERLER: Now the project, Automatic Learning for the Rapid Classification of Events, where does that play in both to ZTF and LSST?

MAHABAL: There are many other projects of that kind also. That one works with supernovae, and what they do is essentially try to understand as much as possible from old data and include transfer learning to see how they can work with LSST in the future. There have also been a couple of data challenges, for example one called PLAsTiCC. Some use simulated data, some combine it with ZTF data, and the challenge was to see if you can classify objects given a small number of observations. Recently the ELAsTiCC challenge, which is an extension of PLAsTiCC but also incorporates the various brokers, has started. All these teams are trying to use both real data and simulations, and then transfer learning, to get ready not just for LSST but for other setups that may be out there as well.

ZIERLER: Ashish, where do you see yourself applying—you said before you're getting pulled so much into data science. Beyond astronomy, where are you lending your expertise?

MAHABAL: Cancer is one of the primary areas, but there have been many other projects, and even within astronomy there have been other areas where I've been doing things. For instance, I worked on a project to look for exoplanets in the TESS light curves using machine learning. Then there were other projects involving the gravitational wave data, not just the data from the detectors but the auxiliary channels, using the auxiliary channels to see whether we can predict when LIGO is going to go down. Once you have access to these large datasets, you can ask interesting questions. Within cancer, I've been looking at prostate cancer, pancreatic cancer, and breast cancer. In addition to that, there have been other projects. JPL is very interested in planetary protection: when we launch spacecraft to look for life, is it possible that we end up finding life that we took with us? Is it possible that we'll take bacteria from here, plant them there, and end up finding them? That would be the last thing that we want to do. So we are trying to understand whether there are certain bacteria that may survive space flight, in terms of their radiation resistance, their spore formation ability, their ability to survive at different temperatures, etc. We have this setup, we call it Check Contamination, where we have a set of bacterial properties which have been listed, and then when you take a swab from a spacecraft and analyze it for DNA, that can be compared to this setup.

Then we can say, of the list of bacteria that we know can survive spacecraft, are any of them in that sample? And then you can put in various thresholds, for example ask questions like, are there at least 2,000 viable DNA reads, or are the reads above a certain number, etc.? Because the training set that we have doesn't really exist for all bacteria, we're starting to use machine learning methods to extend that dataset. With machine learning, what we'll do is start filling in the blanks of the bacterial properties that we don't have. There are very interesting questions there. One thing that we need to identify is whether, even if some properties are missing, we can already say from the remaining properties that these bacteria are likely to be contaminating the spacecraft surfaces that we have, etc. Those are some of the areas where I have been lending my machine learning expertise. And there are a few more where things have been happening on the side.
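
The "filling in the blanks" step can be illustrated with a standard imputation tool. This is only a sketch: the property table below is made up, and it does not reproduce the actual Check Contamination pipeline.

```python
# Filling in missing entries of a property table with machine learning.
# Rows: organisms; columns (invented): radiation resistance, spore-forming
# score, maximum survival temperature. NaN marks properties never measured.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

props = np.array([
    [0.9, 1.0, 80.0],
    [0.2, 0.0, 45.0],
    [np.nan, 1.0, 75.0],    # radiation resistance unknown
    [0.8, np.nan, np.nan],  # only radiation resistance known
])

# Each missing entry is modeled as a regression on the observed columns.
filled = IterativeImputer(random_state=0).fit_transform(props)
print(filled.round(2))
```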

ZIERLER: Ashish, looking at your own career, I wonder if you can reflect more broadly on what this means as the way computation is affecting the lives of all scientists.

MAHABAL: I think computation is really pervading everyone's lives, whether they realize it or not. There are so many things. All the advertisements that you are served when you browse the internet are driven by machine learning. To that extent, there is computing everywhere around you. The more we understand that, the better we can handle it, in terms of trying to see how we can incorporate it for good. I think the latest example is something like ChatGPT, which is based on natural language processing. OpenAI brought this out; it's an assistant, they say. For basic English-related things, basic language-related things, it's fantastic, and people are already asking whether the university essay requirement should be done away with because we have a machine that can write excellent essays. But some of the things that are missing are personalization, individualism.

When you look at things related to science that are at the edge, or even mathematics, there's a huge problem of hallucination. It knows how the pattern should look, but beyond that it doesn't know, and it can very confidently keep giving you answers. I think that is the edge where we can already see where computing is. But to understand what it is doing right and doing wrong, you need to study these methods a little bit. All scientists should be aware of what is out there. I mentioned earlier, for instance, that with deep learning you're always going to get an answer, but to understand whether you can believe it or not, you need interpretability and explainability and so on. I would say that all scientists should be aware of and doing computing, but at the same time, they should also be very wary of what it is that they're getting in exchange. I think that was one of the things that I really got into early on, because I have always been interested in mathematics and puzzles and understanding what consciousness is, how we get answers, etc. For me it seemed quite natural to try to figure out whether the answers that we are getting are real or not. Can we depend on them or not? In general, bringing rationalism to computing in some sense. I will say that, yes, everyone is doing computing, whether they know it or not, but they should be more rational about their approach to it.

ZIERLER: Ashish, for the last part of our talk today, given that the field is somewhere around a quarter-century old, I'd like to ask some broadly retrospective questions about what has been made possible as a result of the rise of data-driven astronomy. Perhaps the most fundamental, what do we know now about the universe that never would've been possible without machine learning and powerful computation?

MAHABAL: I think with machine learning and powerful computation, we have definitely found many, many outliers that we wouldn't have been able to find otherwise. Rather than taking a direct machine learning example, I would like to take an example from the Citizen Science project Zooniverse, where Hanny's Voorwerp was found. This was an object that was of a somewhat different color than other objects nearby, and it was not even the main object that one was looking at. Then one person who was going through this Zooniverse dataset asked, "What is this?" and just started looking at it, and that led to additional people looking at it and then more objects of that kind being found. Those are the kinds of things that machine learning has been leading us to. Anomaly detection is something that I'm very interested in, and machine learning helps a lot with that.

One of the statements that I like to make is that there is no such thing as an anomaly. Most anomalies that we find are artifacts of some kind, and finding artifacts is an important first step of research, really, because even when we detect all the objects in the sky, the first thing that we want to ask is, are any of these bogus? We have something called a real-bogus separator initially, because there can be things that happen in the electronics that cause detections, or you may have satellite trails, which are not really the kind of things that you're trying to look at. They may not be bogus in the sense that they come from real objects, but they may not be the science that one is looking for. We need to separate all those out first; we do the real-bogus separation. Then, even among the genuine objects, we want to find cases that are outliers. That's where we do anomaly detection. But sometimes we start finding artifacts that we had not seen before. For example, sometimes what happens is that we have these multiple amplifiers, and a bright star in one amplifier can give rise to a signal in another CCD. That may seem very strange, but that's happening within the electronics; it's not a physical manifestation of that object. We sometimes find those by looking for anomalies.
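
A toy version of a real-bogus separator, for illustration only: a random forest scores candidate detections from a few summary features. ZTF's production classifiers are different (the modern ones work directly on image cutouts); the features and "ground truth" below are synthetic.

```python
# Score each candidate detection as real (1) or bogus (0).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Invented features: FWHM, elongation, signal-to-noise. Here bogus
# detections (label 0) tend to be elongated or low signal-to-noise.
fwhm = rng.normal(2.5, 0.8, n)
elong = rng.lognormal(0.1, 0.4, n)
snr = rng.lognormal(2.0, 0.7, n)
labels = ((elong < 1.4) & (snr > 6)).astype(int)  # crude stand-in truth
X = np.column_stack([fwhm, elong, snr])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Downstream, only candidates above a chosen real-bogus score are kept.
scores = clf.predict_proba(X_te)[:, 1]
print("kept:", (scores > 0.5).sum(), "of", len(scores))
```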

Those apart, the anomalies that you find are, I think, mainly indicators of entire populations that exist but that we have not yet found, because we have not been looking at the right position, the right location. That brings us to the bringing together of these datasets, one of the ideas that maybe we can cover at some other point. One of the things that I would like all of us to get to is this: when you look at time-series data, with time variability on one axis and energy on another axis, if you make this multi-dimensional plot, then different kinds of objects are going to occupy different parts of this space. They're like clouds: quasars sit here, AGNs sit very close to quasars but slightly differently, and then there'll be other sets of objects. When you have very few observations of an object, you may not be able to quickly say what class that object belongs to, but you'll still be able to eliminate a very large number of types. By eliminating a large number of types, and then by deciding what the next observation should be in order to eliminate even more types, you'll be able to home in much more quickly on what class the object belongs to. That is something that I've been thinking about a lot and want to develop further, into something like how we could bring together the datasets, not having to bring together everything really, but just the pertinent aspects of different data.
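
The elimination idea can be made concrete with a small Bayesian sketch. Here each class's "cloud" is a one-dimensional Gaussian with invented parameters; after every new observation the class probabilities are updated, and classes that have become implausible are dropped.

```python
# Sequentially eliminate candidate classes as observations accumulate.
import numpy as np
from scipy.stats import norm

# Toy 1-D "clouds": each class is a Gaussian (mean, sigma) in some observed
# quantity, say a variability amplitude. Real versions are multi-dimensional.
classes = {"quasar": (1.0, 0.3), "AGN": (0.8, 0.2),
           "RR Lyrae": (0.4, 0.1), "flare star": (2.0, 0.5)}
posterior = {c: 1 / len(classes) for c in classes}  # flat prior

def update(posterior, obs, threshold=1e-3):
    # Bayes rule: weight by each surviving class's likelihood, renormalize,
    # then drop classes whose posterior has fallen below the threshold.
    post = {c: p * norm.pdf(obs, *classes[c]) for c, p in posterior.items()}
    total = sum(post.values())
    return {c: p / total for c, p in post.items() if p / total > threshold}

for obs in [0.9, 1.1, 1.0]:  # successive observations arrive
    posterior = update(posterior, obs)
    print(sorted(posterior, key=posterior.get, reverse=True))
```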

I would say that big data and machine learning have really helped us understand the wide variety of variables and transients that exist, and their families. For instance, here at Caltech, Mansi Kasliwal's PhD thesis was about finding gaps, and gap transients within them. If you plot delta time on one axis and the brightness level at which you find transients on the other, we saw there were some areas where there was nothing. Why is there nothing? Should there be something? Can we have specific programs to look for objects there? She drove that during her PhD thesis and found a bunch of objects. But there are more, smaller gaps there too. Understanding those is something that machine learning is bringing us closer to, by looking at all the data. That is where my earlier statement comes in, that we have not been looking at all the data. Computing, human power, those are the shortcomings that we have, but addressing them is what could bring us closer to understanding our universe even better.

ZIERLER: Ashish, I wonder in the grand sweep of history, how mature a field do you see data-driven astronomy at this point? Does it feel mature? Does it feel like it's just getting started?

MAHABAL: I would say that there are elements of maturity all around, but we have not really taken advantage of everything. Partly that is because many groups have still been working in silos. There's still this tendency of trying to say how good our own project is, without saying other projects are bad, but that stops people from really pulling together everything that they can. Of course, there is also the dearth of funding. I would say that in terms of methodology, there are aspects where we have matured. I won't say that we are fully mature yet, but we are not really at the starting point either. Asking the right kind of questions, people have been doing that, but putting in enough resources and combining enough datasets, that hasn't really fully happened yet, I'd say.

ZIERLER: Ashish, looking at your own career, as you get more involved in data and informatics, do you ever think you'll reach a point where you're not doing very much astronomy and you're fully involved in data science?

MAHABAL: I don't see these two as really different, right? Even when I'm doing data science, I am doing astronomy because if I am able to find, say a new subclass of objects, that is genuine astronomy. I may not be taking a spectrum myself, I may not be taking an observation myself, but that is just one aspect of astronomy. I don't see these as really different.

ZIERLER: Finally, Ashish, last question for today, the question that brings us together. Tell me about your motivations to celebrate the history of data astronomy, both as it exists as a Caltech story and as a worldwide collaboration. What's so important for you to bring this story to a broader audience?

MAHABAL: There have been so many things that have happened, not just in the history of astronomy but in the history of data-driven astronomy, where all these large datasets, large amounts of data, have been captured. Mostly it is the low-hanging fruit from these large datasets that gets celebrated and put out there. What I would like to show is how these different things are connected with each other, how small datasets that were taken in one location have a relationship with a large dataset taken somewhere else. For instance, if 10 spectra from Keck are celebrated, their origins may be in a large survey that happened somewhere else. The beginning of that survey may have been in some other survey that happened somewhere else. How did we come to be able to take these large surveys in the first place, what are the technologies that led us to that, and how did they connect with each other? What kinds of developments led to that; were they just in astronomy, or elsewhere? Essentially, it's a story that pervades the entire Earth. If you look at the general public, they're very interested in astronomy, but at some level they do not know enough about it. How can that be bridged by something like this? Bringing about an additional sense of wonder related to that, knowing what else is hidden in the sky. For instance, one of the things that we are doing right now is developing an outreach game with ZTF called ZARTH, where we'll be able to take transients from ZTF to the general public. We want to really bring about gamification by making it parallel to something like Pokémon GO, where people can catch things, have leaderboards, etc.

But coming back to your question about why my interest in all of this: because I have worked in all these large sky surveys. In addition to the surveys that I mentioned that have happened at Caltech, I was also a co-chair of the Transients and Variables group of the LSST science collaboration. I've seen this large number of collaborations happening and how they're connected to each other. I would like to tell that story. Two other things that I would like to mention: at the American Astronomical Society, until June, I was the chair of the Working Group on Astroinformatics and Astrostatistics. Similarly, I'm currently the president of the IAU's B3 Commission on Astroinformatics and Astrostatistics. There again, there is a far bigger wealth of material out there that could reach the general populace. In terms of the support that we can get as an astronomy community, I think that'll be really very helpful. Plus, these trends define for us something that we may want to learn about our future. Can we predict something? We may be able to predict only for a small number of years, but then how prepared can we be for that, in terms of the number of people who should be working in this, the number of analysis elements that we need to have for this? Learn from the history in order to predict where we may be going and see what we can be doing. Looking at this as a connected world. One of my favorite books is The Glass Bead Game. In The Glass Bead Game, everything is connected; I somehow see things like that, and I would like to bring an essence of that to the surface, to the fore, and take it to more people. That is my interest.

ZIERLER: Ashish, on that note, this has been a terrific initial conversation. In our next we'll go back to India, learn about your family and childhood, educational trajectory, and we'll work on the story from there.

[End of Recording]

ZIERLER: This is David Zierler, Director of the Caltech Heritage Project. It is Wednesday, February 1, 2023. It is great to be back with Dr. Ashish Mahabal. Ashish, it's great to be with you again. Thank you so much.

MAHABAL: Thank you. Happy to be here again.

ZIERLER: In our first conversation, Ashish, we did a great tour of your approach to the big questions in astronomy and machine learning. Today, let's take it all the way back to the beginning, learn about your family background in India. Let's start first with your parents. Tell me about them and where they're from.

MAHABAL: My parents are from a small place called Yavatmal, not far from the geographic center of India, north-south and east-west. Far, far away from the oceans, and from any big hills or mountains.

ZIERLER: What were their professions when you were a kid?

MAHABAL: My father worked with an insurance company, the Life Insurance Corporation of India; he continued in that company until he retired. My mother was a housewife as I grew up.

ZIERLER: What languages were spoken in your household?

MAHABAL: The mother tongue is Marathi, the same as the state of Maharashtra. But my dad was good in English. My mom could also speak English and Hindi is also commonly spoken. I would say Marathi was the most commonly used language. But English was understood and Hindi was also understood.

ZIERLER: Was religion a big deal in your household growing up?

MAHABAL: Well, my parents were religious, definitely. Because of that, initially, I was also inculcated into the religious practices until I learned about astronomy and the bigger world.

ZIERLER: This would be Hinduism that your parents were part of?

MAHABAL: Yes, my parents practiced Hinduism. That's right.

ZIERLER: Was there anybody in your family that was interested in science that you might have grabbed on to when you were a kid?

MAHABAL: My father was definitely interested in science. Religion in India works rather differently from religion here. For instance, unlike in Christianity, the Hindu universe was not created 6,000 years ago. It was created long, long back. There are even cycles and epicycles. What that means is that most Hindus don't have any problem with evolution. Not just that, the avatars of one of the main gods of the Trinity, Vishnu, are equated to evolution: how the initial avatars were in the water, then they came out of the water, and the later avatars have been in human form. Something like evolution is quite well-ingrained in some sense, and so is a lot of scientific methodology and technique. I think that is why they generally don't have issues with it. I don't know quite how to put it. Many engineers, for instance, are very good in their handling of science during work hours, and they can go home and become a bit superstitious. But they're all scientific-minded if you corner them correctly and talk to them about it.

ZIERLER: What kinds of schools did you go to growing up?

MAHABAL: My father had a transferable job. What that meant is that when he had a transfer he would move, and with him we would move. I went to many different schools. All of them were English-medium schools. In fact, English was my first language. Marathi and Hindi were both secondary languages as I grew up. All my science education happened in English. We went to various cities. For the first few years of my life, I was in Yavatmal, the same place where my parents were born and I was born. Then we moved to Parbhani, a town that was a few hundred kilometers away, and then to Chandrapur and then to Nagpur. In between I came back to Yavatmal for a couple of years also. These are the four places where much of my education took place.

ZIERLER: Were you always interested in math and science? Were those always your favorite subjects in school?

MAHABAL: I got interested fairly early on; it wasn't too difficult to handle. I had very good teachers. The teachers were very patient. They would describe various things. Many times, I remember, I used to have questions that were not directly in the books. If the teachers didn't know the answers, they were good enough to go to the library with me, find other books where we could look those up, and describe those to me. I think my teachers, as well as my parents, had a big role in ensuring that I continued in science and mathematics.

ZIERLER: What about astronomy specifically? Did you look up at the sky? Did you see the stars and wonder?

MAHABAL: Yes, we did. In fact, my father knew the constellations. The place I grew up in, Yavatmal, used to have dark skies. Especially during summer, we would be outside walking, and sometimes he would put me on his shoulders when I was really small and point to constellations, especially things like Orion and Ursa Major and the areas through which the planets pass, and sometimes even connect those to mythology, because there are mythological stories, but also to astronomy itself. That is where my initial astronomy really started. Then, when I was in ninth grade, Halley's comet came around. At the time, my father bought me a small telescope, a 2-inch telescope. Being able to spot the comet through that, though it was like a cottony ball, and showing it to our neighbors, and then my schoolmates would come home to watch it, and so on.

It was thrilling that we could spot it, that it was far away, that it was visiting after several decades, and that others got thrilled as well when I could show it to them. I think that really was a big boost in me thinking, oh, this looks like a great field. In fact, a few years later, my father bought me a bigger telescope, a 5-inch telescope. The first one was on a very simple mount, whereas the second one had an equatorial mount, meaning, in principle, it could be connected to a motor, and then we could track so that it would stay on a given object. I didn't go and build any motor; I was not that industrious in terms of that. But because it was a bigger telescope, we could look at more details of many bodies, not only planets but also some of the Messier objects, etc. That meant that more people were interested in seeing those as well. Of course, it was a bit heavier, so carrying it away from home was a bit of a problem that we used to manage every now and then.

ZIERLER: Ashish, what about computers? Were you interested in computers and what they could do when you were growing up?

MAHABAL: The first computer that I touched, I think it was when I was in 11th grade, if I'm right. For the first several years of my life, I had not even touched a computer. There was no computer in my school or in my college, but one of my earlier classmates went to an engineering college, and their college had a computer, so I would sneak in with him to his lab, and we learned various things like Logo and BASIC and then C. It was fun. I got hooked on it immediately. We would have fun solving some very simple problems, test examples, and so on.

ZIERLER: When it was time to think about colleges, what was available to you? What did you want to pursue?

MAHABAL: Again, everyone has heard of the IITs. Even here now, most people know. But until I got to 11th grade, I had not even heard of the IITs. I heard in 11th grade that one can take the IIT entrance exams, and we prepared for them a little bit, but I couldn't get in at that time because there was not really enough time to prepare after I heard about them. My sights were really set on the nearby city of Nagpur, so not really far off. That's where I went and did my bachelor's and my master's. By that time, of course, I knew of other places, but my parents were staying there and the university there was fine. I ended up doing both bachelor's and master's in Nagpur, not too far from Yavatmal.

ZIERLER: Is college in India like the British system where you declare a major or a focus right away?

MAHABAL: There are different things in there. There are the bachelor's degrees in science, commerce, and arts, which are 3-year programs, and in engineering there are 4-year bachelor's programs. In both, there are ways in which you can maneuver a little bit. But the college that I went to for my bachelor's did not have a major, in fact. It had three subjects which were taught at equal strength, equal depth. My bachelor's is in electronics, physics, and mathematics, because we had multiple papers in each of these subjects. That was in a way good, because it gave us a very good grounding in all three subjects. Sometimes the syllabus used to be a little topsy-turvy, in the sense that in the first year you would learn about applications, and then learn the basics, what the applications were based on, in the second year. That happened a little bit, especially in the mathematics class.

ZIERLER: When did you start to think that you might apply for graduate school and pursue a career in science?

MAHABAL: As I said, I was interested in astronomy early on. We had even formed an amateur astronomy club, Kutuhal, with a few other friends when I was doing my bachelor's, and that group loosely exists even now. Nagpur University did not have astronomy courses. It did have the possibility of starting a theoretical physics course, which comes as close to astronomy as you can get without an observational part. The requirement for the course was that six students enlist, plus a willing teacher. There was a willing teacher, who was very interested in teaching and knew his subject quite well. But we managed to get only four students, not six, so he couldn't start the course. I ended up learning a lot of quantum mechanics and electronics during my master's, and some solid-state physics. But I continued to think about astronomy. Then, when I was in the first year of my master's, there was an opportunity to go to a summer school in Pune, where a new institute had been founded, the Inter-University Centre for Astronomy and Astrophysics. Together with the National Centre for Radio Astrophysics, which is the radio arm of the Tata Institute of Fundamental Research, they were holding a summer school, and there were posters out for that. I decided to apply.

ZIERLER: What was interesting to you about that? What opportunities might that have opened up?

MAHABAL: There, I would have met, and I did meet, various lecturers in astronomy from different places, and more like-minded students, students of my age who were thinking of getting into something similar. That did happen, and I formed some friendships at the time. I'm still in contact with many people from that time.

ZIERLER: This was just for one summer? Did you return for additional summers?

MAHABAL: This was just for one summer, but it got me introduced to several people. What happened is that the next year, there was the Visiting Students Program, which was a more focused program, rather than something like a summer school where it was mostly lectures and a small project. I went back for the Visiting Students Program. It's a fairly complicated but interesting story, because the Visiting Students Program runs around the same time as the entrance exams for graduate students for the next year, people who were one year senior to us at the time. Because we were there and had been selected from a large number of applications, we were also asked to take that exam. And we did. It was not for any selection or any such process, but we went ahead and took it and didn't think about it. Then, two-thirds of the way through my Visiting Students Program, I fell seriously ill, and I couldn't complete the program. I had to go back. Then I wrote to them: can I come back and complete the remaining work? And they were very good, and they agreed. During Diwali, which was a few months after summer, I went back and completed the project.

But after that, my sickness actually came back. What had happened is that I had nephrotic syndrome, which means that the structure of the nephrons in my kidneys had changed, and that led to a lot of water getting out of my cells and a lot of swelling around my knees and so on. In fact, it was in that state that I gave my master's exams. It wasn't a very happy state, and then I had to go through a biopsy, and I was on steroids for several months. I didn't eat any salt for eight months, for instance. It was quite a time of my life. Then things got better, I was weaned off the steroids, and I was fine. But what that meant is that I had missed all the entrance exams for all the graduate schools, because it is around then that those happen. I was even considering alternate occupations. That is because I was already with my girlfriend at the time, who's now my wife of a few decades, and we were wondering, okay, maybe we should settle down. If I cannot get into grad school, and if it's going to take one more year before I get into grad school and then several years before I get a PhD, rather than that, there were many other opportunities, why don't we do that, and so on. I was thinking along those lines too. Then suddenly, I got a telegram, there was still such a thing as a telegram at that time, this was 1992 or so, from the director of IUCAA, saying that the grad school was starting, and based on the exam that I had taken one year earlier, they would be happy to invite me. It was completely out of the blue. I hadn't thought of that, and of course I took it and went to IUCAA.

ZIERLER: Ashish, were you thinking before this opportunity came about that you might have to leave science entirely?

MAHABAL: At least to start with, for the time being. I hadn't really thought of the long term, but to get going and get on my own, yes, that was a possibility.

ZIERLER: Had you known about IUCAA before?

MAHABAL: Oh, yeah. IUCAA is where I went both for the summer school and the Visiting Students Program. In fact, the way I first went to IUCAA was also interesting. In Nagpur, where I did my bachelor's, there were a couple of people who had a good influence on me, one of them Vivek Wagh. He used to teach math. He was only a few years older than I was. He and some of our friends would organize club activities like discussing various physics-related things, math-related things. His older brother was an astronomer, an astrophysicist. He knew about IUCAA; I think he was a postdoc there at the time. He had told me about it. I was just visiting Pune for some other reason, visiting some family members, and I decided to just go to IUCAA on my own. I didn't know anyone in person there at the time, because this other person was also in Nagpur. I went up and met one of the secretaries there, Chella, and he dutifully wrote down my name and address. When the summer school was decided, I actually got a poster at home saying that IUCAA was having this summer school and maybe I would want to apply. That's how the association really began. So, yes, I went to the summer school, I went to the Visiting Students Program, and then I got invited to grad school. That's where I ended up doing my graduate work, and now I'm adjunct faculty for IUCAA.

ZIERLER: Oh, wow. What areas in astronomy are you particularly strong in?

MAHABAL: I love various things related to transients and classification now, but when I started, it was very different. I was interested in observational astronomy, because of my amateur astronomy background too. But as you get deeper and deeper into it, you start seeing various connections: how theory plays an important role, how many unknowns there are, and so on. Observational astronomy was very much the main area I was interested in. But over the course of the last few decades, I have touched many, many different parts of astronomy, and I have dabbled here and there a lot. One of the main things that has stayed is the mathematical and computational connection to all of these. Machine learning is something I do extensively. I have applied machine learning related techniques to gravitational wave data, to transients, TDEs and supernovae and asteroids, touching all kinds of different things. Being able to use the large data, being able to ask the data questions that may not have been asked before, and exploring what may be out there, including anomalies, subclasses, etc., that's where I've been going a lot recently.

ZIERLER: Ashish, as a grad student in India, I wonder if you can explain the funding structure for astronomy. How is astronomical research supported in India?

MAHABAL: There are not too many universities where astronomy takes place. It's been growing; it was a much smaller number when I was growing up. We had to take an exam called the JRF, the Junior Research Fellowship exam. This is a central exam. If you pass it, it allows you to do two things. It's like a certificate to teach later, because teachers also need it, but it also gives you a stipend while you're doing your PhD. After I joined for my PhD, the stipend that I got was 1,800 rupees a month. If you do the conversion of 1,800 rupees at today's exchange rate of something like 80 rupees to $1, I would get something like $20 to $25 a month by today's standards. It was very different at that time, of course. Within that 1,800 rupees, we could rent a place for 600 rupees, and then there was discounted food available at IUCAA and so on. There also used to be a little bit of money we would get to buy books. One could just about subsist on that.

ZIERLER: What kind of opportunities did you have with instrumentation or observational projects in grad school?

MAHABAL: At that time, IUCAA did not have its own telescope. There were two or three optical telescopes in India. IUCAA was set up under the central body called the UGC, the University Grants Commission. Because it was an Inter-University Centre, one of the things that IUCAA was involved in was connecting with other universities. There used to be visitors all the time from other universities, and there were opportunities to go and visit other universities. We could also apply for telescope time on the telescopes in India and abroad. Within India, it was a fairly straightforward thing to be able to visit. I used to go to the Kavalur Observatory, which is not far from Bangalore; that is where my initiation into observational astronomy took place.

Then, if there were projects that came along where you could apply to foreign telescopes, you would do that. One would have to write to a couple of central government agencies to see if they would fund the trip. IUCAA would have some money, but they would encourage us and our advisors, meaning just a student writing may not work well, but with the help of the advisor, the student would apply to these agencies. Sometimes one agency would give a few thousand rupees, and another agency would give a few thousand rupees, and combining those together, you could then go abroad. In fact, I did visit South Africa for observations when I was doing my PhD, and I went to Australia for a conference. The visit to South Africa was when I was trying to observe a set of spiral galaxies. There is a class of objects called Sérsic-Pastoriza galaxies; these are spiral galaxies with hotspots, and that is the topic on which I had decided to do my thesis. In trying to observe those, I went to South Africa, and I went to Kitt Peak in the US also.

ZIERLER: Was that your first time in the United States?

MAHABAL: Yes, it was. I also went to a couple of telescopes in India. At all these places, I couldn't get any good data. It was as if the universe was trying to tell me something, don't observe these spiral galaxies.

ZIERLER: I wonder if you can explain the difference, what is good data, what is bad data?

MAHABAL: If you have clouds, optical data is not going to be good. In fact, in most cases, you're not even allowed to open the telescope. The observatories that I went to in India, Kavalur Observatory and Nainital Observatory, are normally fully staffed. What that means is that you have night assistants with you who do all the operations. You have to tell them where to point the telescope, and they point it. This is from 30 years ago that I'm talking about; things have changed a little bit, and there are even some automated telescopes now. They would do the observations and the pointing, and you would look at the data and so on. Kitt Peak was the first observatory where I got to the telescope, someone came with me, showed me the telescope, handed me the keys, and said, okay, the dome is yours. I said, what do you mean? Then they described what we could do and so on. Then it was foggy, so I could not open the dome. Later on, it cleared a bit, so I started opening the dome; they had given me a radio set with two-way communication, and immediately the radio crackled and a voice said, don't open the dome, it's clear but it's still very humid out there. He said to close the dome. I couldn't get any observations at Kitt Peak. The site was beautiful, it was a really nice monsoon and so on, but no data. That is what happened at that time. Sorry, what was your question?

ZIERLER: And that is how you got interested in spiral galaxies. What were some of the big questions in spiral galaxies?

MAHABAL: There is a particular type of galaxy, the Sérsic-Pastoriza galaxy, that one of my early mentors, Tushar Prabhu, had studied, and he had studied them using photographic plates. One of the questions being asked there was about the star formation histories of those galaxies, because in spiral galaxies you have many kinds of stars, but in these particular galaxies there are some specific regions where one can see many young stars, and we were trying to understand those populations. I hadn't done any research of my own before, and it looked like a good problem to start with. But then we could not get started, even after going to so many telescopes. And I mentioned that there were many university visitors who would come to IUCAA; IUCAA also used to have many, many international meetings.

That is how I got introduced to Patrick McCarthy, who is also from Pasadena, from the Carnegie Institution of Washington at the time, the Carnegie Observatories now. My PhD advisor, Ajit Kembhavi, knew him well. We started talking, and we decided, instead of spirals, why don't we go to the other kind of galaxy? Maybe we should have a project on elliptical galaxies. Pat was working on a sample of bright radio galaxies, 1 Jansky sources, originally selected from the Molonglo Radio Catalog. Molonglo is a place in Australia, so it was a southern sample. What that meant is that if we wanted to follow up these objects, they would have to be observed from the southern hemisphere. We applied for time in Chile, at Las Campanas Observatory; the Carnegie Institution owns that telescope in Chile. With Patrick, I went to Chile: 15 nights there, not a single cloud seen. Lots and lots of good data. Half my PhD data was obtained at that time. One year later, we went back a second time. Again, several days, no clouds, and the remaining PhD data was obtained. Those are the two runs whose data I could analyze fully and write my thesis on.

ZIERLER: Who was your thesis advisor?

MAHABAL: Professor Ajit Kembhavi.

ZIERLER: What was your advisor's specialty?

MAHABAL: He came from TIFR, and he had done lots of different things. Observational astronomy, he was one of the few people at IUCAA doing it at the time. He was also interested in theory; he had done a few things on black holes. But in general, he was very interested in forward-looking things. Computation was an area he was also interested in. He, in fact, went on to found the Virtual Observatory of India later on. He went on to become the director of IUCAA, and he's still professor emeritus there. But he got me interested in observational astronomy. He had very good connections with several other observatories; it was through him that I talked first to Tushar Prabhu about the Sérsic-Pastoriza galaxies, and then to Patrick McCarthy about the elliptical galaxies. He was a good support in the different things that I ended up doing.

ZIERLER: What would you say the main conclusions or contributions of your thesis research were?

MAHABAL: For the thesis research I studied bright radio galaxies, which are typically at the centers of large clusters. They are ellipticals, and because they're radio galaxies, they often have a disk in them and a radio jet. The main study I was doing used optical and infrared data to look at the relationship between possible dust lanes in these galaxies and the larger radio structure, and the morphology of the galaxies in general. Though that was the main thing, what ended up happening is that I also went deeper into what morphology means, applying mathematical techniques to understand how we can compute various things related to morphology. There were some offshoots from my thesis related to that. I was able to show the relationship between the dust lanes and these larger radio structures, but also to delve a bit into these mathematical techniques.

ZIERLER: Would you say that even in graduate school with your thesis research you were starting to appreciate the impact of computation and maybe even machine learning in the research?

MAHABAL: Not so much machine learning, but definitely computation. I was very interested in different algorithms, in understanding how you could automate things. Machine learning, yes, but not in the same sense as it is used today in terms of supervised or unsupervised learning, but automating. I was very interested in automating, in different languages, and in trying to bring techniques that may not have been used before into astronomy, e.g., many mathematical techniques.

ZIERLER: If you look back at your thesis today, what would need to be updated? What has stood the test of time?

MAHABAL: I think the techniques are still all right, but what could be done better is that now there are so many different wavelengths that we have data from, and combining them would be an interesting endeavor. So much more is known. Also, the imaging resolution has improved. Even in radio, we have got so many wavelengths. Now the most critical thing, I think, is the temporal stream. We have observations of the same objects at so many different epochs, and that makes a big difference, because we can understand the variability of these objects. When it comes to galaxies, we don't have good enough resolution to make out the individual stars, but we can see how the centers of these galaxies vary as a function of time, if they're active galactic nuclei. And these are 1 Jansky sources, so all of them would be very active in that sense.

ZIERLER: When you defended, what were your opportunities at that point? Did you specifically want to come to the United States?

MAHABAL: No, I didn't specifically want to come here. I did apply to a few places here, and I applied to a few places in India after I defended. When I went to Chile, I had gone through Pasadena both times. There's an interesting connection there. George Djorgovski had visited IUCAA during one of the international meetings there. He and a few others wanted to go visit some nearby places, and I had accompanied them as their local guide. At that time, I had some very good conversations with all these professors visiting from the US. When I was visiting Chile and passing through Pasadena, I had let George know that I was going to be giving a talk at the Carnegie Institution. He was good enough to come and listen to the talk. He liked the talk, and then we kept on talking. I knew that an opportunity to work with him was going to come up. The first postdoc that I did, in India, was at the Physical Research Laboratory. But by then the advertisement had already come up, and I applied to work with him. There were a few other applicants of course, and then I got selected through that process. After my PhD, I spent about 14 months in Ahmedabad doing my first postdoc, and then came over to Caltech.

ZIERLER: For the first postdoc, were you mostly focused on continuing in your thesis research, or did you take on new work?

MAHABAL: There was some new work because the Physical Research Laboratory has its own observatory, the Mount Abu Observatory, and what I had proposed to do there were additional observations, also on elliptical galaxies, but not just the radio ellipticals that I'd worked on earlier.

ZIERLER: Now, had you known about Caltech, its reputation in the field? Was that one of the reasons you applied here?

MAHABAL: Absolutely. I did not know too much when I was growing up. It was only after I started working at IUCAA that I got the details about all of this. It's interesting: I have known about scientists and their work for a long time, but I never paid too much attention to where they were from. For instance, I knew about Feynman, but not that Feynman was at Caltech.

ZIERLER: What about Chandrasekhar? Did you know he was from Chicago?

MAHABAL: No, I did not, but I did meet him at IUCAA because at the Foundation Day of IUCAA, he was the one who inaugurated the auditorium there. There is an auditorium by his name. I did meet him there.

ZIERLER: Tell me about when you first arrived in Pasadena.

MAHABAL: I first arrived here when I was going to Chile, which is not what you're asking, I think?

ZIERLER: Right, right.

MAHABAL: Yeah, it was interesting. I came with my wife, and our son was 4 years old at the time. We landed here. I can go into a story slightly unrelated to science. We were in Mumbai, ready to come to Los Angeles. Our flight was via Zurich. This was the first flight ever for my son, who was 4 years old, and also the first international flight for my wife. We were standing in the queue, and there were these people going around, the agents, looking at passports. When one agent saw my passport, he shouted to someone else, hey, the Mahabals are here. We didn't know what was happening. It turned out that the flight we were on was oversold. They put us on another flight, and they actually gave us business class up to Zurich. And this was their first international flight. My wife thought, "Oh, this is fantastic. This is how airplane travel is." And then you go into your ordinary class after that.

When we came here, the Astronomy department at that time used to have a beautiful three-bedroom house. We were stationed there for a few days until we could find our own place. At the same time, there was another postdoc there, Fabian Walter, who happens to be visiting here right now, actually. It was a beautiful house on San Pasqual Street in Pasadena; it's a lovely area. The next day, George came and gave us two bikes for a few days, which we could use until we found out what other means we could use. I didn't drive at the time, and neither did my wife, so we had to get our licenses here. Two days later, I went to Palomar, so I was observing immediately after coming here; that was also fun. Everything was happening quickly.

ZIERLER: What was George working on when you first connected?

MAHABAL: The Digitized Palomar Observatory Sky Survey. This was the second sky survey using plates that had taken place at Palomar, and he and a small team were working on converting the plates to digitized versions so that one could study them easily using computers, rather than the old way, where you would keep using the photographic plates and writing down numbers from them and so on. The entire northern sky survey was being digitized in this way. There were a few thousand Exabyte tapes, several Exabyte readers, and then there was a pipeline taking the raw data from the tapes to a first processed version, a second processed version, converting that into catalogs, and doing things like star-galaxy classification and so on. Very cutting-edge things at the time, because we would use things like decision trees and similar algorithms to go from the non-digitized versions to digitized versions to numerical catalogs.
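
The star-galaxy classification step can be illustrated with a present-day decision-tree library. This is a toy stand-in with two invented morphological features, not the actual DPOSS pipeline.

```python
# Separate point-like (stars) from extended (galaxies) detections.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 1000
# Stars: sharp, PSF-like. Galaxies: extended, lower concentration.
stars = np.column_stack([rng.normal(1.0, 0.05, n), rng.normal(0.9, 0.1, n)])
galaxies = np.column_stack([rng.normal(1.6, 0.3, n), rng.normal(0.5, 0.15, n)])
X = np.vstack([stars, galaxies])
y = np.array([0] * n + [1] * n)  # 0 = star, 1 = galaxy

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# The learned tree is a human-readable set of cuts, much as the
# plate-era classifiers were.
print(export_text(tree, feature_names=["fwhm_ratio", "concentration"]))
```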

That was the main work that was being done. Of course, once you convert these, using the catalogs to start looking for interesting objects was the main objective. One of the early projects was to look for quasars using colors. The J, F, and N photographic emulsions that we had were being converted to g, r, and i, those being the nearest filters. Then in that space, g minus r versus r minus i, stars form a particular locus, whereas some of the high-redshift quasars, in a particular redshift range, form a locus that is away from the stellar locus. Looking for objects in that area, making sure that they are not artifacts, and then taking spectra from Palomar. That was one of the pipelines that we had, for instance.
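
That color-color selection can be sketched very simply. The stellar locus and the cut below are schematic, not the actual DPOSS selection criteria.

```python
# Flag objects that fall far from a (schematic) stellar locus in
# (g - r, r - i) color-color space as quasar candidates.
import numpy as np

rng = np.random.default_rng(2)
g_r = rng.uniform(-0.3, 1.8, 10000)
r_i = 0.5 * g_r + rng.normal(0.0, 0.08, 10000)  # fake stellar locus
# Sprinkle in a few objects with outlying colors, as some high-redshift
# quasars have.
g_r[:20] = rng.uniform(1.5, 3.0, 20)
r_i[:20] = rng.uniform(-0.4, 0.1, 20)

# Distance from the assumed locus r - i = 0.5 (g - r).
residual = r_i - 0.5 * g_r
candidates = np.where(np.abs(residual) > 0.4)[0]
print(len(candidates), "candidates selected for spectroscopic follow-up")
```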

ZIERLER: Ashish, in our first conversation we talked about, in broad historical terms, the development of data-driven astronomy. When you first arrived at Caltech, where was it in that development?

MAHABAL: I wasn't really much aware of all these things until I came here, because my earlier thesis work had used relatively small datasets. I knew about places like CDS and NED. I had used them extensively in my thesis. I knew that there were data collections of this kind, at NED and even SIMBAD, NED for galaxies and SIMBAD for stars. I knew that people were collecting datasets from different locations and putting them together. But these were not really data-driven in the same sense that we talk about today. There were these collections, and you could query them and get things out. You could write programs to associate them with each other. But around the same time that I came here, there was also discussion about formalizing the combination of such elements of large datasets. That is how the first Virtual Observatory was born; I think it was in the year 2000 or 2001, just one year after I came here, that we had the first Virtual Observatory meeting at Caltech. It was a formal formulation of that concept. Then, for several years, there were many things in which I participated, getting to know more and more about all the things that were happening and could be done, and contributing to those, essentially.

ZIERLER: Ashish, what were some of the big questions or opportunities in those early discussions that compelled people to create this Virtual Observatory?

MAHABAL: One of the things was that there were different telescopes with different apertures and different filters; there's non-uniformity. In some sense, they are looking at different aspects of the universe, different aspects of specific types of stars and galaxies. By being able to combine them, one would be able to go to a much higher level. Each of these is a dimension, and each observatory contributes to part of this high-dimensional space. Being able to combine these different observatories is going to help us answer many different questions. That's the area that one wanted to get to, but also combining information that is in images and catalogs, and understanding the gaps. That is a critical thing, because with each of the telescopes you can do something, and by combining them you can do more. But where is it that we cannot reach even by combining them? And how can we rectify that situation? That leads to newer questions, newer instrumentation, newer projects. That is where we wanted to go. Also, when people take data, they are the curators; they are the best people to disseminate information about the datasets. But how to abstract that information so that others can use it more easily was also a question that was being talked about: interoperability. We talk about open science a lot these days, but at that time, those were nascent concepts. That view, you can say, was in some sense roughly going in the direction of open science: how to democratize science, how to make data access available to everybody. That would be good for everybody, because you're getting much more than what you're putting in.

ZIERLER: Were the discussions exclusively centered at Caltech? Were there other astronomers or institutions involved?

MAHABAL: It was a US-wide thing; there were many other places too. Caltech did take a leading role, but there was Johns Hopkins with Alex Szalay, and SDSS was a big player at the time. Already there were discussions about the possibilities of much larger telescopes and much larger surveys. There was interest from NOAO in Tucson and from the University of Washington. There were lots of different players, and there was some interest from international institutions as well.

ZIERLER: Who at Caltech took a leading role in the Virtual Observatory project?

MAHABAL: George Djorgovski and Tom Prince are two names that I can mention, but there were many other professors who had also indicated that they would be part of it. It was generally seen to be a good thing.

ZIERLER: Administratively, it's virtual, of course, but what were the questions about where VO could be headquartered or centered?

MAHABAL: I don't remember all the details of that, but there was quite some discussion on it. It was going to be distributed. It was not thought that we would have just one location where we would bring all the data together; I think that was quite clear. In this distributed system the emphasis was on software. How do you write programs that allow you to access things? And also interoperability. As I said earlier, the people who take data are the best curators for it. That is why we didn't want to move the data away from them, because they can do things well. If you had three different datasets that were to come together, let's say Chandra at Harvard-Smithsonian, NED here, and SDSS at some other location, we didn't want to bring them all together, but we did want methodology that could access all of them: being able to put up an interface that allows us to connect to whatever is behind it and pull whatever bits are needed. That is mainly what was being talked about. It was going to be a fluid, distributed thing, which would live in cyberspace, and you could connect to it from your computer.
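
That model survives today in tools that query remote archives through uniform programmatic interfaces instead of copying the data. A minimal sketch with astroquery, assuming network access (services and their APIs evolve, so treat the details as illustrative):

```python
# Query two independently curated remote archives for the same sky
# region through uniform interfaces, in the distributed spirit
# described above. (Requires network access.)
import astropy.units as u
from astroquery.ipac.ned import Ned
from astroquery.simbad import Simbad

ned_results = Ned.query_region("M87", radius=2 * u.arcmin)
simbad_results = Simbad.query_region("M87", radius=2 * u.arcmin)

# Each service returns its own table; the data never had to be merged
# into a single central archive to be used together.
print(len(ned_results), "NED rows;", len(simbad_results), "SIMBAD rows")
```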

ZIERLER: Ashish, what were some of the advances in computation itself that compelled the field to say, we really need to change things up at this point?

MAHABAL: I think one of the big things happening was developments in databases: how you could have many diverse tables and be able to combine them in meaningful ways, sort them, and so on. If you have a dataset, the number of rows is one aspect of scalability, meaning you have more objects of a given type, but when you have more measurements for the same object, the complexities increase. If I had a table from, let's say, optical observations and another table from near-infrared observations, and I wanted to combine them, the resolution of the two telescopes is different and the filters are slightly different. How exactly do I combine them? Methodology for that was being developed. That was clearly something that not everyone could do. Even now these are hard questions. They have become simpler with some of the techniques that have been developed since, but at that time astronomers had to think from scratch about how to do things like cross-matching. The developments happening in databases were one of the main things that enabled bringing these data together.
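
As a concrete illustration, positional cross-matching of two catalogs can be sketched in a few lines with astropy. The coordinates and the match radius here are hypothetical; real pipelines must fold in astrometric errors and the resolution difference between instruments:

```python
# A minimal cross-match sketch: match an optical catalog against a
# near-infrared one by sky position (hypothetical coordinates).
import astropy.units as u
from astropy.coordinates import SkyCoord

optical = SkyCoord(ra=[150.1, 150.4, 151.0] * u.deg,
                   dec=[2.2, 2.5, 2.9] * u.deg)
infrared = SkyCoord(ra=[150.1002, 150.9998, 152.3] * u.deg,
                    dec=[2.2001, 2.9002, 3.1] * u.deg)

# For each optical source, find the nearest infrared source.
idx, sep2d, _ = optical.match_to_catalog_sky(infrared)

# Accept matches within a radius chosen from the coarser resolution
# of the two instruments (1 arcsecond here, purely for illustration).
matched = sep2d < 1.0 * u.arcsec
for i, (j, sep, ok) in enumerate(zip(idx, sep2d.arcsec, matched)):
    print(f"optical {i} -> infrared {j}: {sep:.3f} arcsec "
          f"{'match' if ok else 'no match'}")
```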

ZIERLER: The question in astronomy of what the field knows and doesn't know: just looking up at the sky observations, was there anything specific? Were there any big question marks in astronomy that compelled people to take a computational, or more computational, approach to help answer those unknown questions?

MAHABAL: There wasn't only one kind of question. There has always been an interplay between observational astronomy and cosmology, which drives more toward theoretical things and going to high-redshift objects, and also simulations. Each of these was driving the others. Take, for instance, things like Sloan or Rubin. They're interested in cosmology, understanding things like lensing, or the distribution of galaxies as we go farther and farther out. Supernovae: how far out can we see Type Ia supernovae? Is the relative frequency of Type Ia supernovae with respect to other supernovae the same as we go to galaxies that are farther out? Or the color evolution of these galaxies. There were many, many different questions. For each of these questions, the kinds of observations one might need would be slightly different. But sky surveys were one way of answering many of them. Then, what you could not clearly answer, or where you did not have enough observations, you would try to understand through simulations.

What would happen is that each of them was pushing the others in different ways, and there was not one set of questions. I mentioned earlier, for instance, that we were looking at high-redshift quasars with the DPOSS data. But when we started looking at these, we started seeing that there was another group of objects that were not part of the stellar locus but did not overlap with the high-redshift quasars either. These were the Type II quasars, which had different emission lines and occupied a slightly different area in the phase space, so we could go after them as a population. When you start finding outliers, that starts driving how you do your observational work. In this case it was not necessarily new survey observations, but we were taking spectra of those with the 5-meter telescope at Palomar. That also drove things toward understanding why these objects occupy this area in phase space. What are the emission lines that put them there? Because of that, one could also go and do polarization studies; we would observe some of these with the Keck telescopes. So both theory and observations would drive each other in that way.

ZIERLER: Once the Virtual Observatory was up and running, how did that change things? What new possibilities did it make possible?

MAHABAL: More people could access the data, and they could come in and look at different kinds of things that were happening. But what I think happened mainly is that some of the basic things, like the cross-matching services we made available, people could now use in a more efficient way. Rather than having to have all the software on their own computers, rather than having to get the data themselves, they could log in, pull up some programs, and do the cross-match. I think what ended up happening is that a few algorithms became more transparent to people, they became more user-friendly, and they got utilized more in astronomy. People got to know about more datasets and could access them more easily. In general, the VO itself may not always get very good remarks from some astronomers, because the initial expectation created by VO was very large and many people think it was not met. But at the same time, what VO enabled became deeply ingrained in the day-to-day astronomy that everybody does. When something becomes ingrained and invisible, you forget that it has come from somewhere else, and you feel that it has always been there. I think that is what happened to Big Data astronomy in some sense: more people started using it without really realizing it.

ZIERLER: What about the idea of citizen astronomy, getting more people outside of the academy involved? Did that impact things also?

MAHABAL: Do you mean Citizen Science?

ZIERLER: I mean, backyard astronomers, people that were interested in astronomy, but they were not professional astronomers. Was access to this data useful for the science as well?

MAHABAL: Yeah, anybody could go in, get an account, and get data, but only a handful of people, real enthusiasts, will go and do things on their own. If, on the other hand, there are specific programs set up, where you can contribute to this or participate in that, then more people get enthusiastic and try it. Otherwise, many of the enthusiasts are interested, I won't say mainly, but largely, in seeing beautiful pictures and images. When it comes to doing things, if they're told what to do, then they can come and contribute and help more easily.

ZIERLER: Ashish, how long was your postdoc appointment for?

MAHABAL: The initial postdoc appointment was for a period of one year, and then it got extended for one more year. That's around the time when we founded the Postdoc Association here. Actually, I was one of the founding members of the Postdoc Association.

ZIERLER: Oh, I don't know about that. What is the Postdoc Association?

MAHABAL: Well, there's a group. Now the Postdoc Association has some rights; they meet regularly, they get various speakers from outside, and all that, but no such thing existed at that time. One of the things we said is that when someone first comes in, you don't know the person, so giving them a one-year appointment is fine. But after that, Caltech used to give only one-year extensions at a time, so you had to get a second year and then a third year. If you already know this person who has come in, you should give him or her two straight years after that, because otherwise you spend a lot of time wondering whether it's going to happen, then the visa process, and then there are issues of going in and out and so on. That was one of the things that we talked about and got sorted. But, okay, to answer your question, it was a one-year appointment, and I got an extension for another year and then one more year.

ZIERLER: Do you have a clear memory of when you decided just on a personal level that you and your family wanted to stay here to make this more of a long-term proposition?

MAHABAL: Actually, it was starting to become more and more home, because my wife got a job here several months after, in the very first year, also in the astronomy department, as a system administrator. Staying here was becoming more attractive in terms of the different tools that are available here, the different telescopes, datasets, and people one has access to. I would say there was no one clear point. I did consider, well, I didn't apply anywhere formally, but I did think about whether I should go back to India. There was a time, for instance, when IUCAA was getting more interested in the Thirty Meter Telescope, and I went and spent a few months there. That was years later, of course, but I went and spent a few months considering whether I should move back and take a position working with TMT. Then TMT itself had some issues, and so that never went through. But I was comfortable here.

ZIERLER: The point of transition beyond your postdoc, what was your research at that point? What were you focused on?

MAHABAL: Initially, it was the Digitized Palomar Observatory Sky Survey, looking at the quasars of different types, etc., but in general, variability. With DPOSS, I don't know if we talked about it last time, but we did not, so: DPOSS has these photographic plates that are 6.5 degrees wide, and the plate centers are 5 degrees apart. So there is a small strip of overlap between neighboring plates, about 1.5 degrees wide, and 3 different emulsions on each plate, which gives 6 measurements for objects in the overlaps. Using those 6 points, we started looking for variability, and we found different kinds of objects. That was very interesting because no one had done something like that before. After DPOSS, when we decided to do the Palomar-Quest Survey, that was, again, an area where variability was involved. That is what was driving my science, while simultaneously doing algorithmic and computation related developments on all these datasets.
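
A minimal sketch of that kind of variability test, assuming six calibrated magnitudes with errors per object; the threshold is illustrative, not the actual DPOSS criterion:

```python
# Test each object against a constant-brightness model using the
# reduced chi-square of its magnitudes about their weighted mean.
import numpy as np

def variability_chi2(mags, errs):
    """Reduced chi-square of magnitudes against their weighted mean."""
    mags, errs = np.asarray(mags), np.asarray(errs)
    w = 1.0 / errs**2
    mean = np.sum(w * mags) / np.sum(w)
    chi2 = np.sum(((mags - mean) / errs) ** 2)
    return chi2 / (len(mags) - 1)

# Hypothetical object: 2 overlapping plates x 3 emulsions = 6 points.
mags = [18.20, 18.24, 18.90, 18.15, 18.85, 18.21]
errs = [0.10] * 6
if variability_chi2(mags, errs) > 3.0:   # arbitrary cut for this sketch
    print("flag as variable candidate")
```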

ZIERLER: What were some of the big findings coming out of Palomar at that point?

MAHABAL: From Palomar in general?

ZIERLER: Specifically related to what you were doing.

MAHABAL: We were able to find these quasars, lots and lots of redshift-4 quasars, of course, but we were also able to find these Type II quasars. Then we started finding some high-redshift quasars by combining datasets from here and there. We did a lot of polarization studies. Then, because this was a large dataset, there were many outliers being found. We started connecting to different groups, but it was mainly in this AGN and quasar related space that I worked. I didn't get too involved in other kinds of variables at the time.

ZIERLER: Were you involved in the creation of getting Palomar Transient Factory up and running in 2009?

MAHABAL: No, I was not directly involved in the early part of PTF because at that time, I was working on CRTS, the other survey from Arizona for which we were getting data directly and doing lots of things here. Again, for CRTS, I wrote a lot of automation related things, classification related things, pipelines, etc. Each of these surveys was one step of an improvement, in some sense, from what we were doing before in terms of automation and real-time responses, etc. PTF was starting at the same time. I did help a little bit with some aspects, but it was later on in PTF.

ZIERLER: Would you say that PTF was the major observational project that really demonstrated the power of computation and machine learning in astronomy?

MAHABAL: I don't think so, not on its own. PTF definitely came after what we had done with Palomar-Quest and what we were doing simultaneously with CRTS. What the Palomar Transient Factory did is show that you can do something on an industrial level: have one objective and be able to churn out those kinds of things in a very methodical way. In that sense, PTF was good. Partly because it had one objective, to find these kinds of objects really well, they did that well.

ZIERLER: What were some of the most significant findings that came out of PTF?

MAHABAL: PTF was mostly about explosive events. There is a diagram where you can show the timescales of events and how many of those are found at different energies. That used to have some gaps. Using PTF, they were able to find objects that filled some of those gaps, improving our understanding. Gap transients, as those are called.

ZIERLER: Did PTF have a relatively short lifespan? Was it intended for it to go on longer than it did?

MAHABAL: PTF originally, I don't remember the exact duration, was either three years or five years.

ZIERLER: Yeah, I believe it was 2009 to 2012.

MAHABAL: Then there was iPTF, the intermediate PTF. That was the extension that went on. At that time, I think ZTF was already planned, but it took more time for the focal plane to be put together using the CCDs, the camera development for ZTF in general. So it was good that iPTF could continue during that time. I think 2014 is when ZTF started, maybe; so '12 to '14 may have been iPTF then.

ZIERLER: Did you have opportunity to work with Shri Kulkarni at all?

MAHABAL: Yeah, in ZTF I've been doing that. As I said, with PTF I did a little bit, but not really extensively. With ZTF, I have been leading the machine learning, and in that sense I have been working with Shri.

ZIERLER: Now with the intermediate PTF, what was the intermediate about it? What was improved?

MAHABAL: I think that's a good question. I don't think I have an exact answer for that. Yeah, I think PTF was starting to approach ZTF but ZTF wasn't ready yet. I don't know if it was the new CCDs that—no, I think the CCDs were the same as before. I'm not sure.

ZIERLER: What were the considerations for getting Zwicky up and running?

MAHABAL: It was going to have a much bigger field of view. PTF had only 8 CCDs, whereas ZTF has 16 bigger CCDs. What was done is that the entire focal plane was now covered with CCDs with 1-arcsecond pixels, and better everything. Even the focal plane was re-planed. Then there's a robotic arm that was put in to exchange the g and r filters and, for the partnership, also the i filter.

ZIERLER: Now, the bigger field of view, is that to suggest that there are technological advances in the CCDs, or are there simply more of them?

MAHABAL: In this case, there are more CCDs. But one has to be able to abut them closely enough together so there are no large gaps. Plus, when you go to the edge of the field, there is vignetting: you don't get the same quality of light at the edges. There are correctors used to make sure that you can get as good light as possible. Then, during the pipeline, you do additional corrections to make sure that you're recovering what you may have lost because the pixels were at one of the edges of the field of view.

ZIERLER: What is the impact on the amount of data that the facility is producing? Is it double simply by doubling the CCDs?

MAHABAL: Yeah, if you simply double the CCDs, the raw data will double, yes. But that's only the raw data, because then you also have to add a factor for the processed data and the various other things that you do with it. To compare: with Palomar-Quest, we used to get maybe 50 gigabytes a night, and with ZTF it's 1.4 terabytes, so more than an order of magnitude more.

ZIERLER: The designation leading the machine learning effort for ZTF, what does that look like? What are your responsibilities?

MAHABAL: Even in machine learning, there are many different aspects. The two broad ones are: first, making sure that the objects one is getting do not include artifacts; and second, taking the good objects and being able to do different things with them. For the first, where we want to make sure there are no artifacts, there is the real-bogus classifier that we have for objects that are point-like, and then there is the streak classifier for nearby asteroids, where we want to make sure that the streaks we see are because of an asteroid rather than a satellite.

The algorithms that do those two separations are one main aspect. The second aspect is where you look at star-galaxy classification, or at supernovae and transients being classified: understanding their nature, deciding which of them should be sent to a larger telescope for observation, and so on. That is very broadly speaking. Then one also tries to keep pushing the envelope. For instance, you have now identified stars as stars, and identified variables among those, but what kinds of variables are they? Running programs for that, subdividing variables into different subclasses. There are also newer and newer techniques coming in all the time: understanding how you could use and apply them, what you could do faster, or updating the earlier models that have been built so they become more efficient. One of the first goals is to reduce the amount of time that people spend looking at bogus objects, and then to go on to all these other things.

At one of the meetings, I think it was the last ZTF team meeting, one of the supernova researchers made a statement that the real-bogus classification these days is really good: "We don't get any bogus objects in our stream these days." I was alarmed by that, because it's not entirely a good thing to hear. It's a good thing in that there are no bogus objects, but it's a bad thing because it may mean that we are also throwing out some good objects, right? When you put a threshold on a classifier's confidence, there are false positives and false negatives. We want to avoid false positives, but we try to minimize false negatives too. False negatives are the objects that are real but are not being called real. If you put the threshold very high, then everything you are getting is pure, but maybe you are missing a few real ones. The ideal thing to do is to have two thresholds. Above the higher threshold you get a pure sample, but you also look at the objects between the lower and the higher threshold, so that you can keep sampling that band: you keep excluding the false positives there, but whatever real objects are there, you can push to the other side. Those are the kinds of things one needs to keep an eye on.
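
A minimal sketch of that two-threshold idea; the threshold values here are hypothetical:

```python
# Scores above the high threshold form a pure "real" sample, scores
# below the low threshold are rejected, and the band in between is
# kept for human or further automated vetting, so real objects near
# the boundary are not silently lost.
def triage(score, low=0.3, high=0.8):
    if score >= high:
        return "real"    # pure sample, very few false positives
    elif score >= low:
        return "vet"     # gray zone: sample and inspect these
    else:
        return "bogus"   # confidently rejected

for s in [0.95, 0.55, 0.10]:
    print(s, "->", triage(s))
```

Monitoring what fraction of vetted gray-zone objects turn out real is also a way to catch drift in the classifier itself.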

ZIERLER: I wonder if you can explain in some technical detail, managing the machine learning operations for ZTF. What's the mode of operation? The machine learning suggests something interesting; how do you translate that into focusing on that signal, and into additional observation with higher-resolution instruments?

MAHABAL: Again, there are different sub-teams within ZTF, and it differs from one team to another how they do it. In the case of supernovae, for instance, one of the foci is finding them as early as possible. You typically have a stellar explosion, so its brightness increases; it reaches a peak and then starts going down. But you're not observing continuously; you have sporadic observations. In the old days, supernovae would typically be found a few days after the peak, but because we are now very good at automating many of these methods, we find supernovae before the peak has been reached. Because we have multiple filters, the g and r filters and other information, one can use all of that to say roughly what the character of the object is. Classification is a stronger word, putting it in a class; characterization is what you try to do quickly.
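
As a toy illustration of early characterization, one can check whether a transient is still rising by fitting its recent photometry; remember that magnitudes decrease as an object brightens. The numbers are made up, and this is not the ZTF pipeline:

```python
# Toy pre-peak check: fit a straight line to recent magnitudes; a
# negative slope (brightening) suggests the transient is still rising.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 4.0])      # days since first detection
m = np.array([19.8, 19.3, 19.0, 18.5])  # hypothetical g-band magnitudes

slope = np.polyfit(t, m, 1)[0]          # magnitudes per day
print("still rising" if slope < 0 else "at or past peak")
```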

One of the machine learning processes that we have in place is for supernova classification. There is the SEDM, the Spectral Energy Distribution Machine, which is at the 1.5-meter telescope. From the 1.2-meter telescope you get transients, you characterize them early on, and then you can take low-resolution spectra of a small number of them. We have a machine learning algorithm which classifies those automatically and then registers them with TNS, the Transient Name Server. We keep pushing on that, improving it, so you separate Type Ia from non-Ia. The Type Ia get their own follow-up because you can do various things related to cosmology with them. The non-Type Ia are interesting to other researchers for various other reasons. We tried to classify them better using machine learning, but we quickly discovered that we don't have enough numbers to do that automatically with the small number of points we have in their light curves. We started looking for other things, and we soon realized that if we concentrated on whether they're hydrogen-rich or have more helium, that classification is simpler. Understanding what kinds of questions you can direct yourself toward, based on the available data, becomes possible as you start addressing these questions. That's one example where we wanted to do one thing, but the data indicated that we should be asking a slightly different question.

ZIERLER: How do you determine what higher resolution instrument to send the signal to?

MAHABAL: It's generally a choice that gets defined for you by what is available: where the spectrographs are, what time of day it is at different places. The GROWTH network is one network I could mention, where they've been trying to follow various objects, kilonovae and so on, with many telescopes around the globe in different locations. When you have a few objects, then, given how bright they are, what spectrograph is available, and what time zone it is, that will narrow things down to a smaller number of telescopes on which you can schedule some of them. It is not so different from how, for instance, the ZTF observations themselves are scheduled. There are hundreds of fields that can be observed by ZTF, you have a certain cadence with which you want to observe them, and so many have been observed N times, M times, K times, whatever. At the start of the night, the automated scheduler decides how the observations are going to happen, and if a TOO, a target-of-opportunity interrupt, comes along, how that would change and how you could go back to the original schedule. Similarly, there are schedulers which look at these follow-up observations and can say: okay, for this object we want this depth because of its brightness; many telescopes are available, but only these have spectrographs and are in the correct time zone. That is how you can do the scheduling.
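
Schematically, with entirely hypothetical telescopes and criteria, that selection step might look like:

```python
# Filter a follow-up telescope list by limiting magnitude,
# spectroscopic capability, and whether it is currently night there.
# The telescope names and numbers are invented for illustration.
telescopes = [
    {"name": "T1", "limit_mag": 21.0, "has_spectrograph": True,  "is_night": True},
    {"name": "T2", "limit_mag": 19.5, "has_spectrograph": True,  "is_night": False},
    {"name": "T3", "limit_mag": 22.5, "has_spectrograph": False, "is_night": True},
]

def eligible(target_mag):
    """Telescopes that can take a spectrum of a target this bright now."""
    return [t["name"] for t in telescopes
            if t["has_spectrograph"] and t["is_night"]
            and t["limit_mag"] >= target_mag]

print(eligible(20.3))   # -> ['T1']
```

A real scheduler layers on cadence bookkeeping, weather, and the logic for absorbing interrupts and returning to the nominal plan.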

ZIERLER: I wonder if you can point to a specific example throughout the process. There's the survey, there's the machine learning, you're alerted to something interesting, you send this on to another telescope. What's an example of something that we've learned in astronomy that almost certainly would have been missed, absent this process?

MAHABAL: Gravitational waves would be a great example of that. LIGO detects a transient and sends out an alert over a very large area, several tens of square degrees. Immediately, there are many listeners watching for exactly those kinds of alerts, and their observatories get triggered. This is a slightly different case because we don't know the identity of the object. What these telescopes do is scour the sky in that area: new observations, because you want to see what the object is doing now, but also comparisons with old reference images. Doing these comparisons, you come down to a few objects that are likely counterparts, and then you do more observations and, simultaneously, reasoning and theory as to which of them cannot be, or are likely to be, the counterpart, until you pinpoint some. These are discoveries that wouldn't have happened without such a setup.

ZIERLER: With the detection of gravitational waves in 2015, did you see this really as a triumph of machine learning?

MAHABAL: Yes, there was definitely a lot of machine learning at various different stages that was involved in that.

ZIERLER: Were you involved at all in the LIGO collaboration?

MAHABAL: I was involved in the LIGO collaboration very broadly speaking. I have been using some of the data, but I was not doing anything with respect to these detections or the follow up of that.

ZIERLER: Given the excitement, the Nobel Prize, in what ways did the detection of gravitational waves really put machine learning on the map for astronomy?

MAHABAL: I don't think anyone looked at it as putting machine learning on the map, really, because gravitational waves had been predicted and had also been indirectly observed; this was direct confirmation of that. And machine learning, though it did play a major role, was still tiny compared to this entire new window opening up because of gravitational waves and their detection. But it was definitely always there at some level.

ZIERLER: To be clear, I mean, it's a counterfactual, you can never know, but absent machine learning, it's quite possible that we would have missed those signals?

MAHABAL: Yes. Absolutely.

ZIERLER: Ashish, I'm curious, just either intellectually or scientifically, what level of overlap is there between your responsibilities for machine learning at ZTF and your work at the Center for Data-Driven Discovery?

MAHABAL: Oh, there's a lot, because I'm primarily interested in data related things, and machine learning uses a lot of data; data is really the currency for it. I'm always on the lookout within ZTF for different areas where I can apply machine learning, and the same is true elsewhere, for instance in the methodology transfer that I do through CDDD to early detection of cancer. There are so many parallels. Even the data types: in ZTF we have catalogs and images which we use for machine learning, and in other areas it's the same. Thinking back and forth between the two happens all the time, more or less transparently. Abstraction of the data is something I really emphasize to people: yes, you take data from an instrument, and the scientist is very important and critical, but how do we abstract the data so that we can do more in an automated way? And when it comes back to making decisions about what we have found, make sure that the domain expert is there. There are quite a few parallels between the two; it happens all the time.

ZIERLER: Ashish, what institutional relationships do you have with IPAC?

MAHABAL: Well, IPAC maintains all of the ZTF data. IPAC members are in the machine learning meetings; they're in the ZTF meetings. I have discussions with them all the time, about all of the solar system related data from ZTF and so on. That's the relationship that I have. I have not had anything specific beyond that for some time, but I talk to many of the people there all the time.

ZIERLER: Is your relationship with JPL's Data Science group, is that through IPAC, or that's separate?

MAHABAL: That's separate.

ZIERLER: What is that? What is the Data Science group at JPL?

MAHABAL: The Data Science group at JPL has a working group, and they do lots of things related to all kinds of JPL work. There is spacecraft autonomy, but they also have things like the cancer related work I mentioned, which is about organizing data for the National Cancer Institute; I'm involved in that. There are also other initiatives. Just today I had a meeting where we were discussing how they have organized some papers about Cassini and Titan and so on into a knowledge base. That knowledge base can be queried, essentially to seek out resources that are in the database, and we discussed how we could extend that to other areas, including cancer, bioinformatics, and so on. Somewhat like ChatGPT, though not at the same level: being able to question it and retrieve, essentially, different datasets. Then there is also the Planetary Science Data group there. They have lots of things happening with planetary datasets, and I have been working with them on a few things as well.

ZIERLER: One of the most exciting areas, of course, in astronomy right now is exoplanet research. What opportunities do you see in learning about exoplanets from a machine learning perspective?

MAHABAL: Machine learning has been used very extensively in exoplanet detection. For instance, with Kepler we found many, and right now the main machine that has been finding exoplanets is TESS. TESS has very large pixels, but the fantastic thing about TESS is that it produces very uniform time series. That means one can apply many, many time-series-based algorithms, even some that we normally cannot apply to the gappy time series from the rest of astronomy. In a transit detection, the planet goes around the star and blocks the star's light for a tiny bit; it is those dips in the time series that the algorithms look for. When you find these dips, it doesn't necessarily mean an exoplanet, because there are many other possible reasons for a dip. For instance, there could be binary stars, two stars going around each other; that's the most common example.

But there can also be variables where you find dips because, just like sunspots, there can be star spots, and those can cause dips. We use machine learning algorithms with TESS data to look for possible exoplanet candidates. You create a training set from known examples: which of these are binaries, which are actual exoplanet candidates, and which are junk. You train the algorithm with that, then unleash it on a much larger set of light curves, and it gives you back exoplanet candidates. That's one of the simplest approaches, but there are also more advanced ones. Because TESS pixels are so large, once you have found exoplanet candidates, you need to compare them with other datasets to make sure you are finding the right thing. One thing that is done is to compare with the Gaia dataset, which has exquisite astrometry, to see whether there are likely to be binaries within that pixel, and exclude the candidate that way. The time series itself doesn't tell you, but combined with Gaia data, it tells you that it is something different.
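
A toy version of that supervised triage, with synthetic data and made-up features standing in for the real vetted training sets:

```python
# Train a classifier on labeled light-curve features, then score a new
# candidate. Features here are invented: (transit depth, duration in
# hours, secondary-eclipse depth). Binaries tend to have deep primary
# dips and visible secondaries; planets have shallow dips and none.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 200
binaries = np.column_stack([rng.normal(0.10, 0.03, n),
                            rng.normal(4.0, 1.0, n),
                            rng.normal(0.05, 0.02, n)])
planets = np.column_stack([rng.normal(0.01, 0.003, n),
                           rng.normal(3.0, 1.0, n),
                           rng.normal(0.0, 0.002, n)])
X = np.vstack([binaries, planets])
y = np.array([0] * n + [1] * n)   # 0 = binary, 1 = planet candidate

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A new, shallow, secondary-free dip should lean toward "planet".
print(clf.predict_proba([[0.012, 2.8, 0.001]]))
```

In practice the labels come from painstaking human vetting, which is exactly why good training sets are the expensive part.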

ZIERLER: On the all-exciting question of detecting biosignatures or even technosignatures in exoplanets, what role do you see machine learning playing?

MAHABAL: Again, a huge role, because there, too, how you define a technosignature is one interesting question; there are many, many different ways. Then, how you determine whether something is a technosignature once you think you have found it is another question. In most of these cases, what is critical is that there should be a null hypothesis, the assumption that there is no technosignature, and you should be able to discard many other explanations before you can say: because we have eliminated all the other possibilities, this looks like a technosignature. There may be a signature that is really, really obvious, but in most cases it's subtle, and in almost all the cases so far we've been able to explain it away with something else. You could find a light curve, for instance, with certain dips, and the dips have certain patterns. It could be something like a Dyson sphere, where a civilization has built intricate structures around their star or their planet, and that is what is blocking the stellar light. But before you say that is what it is, you should be able to eliminate other possibilities, like changes in the star itself, or debris causing the dips rather than a structure that beings there have built, right? All this needs quite intricate machine learning to tell apart, because the signals we have are going to be mostly fragmentary, just because of the nature of how observations are done.

ZIERLER: Ashish, what about much closer to home in our own solar system, looking for life on the icy moons, even signs of extant life on Mars, where do you see machine learning contributing?

MAHABAL: Again, everywhere. For instance, when we visit nearby moons or other planets, there will be cameras looking at these bodies from a distance first. The comparison is going to be done with models that we have built on Earth using large datasets gathered here. What does life look like? What are the telltale signatures of life, both visually and in terms of motion, when we have images? Those are the models we will start with when these algorithms, along with the compute infrastructure, go to distant planets and start looking at things there. That will form the very basis of how things are done. Then, once we start learning more from there, that will improve our training samples locally. There are things like generative adversarial networks, which try to generate new images that can be compared with what has actually been seen, to try to attach significance to them. That is where, again, real footage from many of those bodies can be incorporated into our training sets to improve them.

ZIERLER: Ashish, moving our conversation closer to the present, when the COVID pandemic hit and everybody was remote, did you feel well-positioned just by nature of what you do that you were able to keep up operations remotely?

MAHABAL: Yes. The Palomar Observatory, for instance, was closed for a small amount of time, but it's a very isolated place. It was very easy to say that the people who are there should stay there, and all the visits from here stopped. In terms of day-to-day work, Caltech also was fairly good about how to isolate things; we could work from home, come here, and so on. Travel had stopped, of course. There were no conferences for a long time. In fact, I came back from Puerto Rico in March, and I think it was one or two days later that LA shut down. But apart from travel, the other things continued. In fact, one thing that happened during that time because of the pandemic is that the number of meetings went up, because everyone was home, and no one could say, "I'm on travel. I can't attend this meeting." That definitely changed in the other direction.

ZIERLER: And just to move it right up to the present, what are you currently working on, besides, of course, the proposal?

MAHABAL: Yeah, ZTF ZARTH that I may or may not have mentioned—

ZIERLER: I don't think so. No.

MAHABAL: —this is a game that we're developing initially for outreach.

ZIERLER: Oh, this what you showed me on your phone?

MAHABAL: Yes.

ZIERLER: Yes.

MAHABAL: Initially for outreach, but we would like to extend it to Citizen Science also. What we do is that, nightly, we put about 100 transients that we find with ZTF on the phone. This is an extension of the Sky Map app. Sky Map is an app you point at the sky, and you see on the phone the sky that is behind it, so you can identify constellations, planets, and so on. What we have managed to do is put an extra layer on that with transients from ZTF. Right now we have four different kinds of transients with four different icons. As you move your phone around, if there is a transient in a particular direction, you will see it there, and when you touch it, you get some metadata about that object: what its nature is, how bright it is, what its location is numerically, and so on. But then there is also the exciting feature: you can catch it. We are doing it like Pokémon GO, but for transients. You can catch the object, and then it will show you more metadata, such as how rare it is, and whether there are other objects of that type, various things. Then you can have a collection of them.

ZIERLER: Is ZTF going strong? Are there plans for upgrades or for a new instrument at Palomar?

MAHABAL: We are currently in ZTF Phase II, because the first five-year survey was done, and it's the second three-year survey that's going on, but we will be finishing officially around September this year. Then there may be a small extension from NSF. We are also thinking of something like ZTF III; we may change the cadence a little bit, etc. It's up in the air right now, but very likely something will happen. Rubin, with its LSST survey, will be the bigger telescope taking a lot more data from the southern hemisphere, but its start date has been pushed back a little for various reasons. The astronomy community would definitely want a survey like ZTF to continue at least until the time Rubin starts, because many of the alert brokers are currently using ZTF data for classification and to build the general infrastructure for LSST/Rubin. Not having anything would not be a good thing for the brokers.

ZIERLER: To be clear, ZTF has utility even when Rubin is up and running?

MAHABAL: Oh, absolutely. Also because ZTF is in the north and Rubin is in the south; there's that, too. There will be a small overlap region, which will be good in itself because you can be observing at different times, but we'll also be covering some parts of the sky that LSST will not be looking at.

ZIERLER: Ashish, when Astro2020 came out, the Decadal Report from the National Academy, what were some of the most important takeaways for you? What was its approach to machine learning, for example?

MAHABAL: Machine learning was not really as highlighted as I would have liked. There could have been much more there, because it encompasses everything. I think part of the reason could be that machine learning is so ingrained, somewhat like the VO tools I mentioned, that people don't see it as an obvious thing to highlight or to call important. And in machine learning there are so many pieces. For instance, if you don't have a good training set, machine learning is generally useless. There are ways in which you can do unsupervised learning, clustering, etc., but the majority of machine learning that happens in astronomy is supervised, where you need a good training set; unless you have a good training set, you are not going to get good answers. That's where we can talk about the trash in, trash out kind of thing. You need to make these good sets, and to make them you need to invest in that. From that point of view, it's very critical that astronomy keeps up to date with all the data science advances and invests more in machine learning.

ZIERLER: Do you think for Astro2030, to really look to the future, will machine learning be so deeply ingrained that it's going to be unavoidable to treat it more fully than previously?

MAHABAL: I think so, because if you, again, take examples like ChatGPT, which suddenly sprang upon people: ChatGPT is not general AI, it is not even close to how humans are, but it is so much better than anything that existed before. I think of it as four different levels: all the other programs at one level, ChatGPT above that, us above that, and general AI above that. What's starting to happen is that many things are starting to rise at least to the level of ChatGPT, and even achieving that level is very good compared to everything else way down at the floor. By 2030, which is several years away still, I am sure there will be huge developments, and you will see far more machine learning. More than machine learning, artificial intelligence really, because machine learning refers to the training aspect, whereas artificial intelligence, as the name suggests, is being able to connect more dots together in a more automated fashion. We are going to be seeing more of that.

ZIERLER: Ashish, you mentioned that your involvement in the Thirty Meter Telescope goes back quite a ways. The saga continues. If it does get built, what will that mean for you?

MAHABAL: Again, the Thirty Meter Telescope, because of its large mirror, is going to be able to get very high-resolution observations of fairly faint objects. That means that for many things we find with our bigger surveys, we'll be capable of actually resolving what they are, using spectroscopy, for instance, one of its modes. Right now, not having very large telescopes means that for fainter objects we may need to rely on machine learning to tell us what they are, which is fine if we can extend our machine learning well enough. But with something like TMT, you can get exquisite actual observations of some of those objects, and you'll also be able to get great imaging farther and farther out in the universe, which can form newer datasets for machine learning. It's not there only to vindicate what machine learning has been finding; it provides new targets.

ZIERLER: Ashish, now that we've worked right up to the present, for the last part of our talk, I'd like to ask a few retrospective questions about your career, then we'll end looking to the future. I'll return first, of course, to the topic that brought us together. When did you and George and others start to recognize that the computational and machine learning impact in astronomy and then all of science was significant enough that we needed to think about it in historical terms? When did those conversations really start?

MAHABAL: Actually, we have been having that conversation for a while; in general it's important. But when it really solidified was with a colleague in India, Pranav Sharma. I was talking to him during my last visit, about a year or so back. He's a curator and astronomer and enthusiast; he created the Space Museum at the Birla Planetarium in Hyderabad. We were talking about the big datasets and so on, and it came out that maybe it's a good idea to document these things, because there are so many details. Just like we're talking now, it's all over the place, and making it into a journey that people can look at and understand would be good. George was, of course, involved. He and I have been discussing Big Data for more than two decades now; it comes up all the time in our Astroinformatics series of conferences and so on. In some sense, some of that history has been created here; Caltech, in fact, has been a leader in it. He was very receptive to the idea of documentation, as were you when we talked about it, especially from Caltech's point of view. The discussions had been going on at some level, but not in terms of wanting to do it ourselves, now, which I think is a very good time.

ZIERLER: When all is said and done, best case scenario, we're working hard at this for years on then, what are the outcomes for you? What do you want to see achieved?

MAHABAL: Through the history part or in general, the Big Data part?

ZIERLER: Documenting the history of Big Data. What will be accomplished as a result?

MAHABAL: I think what will happen with the documenting is that other people will be able to understand what it is, but also, for ourselves, the most beautiful thing that can happen is understanding the gaps and the biases that have been there. Once that happens, it will help us rectify some of those things. You asked, for instance, what the 2020 decadal survey said about machine learning, and I said there was hardly anything. That is clearly a gap. If there is something that documents how astronomy has been leading this, and also what has not happened in astronomy yet, then I think that will be a good thing for the future.

ZIERLER: I'm curious, Ashish, what do you think historians will find most interesting in these developments, as they think more broadly beyond the technical aspects of astronomy?

MAHABAL: Just as we find gaps, there may also be some redundancies: multiple groups trying to do the same thing at the same time, with similar outcomes. I'm not saying they should necessarily have been doing different things, but if they had been collaborating, a bigger whole could have come out of it. First of all, I think people will be able to realize what beautiful work has happened. That, I hope, is the biggest thing that comes out when the whole picture, the whole jigsaw, is seen. But the gaps and redundancies will also be seen, and whether there would be ways to improve going forward: how do we channel things so that money is used in a better way and brings about combined goals, rather than going by the vagaries of a few people or a few institutes. Again, there is the democratization of science that has happened, of all science, because of the openness, because of the large data archives and data releases going to a large number of places. People will see how enabling that has become, and be able to take it to a new level. NASA has been taking this approach of more open data. It has not been fully implemented, but they would like to do it. If more projects do that, and we see the democratization through that, then I think that will be great. That's another thing that I hope people can see, and we all can see. That is what I feel, and I hope I can see that also.

ZIERLER: Ashish, as you survey your own career so far, what has brought you the most satisfaction, either in terms of your own discoveries in astronomy, or the methodologies that you've innovated that have allowed discovery from others?

MAHABAL: Well, I think both. Astronomy is very fundamental to me. You asked earlier about the religious inclination of my parents, and it is mainly through astronomy that I became an atheist. I could see the universe as the answer, not necessarily created by anyone, but essentially a fantastic thing that is there for us to explore. Being able to explore it is something that gives me a great amount of joy: finding new things, understanding things we did not know before, all the time getting to know more and more. That doesn't mean we know everything, or are even close to knowing everything; I think that is far away, something we may never reach, but I don't think that matters. I would rather continue to discover things on my own, and with others alongside me, than be told the answers, that this is the answer and it has been handed to us. I don't find that interesting. Finding answers to questions that we pose, or that others help pose for us, is, I think, an interesting journey, whether that's through algorithms or data or astronomy or cancer research. In fact, one of the things I do is that I also write science fiction—

ZIERLER: Oh, really?

MAHABAL: —some of these aspects into that. I have published in Marathi, my mother tongue, and my book should be coming out in hopefully a couple of months actually.

ZIERLER: Oh, wow. Will it have an English translation?

MAHABAL: I've been translating some of the stories myself, and the TechLit organization at Caltech, of which I'm also a member, has been critiquing some of those translations. What I'm hoping is that after this book is published in Marathi, once I have translated all of them, maybe to bring out an English version. There is an anthology coming out of Caltech, stories written by Caltech and JPL authors, and that should happen in May, I believe. One of my English stories appears in that.

ZIERLER: I'll have to check it out. Wow. Finally, Ashish, some questions looking to the future. When we look at programs like ChatGPT, the level of advances in machine learning is almost mind-boggling. A few questions there. First, in the humanities, there's a lot of handwringing going on right now because we're getting to the point where we don't know what a computer has created and what a human has created. Are those concerns transferable to the sciences, in terms of knowing what's true, being able to figure out what's real and what's fake?

MAHABAL: Well, science has always dealt with being able to go to first principles, if required. We have scientists whom we, I don't know if that's the right word, adore because of the work they have done. But we do not say that because such-and-such a scientist said such-and-such a thing, we should accept it. We should be able to prove or disprove it; anyone should be able to prove or disprove it by going to first principles. When someone says that this is how it is, we normally take it to be true not just because that person said it, but because of the reputation of that person, plus the fact that we can check it if needed. Whereas something like ChatGPT is very good with patterns that have been used again and again by a large number of people for a long time. If you ask it questions that are at the edge, it hallucinates hugely. It gives answers that seem very confident and very eloquent, but wrong.

I say that for something like ChatGPT, one should ask it questions one knows the answers to, or close to the answers. In science, when one talks about things that may or may not be true, the scientific method is, I think, the key. So long as we can continue to follow the scientific method, we can keep getting answers. When we make mistakes, they can be corrected, and when we see a correction, we know that what was there before was wrong. Newton's laws of motion, for instance, compared to Einstein's, are an approximation. For most practical purposes we can use them, and the error we get is tiny; at another level they are also wrong, in the sense that they can be superseded, and we understand that. But that doesn't reduce the importance of what Newton did. In the same way, for most of science, because of the scientific method, we can know what is right and wrong for whatever one is doing at the time. Following the scientific method is, I would say, the critical thing.

ZIERLER: Given all of the phenomenal advances in machine learning, are you at all concerned that machine learning's capabilities will outstrip our abilities in instrumentation and observation? Are there limits in the physical capacities of these instruments that machine learning will just outpace them, it'll tell us everything that's interesting, and the instruments can't keep up?

MAHABAL: I don't think machine learning by itself is going to do that, because it is always going to be dependent on data. As I said before, with current observatories like ZTF, which can go to magnitude 20 or 21, we can take spectra and find out the ground truth for those objects. With machine learning, we can extrapolate beyond that: for instance, when LSST/Rubin gets its observations, machine learning will be able to classify objects a magnitude or two fainter, whether or not we get spectra for them. But what if we want to go even fainter? Then the extrapolation that machine learning can do will not really hold, because there has to be improved ground truth.

The training set that I talked about before has to improve, and that improvement will have to happen through instrumentation, or actual visits to other places, or a change of point of view, or theoretical knowledge that we can bring in, and so on. I don't think machine learning is going to be able to do that by itself. We'll need other things, things that can step outside machine learning. Machine learning is a closed world in some sense, and there are things it won't be able to reach. It's somewhat like Gödel's theorem coming in, some kind of incompleteness: you have a consistent set of axioms, and either it is powerful enough or it is not. If it is not powerful enough to reach everything, then that's no good anyway. But if it is powerful enough to encompass things, there can always be an additional axiom you can add that takes you further, a statement that you cannot prove true or false within that system. Similarly, there will be things that machine learning cannot really decide as true or not true, and you can extend the system.

ZIERLER: Ashish, what about the impacts of machine learning for career prospects for up-and-coming astronomers? I'm sure you've heard AI is now threatening jobs like pharmacists or lawyers or accountants. What about astronomers? Are you ever concerned that you're contributing to the demise of the profession?

MAHABAL: I don't think so. Even in the other professions where things are going away, it is the automatable parts, the lower-rung parts, that are really going away, not the highest ones. Human thinking is still far, far above the current automation, the current AI or ML capabilities, and it will be for some time, a few decades at least. Eventually, yes, AI will be the next step in natural evolution, from unicellular to nonconscious multicellular to humans, and that's inevitable. I feel that if it happens in our generation, so much the better for us, because we'll be able to go along and do something.

One of the things that's not going to happen, at least for a very long time, is AI acquiring ambition: AI asking itself why it exists, or different AIs coming together to, say, annihilate humanity or something. Those are still science fiction things, or science fantasy even. There will be better and better AIs, but they will still be expert systems. ChatGPT is fantastic at doing various things in English, but it cannot come out of the computer and do something. When someone loses a job, that is a real thing, and that is bad. The way I put it is that ChatGPT is like a young person who has learned the patterns of a language without learning the grammar. Now, it also knows a little bit of grammar, because it has learned the teaching of grammar as patterns in some sense. Anyone can pick up a foreign language without really learning the grammar if you have stayed with locals and keep talking to a lot of people. ChatGPT is something like that, and the human jobs that can be fit into that pattern are the ones that can go. But there are also other things, like DALL-E or Stable Diffusion, which can draw pictures. Does that mean that artists will lose their jobs? Not necessarily, because there are media that DALL-E cannot handle, and there are various things you can do with those. There are always going to be things that humans can do, for a long time, for at least a few decades, where machine learning is not really going to replace us or take over or belittle us.

ZIERLER: You're saying that at least for the duration of your career, astronomy will still need astronomers?

MAHABAL: I think for a long time, astronomy even more so. I was talking about the other fields, where for a few decades there will always be things that you can find to do. But astronomy, as we talked about, involves instrumentation and so on, so astronomers will be needed until we have autonomous robots building their own astronomy observatories. And that would be a fantastic day, right? If AI is trying to do astronomy.

ZIERLER: Finally, Ashish, last question. Looking to the future, what are you most excited about personally? What are the prospects for discovery in astronomy made possible by machine learning?

MAHABAL: I think just being able to do a complete census of our galaxy, and thereby other galaxies: trying to understand all the kinds of things we have in our galaxy, classes and subclasses and all the details related to that. We are not quite there yet, but that's where we are going in the next steps. Even understanding all kinds of asteroids within our solar system, starting right there: ZTF discovered a Vatira, an asteroid inside the orbit of Venus, and we didn't know whether one existed. Just understanding how things are around us, then going farther, understanding our neighbors, and going all the way across our galaxy, understanding all the different kinds of phenomena that are there and what's happening, I think that's fantastic.

ZIERLER: It's exciting indeed. Ashish, it's been a great pleasure spending this time with you, and I look forward to our additional collaborations. I'd like to thank you so much.

MAHABAL: Thank you. Thank you so much.

[END]