
The transcript of AI & I with Bradley Love is below for paying subscribers.
Timestamps
- Introduction: 00:01:00
- The motivations behind building an LLM that can predict the future: 00:01:58
- How studying the brain can solve the AI revolution’s energy problem: 00:11:14
- Dr. Love and his team have developed a new way to prompt AI: 00:13:32
- Dan’s take on how AI is changing science: 00:18:27
- Why clean scientific explanations are a thing of the past: 00:22:54
- How our understanding of explanations will evolve: 00:29:49
- Why Dr. Love thinks the way we do scientific research is flawed: 00:37:31
- Why humans are drawn to simple explanations: 00:40:42
- How Dr. Love would rebuild the field of science: 00:45:03
Transcript
Dan Shipper (00:01:00)
Bradley, welcome to the show.
Bradley Love (00:01:02)
Yeah, thanks, Dan. Thanks for having me here.
Dan Shipper (00:01:03)
Yeah, I'm super excited to have you. So, for people who don't know, you are the professor of cognitive and decision sciences in experimental psychology at University College London, and you are also one of the main builders of a large language model that is focused on helping people do better neuroscience research, called BrainGPT. And I'm super psyched to have you on. This is going to be a slightly nontraditional episode because I think we're going to go deep into science and using AI for science, and sort of how science might change as a result of AI. But, yeah, super psyched to have you.
Bradley Love (00:01:42)
No, I'm excited too. And like you hint at, I think there are larger ramifications that go beyond neuroscience, which we developed this model for, that should affect all your listeners.
Dan Shipper (00:01:53)
So, let's get started with BrainGPT. Tell us about what it is.
Bradley Love (00:01:57)
Sure. I mean, first maybe, I’ll give some of the motivation. So, like probably a lot of your listeners, I was never really a big tool development person, but I just saw how science was going and its exponentially increasing literature. And we just can't keep up. It's just not really a human-readable literature.
And so just kind of getting a grip on it, it seems like we need tools to do that. And people are making all kinds of tools, particularly using large language models, but we kind of had a different take on it. So, there's a lot of great work that we call backward-looking, which is not to be pejorative. It's work involving summarization of the scientific literature, kind of writing instant reviews or almost meta-analyses in cases, and that's great and that's valuable. But we want to focus more on what I think is really important in science, namely prediction, what we call forward-looking, and can we actually predict the outcome of studies before they happen?
And so we just really have this project and I'm happy to dive in as far as you like, but we wanted to see can large language models—both just off-the-shelf models, but also models that we fine-tune on 20 years of the neuroscience scientific literature—can they actually predict the results of neuroscience studies or experiments better than human experts like professors of neuroscience? There's a lot to say, but in short, they can. They're a lot better at it.
Dan Shipper (00:03:28)
That's fascinating. Wait. So just to go back to the original motivation, it sounds like the first thing was, okay, there's way too much information. There's way too much science being done for anyone to keep up. But it sounds like the thing that you're building, BrainGPT, isn't necessarily about summarizing the research. It's about predicting what future research might hold. So, what's the connection between not being able to keep up with the literature and future research predictions?
Bradley Love (00:03:52)
Sure. That's a great question. Because I think to predict the future, you have to have some understanding of the past, so not necessarily a nice, clean text that summarizes, but a model that could draw on thousands of findings through different literatures at different levels, because neuroscience is multi-level. It has everything from psychology and behavior all the way down to really low-level cellular, molecular findings, things involving DNA and so forth. And so no one could really draw on all that information, but it might be that biology is really messy. It's not how computer scientists create these abstraction layers. We have the hardware, the software, and so forth.
So to make a prediction, you might very well need to draw on all that information. And why do you want to make a prediction? Because I mean, well, first, there's so many uses in science for this. So you may want to run a more informative study. So, if BrainGPT predicts that your study is going to work out as you expect with 99.9 percent certainty, there's really no information gain in that. There's no reason to run the study. On the other hand, if the system says, oh no, this is unlikely to give the pattern of results you expect, but you have an intuition that the literature has gone off course and there's a systematic bias, and that same bias is affecting what BrainGPT is trained on, then, in some sense, you're trailblazing, you're making a really impactful discovery. So it really touches everything: replication is a huge issue in science. Most findings just don't actually replicate. And so I think we could, in the very near future, use systems like this to kind of get a handle on what's true, what we could count on, what next step we should take in scientific discovery.
Dan Shipper (00:05:40)
When you're training it, how are you taking into account the fact that p-hacking occurs and maybe the literature is somewhat damaged? Is it trained on that literature? How do you filter that out, I guess?
Bradley Love (00:05:53)
Yeah. I mean, right now, not so much, but that's something that's definitely of interest. So the way I see it, I don't think scientists see it this way. I mean, I'm a scientist, but most people think of papers as individual contributions or discoveries. But I think of each contribution—even if it's not p-hacked—as really flawed, noisy, incomplete. You have this tapestry of thousands of papers, and so hopefully, if you just aggregate— It's almost like in machine learning, if you do an ensemble: the ensemble of everybody hopefully has the signal in the correct direction. I mean, that's why I raised the possibility before that there could be systematic biases or issues, but I think a lot of the problems just really come down to statistically underpowered studies. And that opens the door to p-hacking or just a careless mistake. And so hopefully there's not that many systematic flaws. And if we could stop reasoning about individual papers and instead reason about thousands of papers at once, I think we'll get the signal.
Dan Shipper (00:07:00)
That's really interesting. And you said it predicts better than neuroscience experts which studies are going to work and which won't. What are the boundaries around those kinds of predictions? What is it good at predicting?
Bradley Love (00:07:13)
Yeah, yeah. So the way we tested it, we kind of took our cue from what people do in computer science and machine learning, where they make benchmarks—like, the ImageNet benchmark was really critical to developing computer vision models. So we made our own benchmark, BrainBench. And what we did is we looked at the Journal of Neuroscience, which is kind of a standard, well-respected journal in neuroscience. And the reason we chose it is because it gets to your question: it really covers all of neuroscience. There's five subsections to it, and it goes from everything from cognitive behavior, pretty high-level stuff, to cellular, molecular, and systems stuff to developmental and psychiatry-type stuff. And so our benchmark would have these five subscores. And what we did is we took recent publications from this journal that are unlikely to have leaked into the training sets of models, and we trained some models from scratch where we know what they're trained on, but what we did is we just subtly altered these abstracts. So a scientific abstract—to back up for readers that haven't read a scientific paper—tends to have a structure, where there's a bit of background first, a couple of sentences. Then, the method—what kind of experiment was run, and then the result. And so we just altered the result in very subtle ways, keeping the linguistic flow. So if it was like, blah, blah, blah, anterior hippocampus, we changed it to blah, blah, blah, posterior hippocampus. Or if something said this brain activity increases, we had it decrease. And so there was a multiple choice, basically with two options, and we tested, basically, neuroscientists and a whole slew of large language models, including some that we fine-tuned on neuroscience literature, and just compared their accuracy.
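To make the setup concrete, here is a minimal sketch in Python of what a BrainBench-style item and its two-alternative scoring could look like. The abstract text, field names, and helper function are hypothetical illustrations for this article, not actual benchmark items or the team's code.

```python
# Hypothetical illustration of a BrainBench-style item: two abstract versions
# that differ only in the stated result. Not actual benchmark data or code.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class BenchItem:
    background: str       # framing sentences, left unchanged
    method: str           # description of the experiment, left unchanged
    result_original: str  # the published result
    result_altered: str   # the same result with, e.g., a direction flipped
    subfield: str         # one of the journal's five subsections

item = BenchItem(
    background="Spatial memory is thought to depend on hippocampal circuits.",
    method="We recorded fMRI activity while participants navigated a virtual maze.",
    result_original="Anterior hippocampal activity increased with route familiarity.",
    result_altered="Anterior hippocampal activity decreased with route familiarity.",
    subfield="behavioral/cognitive",
)

def accuracy(items: Iterable[BenchItem], picks_original: Callable[[BenchItem], bool]) -> float:
    """Fraction of items on which the judge (a human expert or a model) picks the published result."""
    items = list(items)
    return sum(picks_original(it) for it in items) / len(items)
```

The point of keeping both versions identical except for the result is that a judge, whether human or model, can only succeed by drawing on background knowledge of the field rather than on surface cues in the wording.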
Dan Shipper (00:09:07)
That's really interesting. And I guess, for you, what do you think is the current sort of frontier for neuroscience? What are the interesting problems right now? And how does BrainGPT fit into that?
Bradley Love (00:09:23)
Yeah. I mean, there's so many things. I mean, in some ways neuroscience has been around for a while, but in other ways, it's still a really young discipline, say, compared to physics. And some people could even say it's almost pre- sort of, there's no standard model or anything like that. And in physics—
Dan Shipper (00:09:42)
Pre-paradigm.
Bradley Love (00:09:44)
Yeah, exactly. That's what I was grasping for. Thank you. So a lot of what I do in research—I do empirical research and modeling. But I feel like a lot of the time, I'm just trying to figure out what the question is or how to frame things, but there's so many questions and some of them are relevant. It goes from really high-level stuff to really low-level stuff. We have synaptic change and that's the basis of learning, but even that's up for grabs, how that works. Even at the very low level, how is a memory encoded in the brain? Is it in sort of those synapses, the weights, as we do in neural networks? Is it something more internal to the cell? How does error propagate during learning? So that'd be relevant to deep learning. Does the brain do gradient descent? Does it do something else? And then high-level stuff like, when we understand space, is that the basis for higher-level mental concepts like freedom, justice, or just chair, or is it just a general learning mechanism? So there's all these issues that go from very low-level to high-level. It's such a huge field. I can't really say there's one issue, and I kind of gravitate towards the ones that might have a bleed over or transfer into AI and machine learning personally.
Dan Shipper (00:11:01)
And what are the ones that you think are the most bleeding over into AI and machine learning?
Bradley Love (00:11:06)
Yeah, well, one that I don't work on that seems like an obvious candidate is just transferring power consumption ideas. So modern GPUs are amazing and they power transformers, which power the AI revolution and other technologies. But, of course, these data centers are going up every day and it's stressing the grid, there's carbon impacts and so forth. Whereas our brains are doing a lot of computation, but I guess you just have to eat a sandwich or something and it's a lot less power consumption. So that's sort of a neuromorphic computing application.
Dan Shipper (00:11:48)
I'm just excited for a world where Microsoft data centers are just, like, ordering a big load of Subway.
Bradley Love (00:11:56)
Yeah, exactly. It’s like a reverse Matrix or something—just give it a salad, it doesn't use us as batteries. But yeah, that's really funny. Yeah, but possible. I mean, there's so many things. I mean, I'm really interested in the higher-level stuff. So I still think where people are better is, like, somehow we have some tricks for how we represent the world and tasks and promote generalization. And I mean, in some sense, why everyone's so excited about large language models is that they're base or foundational in some sense, that you could apply them to other tasks, not just the task they're trained on. Whereas previous generations of machine learning models, even the great convolutional models that somewhat cracked object recognition, or AlphaGo doing its games and so forth, those are specialized models, and I think people are still the kings of that flexibility. So there's probably some secret sauce to gain still from humans and how we represent situations and generalize and link things up.
Dan Shipper (00:13:03)
That's really interesting. Wait, can we see a demo of it? Do you have a way to use BrainGPT that we can look at?
Bradley Love (00:13:10)
So, on your show, there's a lot of good guests that do interesting things with prompting. So this is actually a failure of prompting, which might be even more interesting to your listeners, to give them a new way to interact with large language models. So it's not going to be visually exciting because it's just going to be me talking, but maybe it'll be informative.
So we tried doing kind of what you would expect we would do, like firing up GPT-4, which we did back in the day—the previous version—and giving it a few prompts: Hey, pretend you're a neuroscientist. We're going to give you two versions of a scientific abstract in neuroscience. One of them is true. One of them is altered. Which one is the original? My memory's bad, but maybe it was like 61 percent. So that's actually pretty much human level. But instead of interacting with a model, you could actually just have access to the weights, which you don't for GPT-4, but you do for the LLaMA family of models, for a lot of other models, for Mistral, free models and so forth, the Falcon models. There's a lot of models out there where there's access to weights, and Microsoft's Phi models are smaller accessible models. With those models, you could actually just compute what language researchers call perplexity, which is how surprising the text is to the model. So, given what the model's been trained on (it's basically ingested all the scientific literature and set all its weights), how aberrant is this text passage? And so if you do that, it has two real advantages: The first one is it's just way more accurate. So it'll have a lower perplexity for the correct version than the incorrect version.
But what's also really powerful is that you could take the difference in perplexity, you know, how surprising these two passages are, and you could use that as a measure of confidence in the model. And it turns out that confidence is calibrated to its accuracy: when the difference in perplexity between the correct and incorrect passages is larger, the model is more confident, and it's more likely to be correct.
So that's really important for human-machine teaming, because you could put a human and a machine together and get a better result than either one alone. So we have some analyses like that, which we're writing up now for a follow-up publication. But basically, it's interesting why the prompting doesn't work. And I think the issue is that usually in multiple choice, the difference between the options is pretty large. Whereas here, it's really subtle, like that example I gave: anterior, posterior, increases, decreases. It's pretty subtle, and I think the scientific literature, again, is noisy and imperfect. And I also think this task is not like a lot of the tuning these models were given to be conversational, like the reinforcement learning from human feedback or other supervised fine-tuning.
I know these models generalize really well to tasks, but I think this is still kind of a weird task for them to do even with a few examples of prompting. So it just turns out you could really— To me it's like a pattern recognition problem. If you could just look at all the model's weights and get its perplexity out, they do much better. As a matter of fact, they're like in the 83 percent range, so you get a 20 percent bump and humans are only at 63 percent. So they get sort of superhuman good by just being able to calculate the perplexities directly.
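For readers who want to try the approach Dr. Love describes, here is a minimal sketch, assuming an open-weights causal language model available through the Hugging Face Transformers library. GPT-2 is used purely as a stand-in; the actual BrainGPT evaluation uses different models and scoring details, so treat this as an illustration of the idea rather than the team's implementation.

```python
# Minimal sketch: score two versions of an abstract by perplexity and pick the
# one the model finds less surprising. GPT-2 is a stand-in for an open-weights
# model; the real evaluation setup may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Exponentiated average negative log-likelihood of the text under the model."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return torch.exp(outputs.loss).item()

def choose(original: str, altered: str) -> tuple[str, float]:
    """Pick the less surprising version; the perplexity gap doubles as a confidence signal."""
    ppl_orig, ppl_alt = perplexity(original), perplexity(altered)
    pick = "original" if ppl_orig < ppl_alt else "altered"
    return pick, abs(ppl_orig - ppl_alt)
```

The perplexity gap returned here is the confidence measure discussed above: the larger the gap between the two versions, the more the model's choice can be trusted, which is what makes the human-machine teaming Dr. Love mentions possible.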
Dan Shipper (00:16:24)
That's really interesting. So just to kind of play back what you just said: Rather than prompt the model and be like, I'm going to run this experiment, what do you think? What you do is you write a future neuroscience paper, and then you run that through the model and calculate its perplexity?
Bradley Love (00:16:44)
Yeah. I mean, you got the total idea. What we do is so much simpler. We have these two abstracts, the original version and the one with the altered result, and we just get a number for each one, the perplexity. How surprising is the original one? How surprising is the altered one? And the model just chooses whichever one's lower, but you could do what you were describing. I think you're already seeing where this is going. Because for scientific discovery, to me, this is the best first step: you have to be able to predict, did this experiment go this way or that way? And if you did that, you could start doing more interesting things like you're saying, and build tools on top to say what experiment should I run next and so forth, or is this plausible? Would this work out how I think? And you could even pair that with generation, because you could have the model, in prompt mode, generate different patterns of results, and then you could use it to actually evaluate, through perplexity, which one is most likely to be reasonable.
Dan Shipper (00:17:43)
That's really interesting. Yeah, this is, I think, an interesting dovetail into how science might change with these kinds of tools. For people who have been listening to this podcast, I've gotten on my soapbox once or twice before about what I think. And I'm sort of curious to explain it a little bit to you and then kind of get your take, because I'm just, like, some guy with a newsletter and a podcast.
Bradley Love (00:18:12)
No, no. I think in this day and age, people that are placed like you are actually the ones that are really important, because you're integrating all these kinds of siloed perspectives and linking things together. I think it's really valuable. And I mean, you're being humble, but you should give yourself more credit. So, I want to hear what you have to say.
Dan Shipper (00:18:27)
Thank you. I appreciate that. Well, I guess the place I usually start is: Science seems to have been really obsessed with explanations for a really long time, basically since science was invented, which makes a lot of sense because explanations are extremely powerful for making predictions. And they're also kind of beautiful, to understand how the world works. And finding scientific explanations has worked really well in lots of areas of science like physics or chemistry, stuff like that, but there are lots of areas of science like psychology and the quote unquote soft sciences where actual causal, parsimonious scientific explanations are really, really, really hard to find. And that's what makes studies hard to replicate, and all that kind of stuff. And if you look at the history of psychology, we've been trying to figure out what depression is or anxiety is or whatever for like 150 years. And we have some different lenses on what it is, but we don't have the germ theory of mental illness, for example. We just don't actually know what it is, and there's lots of different competing ideas.
But we keep looking for explanations because we feel like that's the only way to make good predictions about what to do. And my kind of contention is that machine learning and AI approaches allow us to kind of unbundle explanations from predictions, so we can get good predictions without having to explain anything or have any good theories about what's underneath. And so we sort of turn this science problem into an engineering problem, which is, instead of explaining depression, we just predict it. And if you can predict it, one is you can start to help people. So you can predict when it's going to happen to someone, you can predict what interventions are going to work. And then two, if you’ve developed a good enough predictor for that kind of phenomenon in the world, the theory exists in the neural network somewhere, and neural networks are more interpretable than human brains are in general. And so you may be able to actually find a good causal theory of depression inside of a neural network if it's sufficiently predictive, which I think is really cool. Or what you may find, which I kind of feel like is probably the case, but I'm curious what you think. What you may find is that the theories for higher-level phenomena like depression are so big that they don't fit in our rational brains, and so that's why a good clinician has a good intuitive sense of what's wrong with someone and can kind of help them overcome their depression, and they can give some sort of explanation for what's going on, but, really, a lot of stuff is just happening subconsciously.
And so I think the larger impact of all of this is we may start to realize that we put too much emphasis on our rational, problem-solving, logical brains and not enough emphasis on our intuitive brains and AI may help us kind of realize the power of intuition. Because AI makes intuition sort of usable in the same way that you can write down logical arguments. You can also build an AI predictor that has human intuition in it, which I think is really, really cool. And it may also change how we do science. So, instead of doing small-scale studies we will just try to do much larger-scale studies where we're getting lots of data together and that's the project of science and then we build predictors on top of that. So that's a very, very long spiel. There's a lot in there, but I'm sort of curious what you think.
Bradley Love (00:22:09)
No, I largely agree and could even see taking what you say further. I mean, certainly I've been trained as a cognitive psychologist and I've kind of migrated more to machine learning and AI. So a lot of things we do are exactly like you described. So, if you ask a great baseball player how they hit the ball, they'll tell you something, but it's pretty much useless and doesn't describe what they do at all, which is probably a reason why a lot of great athletes end up being terrible coaches and managers; there's not really that strong of a correlation. But I might take it further. So I don't think it's a hard versus soft science thing. I think this really nice coupling of explanation and prediction, I fear, might be a historical phenomenon. And it really only applies to some aspects of physics, because you could do things that— I don't know, in biological sciences, everything is so messy. It just might be that there's 10,000 variables and they're all interacting, and these variables are at different levels, everything from DNA to behavior, and there's all these feedback cycles and delays, and that just might be what it's like. And if that's what it's like, that's just not gonna have a clean explanation that will be human-understandable. And so at that point, it almost becomes like storytelling to have an explanation. The explanation and the prediction, much like hitting the baseball, are going to diverge more and more with time. Unfortunately, again, this is not how I want the world to be, because I was always somebody that would try to come up with a crisp explanation. Even when I do modeling, I don't want to just say the model works. I want to understand the principles behind it and how they interact to give rise to the behavior of the model, to have a sense of understanding and a solid scientific explanation. But I don't know if that's our future.
Imagine explaining quantum mechanics to a dog. Quantum mechanics is a very successful theory, and maybe at some point we're the dogs: we're not going to understand what's going on, and we build tools, because we can already see beyond our eyes with telescopes, and we could add up numbers with calculators before computers could do all these amazing things. So maybe it's just going to be beyond us and we'll have to accept it and take different forms of explanations that are much more general and much more about, well, we built the system and we put these constraints on— Or, taking it back to BrainGPT, it might be really general stuff. Psychology and neuroscience are related fields. Does knowing about behavior help prediction about neuroscience? Well, I don't know. Let's train a model that has both psychology and neuroscience and see if it predicts neuroscience better. Well, okay, then those fields are related. That's the kind of explanation, but it's not like E = mc² or something. It's not very clean. So maybe we're going to have stuff like that in the future as an explanation, or maybe we're just going to be in this predictive world, and it's not even dystopian—everything could be better. But we'll just have stories about how this stuff works. So, maybe scientists will be like the new priests, just interpreting things, explaining how things work and providing meaning to the systems. But yeah, I don't know if we're quite there yet. And many of my colleagues would disagree, but I could see a world in which explanation and prediction, unfortunately, diverge, and it's not because anybody did something wrong. It's just because the world's really complex. Systems are complex and our brains aren't built to make sense of this kind of stuff.
Dan Shipper (00:26:04)
If one of your colleagues would disagree, what would be a reasonable counter to this viewpoint?
Bradley Love (00:26:13)
Yeah. I mean, maybe I'm so on this train, I have trouble articulating it. But I think a lot of the counterarguments would just appeal to the past and really elegant theories that unlocked understanding and explanation. I could even point to things I've done myself that have that flavor, but a lot of the arguments that I come across, they're not really forward-looking. They're more about the past—it was this way, but in the past, we didn't have— I mean, we have increasingly large data sets available, and there's more and more different types of data to link together, not just in the commercial world, but in the scientific world, like massive DNA databases and disease databases and brain recordings from the same people.
And the scientific literature just gets bigger and bigger and bigger. So it just seems like it's sort of a sensor fusion problem at some point, of connecting these dots. So, again, I don't think that's what it was like in the past. I think scientists also tend to look up to physics too much, and it hasn't worked that way there, and, not being an expert in physics, I'd be surprised if many parts of physics aren't already like what I'm saying, because probably at some point things are going to get many, many variables complicated there. So not everybody's trying to do the grand unified theory. Most people are like, I don't know. I mean, I'm guessing it's something that sort of veers into the real world, trying to build a containment field for fusion or something. I bet there's tons of machine learning going on there. And again, I know nothing about that. It just seems like the kind of thing where it's going to get complicated. There's going to be many variables to balance, and time series, and it's not going to come down to some simple equation with a square in it or something.
Dan Shipper (00:28:14)
I think that makes sense. I guess what I'm sort of pondering right now is, I obviously like to feel like we're sort of approaching the world where maybe we don't jettison explanations entirely, but our emphasis shifts a bit, particularly in science from clean explanations to the stuff we're talking about—predictions and stuff like that—and using different models to draw different conclusions, which means we're not necessarily always getting down to the brass tacks of, we know how every causal interaction happens but we can generally know what is going to happen or all that kind of stuff. But I guess one of the things that that your dog analogy reminded me of is just like, how do we make a world where that is true that's also not a huge bummer? I feel like there's a lot of interesting things about it, but also, I can imagine thinking towards a world like that and being like, oh, I guess we're just fundamentally limited. That sucks, you know? What do you think?
Bradley Love (00:29:17)
Yeah. I mean, I’m more positive. And again, not to denigrate most people: if they're healthier, the economy grows, the climate doesn't collapse, and so forth, they're probably going to be pretty happy with the future. If we have a future where we could predict disease and how to have sustainable energy, everything, if these predictive tools unlock those kinds of discoveries, most people would be happier. But I guess you and me, we want to actually have some insight into what's going on, and I guess there, it's just going to maybe be different forms of explanation.
I remember as an undergraduate, when neural networks were really out of style, learning about them and having fun programming them up from scratch and seeing my own little backprop nets and stuff like that, and I didn't really understand what was going on in the weights.
But I almost felt there was a different kind of understanding or something deeper by this kind of system that could solve this problem. So there might be that kind of thing, explanation exactly like what you're saying, where you don't have to trace it all out. You might even be able to do what the Judea Pearl-type people do, who really emphasize interventions in systems, like tweaking, like zapping this neuron or changing something or doing an intervention, who knows, in our climate, and then being able to predict how that cascades forward. I mean, I don't know. Maybe the kind of explanations will be the kinds of variables that are involved, some gross characterization of the system and the dynamics. And it'll be just like you're saying, not charting out the whole causal network, because that just might not really be something that's possible, and it's probably not even what the machine learning systems are doing. So I don't even know if you could distill it. So yeah, I just think maybe what we accept as an explanation could change over time too.
I mean, how we understand the world must be so different now than 500 years ago, and what we would find satisfying, so maybe it's just going to change again. And we're worrying about something that people 100 years from now just won't even think twice about.
Dan Shipper (00:31:31)
Yeah, I kind of think that that's true. You mentioned the idea of, okay, how do we tell if two fields are related? Well, we run it through a model and if it says it's related, then it is. And that's maybe not satisfying to us today, but it will be satisfying to a future generation of scientists.
And I think the conditions under which that would be satisfying are: we have a model that is so standardized and well used that everyone has an intuition for what it is and trusts it to some degree. And in that case, getting new results from that model, I think, will be fascinating and really interesting, in the same way as finding some underlying causal network or something like that.
It's pretty clear we're moving to a world where using models to judge things is a really important way to make progress in science, but also just in AI in general. We're kind of reaching that point too, like mechanistic interpretability, it's so complicated that you have to use models to find features of models and all that kind of stuff. And I just think that that feels like a place that we're moving towards.
Bradley Love (00:32:39)
Yeah. Yeah. It could be. Obviously there's tons of really smart people working on interpretable AI, both building it in from the start, but also kind of in a post-hoc examination of these models. So even if we're moving towards that world, maybe along the way that will help us transition, or maybe there'll be a breakthrough, but I'm just skeptical, because if the underlying reality is so complicated and we have a model that appreciates that underlying reality, then the explanation might basically be the weights of the model and the training set. So at that point, I don't know what you do, but maybe we'll have some purchase, some transition. And obviously that work is important. And when we make these models, we're never going to blindly trust them. So we're going to need all kinds of checks along the way and mitigate bias and so forth. But yeah, I think we're going to that world just because that's the world that things are going to get done in.
Dan Shipper (00:33:42)
Yeah, I guess, I'm curious. You referenced a couple minutes ago one of the reasons this is so hard is maybe the way reality works is 10,000 variables weakly interacting and that's really hard to create a prediction around— Or, sorry, an explanation around. Can you explain what that means, where you see that? I'm assuming you're talking about, for example, genomics or something like that.
Bradley Love (00:34:07)
Yeah. I mean, I think why we're doing what we're doing right now in this conversation, it's just, everything is that way. I mean, even in physics, everything's this ideal, idealized, spherical particle in some vacuum or something, and none of that stuff would even work in the real world when you have more variables, like the wind resistance, the friction, and the material stressing. But yeah, so I just— Sorry, maybe I think I got lost. Too many variables. Could you even repeat the question? I just think everything has this flavor to it when it gets closer to the real world. But yeah, a lot of the issues too: it is anything biological, and it's probably beyond biology. So, even in the social sciences and economics, you have all these interacting elements in an economy, and that's just incredibly complicated. But in biology, of course, the cell itself is really complicated and you have all these interactions there and its history. And you mentioned DNA and how the proteins are expressed, but also then you could think of it. Also, I mean, I think about everything. I'm like a physicalist from a philosophy standpoint.
So I think, even if you have to study higher-level things like psychology and neuroscience, ultimately there is this lower-level explanation. It's just, again, too many variables. Humans can't make sense of it. But you know, you could think of all our goals and what we do as also affecting what goes on in our brain and how things wire up. So there's just so many interactions across the levels. And think of what engineers do. They go to great pains to reduce those interactions, but you know, evolution, physical reality doesn't care about this stuff. When an engineer makes abstraction layers, you have the machine code and then maybe the assembly and then something like C or Fortran, then you have Python libraries on top—and I'm not doing a good job describing this. And you have applications, and then, like, younger people today, a lot of them don't even know there's a file system on the computer, what a folder is, because it's so abstracted. I mean, which I guess is great, but it's sort of like how no one knows how to change the oil in their car or anything because we've abstracted it away. But we build things like that, and there's a lot of effort that goes into that, but I don't think nature or biology respects that. So it just makes it kind of a mess to unravel. And of course, even with our systems, we have bugs and weird things happen. I remember decades ago, the Intel chip screwed up arithmetic or something from some hardware bug. And it's just really hard because it spilled into other layers in the abstraction hierarchy.
Dan Shipper (00:36:59)
Yeah. And I think you've thought a lot about the ways that trying to make natural interactions—natural processes—human-understandable actually might cause us to misinterpret them. You have one paper in particular called “The inevitability and superfluousness of cell types in spatial cognition.” Do you want to talk a little bit about that? Introduce what the paper is, what the motivation behind it is, and what the results are that you found.
Bradley Love (00:37:26)
Yeah. Yeah. It's really like a lot of our conversation is kind of leaning into this idea. So, for most of neuroscience, a lot of the major discoveries, including one of the Nobel Prizes, have been somebody recording from a cell, either in a human or non-human animal, and basically having some intuition, some clear explanation. For example, there's these things in the brain called place cells, and they light up when a rodent is at a particular location in its box. They're like, aha, this is the brain's GPS system. That's very intuitive and appealing. But it's also very simplistic, because the setups there are contrived and we're kind of forcing this interpretation. And so what we found in this paper is about many of these intuitive cell types that you could just interpret.
And it seems like, oh, the brain is so simple. When you do this, this fires, and you could make sense of it in a very human-understandable way. What we found is that when you take larger networks, like deep networks, and you put them in VR worlds, the experiments neuroscientists run on non-human animals, you get these same cell types popping up even in random networks that don't serve the function of navigation or localization like the place cells.
So it's really kind of a danger, I think, of trying to force an intuitive understanding of the system. You'll find it, but it's not actually how the system works. And even within the field, where people believe in these kinds of easy-to-label, intuitive cell types, what happens is there's this general pattern where someone will say, Eureka! I found the place cell, and they'll literally get a Nobel Prize. And then there'll be 30 years of research explaining how it's not really a place cell—oh, well, if the room isn't perfectly round, then you get this distortion. Oh, if there's a reward like food, then it doesn't code for that. Oh, it depends on the history of the path. Oh, it depends on the viewpoint somewhat.
And so, maybe we'd actually understand what was going on more if we would, I don't know, respect the complexity of the system we're dealing with, in some ways do something a little bit more bottom-up while still having mechanistic explanations and models. But this stuff is just so complicated, and it seems just ridiculous. It's almost like, again, this is going to get me in trouble with people, and I think people hate this paper, even though it will be published and have an impact. It's almost like a kid's view of science. Oh, I just mix some chemicals and something happens and I tell people about it and it's a discovery. Or, ooh, look, I found this butterfly, no one found this before. That's so important. But that's not really what scientific explanations are like. And it's just that when you're dealing with a system as complex as the brain and people, and so multilevel, it just seems so vanishingly unlikely that whatever idea you go in with is what you're going to come out with, or is going to actually be true, unless there's some reason for information to be coded that way. I think there's a problem because those are the explanations people find attractive. I think they're very seductive, but ultimately limiting.
Dan Shipper (00:40:38)
That's really interesting. Given that we like those kinds of explanations, what would you guess is the reason? We're just attracted to those sorts of parsimonious— That's the way our brains work? What is it about those kinds of explanations? They must work for something, right?
Bradley Love (00:40:58)
Yeah, again, this is where I'm glad I have some training as a cognitive psychologist, but everything I'm going to say is going to be so obvious. I mean, it's just like, if you're a politician running for office. Simple story, coherent narrative works best. It's like, what's going to have the best information transmission value? It's going to be most viral. You know, what does a lawyer do in a jury trial? They create a narrative where their client is innocent. That's a coherent story. So I think people really prize this kind of coherency.
It’s just what makes an appealing explanation to us. But again, some things, even things that are actually pretty simple in terms of just writing them out, like quantum mechanics, make no intuitive sense. And that's pretty simple. But then I think when you get into things—anything involving the real world or biology, it's just going to be so many variables. You could tell the clear, intuitive story, but I'm not even sure in some cases that it will be an approximation of the real thing, or it'll be so crude and off that it'll obscure or miss the deeper truths.
But yeah, I just think that's what we like. I mean, sometimes it's good. It's almost like Occam's razor—for some things you don't want needlessly complicated explanations, but sometimes you do need them. Sometimes you need the more complex model to fit the data, make the prediction, characterize the system.
And I don't think humans just go in by default thinking things are that complicated, and we didn't really— I don't want to try to guess how we evolved, but we didn't really live in a world where things had to be that— They weren't like the quantum world, the 10,000-variables-of-how-the-brain-works, big-data world. So probably having simple, robust procedures serves us well in our natural environments, but maybe it doesn't serve us well in science.
Dan Shipper (00:42:59)
Yeah, I guess that's maybe what I'm thinking right now: the brain has really high-dimensional representations of things in the world that are very complicated and would be very hard to have a scientific explanation for. Our natural inclination is towards sort of simple causal stories that we can tell, and maybe the proper place for those kinds of things is to slightly tweak our much higher-dimensional concept for a particular context, and we're kind of misusing that by trying to blow it up into representing the entire high-dimensional representation that really we can't put into language. And we've been sort of trained that the more parsimonious and powerful those little things are, the better they are, but that sort of gets misused when it goes beyond what they're capable of representing.
Bradley Love (00:44:04)
Yeah. Yeah. I think you're right. And it gets even trickier because, of course, we're not passive recipients of data in science. We go out and collect it. So once you start believing something, a story, there's a lot of confirmatory work in science. Basically, if you have that mindset about what the relevant variables are, even if you don't think you're being confirmatory, you kind of are confirming the framework. So I think it's what you're saying, but, yeah, we even do things to keep those simple stories going probably longer than they should.
Dan Shipper (00:44:42)
This is a big question. So take a second to think it through. But if you could wave a magic wand and all the priors of science were thrown out for a second and you would like to rebuild science as an institution, knowing what we know today about how hard this stuff is, what are the things you would change? And how would you rebuild science for this world?
Bradley Love (00:45:07)
Yeah, gosh. So I think it's really good that nobody has that much power, because probably what I say is wrong, but probably what other people say is wrong too. Personally, I would do a combination of training people—like this conversation—to be a little bit more philosophical about things, and maybe do some reading and thinking there about what explanations are and their limits and the study of it, but also more emphasis, which is already happening, on computational skills and large-scale simulations, and the field's going this way too, but more consideration of naturalistic environments and their complexity. There should be whole courses for university students about how limiting an approach can be that might work well in a particle accelerator in physics, if you're working with the standard model. But for probably everything else, you're going to miss key variables and insights, I mean, especially anything involving human behavior or biological systems. So I guess really just kind of a different orientation for scientists: have them be a little bit more thoughtful and philosophical about the whole enterprise, and just everything's going to be AI, computational, large data sets, even if there's a beautiful experiment that can isolate something for a key question that you can't resolve with these large data sets. So, something like when I did some work with consumer behavior to look at human decision-making in the real world, like in retail settings. There we were doing these big studies with millions of people and transactions, but then we'd bring it back into the lab and try to test something. So I think emphasizing that interplay: if you're going to run lab studies, I think there has to be the interplay with something more naturalistic, real world, big data, just so you don't make up kind of a fake science unto itself, because you can have a science of anything, and it might just not have anything to do with anything anyone really cares about ultimately in a hundred years.
Dan Shipper (00:47:05)
Yeah. Who do you think is doing good work in this vein, in your field or in other fields that you know about?
Bradley Love (00:47:13)
Oh, yeah. I mean, a lot of people are doing amazing work, and I'm very self-critical and critical of other people, but I don't share it with them, so I probably won't be great at naming a bunch of people. But maybe I'll use one whole subfield as an example. Years ago, when I was in graduate school, the computer vision guys—they're mostly guys—were like the big jocks. They felt like they had the most mathematical chops and were doing the real science, with all the filters and Fourier transforms, really leading the way. And then along comes AlexNet, this convolutional network, made not by vision scientists but by machine learning and AI researchers. And overnight, in terms of behavior, that was the best model of object recognition. So what happened is computational neuroscientists took that on as a model for the ventral visual stream—how our brain transforms the images on our eyes into "that's a chair," "that's a cat." And they found—it's not a perfectly clean story like it's often presented; I actually have a paper criticizing the basic story—but I think it was a real advance to show that the levels of processing in these models, these artificial neural networks, fairly well tracked the transformations that go on in our own brains.
And so this is an example where going larger and more naturalistic—training this model on a million natural images, basically taking on the whole problem at once and doing something at scale—led to a bigger advance than 100 years of vision science and all the grants behind it. The modeling work was done by neuroscientists, but it was really kicked off by engineering and by embracing this real-world challenge.
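For readers who want a concrete sense of the layer-to-brain comparison Dr. Love describes, here is a minimal sketch of representational similarity analysis, one common way such comparisons are made. It is an illustration only: the layer names, array sizes, and random data are stand-ins, not anything from BrainGPT or Dr. Love's own studies.

# A minimal sketch of representational similarity analysis (RSA).
# All data below is random placeholder data standing in for real
# CNN layer activations and neural recordings.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images = 50  # number of natural images shown to the model and the brain

# Hypothetical activations from three CNN layers (one feature vector per image)
layer_activations = {
    "conv1": rng.normal(size=(n_images, 256)),
    "conv3": rng.normal(size=(n_images, 512)),
    "fc7":   rng.normal(size=(n_images, 1024)),
}

# Hypothetical brain responses to the same images (e.g., voxels in a visual area)
brain_responses = rng.normal(size=(n_images, 300))

def rdm(responses):
    # Representational dissimilarity matrix: pairwise distances between
    # the response patterns evoked by each image (condensed vector form).
    return pdist(responses, metric="correlation")

brain_rdm = rdm(brain_responses)

# Correlate each layer's RDM with the brain RDM. With real data, deeper
# layers tend to match later stages of the ventral stream better than
# early layers do; with this random data the correlations are near zero.
for name, acts in layer_activations.items():
    rho, _ = spearmanr(rdm(acts), brain_rdm)
    print(f"{name}: Spearman rho with brain RDM = {rho:.3f}")

The point of the exercise is only to show the shape of the comparison: you never look inside single neurons or single units directly, you compare the geometry of responses across images on both sides.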
And I think that should be really humbling to scientists. So I don't want to single out anybody, but there are a number of people who did that, and scientists are almost faddish—if one person has a good idea, they all kind of rush to the same thing. So there have been a lot of really smart people who jumped on that train. Now it's sort of petering out and the next generation of questions is coming up, but yeah, that's an example that goes with what we're discussing.
Dan Shipper (00:49:30)
That makes sense. And in terms of wanting scientists to be a little bit more thoughtful and philosophical about the methods of science, do you have any people or writers or philosophical ideas that you think they—or we—should be reading? Even if it's not for lay people, just in general, who are the philosophical people that get you excited?
Bradley Love (00:49:55)
Yeah. I mean, gosh. So I'd point to a book I read recently—it's an older book by Thomas Nagel, the philosopher. He's famous for the "What Is It Like to Be a Bat?" essay—that's not what I'm recommending, though it's a good essay—which is about subjective experience and how you couldn't really know what it's like to be a bat because you'd have to be a bat. But he wrote this book, The View From Nowhere, and even though he's an incredibly famous philosopher, I feel like it should have way more impact, because the first half really explains what science is and what it's not. I think a lot of scientists—people who seem like they're being more inclusive of topics like this, and this probably applies to consciousness research—are in some ways doing scientism and overstepping. The reason the book is called The View From Nowhere is because that's what science is. There isn't science for you or me, or Eastern science or Western science. It's from this disembodied perspective, and that has both strengths and weaknesses. If you launch a probe to Mars and put it in orbit, Mars doesn't care who came up with the calculations—it's either going to make it or not. That's the view from nowhere. But of course, a lot of human experience that's very valuable is subjective, and that's where meaning comes into our lives. So what I liked about the book is that it makes clear what the strengths of science are, but—and he doesn't hit you over the head with it—what I really took away was the limits of science. I really liked that book.
Dan Shipper (00:51:37)
That's really cool. I love that. You're an experimental psychologist—so how does that relate to the field of psychology, which is obviously trying to be a science, but is also all about human experience?
Bradley Love (00:51:53)
Yeah, yeah. Another person who would be good to read is the later Wittgenstein. So I'm not trying to be some kind of logical positivist who says you can only rely on observables and whatnot. But I think there are some oversteps. In experimental psychology, most people, I would say, are scientists and are doing science, but there are these oversteps at times, because you're really limited to what you can operationalize—it's just like you tell an undergraduate: what's depression? Well, you have to come up with some criteria for it. Or what's this? And when you're doing neural recording, as soon as you count something, you're abstracting in a sense—you're saying there's some way in which these things are alike, and you're ignoring other ways in which they're different.
So that's kind of the standard. That's, again, the view from nowhere: trying to make a procedure where you can just observe things and quantify them in a way that lets you start testing hypotheses. But a lot of times I think psychologists, neuroscientists, and people in general get confused into thinking they can have some deep insight into our subjective experiences—what people call the qualia, like why does blue make me feel this way?
When you start getting into that, you can come up with very good things: if I put you under this anesthesia, will you wake up or not? Under what conditions? What brain waves can you measure that will predict that, or account for it, or even give some mechanisms? But when it gets into not the view from nowhere but the view from inside—this first-person perspective, which is mostly what we care about as creatures—I actually don't think science in general has a lot to say beyond correlates of these experiences. And that's probably why we still have literature and religion and music and all the arts and all these other things.
Dan Shipper (00:54:02)
I definitely feel like literature as an exploration of psychology is quite undervalued by people who are into science. Some people think about literature and psychology as being very intertwined, but—as someone who isn't a professional psychologist, but who likes to read a lot of psychology books and has also read a lot of novels—there's something to that. Even in the literary tradition in psychoanalysis, a lot of the earlier work is less about, okay, you have these cognitive distortions and we're going to do a worksheet or whatever, and more about really long, literary storytelling about particular patients, which I honestly think is underutilized.
Bradley Love (00:54:52)
Yeah, it goes back to what you were saying before about intuition and the many variables. I don't know much about clinical work, and even less about things that are considered outside science, like Freudian analysis, but I can imagine someone doing a lot of good work within that framework. Even if the theory doesn't hold any water as a scientific theory, the intuition someone builds up working within that system can be very valuable and can create good outcomes for people. I wouldn't put my life in the hands of alternative practitioners, but when people get outcomes from anything like this, I try to be a little slow to completely dismiss it—at the very least, placebo effects, while under attack, are probably real in some cases. And there are practices that carry a sort of cultural knowledge, like Italians knowing how to make good food—that comes from hundreds of years of tweaking things and trying things out. That's even beyond the individual, so maybe some of the training these folks receive in clinical settings works that way. And some things, like cognitive behavioral therapy, have some randomized controlled trial backing. But, yeah, anyway.
Dan Shipper (00:56:23)
To be clear, I'm not trying to totally diss it. I just—
Bradley Love (00:56:28)
No, no. Please totally diss it. You’re not upsetting me. I think after I started just basically making fun of a few Nobel Prizes in neuroscience, no one's going to like me anyway.
Dan Shipper (00:56:40)
Great. We'll be in the same boat then. Cool. Well, this has been a really, truly fascinating discussion. I'm really thankful that we got a chance to chat. If people are interested in learning more about your research, where can they find you and what should they read?
Bradley Love (00:56:57)
Yeah, sure. They can go to my website, bradlove.org. There's also a link to the BrainGPT website, and if people are interested in that project, we have a mailing list. We don't spam people—we'll probably send a few messages a year with major updates—so if people want to follow that project, they can sign up there, and most of them should. I'll put a link up to this podcast when it's out. If they want to see more popular media stuff, that's on the website as well. And if anyone's really a glutton for punishment, of course, they could just start reading the abstracts and papers, but I'm not sure I could recommend that to the casual listener.
Dan Shipper (00:57:40)
Cool. Well, thank you so much, Bradley. I really appreciate this. Whenever you have more stuff to share, I would love to have you back.
Bradley Love (00:57:46)
No, that'd be great. Yeah. Thanks so much, Dan, for having me here. It was a blast.
Thanks to Scott Nover for editorial support.
Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.