Episode 33

April 14, 2021 | 00:37:04

Music and the Brain

The Inventivity Pod

Hosted by Richard Miles and James Di Virgilio

Show Notes

Nina Kraus, a professor of communication sciences, neurobiology, and physiology at Northwestern University in Chicago, has done extensive research on the effect of playing music on sound processing, learning, and brain development. She explains the "musician's advantage," which includes better reading skills, and how music training can be a tool to improve the performance of students from low socioeconomic backgrounds. *This episode was originally released on June 10, 2020.*

 

Intro (00:01): 

Inventors and their inventions. Welcome to Radio Cade, a podcast from the Cade Museum for Creativity and Invention in Gainesville, Florida. The museum is named after James Robert Cade, who invented Gatorade in 1965. My name is Richard Miles. We'll introduce you to inventors and the things that motivate them. We'll learn about their personal stories, how their inventions work, and how their ideas get from the laboratory to the marketplace.

Richard Miles (00:39): 

The sound of music: not just a movie about singing Austrians, but also a fertile field of research, specifically the effect of playing music on processing sound, learning, and brain development. I'm your host, Richard Miles. Today my guest is Nina Kraus, a professor of communication sciences, neurobiology, and physiology at Northwestern University in Chicago. Welcome to Radio Cade, Nina.

Nina Kraus (01:02): 

I'm so glad to be here. 

Richard Miles (01:03): 

So Nina, you're one of those difficult guests in that you've done so much we could talk forever; this would not be a 30-minute podcast, it would be more like a 30-hour podcast. But I have heard you speak before, and I know you're actually quite good at summarizing your research, so I know you're up to the challenge. I'd like to start out by focusing on one particular area of your work. You've done a lot on sound processing and how the brain processes sound, but why don't we start with some basic definitions for our listeners. From a scientific or researcher's perspective, what is the relationship, or the difference, I guess, between music, noise, and language? What's the relationship between those three things?

Nina Kraus (01:41): 

What a great starting question. So sound is the common denominator for all the things that you mentioned, and sound is a very under-recognized force in our society. It is very, very powerful, and yet we don't pay very much attention to it because it's invisible, first of all, like a lot of powerful forces, like gravity. So you don't think about it. And we live in a very visually biased world. Even scientifically, there was a national institute for vision 13 years before there was one for hearing, and that was the National Institute for Deafness and Communication; we share that with smell and taste. But all of the things that you mentioned, language, and music, and noise, these are all sounds. And I'm a biologist, and I am interested in sound and the brain. So really the overall umbrella over everything that we study is sound and brain. How do we make sense of sound? How is sound processed in the brain, and how does our experience with sound shape how we perceive the world?

Richard Miles (02:52): 

I saw in one of your papers that you have a specific methodology with which you can actually look at the brain as it is interpreting sound, right?

Nina Kraus (03:01): 

Yeah. Let me tell you a little bit about that. Initially, as a biologist, I came into science studying single neurons, actually, single neurons, in animal models, and one of my first experiments was to play sound to an animal while I was recording the brain's response, one cell's response, to that sound. This was a rabbit, a bunny rabbit, and we taught the rabbit that the sound had a meaning, that every time the sound happened, he'd get some food. So the same sound, same neuron, but the neuron's response to that sound changed. And so we could see firsthand learning, the biology of learning, and that's something that I'm deeply interested in. My lab, which we call Brain Volts, has been looking at how our experience with sound shapes our nervous system, but I was coming from the specificity of recording from individual cells. And so these are signals, these are tangible signals that you can really define, and that felt good. And so the question was, well, how can we get a way of measuring sound processing in the brain in humans, when we can't go sticking needles into individual cells? You know, there are many ways of recording the brain's response to sound with scalp electrodes. And of course, as I'm talking to you now, the nerves in your brain that respond to sound are producing electricity. And so with the scalp electrodes, we can pick up that electricity, and that's been done for a long, long time, but most of the measures that we can obtain from the scalp are rather blunt with respect to what I'm interested in, which is the different ingredients of sound.

So sound consists, again, let me make a visual object comparison. With vision, everybody knows that a given object has a shape, a size, a color, a texture; that's all very obvious. But people don't realize, first of all, that there is sound, and secondly, that sound also consists of ingredients: pitch, how high or low; timbre, a violin and a tuba sound different when they're playing the same note, that's timbre; the harmonics that differentiate one speech sound from another. There's phase, which tells us where objects are in space, based on the time of arrival of the sound at your two ears. And there's timing. The auditory system is our fastest sense; even though light is faster than sound, processing sound happens on the order of microseconds, because there's so much timing information in sound. That's how sound works, it's fleeting. And so what I was interested in, what I am interested in, is how do we figure out how the brain makes sense of these different ingredients? And we figured out a way of doing this, because with most of the methods that were available to us in the past, you could just see, is the response large, is the response fast? But I want to know how your brain responds specifically to pitch and timing and timbre and phase, all these different ingredients.

And so one of the metaphors that I like to use is a mixing board. If you think about the faders on a mixing board, and you think of all the different ingredients in sound when they are transduced into the signals of the brain, which is electricity, it doesn't work like a volume knob. People, even musicians, are not uniformly good at processing all the sound ingredients, the way a single volume knob would suggest. They have specific strengths and weaknesses, like the faders on a mixing board, and I wanted a biological approach that would be able to look at that, that would be very, very precise, and that would not only be able to tell us, well, what is the effect of playing a musical instrument for many years, what is the effect of speaking another language, but go beyond these group differences: what about individuals? I mean, my auditory brain is different from your auditory brain; we're all individuals. And so would it be possible to actually have a physiologic response that reflected these ingredients, A-of-all, and B-of-all, would not only reflect what happens with experience in groups of people, but even on an individual basis?

And we have really figured this out. So this is a response called the frequency following response, the FFR, which we have adapted to our use, and we are able to use very complex sounds, like speech and music, and analyze the responses in a way that shows how an individual processes these different ingredients. And we've spent a lot of time on the methodology. So we have two tutorials on the frequency following response, which really go into these responses in a lot of detail. We have a number of patents on what we've discovered in terms of how to measure these responses. So this is really something that has kept us busy. On the one hand, it was really a quest to search for a biological approach, which I'm really happy with now. And then it is a matter of applying that biological approach. Partly it was synergistic, because we wanted to see, well, is this approach actually yielding the kind of information we want, through research. And so we've done a lot of research, and now we can really have confidence that a person's response to sound really does reflect how their brain processes the different ingredients, how it might've been affected by the songs we sing, the languages we speak, and even your brain health, because making sense of sound is one of the hardest jobs that we ask our brains to do. So you can imagine that if you get hit in the head, it will disrupt this very, very fine microsecond-level processing, which is why one of the areas that we're interested in looking at is what happens with head injury, especially with concussion, sports-induced concussion. And so again, we can do that as well.
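
To make the "mixing board" metaphor concrete, here is a minimal sketch of how one might read a few "faders" off a digitized FFR waveform. This is an illustration only, not the Brain Volts pipeline: the sampling rate, the band edges, and the synthetic test signal are all assumptions made for the example.

```python
import numpy as np

FS = 16000  # assumed sampling rate (Hz) of the digitized response

def band_energy(x, lo, hi, fs=FS):
    """One 'fader': energy of the response in a frequency band."""
    x = x - x.mean()
    mags = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = (freqs >= lo) & (freqs < hi)
    return float(np.sum(mags[band] ** 2))

def ffr_ingredients(response, f0=100.0, fs=FS):
    """Crude per-ingredient summary of one averaged FFR.

    f0 is the fundamental (pitch) of the evoking sound; the harmonics
    band is where much consonant-distinguishing detail lives.
    """
    return {
        "pitch":     band_energy(response, f0 - 20, f0 + 20, fs),
        "harmonics": band_energy(response, 2 * f0, 12 * f0, fs),
        "high":      band_energy(response, 12 * f0, fs / 2, fs),
    }

# Synthetic demo: a 100 Hz fundamental plus one harmonic plus noise.
t = np.arange(0, 0.2, 1.0 / FS)
fake = (np.sin(2 * np.pi * 100 * t)
        + 0.4 * np.sin(2 * np.pi * 300 * t)
        + 0.05 * np.random.randn(len(t)))
print(ffr_ingredients(fake))
```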

Richard Miles (08:56): 

So on your website, I think you have this great graphical representation of the frequency following response, right? Where you will play a snippet of almost anything, let's say a piece of music, and in the brain of the person listening to it, you have almost a mirror image, right, of that same sound wave. And you can see differences in the ability of the person to process what they're hearing. And so you found, and again, I may have this wrong, you found that musicians had several advantages, in the sense that you will play them something, say Beethoven's Fifth Symphony, and a musician will hear it differently. And when you say musician, it's someone who actually plays an instrument, right? Not just a music appreciator. Those people process the sound differently than people who are not trained in playing music. Is that correct?

Nina Kraus (09:55): 

That's exactly right. Because what is so beautiful about this biological approach is that the response we get from the brain, the electricity, actually physically resembles the sound that was used to evoke it. This hardly ever happens in biological systems. I mean, usually you're looking at something very abstract, like lipid levels, cholesterol levels, to give you some index of cardiovascular health. Here, we can actually see how people process sounds with a certain transparency. The transparency is, as you said, such that we play a sound wave, and you can see the sound wave, and you can then deliver that sound wave to a person and pick up the electricity that the sound wave generates. And then, you know, we're all familiar with taking a sound wave, feeding it through a microphone, and playing it through a speaker. In the same way, you can take an electrical response that you have recorded from the brain, it's just an electrical response, and deliver that to a speaker and play it. So we can both see and hear a person's response to sound. And yes, in fact, we can see this in people who regularly play an instrument. So I'm not talking about professional musicians, I'm just talking about people who regularly play a musical instrument, you know, as little as half an hour, twice a week, on a regular basis. And one of the things that we've been able to find is that there really is a neural signature for the musician. So remember I said that sound consists of these ingredients, like pitch and timing and timbre. And what we see as the musician's strength is a strengthening of the harmonics in sound and of various timing ingredients. And it turns out that both the harmonics and the timing are not only important in music, but they also overlap with what you need for language. So you can imagine how, if you are doing an activity that is strengthening your brain's response to the harmonics, that matters beyond playing a musical instrument: the harmonics are what distinguish B's, and P's, and D's, and G's from each other. So these are the same signals. They're the same ingredients. They're these beautiful signals outside the head and inside the head, through which we can see how experience shapes how we perceive the world. So the musician signature really shows a strengthening of harmonics and timing, which, it turns out, transfers to language abilities, including things like reading and being able to hear speech in a noisy place like a classroom, being able to figure out what's going on in a complex soundscape. So these are advantages that seem to come along with the brain's strengthened ability to process these particular ingredients: harmonics, timing, and what we call FM sweeps. An FM sweep, at its simplest, is a change of frequency over time. It's like a catcall, right? It's a sweep up and down. And it turns out that speech sounds have very, very fast FM sweeps that distinguish one consonant from another, happening in a very, very short period of time. And so the brain's ability to process these FM sweeps is something that we see as a strength in musicians, and it is very much an important ingredient in language.
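
Since the recorded electricity physically resembles the sound that evoked it, two operations Kraus describes are easy to sketch in code: measuring how closely the response tracks the stimulus, and writing the response out as an audio file so it can literally be played through a speaker. A hedged sketch; real FFR work involves trial averaging and filtering that are omitted here, and the function names and sampling rate are invented.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

FS = 16000  # assumed common sampling rate for stimulus and response

def resemblance(stimulus, response):
    """Peak normalized cross-correlation between the evoking sound and
    the (trial-averaged) neural response, plus the lag in seconds.
    The lag roughly reflects neural transmission delay."""
    s = (stimulus - stimulus.mean()) / stimulus.std()
    r = (response - response.mean()) / response.std()
    xc = correlate(r, s, mode="full") / len(s)
    k = int(np.argmax(np.abs(xc)))
    return float(np.abs(xc[k])), (k - (len(s) - 1)) / FS

def save_response_as_audio(response, path="ffr_response.wav"):
    """Scale the electrical response into the 16-bit audio range so it
    can be played back through a speaker, as described above."""
    scaled = np.int16(response / np.max(np.abs(response)) * 32767)
    wavfile.write(path, FS, scaled)
```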

Richard Miles (13:32): 

I find all of this fascinating, Nina. I remember one example you gave: you tested musicians' ability to, as you said, pick out a particular sound in a crowded room, and you compared that to non-musicians, and the musicians had this definitive ability to recognize a sound pattern and all that. And then of course, with different types of syllables or consonants, they also had that ability. The only time I can ever do this, and I'm not a musician, is if we're at a party: I can hear Phoebe's voice in a crowded room. And she said, well, yes, that's because research has found that men interpret women's voices like music. So finally, I have a researcher. You tell me, is that true or not?

Nina Kraus (14:08): 

Well, I would say that you have had a lot of experience with Phoebe's voice, and so your sonic brain is tuned to that voice. And we see this when, you know, you pick up the phone, your son calls you, and you say, "Oh, it's so good to hear the sound of your voice." The years of sound-to-meaning, sound-to-emotion connections that you've made with that voice mean that even before you hear the particular words, you have this very strong connection to what you've learned. And so I think that's why you can hear Phoebe so well.

Richard Miles (14:41): 

Let's talk some more about work that you have done, very interesting work, with something called The Harmony Project in Los Angeles. I think the research was in 2014, and essentially you worked with an organization in Los Angeles that was giving music lessons, I think mostly on stringed instruments, right? They gave lessons over a substantial amount of time, and then you started tracking the students, doing assessments to see if there were other advantages, right, that translated not just into the ability to play a given instrument, but also into other cognitive skills. Tell us a little bit more about that.

Nina Kraus (15:14): 

So, we're really fortunate as scientists. If you read about Brain Volts and what we care about in our lab, we really are interested in sound in the world, and we're less interested in creating an experiment in the lab where people come in and are given a certain amount of training with sound. We're really interested in what the impact is of playing a musical instrument in actual music programs that live in the world. Also, one of the questions that one often asks is, well, is the brain's strengthened response in musicians just something they were born with, such that if you have a strength in a certain domain, you might be encouraged to pursue that activity? A way of trying to understand what the effect of experience is, is to do a so-called longitudinal study. And let me tell you, the "long" in longitudinal is no joke, because it means tracking the same individuals year after year after year.

So, we had the opportunity to do this in Los Angeles, in the gang reduction zones of LA, and we had a companion project at the same time in the Chicago Public Schools, where we had basically the same experimental design: you take two groups of people and you match them at the beginning of training, or before training has started, on age and sex and reading scores and IQ and everything that you can think of, and then one group gets music and another group gets something else, and you track them over time, year after year. We were able to do this in LA with elementary school kids, second, third, and fourth graders, over three years, and the project in Chicago was with adolescents, also in low-income areas. We tracked the adolescents from freshman year until they graduated as seniors. And what was important is that the individuals in the different groups were in the same classrooms, with the same teachers, in the same socioeconomic areas. And we could see, well, what happens if one group gets music and another group gets something else?

We already knew from cross-sectional studies about this musician signature that I told you about, that musicians had strengthened responses to FM sweeps, to harmonics, to timing in speech. The musicians had these stronger responses, but we wanted to know, well, is this something that develops over time? In LA these were after-school programs, five times a week if you also include Saturday, and in the Chicago Public Schools it was actually within the school day, so that the kids had an hour of music every day, just like you'd have an hour of English and Math and History. We'd measure sound processing in the brain using our biological approach at the beginning of the year, and then again at the end of the year. And after a year, in both studies, we found no change in the brain's response to sound. That's what the data showed, but we kept going. And in both of the studies, what we found was that it takes a while to change the brain. And that's a good thing: if your brain were changing in a fundamental way every second, you'd be really confused. But you speak a certain language that has certain sound ingredients, and after a while, really after years of speaking a particular language, your brain automatically changes, and it changes, in a fundamental way, your default experience of the world.

I mean, even if you're asleep and I'm measuring your brain's response to sound, you will have this heightened response to certain sound ingredients, because it has just become a fundamental part of how you perceive the world. But this takes a while; it really did take two years to see these changes. And at the same time, of course, we were interested: are these kids doing better in school in various ways, in terms of literacy, for example, and being able to hear speech in noise? And in fact, we were able to track the changes in the brain alongside these gains in literacy and in being able to hear, for example, speech in noise.
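
The design Kraus describes (match two groups at baseline, give one group music, measure the brain's response to sound each year) can be mocked up in a few lines. The sketch below uses synthetic numbers; the group sizes, effect sizes, and the year-two onset are invented purely to mirror the shape of the finding, not the actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40  # children per group, matched at baseline on age, sex, reading, IQ

# Synthetic 'harmonics strength' scores: baseline, end of year 1, year 2.
baseline = rng.normal(50, 5, size=(2, n))         # row 0: music, row 1: control
year1 = baseline + rng.normal(0, 2, size=(2, n))  # no group change after year 1
year2 = year1 + rng.normal(0, 2, size=(2, n))
year2[0] += 4                                     # music group strengthens in year 2

for label, data in (("after year 1", year1), ("after year 2", year2)):
    change_music = data[0] - baseline[0]
    change_control = data[1] - baseline[1]
    t, p = stats.ttest_ind(change_music, change_control)
    print(f"{label}: music change {change_music.mean():+.1f}, "
          f"control change {change_control.mean():+.1f}, p = {p:.3f}")
```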

Richard Miles (19:52): 

So Nina, it's a fairly common observation that the younger you are, the easier it seems to do things like learn foreign languages, play instruments, and so on. Is there anything in your research, or other people's research, that indicates whether there are definitive windows of neuroplasticity, past which it's not really worth it, or the returns are so diminishing that every 10 hours of effort you put in is really not going to get you much? Do you find that there's a cutoff? Does it happen in elementary school or middle school? Or can you go on up through your twenties and still reasonably hope to take up an instrument or learn a foreign language and achieve a very high degree of proficiency?

Nina Kraus (20:28): 

Great question, and the answer is no, there is no limit. Certainly the way that a young brain learns is different from an older brain, but we continue to learn until the day we die. And in fact, there have been very beautiful experiments in auditory learning in animal models, where you can very easily and very precisely regulate an animal's experience at different ages and see how its brain responds to learning an auditory task. And there have been experiments showing that certainly animals will learn differently when they're younger and when they're older, but they will continue to learn until the end of their lives. And this is borne out in human studies as well, specifically with music. In our own experience, in the Harmony Project the kids were elementary school kids, while in the high school study the kids began their music instruction as freshmen. So what was kind of a tragedy for these kids, the fact that they really had had no music instruction of any kind before they were freshmen in high school, turned out, from a scientific standpoint, to be very important, because we could see that the kids who began their music training as adolescents had the same kinds of brain changes that we saw in the younger kids. Moreover, a number of labs have looked at learning in older people. Even if you've never played a musical instrument, your brain can change, and you can continue to learn music, to learn new languages. We have this very, very dynamic system, and I think we should embrace the differences in the way we learn at different ages, because as we're older, we bring wisdom with us, and we bring an understanding of what we're doing that is very different from the way a child might approach learning, for example, a musical instrument. But the fact is that the benefits of playing a musical instrument, which are profound, really, in terms of memory and attention and emotion and sociability, these are gifts from music that you want to experience throughout your life.

Richard Miles (22:41): 

If we could just stay on that a little bit more, Nina. One of the fascinating things I saw in one of your papers was the connection of musical ability, or music training, to reading. You'd expect to find, obviously, a connection to speaking, because that's sort of an auditory function, right? But reading? I didn't realize the extent to which a solid understanding of how a word sounds, how a phoneme sounds, is essential to reading a written word. So comment on that, but there's a second part of my question, let me put it in right now: what are the other cognitive abilities that you have found improve? I mean, is there a link with math, for instance? Do math abilities increase among musicians? Are there any other cognitive skills that appear to be improved as a result of music training?

Nina Kraus (23:24): 

So your first question is, what does sound have to do with reading? We learn to speak first, and what we need to do when we read is associate the sound of the letters with a symbol on the page. And we've known from decades of research that kids who have difficulty processing sounds have difficulty reading. So there is a very, very strong connection there. There's also another part of speech at work here. When you think of music, you know that there's rhythm in music, right? Rhythm is a part of music, but you don't necessarily think about rhythm as being a part of speech. But it is. I mean, think of the difference between the words REBel and reBEL. It's the same word, but said with a different rhythm. And even though the rhythm isn't as regular as in music, we have tremendous rhythmic ability in speaking. Every Martin Luther King Day, my husband and I listen to the "I Have a Dream" speech, and listening to Dr. King speak, it has this wonderful rhythm and cadence to it. If I were saying those same words to you, you'd be looking at your watch, thinking, when is this going to be over? So much of the communication is rhythmic. If you want to have fun, do some YouTube searches for rhythm and music; you'll find there's a guy who plays drums along while people are speaking, and it really pulls out what is not so inherently obvious. After a while you realize, oh, this is really rhythmic. So this is another thing that gets strengthened: if you make music, your rhythmic abilities get better. And the reason that we know this is tied to reading is that, again, for decades now, people have demonstrated that kids who have difficulty reading have difficulty with rhythm.

Rhythm is one part of what gets strengthened with music. And I would say that it's the rhythm, and it is the tuning, if you will, of important sound ingredients that together help achieve the gains. Which brings us to the second part of your question, which is, why do we care? Well, we care because we want to know what to pay attention to, and in order to learn, we have to be able to pay attention to sounds. For example, my husband's a real musician, and one day I was trying to learn a Dire Straits lead on the guitar, and he came by and said, Nina, if you just listen, you would realize that Mark Knopfler is not using his pick on the string each time. He's not going dee-dee-dee-dee. The reason that he's playing those notes so fast is that he's actually pulling off the string with the fingers of his left hand; it's called a pull-off. And it has a very special sound to it that I was deaf to. But now I know what that sounds like, and when I hear it again, I have learned what to pay attention to. It's kind of automatic: oh yeah, I know what this is. And so there are so many associations with sound and our ability to pay attention, and to then be able to pay attention to other sounds in the world that might be important, like a teacher's voice, or Phoebe's voice across the room. So that's one thing. The other is auditory working memory. In order for you to make sense of what I'm saying right now, you need to remember what I just said. A typical auditory working memory test in language is: I'll give you a list of words and then ask you to repeat back only the words that were names of cars starting with M. And so you think, okay, what did she say? Which ones are cars? Which ones start with M? This is your auditory working memory, which is constantly helping you make sense of what you hear.

So it's very, very important. And on a test like this, people who are musicians, meaning anyone who regularly plays a musical instrument (and by the way, singing counts), have, across the lifespan, stronger auditory working memory skills and stronger attention skills. And any teacher will tell you. One of the reasons this was interesting to me is that teachers tell me all the time that the kids who play music are the ones who do better in school.
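
As a toy illustration of the working memory task just described: present a list, then score the listener on repeating back only the car makes beginning with M. The word list and the hits-minus-false-alarms scoring rule are invented for the example.

```python
# Toy version of the auditory working memory task described above:
# hear a list of words, repeat back only the car makes starting with M.
CAR_MAKES = {"mazda", "mercedes", "mini", "ford", "honda", "toyota"}

def targets(words):
    """The items a listener should repeat back (lowercased)."""
    return {w.lower() for w in words
            if w.lower() in CAR_MAKES and w.lower().startswith("m")}

def score(words, recalled):
    """Hits minus false alarms, one simple way to score recall."""
    wanted = targets(words)
    said = {w.lower() for w in recalled}
    return len(wanted & said) - len(said - wanted)

heard = ["apple", "Mazda", "river", "Mercedes", "Ford", "music"]
print(targets(heard))                    # {'mazda', 'mercedes'}
print(score(heard, ["Mazda", "Ford"]))   # one hit, one false alarm -> 0
```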

Richard Miles (27:33): 

Nina, you alluded to this earlier when you talked about Brain Volts: essentially, you're looking at ways to take this research, or the findings, and help others in other fields. And if I understand it correctly, you can use this not only for research but also as a diagnostic tool, right? If you find somebody whose audio processing capabilities appear to be off, that may be an indicator of something else, such as a concussion, or maybe dementia, or something like that. I'm not entirely sure about that, so I'm waiting for you to correct me. Is that what it is? And then, how has it gone in terms of setting up something to commercialize the technology? This is something we talk about a lot on this podcast: a lot of researchers like you have something that they know has value outside the research arena, and they want to take that technology to market, and it's very difficult, so it's kind of hit or miss. And we know the genesis of this particular podcast and the museum was Gatorade, a research project with great success, but that's a tiny minority of what happens to typical research. So first of all, correct me, or affirm that I have the description of your business model correct. And then, how's it going in terms of going to market?

Nina Kraus (28:43): 

So I think there are two areas that we have been focusing on. One is language and literacy. And yes, the idea is to use this biomarker, if you will, as a way to provide additional information about a kid who might be having difficulty in school or is having various problems with language and learning. The question is, is this coming from the fact that his brain is not processing sounds in a typical way? To be able, at any age, to just deliver sounds and use some scalp electrodes to get this piece of information is very valuable. And people talk about diagnosis; I wouldn't say that this would be the only thing that you would look at. Any clinician wants to have an armamentarium of clinical results. You go to your physician, and he's looking at all of your various test results, and hopefully he can put together this constellation of findings and be well-informed. Well, I think being well-informed matters: if you have a kid with a learning problem, or a language delay, if it was my kid, I would want to know, is there a bottleneck? Is there a problem here with sound processing? I would also want to know, is my kid at risk? Right now there are newborn hearing screenings, where every child gets a hearing test to make sure that they can actually detect sounds. I could envision the kind of technology that we've developed sitting side by side with that. You would also be able to see, is my child at risk for struggling to learn language, or struggling to learn to read, way before he actually begins to struggle in school? Wouldn't it be great to just know that this is a child who is at risk? And so there are various things that can be done, especially if you are aware of a potential problem early on.

Richard Miles (30:39): 

Nina, just to clarify, going back to your analogy of the volume knob versus the mixing board: the tests being done now are essentially just measuring the volume knob, right? Can they hear or not? And your test would give the ability to say, specifically, are there things going on at the auditory processing level that bear watching or concern? Is that right?

Nina Kraus (30:58): 

Yeah. I mean, the typical hearing test now is really, can you hear? There's a range of pitches that language consists of, and can you hear very, very quiet sounds? That's your ear's ability to hear. What I am measuring is more the brain's ability to understand what you hear. And so the sounds that we deliver aren't very quiet; they're at conversational level. So we already know that the person can hear, their ears are working fine, they've passed the hearing test from an ear perspective. But we want to know now: if I'm speaking conversationally, and I know that you can hear me, does your brain process these different ingredients properly or not? What are the strengths, and what are the bottlenecks? And we know that there are certain signatures, and this is again one of the things that we have patents on: we know that there is a certain signature that's associated with language delay and literacy problems. And so you would want to look for that particular signature in a child that you were wondering about, in terms of their current or their future language potential.
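
Since Kraus frames the clinical use as looking for a signature across ingredient scores rather than a single number, a screening step might look something like the toy sketch below. The norms, the cutoff, and the flagging rule are invented for illustration; the actual patented signature is not spelled out in this episode.

```python
import numpy as np

# Hypothetical normative data: mean and SD of each FFR ingredient score
# in typically developing children. Values are invented for illustration.
NORMS = {"pitch": (1.00, 0.15), "harmonics": (0.80, 0.12), "timing": (0.90, 0.10)}

def ingredient_profile(scores):
    """Z-score each ingredient against the (assumed) norms: the profile
    of strengths and bottlenecks described above."""
    return {k: (scores[k] - mu) / sd for k, (mu, sd) in NORMS.items()}

def flag_language_risk(profile, cutoff=-1.5):
    """Toy screening rule: flag if harmonics and timing are both weak.
    The cutoff and choice of ingredients are illustrative only."""
    return profile["harmonics"] < cutoff and profile["timing"] < cutoff

child = {"pitch": 0.95, "harmonics": 0.55, "timing": 0.70}
prof = ingredient_profile(child)
print({k: round(v, 2) for k, v in prof.items()})
print("at risk:", flag_language_risk(prof))
```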

Richard Miles (32:02): 

Could you use it to detect mild concussions? For instance, if there is neurological damage and traditional tests aren't indicating one way or another, is this another tool you could use to figure out that something is wrong?

Nina Kraus (32:14): 

Absolutely. Because most concussions, unless you have a cerebral bleed, you're not going to see them on imaging. You need a very sensitive measure, and sound processing in the brain does provide that. It's also noninvasive, and it takes 15 minutes to obtain. And we have found, again and again, and we have papers and patents that describe this, that we've established this effect in youngsters, elementary and high school aged kids. Right now we have a big study looking at Division I athletes, Northwestern athletes; it's an NIH study, a five-year project, that was won on the strength of the original work we did describing what is a distinct neural signature. It doesn't look anything like the language signature; there are other ingredients that are especially sensitive to head injury. And we can see this right now. I know that the whole issue of diagnosing concussion is a tricky one, and historically, people have been looking at vision, they've been looking at balance, but looking at hearing is fairly new. In a couple of our studies we followed our North Side Football League, these are our kids, and we gave them the vision test, the balance test, and the hearing test. And you could see that they each tell us different things; they're not redundant. So you can imagine how wonderful it would be for a clinician, a trainer, a coach, a physician, to be able to look at balance, look at vision, look at hearing, and to have this biological marker that would inform the diagnosis of the injury and also inform return to play, because we know that concussions often occur in the same person shortly after they've had a concussion. And so it might be that, with the current measures we have available, it looks as though the athlete is ready to return to play. But the athletes are very motivated to do whatever they can to get back on the field, so if you had a more sensitive and objective measure, one that doesn't require any kind of an overt response, wouldn't it be great to know, let's just wait another week, his brain isn't quite ready, let's just wait a week or two? I mean, we see that the brain changes very rapidly, usually, as athletes recover from their concussions.

Richard Miles (34:44): 

Nina, I know you have a lab there where you can assess people with this method. Is this something that could be done with a medical device in a doctor's office, or even in a trainer's room?

Nina Kraus (34:54): 

That's what we do. When we went out to LA, we did this testing in instrument closets and wherever we could find a spot. It's very portable; right now it's the size of a laptop.

Richard Miles (35:04): 

Nina, this has been fascinating. And like I said, this could be episode one of a thirty-episode podcast series on just sound; I could listen to this all day. I'll go meta for just one second here: we're actually doing this in the medium of podcasting, right, which has made a huge resurgence, as people like to listen now. I don't know what that says about humans or our society in general, but it is a throwback to the days of the thirties and forties, right? When people consumed a lot of their entertainment from radio shows. And what I like about it is that it forces a little bit of your imagination into play, because it's not laid out for you visually; you're listening to the sound of somebody, or a sound clip of a particular event. Anyway, I thought I'd throw that in there: we're talking about sound on a medium that is only sound.

Nina Kraus (35:44): 

I love that. I love that. And actually I have to say for myself, I do a lot of my reading, listening to audio books. I think we all spend probably too much time looking at screens. And it's just wonderful to kind of give your eyes a rest and listen. Of course I love radio and podcasts and I consume books that way. Sound is awesome. 

Richard Miles (36:05): 

Nina, thank you very much for being on Radio Cade today. I hope to have you back, maybe with an update on Brain Volts or your new research. Thank you very much for joining us.

Nina Kraus (36:13): 

Thank you for having me. Take care, Richard. Bye.

Outro (36:18): 

Radio Cade is produced by the Cade Museum for Creativity and Invention, located in Gainesville, Florida. Richard Miles is the podcast host, and Ellie Thom coordinates inventor interviews. Podcasts are recorded at Heartwood Soundstage and edited and mixed by Bob McPeek. The Radio Cade theme song was produced and performed by Tracy Collins and features violinist Jacob Lawson.

 
