An interview about sound and noise with Tae Hong Park

Dr. Tae Hong Park is a musician, composer, educator, and researcher. His projects include Citygram, a community-driven 3D mapping network, and GetNoisy, an airplane noise tracker. His philosophy is “of the people, by the people,” and he encourages people to use items they already have around the house to participate in these projects.

Citizen participation can help track daily airplane noise, traffic noise, or helicopter noise that often impacts hundreds of people. It gives you the tools to collect data and can help persuade legislators to make changes.


Tae Hong Park holds a DIY tulip mic.

Tae Hong has lived around the world, but he’s now based in New York City, where he teaches at NYU. In this podcast episode, we cover his background, his sound and noise projects, and how an inexpensive “tulip” microphone encased in a rubber thimble and a computer with an internet connection might be all you need to get started with capturing sound.

With Cary Norsworthy and Phill Hermans.

Listen in Apple Podcasts | Castbox | Spotify | iHeart Radio | Stitcher | Player FM  | Overcast

Learn more:

Wikipedia – Musique concrète

Get Noisy! – Learn more about airplane-noise tracking systems

Acoustical Society of America – Tae Hong’s presentation in 2020 about sound-mapping airplane noise

Engineering.com – Noisy Records Airplane Noise Data, Automatically Logs Complaints

Science Direct – Smart Citizen Kit and Station: An open environmental monitoring system for citizen participation and scientific experimentation  (not Tae Hong’s project, but a similar concept)

Soundproofist – interview with John Stewart, campaigner against Heathrow airplane noise / additional runways

Cary (00:07):
This is episode 14 of Soundproofist. And my name is Cary.

Phill (00:11):
This is Phill.

Cary (00:13):
Today we’re going to talk with Tae Hong Park. He’s a musician and a professor at NYU. He’s also launched some projects about noise and sound. These projects include GetNoisy!, which is a citizen science project that measures airplane noise, and also Citygram, which is a soundmapping project. We’ll learn the background of these projects in this episode, and also how he built some inexpensive microphones called “tulip microphones” for noise measuring.

Phill (00:43):
Dr. Tae Hong Park was my undergraduate advisor at Tulane University in the Music Science and Technology program. He’s been a great friend and mentor to me over the years. And I was really excited to catch up with him and learn about his new work with Citygram, GetNoisy, and soundscape compositions.

Tae Hong Park (01:02):
Sure. Yeah, I got to this point in my life, I think, somewhat indirectly. So back in the day, my dream was to become a computer scientist or electrical engineer, and I went that route in college. And I soon found out that I had a passion for music. I started music actually quite late — that was actually in high school — and I’m currently in the music department at NYU. And the initiative basically was “we need a bass player.” I’m like, “okay, four strings, how difficult can that be?” And I was very mistaken regarding “four strings, easier.” Fewer strings, harder, because you have fewer combinations to work with.

Tae Hong Park (01:46):
So that’s how I got into music. I was doing it as a hobby, as is probably the case for a lot of people. But then I got into engineering school, the ECS, and basically found myself getting more and more into music. So during the day I was doing coursework at the university, and then in the evenings becoming like a vampire from NOLA, just improving my bass technique and then starting to compose. It was kind of interesting, I think, in that composition was a very important way for me to get into music, rather than just playing other people’s songs, pieces, and whatnot. And that sort of led to me coming to the U.S. to do graduate studies and really getting into composition, combining music and technologies like my engineering work, and also having a more formal entry into and exploration of music composition.

Tae Hong Park (02:45):
So that led me to Dartmouth and then to the PhD program at Princeton. And ever since, I’ve been sort of exploring both areas, in particular as it pertains to soundscapes and noise pollution and whatnot. I started with — and I’m sure Phillip can attest to this as well, ’cause he’s done compositions in this realm too — this genre called “musique concrète,” which is really based on found sounds and sounds of the environment driving the compositional trajectory and process. So that got into my mindset, and then I was writing soundscape-type compositions. One is actually called 48 13 and 16 20 O, which are the latitude and longitude coordinates of Vienna, where I was actually born. And the reason it’s an “O” is actually that it’s not “East.” Right. So there’s clues as to what that could actually mean.

Tae Hong Park (03:43):
And sometimes people still come up to me and ask, “what does that mean?” And I’m like, “well, that’s homework for you.” So that’s how I got into soundscapes. And more recently, I started the project called Citygram around 2010, 2011. I had a question for myself. And that question came with Google Maps becoming really interesting and useful and — beautiful, actually — an interface to explore spaces. And then Google Earth becoming a thing. These interfaces that you could access through the web — mapping interfaces — were just very intriguing. But then I looked at the possibilities, or this idea, of soundmaps. And I didn’t find anything. And I figured, hmm, I guess people are generally speaking more visually inclined than sonically inclined, audio-inclined, and that didn’t surprise me. But then I also found out why it was actually really difficult.

Tae Hong Park (04:42):
Technically speaking — actually, more accurately, socio-technologically speaking — that idea of a soundmap is pretty tricky. And that’s what I’ve been doing as one of my research agendas: to come up with a system that would enable and make practicable sound mapping for the people. So I see it as a community-type project. It’s very much like YouTube, actually. Not in the business sense, but in that the content is actually provided by the people. And that’s the sort of system I’ve been designing and building and coding since 2010 or so.

Cary (05:24):
So these soundmaps, like what was your primary goal? Was it just to — you could have been doing it for noise modeling for architectural reasons and city planning? Or it could have been just like Dr. Radicchi…”find a quiet place to go in an urban environment.” What was like your primary goal with this?

Tae Hong Park (05:43):
But the goal was actually more of a curiosity, thinking about “is this possible?” And “why” is the question, right? Why would you build anything like that? And my feeling, or my observation, was that it didn’t exist. But we as human beings have eyes, if we’re lucky. And we have ears, we have taste buds, so we can see things, we can hear things, we can smell things, and we can touch things, right? So the seeing actually has been taken care of pretty well. If you go to these map interfaces, you can look at satellite pictures, you can look at symbolic representations, you can go and do street views, and all these things. What is missing, which is a big gap, is the things that you cannot see, right? These non-ocular energies that occupy spaces — and one that’s very important to me is sound. So that was the impetus and really the drive to make it possible.

Tae Hong Park (06:42):
So the dream would be something like a map that you would be able to access via your favorite browser or whatnot, where you could not only see things, but also see and hear spatial sound. And also, later on, other things like humidity, smell — and that’s why this project is called Citygram. That’s for a very specific reason: it’s not just about sound. But sound is iteration one, because I’m not too bad at dealing with sound, and that’s sort of my passion right now. And that’s sort of a way to push this forward. But yeah, the goal was to have another layer on top of maps that would represent sound. And then people could use it in ways that would hopefully meet their needs and then spark other ideas.

Cary (07:32):
So ideally you’d like to maybe build a virtual map that involves all the senses, but you started with sound.

Tae Hong Park (07:40):
Right. Exactly. So these non-ocular energies — and I think it’s very interesting, because as human beings, we walk around places and spaces and we sort of get the vibe, right? The vibe of spaces comes from all of these dimensions of energies, or energy dimensions, or energy types that are present in a space. But without all those different dimensions, it’s more difficult to determine what is going on. And that’s also where the AI comes into play. Machine learning, and being able to quickly and automatically sense what that space might be — as opposed to what it is, actually. So that’s the closest to that. And that was a natural fit with what I did for my PhD thesis, which was basically machine learning of musical instruments. So you play a violin, it’s like, “okay, it’s a violin.” You play a trombone and “oh, it’s a trombone.” That type of idea. So that naturally evolved to where I am.

Cary (08:39):
I talked to somebody who works for a European company that does mapping and noise modeling. And my understanding is that any city of over 100,000 people in Europe needs to actually do a sound map every five years. And update it. And we don’t have any such thing here in the United States, as far as I know. So this is pretty revolutionary, to be doing this in a city like New York that has so many sounds and probably needs this.

Tae Hong Park (09:08):
Sure. I think that’s a very good point. And I think a few years ago the DOT published a model, which is basically a soundmap that represents, I think, the entire United States of America in terms of basically a heat map. But as you point out, the main difference between capturing visual things versus sonic things is that sound [clap!] is very transient, right? So it’s actually not possible to even do it, like, every couple of years. It is possible, but it’s not the right way to do it, because it changes. The sampling rate is 44,100 times per second if you want to get good quality audio — that’s the standard that’s part of CDs, a standard that Philips and Sony created. So that’s the biggest problem. For pictures and traditional maps, the buildings thankfully don’t collapse overnight. Sometimes they do, but in most cases they stay, and the sampling rate of pictures in Google Street View and all these maps doesn’t need to be that high. And that’s also why I think the modeling approach is probably not the best, because, for another reason, the results are only as good as the models are. So our approach is obviously looking at the models, but also, very importantly, looking at the actual data, so that they can be combined to inform us as to what the soundscape might actually be like.

Cary (10:40):
Yeah. Well, if you think about it, cities — not just the sound, but everything that goes on in a city — are transient. So just like with Google Maps, you may have the Google Maps camera come through right when there’s a construction project going on. And a month later, that project is over. But for the next couple of years, if you look up that address on Google Maps, you’re going to see this huge construction project. And it’s the same with sound modeling — you would hear it. Or you model something and it’s a very quiet neighborhood, and then this huge three-year project starts up and it’s not like that at all. And if you’re basing “where do I want to live next? Where do I want to visit? Or is this a good place for me to start my quiet little acupuncture business?” on that map, it’s not necessarily going to be that way, or stay static, by the time you get there.

Tae Hong Park (11:28):
Yeah. That’s a really good point. And in some ways it’s even more serious than what people in general can and are imagining. For example, one of our main focuses right now is to look at airplane noise. So, aircraft noise — I’ll give you an example as to what the data shows. Around 2005 or so — this was around Chicago’s O’Hare airport (ORD), which is one of the biggest airports in the US, along with Atlanta, JFK, and all these places — there were approximately 5,000 complaints per year, which is a lot, right? If you think about it, the communities that live around the airports are unhappy because some of these airplanes make a lot of noise, especially in the evening, right? In the evening the ambient background noise levels come down, and even if you have a rather low-level acoustic event, because of the signal-to-noise ratio it actually feels very loud. But the point here is that around 2005 or so, we had about 5,000 complaints per year. Fast forward to 2015, and it increased to more than 5 million.

Cary (12:42):
Wow.

Tae Hong Park (12:44):
5 million. I was like, this cannot be right. Right? It’s like somebody didn’t quite capture the …

Cary (12:54):
A zero wrong on this. Yeah.

Tae Hong Park (12:57):
“Oh, this is like the Y2K era.” That’s what I thought. So one of the things I always try to do, whenever possible, is verify and experience and check. So I actually flew to Chicago and met some of the community activists who are suffering from this. And I listened to them, and I listened carefully to the airplanes that would fly over homes. And this is just a regular neighborhood, a normal, traditional American neighborhood. Just middle class, and maybe even upper middle class. And it doesn’t even matter, ’cause noise doesn’t discriminate at all, because it’s in the air. And the severity just blew my mind — wow, this just cannot be… Because it’s every minute, basically. More or less. You get another airplane, another airplane. And then until basically midnight. And they have different types of regulations and whatnot, but it was eye-opening.

Tae Hong Park (13:54):
Or shall we say “ear-opening,” in our case? And as I said, one doesn’t realize it because not that many people live around airports. But there is a huge population that does. And there’s the gravitational pull of megacities like New York and Chicago — since 2015, for the first time in human history, we’ve had this shift where more than 50% of the global population now lives in urban areas. And noise is one of the artifacts of that shift, expressed through sound. So our approach is to ask: what is the lowest-hanging fruit where sound maps can be very useful and contribute to community and society? And that’s what we’re focusing on right now.

Cary (14:52):
I would imagine that the amount of air traffic is next to zero right now over New York, though. I mean, maybe not zero, because obviously there are still flights going in and out, but not the volume that there was before the pandemic. Have you been measuring that currently?

Tae Hong Park (15:09):
Yes, as a matter of fact. So we’ve been doing this for the past two, three years. We’ve been capturing sound characteristics around airports and for communities in New York and also Chicago. A little bit over in Canada as well. And yeah, we have data. So we’re actually quite — I wouldn’t say excited; what’s the right word — intrigued by this opportunity that has presented itself, because we can now sort of compare before, during, and after. And see — actually hear, in this case, or visually see the sound — what’s going on. So we are working on having those data visualizations available. And as you say, the traffic has gone down significantly, but our system is still picking out and can actually quantify which sounds are appearing. And in our case, we’re only capturing airplane sounds and ignoring everything else. So basically the AI system listens to the sound and determines whether it’s an aircraft or not. We’re also very, very careful and sensitive to privacy issues. So we don’t want to do this wide-area network of listening to everything. Because that can be a little — what’s the right word? Creepy.

Tae Hong Park (16:31):
Yes. So I think it’s important, and we’re not recording anything. All this data is basically “is it an airplane?” — like, let’s say, is it 0% airplane, a hundred percent airplane, anything in between. So that type of data cannot really be inverted into anything that exposes people’s private information. That’s very important to us.

Cary (16:52):
It’s like a specific frequency, then? You say, okay, this frequency range is probably an airplane versus a motorcycle on the street.

Tae Hong Park (17:00):
Somewhat related to that. One of the great developments, or significant developments, in machine learning has been the comeback of AI in the context of deep learning. One of the great things about deep learning is that you don’t necessarily have to do a lot of engineering to determine: is this frequency important, or that frequency? Is this dB level important? What characteristics are the most important in order to identify an airplane? What deep learning allows is that a lot of that can be extracted, or learned, from the data itself. So it’s a pretty interesting time for us to be doing this type of work, where, yes, certain frequency characteristics and spectral characteristics all contribute to the system being able to identify or classify whether this is an aircraft sound or a non-aircraft sound.

Phill (17:58):
So did you have to manually classify recordings of airplanes first to train this algorithm to then identify airplanes?

Tae Hong Park (18:06):
Yes — this is, as you say, labeled data. Meaning, let’s say we have… it’s like teaching a kid, right? So, you teach a kid with a book. Then you have all these audio examples, and humans label these audio examples. “This is an airplane.” “This is not an airplane.” “This is an airplane.” It’s basically called “training” — training the algorithm to learn from the data itself. So a lot of it has been labeled and annotated, and then those annotations have been used to train the algorithm. And some of it is semiautomatic, where the system can do semiautomatic labeling itself, and we’re writing or developing the tools to make labeling much, much faster. That’s where a lot of our time is actually spent: making the training process much more efficient. If you have a lot of data, it allows you to get the performance up and make the system more robust, rather than being more specific about certain things.
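
(A quick aside for listeners who code: here’s a minimal sketch of what that kind of training step can look like. It uses TensorFlow.js, in keeping with the project’s JavaScript/browser focus, but the network shape, the 40-value feature vectors, and the random stand-in data are all illustrative assumptions — this is not Tae Hong’s actual pipeline.)

```typescript
import * as tf from "@tensorflow/tfjs";

// Stand-in data: 200 labeled clips, each reduced to a 40-value feature
// vector (e.g. band energies). Real training would load human-labeled
// "airplane" / "not airplane" examples instead of random numbers.
const features = tf.randomNormal([200, 40]);
const labels = tf.randomUniform([200, 1], 0, 2, "int32").toFloat(); // 1 = airplane

// A small binary classifier: features in, "how airplane is this?" out.
const model = tf.sequential();
model.add(tf.layers.dense({ units: 32, activation: "relu", inputShape: [40] }));
model.add(tf.layers.dense({ units: 1, activation: "sigmoid" }));
model.compile({ optimizer: "adam", loss: "binaryCrossentropy", metrics: ["accuracy"] });

// "Training" in the sense described above: the algorithm learns the
// airplane/not-airplane pattern from the annotated examples themselves.
// (Top-level await: run as an ES module.)
await model.fit(features, labels, { epochs: 20, validationSplit: 0.2 });
```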

Phill (19:04):
So I think the other point that I’d like to highlight is when you talked about the ethics and privacy concerns of this. From what I understand, you were saying that we have this tagged data — this data labeled by humans — so the computer can learn the pattern: “airplane, not airplane.” A classic machine-learning topic. Then once you have new data coming in from users that have their tulip mics, or however the data is being submitted to your platform, the algorithm can say “airplane” or “not an airplane,” with a percentage of how much airplane this is. So from what I understood, the data that’s actually stored is that computed “airplane” value, not the actual audio recording of the event itself?

Tae Hong Park (19:52):
That is one hundred percent correct. Nowadays, I think the catchword, or the way that this is described sometimes, is “edge computing.” So you have cloud computing, then you have the personal computers that you use, and then you have edge computing. And edge computing means that this computing device is at the edge, or at the source. So wherever our system is installed — and we can talk about that a little bit more — it’s at the source. And the recording, the capturing of the data, happens at the source, and it never leaves that source. So let’s say we have a Raspberry Pi, or you have your laptop, whatever. The audio recording never leaves that computer or that device. I think other people are doing it differently, where the audio gets sent to the cloud. And I’m not a big fan of that.

Tae Hong Park (20:40):
So I don’t want the audio to leave your device, because that’s the best way to bring trust into the system. Right? You have your own device, and then you decide what you want to do. It’s up to you. From that edge device, from that personal computer, whatever that may be that you have at your home, only the high-level information gets sent, which is “is it zero percent or a hundred percent, or is it anywhere in between?” And what are the dB levels, right? So it’s like, it’s 65 dB, and that’s an airplane with a 70% chance. So we try to minimize that potential exposure of any type of private information. Our system does differentiate between an airplane sound and a voice, right? But we wouldn’t say “it’s a voice,” because in this case, it’s not relevant. We only care about whether it’s an airplane or not, because we’re looking at airplanes. And airplanes are machines — I think airplanes don’t have any privacy concerns.
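
(To make the edge-computing idea concrete: only derived values ever leave the device. Here’s a rough sketch of what such a report might look like — the field names and the endpoint URL are hypothetical, not GetNoisy’s actual API.)

```typescript
// Only high-level, non-invertible data leaves the edge device.
interface NoiseEvent {
  timestamp: string;           // ISO 8601 time of the measurement
  dbLevel: number;             // e.g. 65 (dB)
  aircraftProbability: number; // classifier output, 0.0 to 1.0
}

async function reportEvent(event: NoiseEvent): Promise<void> {
  // Placeholder endpoint; the raw audio itself is never transmitted.
  await fetch("https://example.org/api/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}

// Example: "65 dB, and that's an airplane with a 70% chance."
reportEvent({
  timestamp: new Date().toISOString(),
  dbLevel: 65,
  aircraftProbability: 0.7,
});
```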

Cary (21:38):
Speaking of privacy concerns, I was just thinking about how this could be combined with, like, a sleep study, or a blood pressure study of people who live in the surrounding areas and are often impacted by airplane noise. I can’t think of a device you could collect this on that isn’t potentially considered an issue — you’ve got Fitbit or Apple Watch or whatever. But you could actually have people filling out a questionnaire and also tracking their blood pressure, and then map that to the time of day when you do have these events, even though there are fewer right now, and see if the reduction of air traffic has an impact on human health.

Tae Hong Park (22:22):
Absolutely. And one of our collaborators — he’s based in Germany — is doing just that: heartbeats and sleeping patterns as they relate specifically to flight patterns. So he’s looking at the heart rates, the breathing rates, how many times you wake up, and things like that, as correlates with airplane flyovers. And one thing that he doesn’t have is the ability to track airplane noise, airplane sounds. And that’s where we’re trying to help him. The idea, as I said, is to have a platform that people can then use to do their research, or their studies, or to inform themselves. Right? And the idea is that this is one of those energy dimensions that’s missing. And that’s why it is this idea that it is by the people, for the people, of the people, right? So that you can do it.

Tae Hong Park (23:16):
And that opens up a can of worms, which is: how do you make that practical? How is that actually feasible to do? And then you ask these questions. What does everyone have? And we’re using one right now — oh, we all use web browsers. Okay. So let’s have it run on a web browser somehow. Then the next question is, okay, we’ve got to install a sensor somewhere. And do you want to have, like, a 50-ton microphone system? No, we can’t do that. So then the question is, what does every home have, most likely? And people are like, I think we have windows. And windows are this sort of view of the world — a visual window to the world right outside. So I’m like, okay, let’s turn the windows into sensors. And that’s what we created and invented. It’s basically a sticker mic. It actually looks like a sticker.

Tae Hong Park (24:05):
You just stick it onto the window. And it works in the browser. And then the last question is, “well, how do you store that in a safe place?” And that’s… do you have internet at home? And if you have those three things — a window, a browser, and internet — that’s where we started. And that’s why it took a long time, because one doesn’t just make it run on a web browser; web browsers are usually used for other things. So that took a little while, but we’re there now. And we’ve been sort of fine-tuning that system for the past four or five years. And it’s starting to hum, as they say. In a good way.

Phill (24:41):
And so these mics, you call them “sticker mics.” I’ve heard you call them “tulip mics.” These are very simple. DIY. Someone could make these at home with a little bit of soldering knowledge, I suppose?

Tae Hong Park (24:51):
So there’s sort of two efforts… I’m trying to find the mic… One is the tulip mic, which I just put together in maybe five minutes. Because I had this idea where, you know, how do you make it very easy to make, right? How can I have it so that people can get their own, right? And make their own. And this is what the tulip mic looks like. Looks like a tulip. Right?

Cary (25:15):
Wow. It’s so small.

Tae Hong Park (25:17):
What this thing is, actually — I don’t even know what the name is, but I think if you go to, like, the post office and whatnot, they have these things. They put them on their thumbs.

Phill (25:25):
Oh, right. So for when you’re turning pages, you don’t get a paper cut. Office supply.

Tae Hong Park (25:29):
Yeah. So these are basically waterproof, very robust and resilient. And basically I’m like, okay, I think this can work. And I bought one over at the stationery store. And then I’m like, okay, let’s just get one of these inexpensive mics from whatever store you have, make a hole, make it waterproof — with waterproof glue — and boom. It sort of sits outside your window. And then the question is, do you have somewhere to stick this into? And if the answer is yes, then, well, you’re all set. And if this breaks, right, it’s like, I don’t know, maybe it’s like five bucks to make, and anyone can build this. ’Cause I can build it in five minutes. And as I said, it’s just cutting the thing off, putting the mic inside, gluing it. And then you’re done. This is the tulip mic, which anyone can build. And then we also build these custom mics.

Phill (26:25):
The listeners can’t see. So the little rubber thingie is like a rubber thimble or thimblette we use for turning pages.

Tae Hong Park (26:33):
Your audience can’t see these, but here’s the sticker mic. It’s actually the size of a credit card. And that’s also by design — making it very familiar and friendly looking. And it’s very thin, as you can see — well, not quite as thin as a credit card. This is one of the early prototypes, and it’s just Velcroed, sticking to the window with industrial-grade Velcro — it’s like, what, 12 pounds of strength. This will never… ’cause this is so light. This one is actually using a MEMS mic, and it’s a custom mic. But the cost for that is actually very low, too, once you’re able to mass-produce it and whatnot. But I think the question has always been, how do you make it possible for people to do it if they want to do it — right? And why is that important? Because it is not something that I can do, or want to do, alone anyway.

Tae Hong Park (27:25):
It is for the community. And why is that important? Because you want to scale, right? This is different from a camera, because the sound that’s happening here, and the sound that’s happening here… if the distance… there’s an inverse square law with sound. You go off by a few hundred meters and you actually don’t hear that sound. So you’ve got to have a lot of these sensing stations in order to create a sound map. And so that’s why privacy is important, and making a practical solution that everyone can sort of embrace, and then having that be a sort of driving force behind it. Otherwise — technically speaking, just sticking a mic outside is easy to do, but then you have to think about how to make it waterproof, how to make it this and that. And these are basically waterproof and people-proof. That’s the scary part, obviously. And I’m not saying this is people-proof, but we are very much thinking of that at every step of the way. And web browsers are probably one of the safest things that you can use, because a lot of people use them, and the more people use them, the more you test and find the bugs, and security bugs, and things like that. There’s no perfect system.
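
(For reference, the inverse square law Tae Hong mentions means that a point source in the open drops about 6 dB for every doubling of distance — a quick sketch of the math, with illustrative numbers:)

```typescript
// Free-field point source: level falls 20 * log10(d2/d1) dB
// between distance d1 and distance d2.
const levelAt = (refDb: number, refDist: number, dist: number): number =>
  refDb - 20 * Math.log10(dist / refDist);

// A flyover measured at 80 dB from 100 m is only about 68 dB at 400 m,
// which is why a dense network of cheap sensors beats one expensive one.
console.log(levelAt(80, 100, 400).toFixed(1)); // "68.0"
```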

Phill (28:36):
I would also point out that the web browser is a great example of what you were saying: community-driven, open source. Specifically with the Web Audio initiative that we’ve seen in the last few years — this is open-source, community-driven software that has allowed… I mean, you started doing this before. This was much more difficult to do in 2010 than it is now, specifically in the browser.

Tae Hong Park (28:57):
It was a little tricky in the beginning. The browser wasn’t fast enough. But one of the primary reasons, as I said, was: how do you port it, or make it possible that it works in the browser? That was a big question. It wasn’t actually possible, because it was way too slow. But the other reason was — and you know this too, Phillip — when you code anything, if it always changes, or the version gets updated, or this person has that version, this person has this version, and this person has that OS or a different version, it just doesn’t work. And the amount of time you spend just getting it to work outweighs the actual work that goes into it, which is the algorithms and the system design and all that other stuff. And I was just frustrated. It’s almost like, is it the chicken or the egg?

Tae Hong Park (29:48):
And it’s like, neither. Because we spend so much time on the stuff that is, first of all, not interesting, and secondly, doesn’t really serve anything except making it work on all these different platforms. And as we know, the web browser basically runs on every device that humans use — smartphone, tablet, Raspberry Pi, desktop computers — and boom. So I just write one code base in JavaScript with HTML. Actually, very little HTML; basically most of it is just JavaScript-based. And that allows me to focus my time on the actual development, because I don’t have a team of 50,000 people. It’s only 49,000. So divided by…
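
(For a sense of how little code browser-based capture needs today, here’s a minimal Web Audio sketch of a level meter that, like the system Tae Hong describes, keeps all audio on the device. The window size and update rate are arbitrary choices, and dBFS is a relative digital level, not a calibrated SPL reading.)

```typescript
async function startLevelMeter(): Promise<void> {
  // Ask the browser for the microphone; the audio stays in this page.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048; // ~46 ms window at 44,100 Hz
  ctx.createMediaStreamSource(stream).connect(analyser);

  const samples = new Float32Array(analyser.fftSize);
  setInterval(() => {
    analyser.getFloatTimeDomainData(samples);
    // RMS of the window, converted to dB full scale.
    const rms = Math.sqrt(samples.reduce((sum, x) => sum + x * x, 0) / samples.length);
    const dbfs = 20 * Math.log10(Math.max(rms, 1e-8));
    console.log(`level: ${dbfs.toFixed(1)} dBFS`);
  }, 250);
}
```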

Cary (30:42):
So how much tech support do you find that you have to do for some of the citizen scientists? Not everybody is a coder, I’m sure. So do you get calls saying, “Hey, this thing’s not working — what do I do? How do I reboot it?”

Tae Hong Park (30:54):
Yeah. So all those things obviously are an issue. And we’re not at that stage where we have everything scaled to that degree, but we want to make it as easily usable as possible. So I think that YouTube model is really what I’m going for — again, not the business model, but YouTube as what people do, right? They upload their media, their videos, sometimes audio or whatever, and that’s how it gets created. And if people know how to use that type of interface, then I’m hoping that they’ll be able to use a similar interface here, where it’s just automatic. You just turn the thing on and it does it automatically in the background. And you can actually turn it off any time you want to. So those questions from the people in the community will be there at some point, I think, because there’s a good model already existing.

Tae Hong Park (31:42):
So, I’m hopefully… carefully optimistic that we can come up with a system that is not so complicated, ’cause that would really put a big obstacle, a hindrance, on moving forward with anything like this. And it’s not like this is the solution. Somebody else, way smarter than me, will come along and say, “Hey, here’s actually an easier way to do it.” And if that happens, even better. Because basically we’re just laying a small piece of the puzzle and driving that forward. As we all know, society is in a very visually oriented time.

Cary (32:17):
Seeing is believing.

Tae Hong Park (32:18):
Seeing is believing.

Cary (32:20):
Deep fakes are going to throw that into question.

Tae Hong Park (32:23):
I know. That’s a totally different topic, but it’s pretty amazing, though — the deep fake stuff.

Cary (32:30):
You know, I was thinking about how a couple of years ago there was a story about a similar type of device — maybe similar — that was used in Barcelona because of neighbor complaints. They lived over a plaza where tourists and other people would hang out all night long, playing music and talking and partying, and it was impacting the quality of life. And so they had something similar, where I believe citizens would put these monitors on their windows. And I think they were kind of a hack type, you know, small homemade devices, just like what you showed us. And the results showed what the noise levels were, and they created some guidelines around when people need to stop playing music, and that sort of thing. And so I see there’s a lot of potential for this if it could move out to other cities in the U.S., because New York’s not the only city that has some of these challenges, or the only city that has multiple airports with airplanes flying over residential areas.

Tae Hong Park (33:36):
Yeah. That’s a really good point. And like that project in Barcelona, this is basically a grassroots project, right? It sort of started because of this curiosity and then became a thing. The vision is that it becomes a tool for the people. And the way I sort of see it, this can be thought of as a new thermometer. Back in the day, when Fahrenheit came up with the mercury-based temperature-measuring device, apparently only doctors — only the experts — had that thermometer. It wasn’t a household thing. So what I hope will happen is that sensing your environment becomes the new thermometer, where you can get a sense of not just the temperature in your home, but also what the air is like, right? What is the sound pollution like? What are all these different elements like that affect you and your family’s health? Which is very important, obviously.

Tae Hong Park (34:34):
So I see it as a sort of new thermometer that everyone can have. And a lot of people nowadays have these resources just sitting in their attic. What I mean by that is we have all of these old computers that are really good computers, as we all know, and that are just being replaced every two or three years in many cases. But those are perfectly fine devices we can use to do something like this. Does it run a web browser? Yes — check. Does it have an audio input? Yes — check. So it’s not like you’re putting the burden on the people, where they have to have these crazy resources to make it work. And it’s not “we can do it for you” — you can do this by yourselves. And this sort of repurposing, recycling these old machines that otherwise cause environmental pollution, is one way to… and this is not a new idea. Again, that’s another reason we use the web browser as a computing paradigm, because it’s in itself basically an OS, an operating system.

Cary (35:36):
Is there a resource online where you could kind of go through a checklist of what you need to get involved and what are the steps to get involved and how do you set this up?

Tae Hong Park (35:46):
Not quite yet. And I think Phillip is working on that.

Phill (35:52):
Ha!

Tae Hong Park (35:53):
But yes — what we’ve done in the past is we’ve had these workshops where students — anyone — can come in, and we do these presentations on how you can set certain things up. It’s been a work in progress, because the technology also evolves. I think that’s been one of the things that I learned: it’s not necessarily great to have it out too early, and it’s certainly not great to have it out too late. The thing is, what is that timing aspect that will make it meaningful from the community’s perspective? Right. So I think that’s the next step, once it’s ready. But nothing’s ever going to be perfect, so I don’t want to get into that trap either, where you just keep on hacking until you’re 600 years old. I mean, we’re basically around the corner from where this can become a thing that people can use.

Phill (36:46):
So your project is really focused on airplane noise, which is really important. And I guess it didn’t start that way in 2010 or ’11. But also since that time, we’ve seen a lot of sound maps emerge. Places like Montreal have them; I think the UK has a lot; British Columbia. So have you — these are your colleagues that you talk to at conferences, I’m sure — have there been any thoughts of collaboration between these platforms, or, like, a larger global… At some level there’s competition between projects, and you don’t want to abandon your baby. But the question is, is there any potential for collaboration there, or discussions around that?

Tae Hong Park (37:21):
That’s a really good question. And I think my sort of observation is that, although there are more of these types of projects that are sort of bubbling to the surface, it has taken everyone quite a bit of time to do it.

Tae Hong Park (37:35):
I think each one is a little bit different. So it’s kind of like when Apple came out with the first version of their smartphone, which was THE smartphone. And it’s not like the smartphone didn’t exist, right? Samsung had been doing it, LG had been doing it — all these other companies had been doing it. And they’re all basically the same, if you think about it. But something that Steve Jobs did with the launch of the first iPhone was special. So he got something right. And he was obviously reading what must be included and what should not be included — which is actually more important, I think. And I think Apple has done that really well: what not to include, so that the user experience becomes a thing. I think that’s what’s happening here, in a similar vein. There’s a lot more research happening around the world, but it’s still a very small community, and it’s sort of bubbling to the surface.

Tae Hong Park (38:24):
And I think at some point they’ll all sort of know more. And then I think something will emerge where a lot of people will say, “Oh, this is the model to follow.” I think we’re not quite there, but I think we’re pretty close to it. And this interview actually is one of those artifacts of that happening. Because when we were together at Tulane, I wasn’t thinking about this, not directly. I was thinking about soundscape compositions and using that, but it wasn’t this mindset. So I think that’s the sort of long-term trajectory that you’re seeing, and that’s what I’ve seen. At some point, this will sort of coalesce into a standard way of doing things.

Phill (39:06):
I would also say, anecdotally, that on a lot of the mailing lists I subscribe to, during the shelter-in-place there have been a lot of people — a lot of amateurs — becoming interested in soundscape recording, and going out and capturing the now-profound silence that they’re experiencing in their day-to-day lives.

Phill (39:23):
So I think there is generally more awareness of soundscape as a phenomenon to non-niche people as it were.

Tae Hong Park (39:30):
Yeah. And it’s kind of funny how this works — and this is just a general commentary on this phenomenon, right? Once you don’t have it, then you realize what it could be, in both positive and negative ways. Right? If you had AC — I lived most of my life actually in Europe, and they don’t have AC there. And then you come to the U.S. and there’s AC, which is pretty awesome. And then summer, especially when you’re living in New Orleans, where the temperature goes up to the nineties in the evenings, right? Upper 90s, actually, sometimes. When that sort of carpet is taken from you, then you realize, “Whoa, what’s going on here?” Right? And then that signal-to-noise ratio.

Tae Hong Park (40:12):
I think it always comes down to — in this case, sound — and you hear the difference, and it’s a double-edged sword, right? I mean, we’re living in a literally unusual period, the pandemic, where this is possible, but the complexities as to what comes after this will be really, really interesting. And what’s really interesting, I think, is that this is a very unusual time where we can look at something like an ideal, which is at least a guide as to how to make environments more livable. Right? Obviously it’s not possible to not go out and stay at home all day long — I wouldn’t be a big supporter of that in any way or fashion. But at least that opportunity, that setting, has never been experienced since the industrial revolution. It’s been just nonstop go, go, go, go, with machines becoming bigger and more noisy all over the world. So this is like, wow, interesting that this is happening. And that’s, I think, why more people are realizing, being more aware of, silence — and silence can be deafening, in this case.

Cary (41:24):
Yeah. It’s a positive and an eerie situation at times.

Tae Hong Park (41:30):
Very eerie.

Phill (41:31):
So you’ve mentioned — I know personally that you’ve lived all over the world. And you mentioned being born in Vienna, spending time in Europe… I know you’ve lived other places as well; we’ve mentioned a few of them… in any case. So I guess this is kind of a more open-ended question, but do you have any profound experiences of soundscape from one urban setting to the next? Or any childhood impressions of the soundscape of Vienna, or other places you may have lived, that are cogent to you?

Tae Hong Park (42:00):
So again, a very good question. But yeah, it’s putting on the composer’s hat. As I mentioned earlier, these types of soundscape pieces — one of the pieces that I wrote was about Vienna. And the reason for writing that piece was not just, “Oh, let’s just write a piece about Vienna.” It was this hypothesis and idea and concept that I think there are signature sounds in every place on planet Earth. Some will be very divergent, some will be very unique, some will be more general, and the list goes on. And I remembered sounds from my childhood when I was living there, and I had this hypothesis, and it went something like this: I bet that when I go back, I will recognize the vibe of the place. And one of the dimensions of the vibe is not just the visuals. The visuals are pretty much the same — meaning, for example, as quantified by the fact that I was able to find my old home.

Tae Hong Park (42:57):
I mean, that’s from primary school, kindergarten days. I was able to find that home, and the owner was still the same. And he actually recognized me, which was like — hypothesis one, check. Hypothesis two was, well, sonically also — I was thinking that same sort of idea would be possible, or would be something I would experience. And yes, that was also the case. So I captured — I think it was like two, three months — I went around all over the city, just capturing with my DAT — not sure that people remember DAT machines, but it’s cheap, and it’s of course digital — and was just recording the soundscapes of Vienna. And I then went back to the studio and made a piece that’s sort of like a traversal, just going through the city as I experienced the soundscape as it relates to my past and the present. And maybe in 10,000 years they’ll be listening to the piece and be like, “Oh yeah, that’s Vienna!” — that type of idea. And it’s framed around a compositional sort of format, right? It’s a musical piece. It’s not a scientific project, but it has these different dimensions: one is technology, one is cultural, one is social, one is acoustic, and the other is urban design. And then history and all that stuff, presented in this multi-modal medium called music.

Phill (44:19):
And so if we fast-forward from that — you did that piece a while ago, and now I’ve seen some of your pieces where you’re actually using some of the data that you gather from your projects, like Citygram, to drive some audio and visual aspects of the composition. Is that correct?

Tae Hong Park (44:38):
That is definitely correct. Back in the day, just running around with my recorder and capturing things was great, and I still enjoy doing that. But in this day and age, you can actually do something where you don’t have to do that type of engagement — it no longer depends on where you are, right? It’s possible to get these high-level audio data features and have them stream to a computer, with which we can actually drive certain elements in the musical sense. And that could be the video projections, or the data could drive the audio. And I sort of jokingly say that this is a way to jam with planet Earth — sonically. So there was one performance of a piece that I wrote in 2016. And this was that idea, where it was an ensemble.

Tae Hong Park (45:27):
I was on the bass — I’m a bassist, as you know, so I had to put in a bass part — with drum kit, string quartet, piano, and vibraphone. And that was actually done at Carnegie Hall, which was pretty awesome. That’s why I had to put in a bass part: I wanted to at least have a chance to play at Carnegie Hall, which is sort of a lifelong dream for all kinds of musicians. And that was the setting, where we had the video projection done in real time with the sensor data coming into the hall, and then us jamming with mostly pre-composed or notated music. So the sensors sort of created a platform that would change along with the piece. So, data-driven musical exploration, I suppose one could say. And then installation works, where you have a similar idea, but it’s just part of the space and runs 24/7 as it drives the audiovisual elements of that space.

Tae Hong Park (46:22):
And that’s part of this philosophy that I have, and I call it “soundmapping our world in 3D.” One is data-driven, the second one is community-driven, the third one is art-driven. So we talked a lot about the data stuff, which is the technology. The community — we talked a lot about that, the “by the people” type of idea. And the third is, how do you bring awareness? Right? One of the best ways of bringing awareness is through the language of the universe, which is music. “Music is the language of humankind” — that type of naive idea that I have.

Phill (46:57):
Well, so this is the argument I really want to have with Tae Hong — we might need to do it another time. But when people claim that music is the universal language, I posit that it is neither a language, nor is it universal, at least. And that’s just based on listening to linguists and ethnomusicologists. But that really gets us into the weeds, I think, and it also belies the point that you’re making.

Tae Hong Park (47:23):
Hold on. I’m just kidding. No, no. I mean, it’s just a metaphor.

Tae Hong Park (47:27):
It’s a way to sort of think of it, where you have a way to bring awareness, look at the data and the technology, but also bring in the community. So it’s a great saying, but obviously, what does “universal” mean — that word itself? I think that’s problematic. So I agree. Totally.

Phill (47:48):
So the only other question that I have prepared — and I’m not sure whether you actually want to talk about this or not — is your work with GetNoisy. This seems like a startup doing more acoustic monitoring of airplane noise.

Tae Hong Park (48:00):
Right, right. Yeah. I can talk about that as well. So one of the things — and this relates to the second D, actually, the community-driven part — is that I’m an academic. So I live in the lab 24/7, basically. And Phill can probably attest to that lifestyle: just sitting in my office doing my thing. I was thinking that it would be a shame if something like this would just collect dust in my lab, and I was thinking of the most practical way to get it out of my office, essentially. There’s so much great research going on, so many great projects that happen in academia, that never really get exposed or shared with the public. And one way to do it is outside of academia. We actually started a Kickstarter campaign to see if there’s interest. And people were like, yup — check, there’s interest. Then GetNoisy was formed. Basically, we have a CEO, and I’m the CTO, which is what I do. And we’re pushing that forward so we can bring it to the people more quickly and not have it just rot in the lab — which would be the case, I think, more often than not. So that’s one of the reasons we created the startup, and we’re pushing that forward as we speak.

Phill (49:22):
And so the product is basically an encapsulation of the whole system, similar to Citygram? It even includes the microphone, and also the edge processing, I assume, or similar, right?

Tae Hong Park (49:33):
It’s basically a Raspberry Pi. It looks like… like this…

Phill (49:38):
So, a mini computer that’s got all this stuff preloaded onto it, with all the devices. So that way, unlike the DIY approach of Citygram, where you had to, like, build it yourself, this is kind of more just consumer — like a one-stop shop.

Tae Hong Park (49:53):
It’s sort of this concept — I mean, there are many pathways, but generally, with the DIY pathway, it’s difficult to determine whether the measurement that you’re getting is accurate, whatever “accurate” means. Right? So there’s always going to be… that’s great; that’s one way of doing it. And in some cases it doesn’t matter if it’s that accurate. So when we’re splitting hairs with, like, “Oh, it’s one dB higher or lower, or two dB higher or lower,” I don’t think that’s a big issue. I make this case oftentimes: when I see Ferraris in New York City, I’m like, we’re going at like two miles an hour, and you have a Ferrari — I’m not quite sure that, in terms of speed, it makes sense to have a Ferrari in New York City. It’s probably better to just take the bus.

Tae Hong Park (50:37):
I use the subway all the time. So it’s in that same vein. For some cases, you don’t need that fidelity in terms of the accuracy. In other cases, where you want to make more accurate measurements, you would need to have all those things in place. And when you buy them all separately, it costs a lot of money, right? So that’s again another burden on the community. For example, the airplane-type sensors that get installed — those cost upwards of $20,000, almost $30,000 per unit. And those are great machines — that’s the Ferrari of sensors, right? But the negative aspect of that is, well, it’s difficult to get those sensors installed in your home unless you have $30,000 to give. So ours is actually something that is not that much of a burden. So it depends on the person and the way that a person wants to be involved. But one way to also look at this is that, in this case — and this is one of those rare cases — more is better. At least for soundmaps: spatially it’s really big, temporally it’s really big, in the sense that every second basically counts, more or less, right? Somebody who enters this room five seconds from now didn’t hear me clap. So there lies the problem.

Cary (51:58):
So I know everybody probably wants to get going with their day, but I do have one more question. I stumbled across — maybe through the… I don’t know how you pronounce it — S O N Y C, the Sounds of New York City project. They seem to be connected to a citizen science arm, which is a website that sort of crowdsources citizen science projects — I think it’s called Zooniverse. There may be more of them. And I’m just curious to know, for listeners, if they want to get involved somehow in a project like this, where might they sign up to participate?

Tae Hong Park (52:34):
Oh, that’s a great question. I think we talked about this earlier: one very important component in getting the AI to work better is to get a lot of human annotations — meaning humans saying, “Oh, this sound is a cockroach dancing. This one is the honking sound.” And that effort in Zooniverse provides this web-based platform where people can come in and identify these different types of sounds. And those labels are then used, along with the sound files, to train the AI model. So that’s very important in terms of crowdsourcing. We’re not doing that at this time, because — well, I’ll leave it at that. I think there are some interesting dimensions as to why, where, and how one does certain things. But yeah, that’s basically one of the key factors in machine learning: you need a lot of annotated and labeled data to make the algorithm work better.

Phill (53:34):
Is there anything else you want to plug or share with our listeners before we sign off?

Tae Hong Park (53:44):
Yeah, I think it’s amazing — it always amazes me how the internet is so awesome, right? So, full disclosure: Phill and I worked together at Tulane. And we both went through Hurricane Katrina.

Phill (54:00):
No, I showed up after — I showed up post-Katrina.

Tae Hong Park (54:04):
I was there during Katrina, and it was a long time ago. We’ve known each other for a long time, but I haven’t seen you for the longest time, actually. It amazes me how these connections are made again — meeting new people, and also finding that other folks are interested in this. It’s amazing. I think it’s great that the internet allows us to do this, even in this type of situation where we just do it remotely.

Phill (54:25):
I think, now that everyone’s remote for everything they do, even with their friends down the street, the difference between me talking to you over Zoom and talking to my colleagues who live in the same town as me is just the time difference. That’s really the only complication.

Tae Hong Park (54:45):
That’s one of the things that we might need to navigate sooner than anticipated: this sort of shift from physical to virtual — the physical realities and the virtual realities and the mixed models. I mean, I think the best model is always a hybrid, right? It’s the best of both worlds. But I think this has come upon us much earlier than I had actually expected. So I think it’s a very interesting time to be living.

Cary (55:21):
I’d like to thank Tae Hong Park for joining us today. I hope this episode inspires you to get involved in citizen science and to start collecting and mapping sounds from your sonic environment. We’ll post some links in the Soundproofist blog. You can find more information at getnoisy.io and also at citygramsound.com. Thanks for listening, and see you next time.
