I'm deaf. Something close to standard Canadian English is my native language. Most native English speakers claim my speech is unmarked but I think they're being polite; it's slightly marked as unusual and some with a good ear can easily tell it's because of hearing loss.
Using the accent guesser, I have a Swedish accent. Danish and Australian English follow as a close tie.
It's not just the AI. Non-native speakers of English often think I have a foreign accent, too. Often they guess at English or Australian. Like I must have been born there and moved here when I was younger, right? I've also been asked if I was Scandinavian.
Interestingly I've noticed that native speakers never make this mistake. They sometimes recognize that I have a speech impediment but there's something about how I talk that is recognized with confidence as a native accent. That leads me to the (probably obvious) inference that whatever it is that non-native speakers use to judge accent and competency, it is different from what native speakers use. I'm guessing in my case, phrase-length tone contour. (Which I can sort of hear, and presumably reproduce well, even if I have trouble with the consonants.)
AI also really has trouble with transcribing my speech. I noticed that as early as the '90s with early speech recognition software. It was completely unusable. Even now AI transcription has much more trouble with me than with most people. Yet aside from a habit of sometimes mumbling, I'm told I speak quite clearly, by humans.
Hearing different things, as it were.
Wow, I'm not deaf, but almost everything you mentioned applies to me too. I've never met anyone else who has experienced this before, yet all of your following points apply exactly to me:
> standard Canadian English is my native language
> Most native English speakers claim my speech is unmarked
> Non-native speakers of English often think I have a foreign accent, too. Often they guess at English or Australian. Like I must have been born there and moved here when I was younger, right?
> They sometimes recognize that I have a speech impediment but there's something about how I talk that is recognized with confidence as a native accent.
At least 2 or 3 times a year, someone asks me if I'm British, but my parents and I were born in Canada, and I've never even been to England, so I'm not really sure why some people think that I have a British accent. Interestingly, the accent checker guesses that my accent is:
American English 89%
Australian English 3%
French 3%
which is pretty close to correct.
I was born in Brooklyn, to Yiddish-speaking parents, and Yiddish was my first language. I now spend half my time in California and half in Israel. The accent checker said 80% American English, 16% Spanish, and 4% Brazilian Portuguese. In Israel they ask if I'm Russian when I speak Hebrew. In the US, people ask where I'm from all the time because my accent—and especially my grammar—is odd. The accent checker doesn't look for grammatical oddities, but that's where a lot of my "accent" comes from.
To judge whether someone is local or not, I would consider their use of phrases and words as well, not just the accent. Perhaps that's what is working for you?
> AI also really has trouble with transcribing my speech. I noticed that as early as the '90s with early speech recognition software. It was completely unusable.
I don't know what your transcription use cases are, but you may be able to get an improvement by fine-tuning Whisper. This would require about $4 in training costs[1], and a dataset with 5-10 hours of your labeled (transcribed) speech, which may be the bigger hurdle[2].
1. 2000 steps took me 6 hours on an A100 on Colab, fine-tuning openai/whisper-large-v3 on 12 hours of data. I can share my notebook/script with you if you'd like.
2. I am working on a PWA that makes it simple for humans to edit initial automated transcriptions, so the corrected data can be fed back into the pipeline for fine-tuning, but it's not ready yet.
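For anyone curious what that looks like in practice, here is a minimal sketch of the standard Hugging Face fine-tuning recipe for Whisper. The dataset layout, paths, and hyperparameters are illustrative assumptions, not the notebook mentioned above (only max_steps=2000 matches the figure quoted there):

```python
# Minimal sketch of fine-tuning Whisper on personal speech data.
# Assumes a local "audiofolder" dataset with a `transcription` column;
# paths and hyperparameters are illustrative.
from dataclasses import dataclass
from datasets import load_dataset, Audio
from transformers import (WhisperProcessor, WhisperForConditionalGeneration,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer)

model_id = "openai/whisper-large-v3"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

ds = load_dataset("audiofolder", data_dir="my_speech")  # audio + metadata.csv
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    # Log-mel features for the encoder, token ids for the decoder labels.
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"],
        return_tensors="np").input_features[0]
    batch["labels"] = processor.tokenizer(batch["transcription"]).input_ids
    return batch

ds = ds.map(prepare, remove_columns=ds["train"].column_names)

@dataclass
class Collator:
    processor: WhisperProcessor
    def __call__(self, features):
        batch = self.processor.feature_extractor.pad(
            [{"input_features": f["input_features"]} for f in features],
            return_tensors="pt")
        labels = self.processor.tokenizer.pad(
            [{"input_ids": f["labels"]} for f in features],
            return_tensors="pt")
        # Mask padding tokens so they don't contribute to the loss.
        batch["labels"] = labels["input_ids"].masked_fill(
            labels["attention_mask"].ne(1), -100)
        return batch

args = Seq2SeqTrainingArguments(output_dir="whisper-personal",
                                per_device_train_batch_size=4,
                                learning_rate=1e-5, max_steps=2000, fp16=True)
Seq2SeqTrainer(model=model, args=args, train_dataset=ds["train"],
               data_collator=Collator(processor)).train()
```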
I'm also deaf, and I took 14 years of speech therapy. I grew up in Alabama. The only way you would know I'm from the South is because of the pin-pen merger[1]. Otherwise, you'd think I grew up in the American Midwest, due to how my speech therapy went. Almost nobody picks up on it, unless they are linguists who already know about the pin-pen merger.
[1] https://www.acelinguist.com/2020/01/the-pin-pen-merger.html
I’m aware of the merger, but I literally can’t hear a difference between the words. I certainly pronounce them the same way.
I also think merry-marry-Mary are all pronounced identically. The only way I can conceive of a difference between them is to think of an exaggerated Long Island accent, which, yeah, I guess is what makes it an accent.
Some variants of Australian English are very similar to Canadian English. I can't always immediately tell if someone is from Canada or home.
This is probably because some states in Australia use the Queen's English passed down from colonial times.
Hard of hearing, from the midwest; also identified as Swedish by the accent guesser.
I tried the oracle and got this:
> Your accent is Dutch, my friend. I identified your accent based on subtle details in your pronunciation. Want to sound like a native English speaker?
I'm British; from Yorkshire.
When letting it know how it got it wrong there's no option more specific than "English - United Kingdom". That's kind of funny, if not absurd, to anyone who knows anything of the incredible range of accents across the UK.
I also think the question "Do you have an accent when speaking English?" is an odd one. Everyone has an accent when speaking any language.
> I also think the question "Do you have an accent when speaking English?" is an odd one. Everyone has an accent when speaking any language.
Sure, I agree. But look at it from the perspective of a foreigner living in an English-speaking country, which is probably their target demographic.
We know that as soon as we open our mouth the locals will instantly pigeonhole us as "a foreigner". No matter how good we might be in other areas, we will never be one of "them". The degree of prejudice that may or may not exist against us doesn't matter as much as the ever present knowledge that the locals know that we are not one of them, and the fear of being dismissed because of that.
Nobody likes to stand out like that, particularly when it so clearly puts you at a disadvantage. That sort of insecurity is what this product is aimed at.
It's quite offensive. English is my native tongue, I got a perfect IELTS score, and one of my parents was an English professor. But my accent makes me less than "native".
The Australian-Vietnamese continuum is well-explained by Australia being the geographically nearest region which can supply native English language teachers to English language learners in Vietnam, rather than by any intrinsic phonetic resemblance between Vietnamese and Australian English.
> This voice standardization model is an in-house accent-preserving voice conversion model.
Not sure this model works really well. As a French/Spanish native speaker, I can immediately recognize an actual French or Spanish person speaking in English, but the examples here sound completely foreign to me. If I had to guess where the "French" accent was from, I would have guessed something like Nigeria. For example, Spanish speakers have a very distinct way of pronouncing "r" in English that is just not present here. I would have been unable to correctly guess French or Spanish for the ~10 examples present in each language (maybe 1 for French).
It's probably an artifact of them lumping together all varieties/dialects of a given language. I don't speak Spanish, but I know that the R is one of the things that's different in e.g. Argentina.
For sure the voice standardization model is not perfect, but it was important for us to do, especially for voice privacy. It's still pretty early tech.
Since our own accents generally sound neutral to ourselves, I would love someone to make an accent-doubler - take the differences between two accents and expand them, so an Australian can hear what they sound like to an American, or vice-versa
I've found that when I'm listening to recordings of me my accent really sticks out to me in a way that's completely inaudible when listening to myself live. This happens with both English and my native German.
If we assume this model is accurate, I sound to Americans like I'm South African!
Going from mono-tonal delivery to that of an expressive ebook increased my "American English" score from 52% to 92%.
I'd suggest training a little less on audio books.
What does mono-tonal mean, and what is an expressive ebook? I assume you are not American-born? I had been under the impression that rhythm was more important than the exact sounds for comprehension.
This is fascinating in theory, but I'm confused in practice.
When I play the different recordings, which I understand have the accent "re-applied" to a neutral voice, it's very difficult to hear any actual differences in vowels, let alone prosody. Like if I click on "French", there's something vaguely different, but it's quite... off. It certainly doesn't sound like any native French speaker I've ever heard. And after all, a huge part of accent is prosody. So I'm not sure what vocal features they're considering as "accent"?
I'm also curious what the three dimensions are supposed to represent. Obviously there's no objective answer, but if they've listened to all the samples, surely they could explain the main contrasting features each dimension seems to encode?
I just got a project running whereby I used Python + pdfplumber to read in 1100 PDF files, most of my Humble Bundle collection. I extracted the text and dumped it into a 'documents' table in PostgreSQL. Then I used sentence-transformers to reduce each 1K chunk to a single 384-D vector, which I wrote back to the db. Then I averaged these to produce a document-level embedding as a single vector.
Then I was able to apply UMAP + HDBSCAN to this dataset, and it produced a 2D plot of all my books. Later I put the discovered topics back in the db and used them to compute tf-idf for my clusters, from which I could pick the top 5 terms to serve as a crude cluster label.
It took about 20 to 30 hours to finish all these steps, and I was very impressed with the results. I could see my cookbooks clearly separated from my programming and math books. I could drill in and see subclusters for baking, BBQ, salads, etc.
Currently I'm putting it into a two-container docker-compose file: base PostgreSQL + a Python container I'm working on.
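That whole pipeline condenses to surprisingly little code. Here is a hedged sketch of the same steps, not the poster's actual code: the directory, model name, chunk size, and clustering parameters are illustrative assumptions, and the PostgreSQL persistence is omitted.

```python
# Sketch: pdfplumber -> 1,000-char chunks -> sentence-transformers embeddings
# -> mean-pooled document vectors -> UMAP 2-D projection -> HDBSCAN clusters
# -> crude tf-idf cluster labels.
import glob
import numpy as np
import pdfplumber
import umap
import hdbscan
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import TfidfVectorizer

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim, as in the comment

texts, vectors = [], []
for path in glob.glob("books/*.pdf"):  # hypothetical directory
    with pdfplumber.open(path) as pdf:
        text = " ".join(page.extract_text() or "" for page in pdf.pages)
    # Split into ~1,000-character chunks, embed each, average into one vector.
    chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)] or [""]
    vectors.append(model.encode(chunks).mean(axis=0))
    texts.append(text)

coords = umap.UMAP(n_components=2).fit_transform(np.array(vectors))
labels = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(coords)

# Crude cluster labels: top-5 tf-idf terms per cluster (noise cluster -1 skipped).
cluster_docs = {c: " ".join(t for t, l in zip(texts, labels) if l == c)
                for c in sorted(set(labels) - {-1})}
tfidf = TfidfVectorizer(stop_words="english", max_features=20_000)
m = tfidf.fit_transform(cluster_docs.values())
vocab = np.array(tfidf.get_feature_names_out())
for row, c in zip(m.toarray(), cluster_docs):
    print(c, vocab[row.argsort()[::-1][:5]])
```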
The fact that they believe there to be a single 'British' accent means this can be quickly discounted as nonsense.
When people mention a single "British accent", in 99% of the cases it's just a more widely understood shorthand for Received Pronunciation. I don't see how that's bad or wrong, considering how common it is in education.
I would say in 99% of cases people mean Estuary English.
I did research on accent and pronunciation improvement, phoneme recognition, the Kaldi ecosystem, etc. Nothing has really changed in the public domain in the past few years. There isn't even an accurate open-source dataset: every self-claimed manually-labelled dataset with 10k+ hours was partly produced with automation. The next issue is that modern models operate in a different latent space, often with 50 ms chunks, while pronunciation assessment requires much finer accuracy. Just try to say "B" out loud: there's a silent part gathering energy in the lips, the loud part, and everything that resonates after. Worst of all, there are too many ML papers from last-year students or junior PhD folks claiming successes or fake improvements.
The article itself is just a vector projection into 3D space; the actual reality is much more complex.
Any comments on pronunciation assessment models are greatly appreciated
You are right, and I don't think the incentives exist to solve the issues you describe, because many of the building blocks people are using are aligned to erase subtle accent differences: neural codecs and transcription systems such as Whisper want to output clean/compressed representations of their inputs.
Apparently Persian and Russian are close, which is surprising to say the least. I know people keep getting confused about how Portuguese from Portugal and Russian sound close, but the Persian connection is new to me.
Idea: Farsi and Russian both have a simple inventory of vowel sounds and no diphthongs, making it hard (and obvious) when attempting to speak English, which is rife with diphthongs and many different vowel sounds.
Yeh they seem to be in the same "major" cluster, although Serbian/Croatian, Romanian, Bulgarian, Turkish, Polish and Czech are all close.
Turkish and Persian seem to be the nearest neighbors.
When I went to Portugal I was struck by how much Portuguese there does sound like Spanish with a Russian accent!
Part of this is the "dark L" sound
I’d guess that the sibilants, consonant clusters, and/or vowel reduction would play a big role.
I thought I was the only one who perceived an audible similarity between Portuguese and Russian.
I had that too, but it was Brazilian Portuguese where I noticed it.
I speak neither, and both also sound similar to me depending on the accents of the speakers.
The source code for this is unminified and very readable if you’re one of the rare few who has interesting latent spaces to visualize.
https://accent-explorer.boldvoice.com/script.js?v=5
Nothing too secret in there! We anonymized everything and anyway it's just a basic Plotly plot. Feel free to check it out.
could you explain what it means for someone to “have interesting latent spaces”? curious how you’re using that metaphor here
I don’t think I’m using it as a metaphor? To “have interesting latent spaces” just means you have access to the actual weights and biases, the artifact produced by fine-tuning/training models, or you can somehow “see” activations as you feed input through the model. This can be turned into interesting 3D visualizations and reveal “latent” connections in the data which often align with and allow us to articulate similarities in the actual phenomena which these “spaces” classify.
Not many people have the privilege of access to these artifacts, or the skill to interpret these abstract, multi-dimensional spaces. I want more of these visualizations, with more spaces which encode different modalities.
https://en.wikipedia.org/wiki/Latent_space
Good catch. I really hate JavaScript, so I never got into d3.js; Plotly was such a lifesaver.
Plotly is great! Much love.
Very nice viz. It reminds me of the visualizations people used to do of the MNIST dataset back when the quintessential ML project was "training a handwritten digits classifier": https://projector.tensorflow.org/
Fascinating! How did you decouple the speaker-specific vocal characteristics (timbre, pitch range) from the accent-defining phonetic and prosodic features in the latent space?
We didn't explicitly. Because we fine-tuned this model for accent classification, the later transformer layers appear to ignore non-accent vocal characteristics. I verified this for gender, for example.
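A common way to check that kind of invariance is a linear probe: fit a simple classifier to predict the nuisance attribute from a layer's activations, where near-chance accuracy suggests the layer has discarded it. A minimal sketch with stand-in data follows; the activation matrix and labels here are random placeholders, not BoldVoice's data or their actual verification method.

```python
# Hypothetical linear probe: if a layer discards gender information,
# a classifier trained on its activations should score near chance (0.5).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

layer_acts = np.random.randn(2000, 768)   # stand-in layer activations
gender = np.random.randint(0, 2, 2000)    # stand-in speaker labels

probe = LogisticRegression(max_iter=1000)
acc = cross_val_score(probe, layer_acts, gender, cv=5).mean()
print(f"gender probe accuracy: {acc:.2f}")  # ~0.5 => layer is gender-invariant
```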
Why do the voices all sound so similar? I'm not talking about accent, I'm talking about the pitch, timbre, and other qualities of the voices themselves. For instance, all the phrases I heard sounded like they were said by a medium-set 45-year-old man. Nothing from kids, the elderly, or people with lower- or higher-pitched voices. I assume this is expected from the dataset for some reason, but am really curious about that reason. Did they just get many people with similar vocal qualities but a wide range of accents?
From the article:
> By clicking or tapping on a point, you will hear a standardized version of the corresponding recording. The reason for voice standardization is two-fold: first, it anonymizes the speaker in the original recordings in order to protect their privacy. Second, it allows us to hear each accent projected onto a neutral voice, making it easier to hear the accent differences and ignore extraneous differences like gender, recording quality, and background noise. However, there is no free lunch: it does not perfectly preserve the source accent and introduces some audible phonetic artifacts.
> This voice standardization model is an in-house accent-preserving voice conversion model.
I'm kind of curious whether it would be possible for it to use my own voice but decoupled from accent, i.e., could it translate a recording of my voice to a different accent but still with my voice? If so, I wonder if that would make accent training easier, since you could hear yourself say things in a different accent.
That would be interesting for sure, but considering you don't hear yourself the same way someone else or a mic does, I'm not sure it would have the benefit you're expecting.
Ah thanks, missed that somehow
BERT still making headlines in 2025, you love to see it.
Note: this is related to https://news.ycombinator.com/item?id=42392088 from a few months ago.
Thank you for sharing! The 3D visual was an interesting application of the UMAP technique.
Is there a way to subscribe to these blog posts for auto-notification?
Yeah, if only there was a protocol for that.
It would have taken you a second more to type out "RSS", and turn a sarcastic comment into an informative one.
Obligatory xkcd: https://xkcd.com/1053/
1: I'm one of today's lucky 10,000.
2: Also, thanks for the laugh.
Irish accent appears to break it.
We are working on this - we don't have quite enough Irish speech data.
How do you know? Your “UK” set is liable to have some Irish accents in it. You need to break down regions more
Fascinating work — especially how geography and history influence accent clustering more than language families. Brilliant visualization!
really fun discovery clicking a dot and hearing the accent. neat visualization, lots to think about!
"Audible visualization" is a visualization enhanced by auditorization. :)
why is spanish so distributed?
Good question! It's likely because there are lots of different accents of Spanish that are distinct from each other. Our labels only capture the native language of the speaker right now, so they're all grouped together but it's definitely on our to-do list to go deeper into the sub accents of each language family!
Spanish is one of those languages I would love to see as a breakdown by country. I’m sure Chilean Spanish looks very different from Catalonian Spanish.
Did you mean Catalan (which is not Spanish) or Castilian Spanish?
Yes the Spanish spoken in Spain, especially the one that’s like /ˈɡɾaθjas/ and /baɾθeˈlona/.
But Spanish sounds very different in Spain depending on what region of the country you are talking about.
Yeah, and not all Spaniards have a distinct pronunciation for "c" and "s". For those curious: https://en.wikipedia.org/wiki/Phonological_history_of_Spanis...
Not sure, could be the large number of Spanish dialects represented in the dataset, label noise, or something else. There may just be too much diversity in the class to fit neatly in a cluster.
Also, the training dataset is highly imbalanced and Spanish is the most common class, so the model predicts it as a sort of default when it isn't confident -- this could lead to artifacts in the reduced 3d space.
What's the dimensionality of the latent space? How were the 3 visualized dimensions selected?
12 layers of 768-dim each. The 3 dimensions visualized are chosen by UMAP.
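For reference, the projection step itself is small. Here is a minimal sketch, assuming a matrix of 768-dim embeddings taken from one layer; the random stand-in data, the cosine metric, and how the 12 layers are pooled before reduction are all assumptions, since the post doesn't specify them.

```python
# Minimal sketch: reduce 768-dim accent embeddings to the 3 plotted
# dimensions with UMAP. The embedding matrix is random stand-in data.
import numpy as np
import umap

embeddings = np.random.randn(5000, 768).astype(np.float32)
coords = umap.UMAP(n_components=3, metric="cosine").fit_transform(embeddings)
print(coords.shape)  # (5000, 3): x, y, z for the scatter plot
```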
It would've been nice to be able to visualize the differences between the various accents within the Spanish language; really cool though.
Yeh, we would've loved to see that too. It's on our roadmap for sure. Same for some of the other languages with a large amount of unique accents like e.g. French, Chinese, Arabic, etc...
Very interesting
This is a fascinating look at how AI interprets accents! It reminds me of some recent advancements in speech recognition tech, like Google's Dialect Recognition feature, which also attempts to adapt to different accents. I wonder how these models could be improved further to not just recognize but also appreciate the nuances of regional accents.
i love boldvoice
Thanks, we love you too
this is super cool!