Episode 20 w/ Jay & Nick
Speaker: Welcome to E2Talks. It’s a podcast where we chat about the English language landscape. In this podcast Jay is joined by Nick Jenkins, the founder of Language Confidence – an AI-driven software as a service product that gives students automatic corrective feedback on their pronunciation. In this discussion, Jay and Nick talk about pronunciation broadly and more specifically how AI can help students develop better speech habits. They talk about Language Confidence’s origins, what it can currently do, and what it will be able to do in the near future. They also touch on many other topics pertinent to pronunciation teaching, learning and technology. Take a listen!
Jay: Hello, Nick, how’s it going?
Nick: Doing pretty good, actually, all things considered at the moment. So yeah, pretty good. And yourself?
Jay: Yeah. I’m well, I’m well, whereabouts in the world are you?
Nick: Currently in beautiful Sydney, in a shared office down in Chinatown called Haymarket HQ.
Jay: Cool. Nice. So Nick, can you give us a bit of background on yourself and how Language Confidence came about?
Nick: Yeah. The short version is it’s my third startup. The first one I sold when I was 21. The second one was in our space, the online education space, and it failed pretty spectacularly. I think we did just about everything wrong that you could. It was an app for teaching kids to speak English, just delivering content to them, and that took us to China, so I spent nearly three years there. And obviously, when the business wound up, I needed something to do, so I got my TESOL qualification and was teaching English. And when I was teaching, I had this idea. I was like, hey, looking at what’s out there in the app space, looking at your Babbels and your Duolingos of the world, none of them really give students good-quality feedback on their spoken input. We actually did a really great video where we compared ourselves to some of the bigger names in the space, like Rosetta Stone and Babbel, and showed how much more accurate the tech we built is. So, yeah, I wanted to be able to emulate what I can do as a teacher, but fully automate it, so it improves the learning experience as a whole and really focuses on spoken English.
Jay: Nice, interesting. I find that even in the classroom, pronunciation is a neglected topic. One of the reasons is that it’s difficult, if you have a classroom of 35 students, to actually give individualised feedback to each student. It’s basically impossible; if it’s an hour-long class, that means each student is getting less than two minutes from the teacher. So is that something you’ve thought about?
Nick: Yeah, that’s exactly what I wanted to do. So when I was teaching, I’d ask my students to say something, and as a human, one on one, I could pick where they were going wrong and say, okay, you went wrong here. And they’d listen to me, watch me, and then we’d try it again, try it with different words, the same sounds, different content, different contexts, etc. And you can’t scale that. I think the smallest classes I had in China were down to two students, and even then, trying to give individual attention is really difficult. So yeah, I wanted to build something that’s really scalable, so you can have that experience at home.
Jay: Terrific. So can you tell us a bit about Language Confidence, what it is and how it works, and why it actually does improve people’s pronunciation skills?
Nick: Yeah, yeah, for sure. So it’s an AI SaaS company, I suppose you’d call it. We provide a product called LCAT, and LCAT is similar in nature to speech recognition, but instead of deciphering what’s being said, we make an assessment of how well it’s being said. So a really easy example: in an app or a platform like E2School, you ask the student to say a particular word, they record it, that input is sent over to our back end, we make an assessment, and we send back the assessment results. And how the assessment actually works is we give an AI lots of native-speaker data, we train it so the AI learns what is native or good, give or take, and when you send that audio in, we come back with a score. And that score is how close to a native speaker the AI thinks you are.
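To make the workflow Nick describes concrete, here is a minimal client-side sketch. The endpoint URL, field names and response shape are assumptions for illustration, not Language Confidence’s documented API.

```python
# Illustrative sketch only: the endpoint, field names and response shape are
# assumptions, not Language Confidence's documented API.
import requests

API_URL = "https://api.example-scoring-service.com/v1/pronunciation"  # hypothetical

def score_pronunciation(audio_path: str, expected_text: str) -> dict:
    """Send a learner's recording plus the text they were asked to say;
    return the assessment result (e.g. closeness to a native-speaker model)."""
    with open(audio_path, "rb") as audio_file:
        response = requests.post(
            API_URL,
            files={"audio": audio_file},            # the learner's recorded attempt
            data={"expected_text": expected_text},  # what they were asked to say
            timeout=30,
        )
    response.raise_for_status()
    return response.json()

# Example: a learner is prompted to say "mineral water"
result = score_pronunciation("attempt.wav", "mineral water")
print(result.get("overall_score"))  # e.g. 0-100, how native-like the attempt sounded
```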
Jay: Gotcha. So when I’m doing my pronunciation practice on Duolingo, for example, and I have played with that app, it just tells me that I’m comprehensible. It sort of goes ‘ba-ching’ and gives me a green thing: yeah, you said it right. But in saying it right, I may have actually said some of the sounds incorrectly or mispronounced particular things. However, it doesn’t tell me that. So is this a big distinction between what you do and, say, Duolingo?
Nick: Yeah, absolutely. So the big groups, your Duolingos and Babbels of the world, they use the on-device speech recognition. That’s the Siri or Nuance or Google speech recognition that comes on iPhones or Androids. And all that does is give you a pass or fail, a tick or a cross. In that video that we made, I’m using one of these platforms and actually say the wrong word and still pass the test; you get the green tick. I think they were asking me to say ‘mineral water’, and I said ‘minieral wasa’, and I still got a green tick, passed, and moved on. Whereas in our system, it comes back as, hey, you got all of this right, but you went wrong here and here. The step beyond that is to create a personalised learning scenario, a proper feedback loop, but the first step is just identifying where you went wrong.
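The phoneme-level feedback Nick contrasts with a bare pass/fail might come back as a nested structure along the lines of the hypothetical payload below (field names and scores invented), which a client app can then use to flag only the sounds that need work.

```python
# Hypothetical phoneme-level result for the prompt "mineral water"
# (field names and scores are invented for illustration).
result = {
    "overall_score": 71,
    "words": [
        {"word": "mineral",
         "phonemes": [{"ipa": "m", "score": 95}, {"ipa": "ɪ", "score": 90},
                      {"ipa": "n", "score": 93}, {"ipa": "ə", "score": 88},
                      {"ipa": "r", "score": 58}, {"ipa": "ə", "score": 85},
                      {"ipa": "l", "score": 82}]},
        {"word": "water",
         "phonemes": [{"ipa": "w", "score": 92}, {"ipa": "ɔː", "score": 75},
                      {"ipa": "t", "score": 34}, {"ipa": "ə", "score": 80}]},
    ],
}

# Surface only the weak sounds, rather than a single green tick.
THRESHOLD = 70
for word in result["words"]:
    weak = [p["ipa"] for p in word["phonemes"] if p["score"] < THRESHOLD]
    if weak:
        print(f"In '{word['word']}', work on: {', '.join(weak)}")
```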
Jay: Nice, nice. It might be worth just stepping back a little bit here and talking about what pronunciation actually is. It’s a really strange feature of language because it’s very anatomical; it really does involve the tongue and lips and throat and roof of the mouth. There’s a cognitive aspect to it, obviously, but it’s very much a muscular, body-parts thing, which is pretty strange. So my understanding of pronunciation is that it has three parts. One part is the individual sounds, the phonemes. So the vowel sounds: short vowels like ‘a’, ‘e’, ‘i’, ‘o’, long vowels, combined vowels like ‘ay’, for example. Then you’ve got consonant sounds, ‘p’, ‘d’, ‘k’, ‘m’, which actually use a part of your mouth, whereas a vowel doesn’t; you’re making a sound basically from the voice box, unimpeded by the tongue or the lips, etc. So you’ve got individual sounds, and then you’ve got consonant clusters. One of the researchers we have here at E2Language looked into the literature on consonant clusters in English. She was trying to find a corpus, basically trying to find out what they are and how many consonant clusters there are in English, and she couldn’t find it; she found differing opinions and some literature from the 1980s, etc. So she did a full-scale study herself and actually determined that there are 146 consonant clusters in English, which is really interesting. It’s really nice to know that because it’s finite. So you’ve got 44 individual sounds, then you’ve got 146 consonant clusters. Consonant clusters are where two or more consonants come together, like ‘p’ and ‘r’ to make ‘pr’, or ‘s’ and ‘l’ to make ‘sl’ as in ‘slip’, for example, which is very problematic; as you would have known in China, a lot of East Asian speakers, Korean, Japanese, etc., love to put a vowel between those two consonants. And then the third part of pronunciation is not so much the sounds that we’re making, but the delivery of the speech in terms of the rhythm, the pausing, the connected speech, etc. So right now, where does Language Confidence help out with these aspects of pronunciation?
Nick: Yeah, so look, overall English is really illogical. I mean, I think there are 12 different ways to make ‘sh’, 12 different grapheme combinations that make the sound ‘sh’. Don’t quote me on that, I’d have to double-check. So it’s really quite a difficult language. And as you said, there are 44 sounds in the official International Phonetic Alphabet, and I think that’s been around since the 1890s; it was a French group that made it originally. Again, don’t quote me, I’d have to double-check. And there’s actually another coding system called the ARPAbet, which is traditionally used for speech recognition purposes, not for language learning purposes.
Jay: Sorry, what’s that called?
Nick: The ARPAbet? A-R-P-A-B-E-T, I think. I think it stems from Carnegie Mellon University. It was a coding system created in parallel to the IPA for speech recognition, so for more computer-orientated purposes.
Jay: Okay.
Nick: I think where we’re at, and what we’re focusing on, is exactly that: the assessment of pronunciation relative to a native speaker. And the idea is that a native speaker, based on their accent, will say each word a particular way, and you have differences between those; I think ‘tomato’ versus ‘tomahto’ is a really great example of that. And with all the different accents and variants, I think there are 107 phones.
Jay: Oh, interesting.
Nick: And I’ve seen other phonetic alphabets they move towards; I think I’ve seen one with 39 sounds, and it’s a real complicated mishmash. But we’re just focusing on the International Phonetic Alphabet, the 44 sounds of English. That seems to be the most common, most in-demand reference for learning to speak English. And we’re currently just focusing on pronunciation, but moving further down the track, and I think you touched on this in your third point, we’re looking at other aspects of spoken English. It really took some time to quantify what they were. So you’ve got pronunciation, the sounds that you’re trying to say, and that’s really the core metric: if you get the sounds completely wrong, meaning isn’t conveyed at all, so you have to get at least a certain percentage of the sounds right to be able to convey any meaning. And then, adding on to that, you’ve got your fluency, prosody, grammar, lexical resource, and then content relevance. So essentially we tried to start with pronunciation, which is actually the most difficult problem to solve technically, and it’s taken a while, but we’re finally there. Then moving on to other metrics like fluency is our next step.
Jay: Yeah, I really like Language Confidence, because we use it in E2School, in our pronunciation course. And it’s terrific, because we get the student to practice a particular individual sound or a consonant cluster, and then we get them to do a repeat sentence, or just a single word, etc. They speak into the computer, and it almost immediately spits back a response, a percentage, like a score. But much better than that, when they click on it, they can actually see which particular phoneme they’re mispronouncing, which is hugely helpful. Because they’re either unaware of that, or maybe they were aware of it, but now they’re acutely aware: okay, I need to try that ‘p’ sound again and not make it sound like a ‘b’ sound. So there’s that element there. What should students do, though, or what do you suggest students do, once they’ve found out that they’re having trouble with the ‘p’ sound, for example?
Nick: So I think, coming from a teaching point of view, what I always tried to do with students was nicely identify it and say, hey, that wasn’t quite right, and then give them more content, maybe in a different context, to practice. So: you got this sound wrong; as a human, as a native speaker, I can hear that you went wrong here, and then I can say, okay, what’s another word that contains that same sound, so you can practice it with different content, a different context, a different sentence. And that’s what we tried to do. So the first step for us is identifying where they went wrong. Then moving past that, moving forward, is being able to offer suggestions, corrective suggestions, and then good or correct examples of those. So if they get, you know, the ‘r’ or ‘l’ or ‘zh’ sound wrong: one word my students used to struggle with was ‘casual’, and another word I used to pick was ‘usual’, and we’d put those in different sentences and get them to practice again. So that’s the idea: with LCAT, it’s able to emulate what a teacher can do in picking where you went wrong, and then the students are able to see that, listen to the good example, actually try it again, and keep practising until they get it right.
Jay: Yeah, it really is. I’ve been mucking around with the Farsi language, trying to do that ‘kh’ sound: ‘khubi’, ‘khubam’. And I’ve just found it’s quite fun. It’s quite a good idea, as a language teacher, to start saying some sounds in another language that you just do not have in your first language, to feel how uncomfortable and strange it is, and then just try to make it feel normal, standard, to do it. That’s interesting.
Nick: Absolutely agree. When I was learning Mandarin, there were sounds that exist in Mandarin that, as an English native speaker, you never learned. So it was really difficult, especially as an adult, to actually practice those particular sounds.
Jay: Yeah, yeah. Yeah, that’s right. I mean, the good thing when you do learn another language is that, you know, let’s say there are 44 sounds in English; I think in Arabic they’ll have 38 of them, and they won’t have, I won’t do the math on that, the six or so sounds that they just don’t have, and so they’ll need to practice those. So I think some of the literature that’s really helpful there is the first-language-interference literature. What we’ve done on E2School is, we’ve got all the sounds there, but we’ve also got this little tool that curates the content. So I’ll go in and say I’m an Arabic speaker, and it will just tell me which particular phonemes I need to work on, because I don’t need to do them all. There’s also literature around that for grammar, which at some stage we’ll get to as well. But yeah, that’s interesting stuff. I might shift the topic a little bit here to talk less about the technicalities of pronunciation and more about the social element of pronunciation and why somebody would want to improve their pronunciation. I’ve got a few ideas, but I’d like to hear what you have to say about that. Like, if I’m going to get a job in the UK, or for whatever reason, why do I want to have clear pronunciation?
Nick: I think, as an overarching theme, pronunciation is basically the key to spoken language, or spoken English: if your pronunciation is completely off, no matter how accurate your grammar or vocab or fluency, you can’t actually convey what you’re trying to say. And I may be wrong academically here, but from my experience in teaching and learning other languages, if your pronunciation is wrong, you can’t actually convey any meaning. And people learn to speak other languages for a whole wide range of reasons. Obviously, being a test-prep group, you understand there’s a massive market out there of people who want to take recognised tests like PTE, IELTS, TOEFL, etc. There are lots of people who want to do it for fun as well; they really enjoy learning languages, the polyglots of the world. And then there are people who want to do things like travel to a different country and be able to say, where’s the bathroom, or how much does that cost. Those really basic things: when I first moved to China, I didn’t speak Mandarin, and I got horribly lost quite a few times, so it was really handy to be able to say, go straight, turn right, stop here. So yeah, the academic one is a very large market, studying overseas, then for fun, for travel, things like that. And that’s really the core of spoken language: conveying meaning and being able to communicate with someone.
Jay: That’s it. Yeah, being understood. So you mentioned graphemes and phonemes before. Can you just explain to me, or explain to the audience, what is a grapheme, what is a phoneme, and what is the relationship between the two? Because I know in English, the relationship is often not quite there.
Nick: Yeah. So, and again, I’m not a linguist or a phonetician, I’m from more of a technology background, but English itself is made up of a mishmash of a range of different languages: Latin, German, French, all of the above. Graphemes are our alphabet, the 26-letter alphabet, and phonemes are the sounds, the phonetic alphabet. But obviously 26 letters and 44 sounds are not going to match up. And I think an example I mentioned before was that you can make the ‘sh’ sound in 12 different ways in English: 12 different grapheme combinations make that one ‘sh’ sound.
Jay: So it’s not just ‘sh’.
Nick: No.
Jay: Right. Okay, gotcha. There’s ‘ss’ and ‘zh’, and there’s…
Nick: ‘ti’
Jay: and ‘ti’.
Nick: Yeah. Yeah. So it doesn’t make any sense, it’s not very logical, but it does exist. You know, like ‘communication’. It’s a tough language. And that’s the relationship: they don’t line up. There are more general rules that account for most of the cases, but there’s not one clear rule for all of them.
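For what it’s worth, here is a small illustrative mapping for that one /ʃ/ phoneme; these are standard example spellings, though the full list Nick mentions is longer.

```python
# A few of the many grapheme spellings that all realise the single phoneme /ʃ/.
SH_SPELLINGS = {
    "sh":  "ship",
    "ti":  "nation",
    "ci":  "special",
    "ssi": "mission",
    "ch":  "machine",
    "s":   "sugar",
    "ce":  "ocean",
}

for grapheme, example in SH_SPELLINGS.items():
    print(f"'{grapheme}' as in '{example}' -> /ʃ/")
```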
Jay: I think that’s extremely frustrating for people who are studying in their country, like China or South Korea. They learn textbook English; this is what I taught in South Korea for a couple of years. And I’ve got to say, the Korean language is fascinating, because they actually have, I think they call it, perfect orthography, which means the spelling-sound relationship is basically one to one; there are basically no exceptions. But as you say, in English, the spelling-sound relationship is radically different, like the ‘ti’ in ‘communication’. And what happens with Korean students is they learn these texts, they learn the grammar, etc., then they land in Australia or in the US, and all of a sudden it’s: what is everybody saying? I’ve been studying English for 12 years and I can’t understand anybody. And there are crazy things, you know, like one of the things we do in English: ‘What do you want to do this weekend?’ becomes ‘Whatchya’, with that ‘dz’ sound. What do you want to do? Whatcha. So these crazy sorts of blends and whatnot.
Nick: And that poses a real technical problem as well. We had some real issues when you join two sounds that are the same. So if you say ‘after the show we’, as a native speaker, you say ‘after the show-we’; you concatenate, you join those word sounds. And the AI looks for the two sounds, because it’s based on our dictionary, our lexicon; it says, hey, there are two sounds here. And even though you get a good score for one of the sounds, it’ll score you lower for the second sound, even though, as a native speaker, you’ve just joined them together. So it actually creates a lot of technical problems, even when you’ve got a native speaker speaking, to try and get this all right from a product-accuracy point of view.
Jay: Well, yeah, it’s really messy, isn’t it? Even the question ‘How are you?’: we don’t say that, we say ‘How-ware you?’, right? So you have to train the machine to recognise that. I think the rule is that if there’s a vowel sound followed by a consonant sound, or something like that, things get joined together or moved. So your machine recognises these little rules?
Nick: Yes. When we first released the product, we had all kinds of issues. Obviously, when you first launch: our service actually crashed, because we had so much usage from the group in Korea. Yeah, the server crashed, so there was the infrastructure side. And then, from a content-accuracy point of view, being able to assess sentences like ‘after the show we’ and ‘how are you’ is mainly a data problem to solve, but then you also have to be clever with the way that you actually set up, I suppose, the dictionary or the lexicon, to make it all work nicely together.
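As a very rough sketch of the kind of fix Nick hints at, the expected phoneme sequence could be adjusted before scoring so that two identical sounds meeting at a word boundary count as one. This only covers the simplest linking case (the glide linking in ‘show we’ or ‘how are you’ needs richer rules), and the tiny lexicon below is invented for illustration.

```python
# Sketch only: merge identical phonemes across word boundaries in the expected
# sequence so connected speech isn't over-penalised. This handles only the
# simplest linking case; the lexicon below is a made-up, simplified sample.

def expected_phonemes(words, lexicon):
    """Build the expected phoneme sequence for a phrase, collapsing a
    word-final phoneme with an identical word-initial phoneme that follows."""
    sequence = []
    for word in words:
        phones = list(lexicon[word])
        if sequence and phones and sequence[-1] == phones[0]:
            phones = phones[1:]  # two identical sounds are usually realised as one
        sequence.extend(phones)
    return sequence

lexicon = {
    "what": ["w", "ɒ", "t"],
    "time": ["t", "aɪ", "m"],
}

# "what time": the word-final /t/ and word-initial /t/ surface as a single /t/.
print(expected_phonemes(["what", "time"], lexicon))
# ['w', 'ɒ', 't', 'aɪ', 'm']
```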
Jay: Gotcha. Okay, I’m just trying to imagine how the system is programmed, because there are rules there. One of the rules is, you know, sometimes we add a sound, and sometimes we delete sounds; I think they call them elisions, deletions, technically. Wow. So, okay, you have to programme the system, and it’s just not as simple as pumping in a dictionary and saying, this is how it all sounds. So who actually programmed all this, and how did that come about?
Nick: So originally, when I started this, I was living in China, I was teaching. This was a few years ago now; I hadn’t spoken to my parents for about three or four months, and they said, hey, we want you to go and get a normal job, go back to uni, things like that. And I put together a list of researchers around the world who had skills in particular areas, so AI, deep neural networks, computational linguistics, and I literally picked up the phone and started calling and said, hey, I want to build this, can you help me? And I found a lot of help at MIT; we found a few young, pretty switched-on guys over there to advise us, and we found a team to actually build the product. It took a long time. I think our first working demo was five sentences. And the jump from five sentences that worked accurately to a product we could scale millions and millions of times, for millions of users, and also make accurate for a wide range of content, was such a big gap that it actually took us quite a while. But we’re on a nice upward trajectory. At the moment, I think we just hit 20 million API calls in production, which is really cool. And we’re also looking at our own automated English tests and things like that. So it all started with a list, lots of LinkedIn messages, lots of cold phone calls. Quite a few people hung up on me, but that’s all part of it, right?
Jay: Wow, man, that’s amazing. So then they had to collaborate with each other. So you sort of pulled a team of researchers together?
Nick: Yeah, we brought together a team and found some really great programmers over here in Sydney as well, studying things like maths and physics at places like UNSW, to pull it all together and get it up and running, from an idea to actually working and being a real product.
Jay: Far out, it’s amazing. I actually didn’t know that part of the story. That’s cool. And so you said you’ve had 20 million API calls. So that means students have clicked the button, spoken into the computer and gotten feedback 20 million times. That’s insane, man. Because if you think about how many teachers it would require to give 20 million verbal pieces of feedback to students, it’s unthinkable, right?
Nick: Yeah, it’s really cool, actually. We hit 10 million really quickly, because obviously through COVID things were pretty busy, and we hit 20 million just recently. And yeah, I mean, it’s really exciting that we’re actually having an impact on helping students learn, doing it well, making it scalable. There’s definitely a social aspect to the business; I think that what we’re doing is a net benefit to society and education. So it makes me really excited and really happy. It’s very fulfilling.
Jay: Yeah, I agree. I think there is definitely a net benefit to society. It’s interesting, you know, we live in times where people are very conscious about different forms of discrimination, and I think we’re doing well in progressing society and whatnot. But one form of discrimination that I think is overlooked is accent. People are extremely discriminatory based on accent. I read something somewhere, it was like a magazine article, about which is the most preferred form of the English language and which is the least preferred. And this is kind of terrible research, but they were saying that male British accents are the most favoured form of spoken English, and the least favoured was a male Vietnamese speaker speaking English, because of first-language interference and how different Vietnamese pronunciation is; people dislike Vietnamese-accented English, if that makes sense. So, you know, it’s awful. And I think making changes around discrimination, making people more aware of it, is essential. But there is a part for the student to play there, and what you’re doing actually helps people to speak more clearly and to overcome, you know, those deficiencies. Pretty interesting, isn’t it?
Nick: Yeah, for sure. I mean, I’ve definitely been guilty of it as well, where I’ve found myself judging someone’s intellect by their ability to communicate verbally.
Jay: Yeah.
Nick: And which is completely incorrect. It’s just a natural thing, I think, where you’re like, oh, we can’t communicate, therefore you’re not very smart. And when you realise that, you go, hang on, this is not correct at all; it’s just an inherent, natural reaction. I have definitely found myself in that scenario before. But learning languages and teaching languages, you understand that it’s actually nothing to do with that; it’s the ability to communicate that’s really important. And that’s definitely part of why we do this and why we go through the grind that is a startup, and the rollercoaster ride of all this. My vision for where this goes in 10 years is that any student, anywhere in the world, on a smartphone, can have a fully open-ended, automated language lesson with an AI tutor. That’s really where I see the vision for this, and that takes away a lot of that prejudice and opens a lot of doors for those learners who can’t pay $50 or $60 an hour
Jay: Yeah.
Nick: For a teacher.
Jay: Yeah.
Nick: Whereas they can pay $5 a month for a really high-quality lesson on a phone that’s really scalable.
Jay: So are you going to move into, or look at, interactivity? Because right now, and this is a limitation of some English language tests out there as well, there’s no interactivity involved in the assessment of speaking: they read aloud, or they repeat a sentence, or describe an image, etc. I think there are limitations involved in the English language tests that actually do have interactivity, because of the nature of the interlocutor. But can you talk to me about how Language Confidence might get there with interactivity?
Nick: Yeah. So one of the big technical problems that we faced originally was, on top of making this product scalable in terms of the infrastructure, how do we make it able to assess a very, very wide range of content? And that took a long time to figure out. What we did is we ran experiments and figured out, for each word, how much data, or how many sets of data, we needed to make the AI accurate. Then we extrapolated that and made it accurate for the 10,000 most common words in English, and any combination thereof. So at the moment, you can just type in the text to speak and get your assessment result back. But obviously, that’s still a closed-ended scenario. From a testing and a learning point of view, it’s so incredibly important to have open-ended, creative questions and answers. So the next step for us, and we’ve had a lot of interest in this, is actually having an open-ended assessment. Now, we can do the assessment of the content once we know what it is, but what we need first is to be able to recognise what it is. So we actually have a demo on our website where you can just talk, and then we do the assessment, so you can have a more creative, open-ended response. Obviously, there are limitations with that as a product: we use speech recognition to decipher what’s being said, then we make the assessment on how it’s said, and all of that depends on how accurate the recognition is at the very first step. So if you have an early learner, or someone with a heavy accent, even the best ASR in the world isn’t going to give you back an accurate reading of what they’re actually trying to say. Context comes into it, but I think we’ve actually found a way around that; it just requires quite a bit of development. So we’re definitely looking down that path. And in the automated tests that we’ve built, we’re actually experimenting with that as well, having one-to-three-minute answers, asking students about art, the environment, sort of IELTS-style questions.
Jay: Yep.
Nick: And then doing that as a fully automated, open-ended assessment, which is really, really cool and really exciting. I think that’s really where we’ll take off, and that’s our niche; our focus is to work on that. Because, as you said, from a testing point of view and from a learning point of view, it’s really important not to have just ‘read the text’, ‘read the sentence’, ‘repeat the word’; that can only assess so much. To really get a good gauge of someone’s language ability, and to help them learn, you need that open-ended dialogue with them.
Jay: Yeah, nice. Yes. That’s huge. So the issue with open-ended responses, then, is really content. Is that correct?
Nick: Ironically, all the other metrics, so fluency and prosody, are content-agnostic, so you don’t need to know what’s being said. But for pronunciation, the sounds, you need to know what the user is trying to say, and then you can make an assessment of how well it’s said.
Jay: So is this how it works? Let’s say I’m describing a picture of the Eiffel Tower. So I’m saying sentences, and then it’s transcribing them into words, like a transcription service, like voice typing, for example? I have no idea how the hell it works. How does it work?
Nick: So you use speech recognition to decipher what’s being said as the first step, and that gives you the text. Once you’ve got the text, you send that with the audio to our API, and we make the assessment and send back the result.
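In other words, the open-ended flow is a two-step pipeline: an ASR step works out what was said, then the audio plus that transcript go to the scoring step. The sketch below just shows the wiring; both functions are placeholders rather than real services, and, as Nick notes above, the whole chain is only as good as the ASR step for heavily accented speech.

```python
# Sketch of the open-ended pipeline: step 1 deciphers WHAT was said, step 2
# assesses HOW WELL it was said. Both functions are placeholders; plug in a
# real ASR engine and the scoring API call (see the earlier request sketch).

def recognise_speech(audio_path: str) -> str:
    """Placeholder for an off-the-shelf speech-to-text step."""
    raise NotImplementedError("plug in an ASR engine here")

def score_pronunciation(audio_path: str, expected_text: str) -> dict:
    """Placeholder for the pronunciation-scoring call, using the recognised
    text as the reference for what the speaker was trying to say."""
    raise NotImplementedError("plug in the scoring API call here")

def assess_open_ended(audio_path: str) -> dict:
    transcript = recognise_speech(audio_path)            # step 1: what was said
    return score_pronunciation(audio_path, transcript)   # step 2: how it was said
```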
Jay: Okay, so it’s looking at the sounds, taking them through your pronunciation evaluation software, but then the words as well: it’d be making a sort of semantic map and saying, yes, he’s talking about, whatever I say about the Eiffel Tower, and then giving me a score for content as well, and possibly even grammar. Well, it gets really...
Nick: Yeah, that’s exactly the plan: to have six core metrics as the way we take how you convey meaning, quantify it, and put it into buckets, essentially compartmentalise it. We’ve got pronunciation, prosody, fluency, grammar, vocab or lexical resource, and then content relevance. Those six metrics are what we arrived at when we sat down and broke it all down, looking at everything from the IELTS rubric to the technical requirements: what can we do, what can’t we do. And then we put it into that pipeline. That may change, but that’s how we think you’ll actually be able to deliver a fully automated, open-ended assessment, using those metrics, for testing and for learning as well.
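A report covering those six metrics might look something like the hypothetical structure below; the field names and 0-100 ranges are assumptions for illustration, not the actual product schema.

```python
# Hypothetical report shape for a fully automated, open-ended speaking
# assessment using the six metrics Nick lists. Names and 0-100 scales are
# assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class SpeakingReport:
    pronunciation: float      # how close the sounds are to the reference model
    prosody: float            # stress, rhythm and intonation
    fluency: float            # pace, pausing, hesitation
    grammar: float            # grammatical accuracy of the recognised transcript
    lexical_resource: float   # range and appropriateness of vocabulary
    content_relevance: float  # how well the answer addresses the prompt

report = SpeakingReport(
    pronunciation=78, prosody=64, fluency=70,
    grammar=72, lexical_resource=68, content_relevance=81,
)
print(report)
```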
Jay: Nice, nice. And then one day, in the distant future, there will be interactivity, where you will actually be able to converse. But that’s a different ballgame, isn’t it?
Nick: It is. I have the plan for that written out; I know exactly how to do it, but it’s very difficult technically. So yes, it will happen, but not for a little while.
Jay: That’s... have you seen that movie ‘Her’, where they have the little earpieces and he falls in love with the... Have you seen that movie? It’s Joaquin Phoenix, whatever his name is. It’s a beautiful film, actually one of my favourite films. They have these earpieces, it’s set in the future, and he falls in love with the AI in his ear. But then he realises everybody’s in love with their AI, because they have these wonderful conversations. It’s worth watching; it’s really neat. All right, cool. That’s really interesting. So just quickly, tell me about this test that you’ve created.
Nick: Yeah, so we’re working with a group called Holmes Education Group, and in partnership with them we’ve built an automated test. Duolingo released one mid-COVID, I think it was about a year ago, and it was $69; I don’t think it was fully automated, I think there was still a human element in the assessment. Whereas ours is fully automated, for speaking and listening to start with. And the original use case was using it to streamline applications for schools and universities, but as we keep building and developing it, it actually looks like we’ll use it for the actual entry exam, so we can automate that process. It will never replace IELTS, I don’t think, but we’d definitely like to move in that direction with the technology.
Jay: Wow. What skills does it test?
Nick: At the moment it’s just speaking and listening, but we are adding reading, grammar, comprehension, etc.
Jay: Right. Yeah, because a lot of those tests really neglect speaking. TOEIC, for example, as far as I know, was very much a listening and reading test, because speaking is expensive to assess; it usually requires people. I know at E2Language we mark a lot of writing and speaking assessments, and we’re still using human raters, because for that level of complexity, like an IELTS essay, we haven’t found any AI that would satisfy the students. And the students really need to be satisfied, because they want very high-level feedback and explanations, etc. So yeah, it’s an amazing market. If you got in there and cracked that one, that would be really something else.
Nick: It’s definitely difficult. But I think we’re on the right path.
Jay: Yeah, I think with big problems like this, you just have to start chipping away at them, don’t you? You have to have a bit of an idea and solve little problems until you solve the big problem. I mean, we’re trying to solve the big problem of teaching English on a computer and working out how to do that. It’s been a while; I’ve been looking at this stuff for 10 years, and I’m still changing my mind and still haven’t worked out how to teach grammar. That one... I don’t know how to teach grammar, man. I’ve gone through so many different approaches, swung from this way to that way and back to this way. And finally, I think I’ve given up, and maybe I’m close now that I’ve given up. Sorry, go on.
Nick: I was gonna say, it makes a lot of sense, right? Because in Mandarin it’s ‘nǐ kàn shénme?’, and that’s ‘you look-at what?’, not ‘what are you looking at?’. And I think German and French and all the Latin-based languages are sort of similar in structure. Is English the only one with our structure, is that correct?
Jay: I don’t know, but there’s something you can look at on Google Images which is fascinating: morpheme maps. If you type in ‘morpheme map’, it puts the English sentence at the bottom, ‘What are you looking at’, and the Mandarin Chinese sentence at the top, and it actually draws lines to show word order. Some of them are just radically different. And that’s just word order, let alone how the verbs work and how they indicate time, and the grammar. I find grammar totally fascinating but, it seems, unsolvable. Yeah. Cool, Nick, how can people reach out to you and find out more about Language Confidence?
Nick: Jump on our website, languageconfidence.ai, or send me an email at nick@languageconfidence.com. The demo’s on the website; have a play around and let us know any feedback. I’m always looking for partners to improve the product; that’s how we work with our partners, it’s a really collaborative relationship. I think we spent quite a bit of time with our CTO down in Melbourne, figuring out the product, the best use cases, how to use this, things like that. So yeah, get in touch. We’d love to hear from you.
Jay: Look, it’s made a big difference to our platform, E2School, because with the pronunciation course we had two choices: either not give students feedback, or pay a teacher, in which case the cost of the course would skyrocket and nobody would use it. So what we’ve been able to do with Language Confidence is provide lots of feedback to the student, and I think the cost is $19 for three months or something. So yeah, it’s a hugely valuable tool for us, and our students are using it, which is great.
Nick: Yeah, that’s awesome. And I’ve really enjoyed working with you guys as well. It’s been good fun, and I look forward to quite a few more years of it.
Jay: Yeah, absolutely. We’ll keep branching out and see where we get to. Cool. Thanks very much for joining me today, Nick.
Nick: Thanks for having me. I really appreciate it.
Speaker: Thanks for listening to E2 Talks. Remember to check out e2language.com for PTE, IELTS, OET and TOEFL courses, and if you need help with general English language learning, check out e2school.com. Thanks!