· 13 min read
Aaron Ng

When talking to a voice AI, few things are as frustrating as getting interrupted just because you paused for a moment to think. As humans, we naturally pick up on a wide range of cues to know when someone is done speaking — body language, tone, context, even the subtle shifts in breathing.

But for a machine, this is a much tougher problem. Simply waiting for a fixed 500ms of silence isn't enough, and it's a quick way to drive users away.

This post is about making turn detection smarter. I'll walk through why Semantic Turn Detection is a big upgrade over simply listening for silence, show how instruction-tuned Small Language Models (SLMs) fit the job, and share a practical code example using an open-source SLM to help you get started.
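To make the idea concrete, here's a minimal sketch: feed the running transcript to an instruction-tuned SLM and ask it whether the utterance reads as finished. The model name is a placeholder, and this assumes a recent version of Hugging Face `transformers` that accepts chat-formatted input; it illustrates the approach, not the exact code from the post.

```python
# Minimal sketch: semantic turn detection with an instruction-tuned SLM.
# Assumes a recent Hugging Face `transformers` with chat-style pipelines;
# the model name below is a placeholder, not a specific recommendation.
from transformers import pipeline

generator = pipeline("text-generation", model="your-org/small-instruct-model")

def turn_is_complete(transcript: str) -> bool:
    """Ask the SLM whether the speaker appears to have finished their turn."""
    messages = [
        {
            "role": "system",
            "content": (
                "Decide whether the speaker has finished their turn. "
                "Reply with exactly one word: complete or incomplete."
            ),
        },
        {"role": "user", "content": transcript},
    ]
    out = generator(messages, max_new_tokens=4)
    # Chat pipelines return the conversation with the assistant reply appended.
    reply = out[0]["generated_text"][-1]["content"].strip().lower()
    return reply.startswith("complete")

# Combine this with a short silence threshold: only end the turn once the
# user pauses AND the model judges the utterance semantically complete.
```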

· 17 min read
Will Knottenbelt

Sesame AI has recently stirred up a huge amount of hype with its ultra-realistic, open-source Conversational Speech Model (CSM). While the model is impressive, the training code wasn't released, and there is strong demand for customization. This blog walks you through exactly how to fine-tune the CSM for any language or voice you desire!

· 10 min read
Seoirse Murray

Complete this sentence: "I love lamb..." What comes next? "It pairs so well with mint", or "it's perfect in a Sunday roast"? What if the conversation has been about poetry? In that case it probably refers to Charles Lamb, and the phrase could continue "his poetry is so moving". If the preceding conversation had been about the movie Anchorman, it may have been "I love lamp".

The important difference between these scenarios is the context.

· 31 min read
David MacLeod

The All-Reduce collective is ubiquitous in distributed training, but is currently not supported for sparse CUDA tensors in PyTorch. In the first part of this blog we contrast the existing alternatives available in the Gloo/NCCL backends. In the second part we implement our own efficient sparse All-Reduce collective using PyTorch and CUDA.
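For a flavour of the baseline approaches, here's a minimal sketch that emulates a sparse All-Reduce by all-gathering each rank's sparse COO tensor and summing locally. It assumes an initialised `torch.distributed` process group, and it is deliberately the simple, slow baseline, not the efficient CUDA collective the post builds.

```python
# Minimal sketch: emulate a sparse All-Reduce by gathering every rank's
# sparse COO tensor and coalescing the sum locally. Simple but slow --
# a baseline, not an efficient collective.
import torch
import torch.distributed as dist

def sparse_all_reduce(t: torch.Tensor) -> torch.Tensor:
    """All-reduce a sparse COO tensor across the process group."""
    world_size = dist.get_world_size()
    gathered: list = [None] * world_size
    # all_gather_object works on arbitrary picklable objects, so it
    # round-trips through CPU memory -- convenient, not fast.
    dist.all_gather_object(gathered, t.cpu())
    total = gathered[0]
    for other in gathered[1:]:
        total = total + other             # sparse + sparse stays sparse
    return total.coalesce().to(t.device)  # merge duplicate indices
```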

· 7 min read
Aaron Ng

Imagine being able to understand and interpret spoken language not only retrospectively, but as it happens. This isn't just a pipe dream — it's a reality we're crafting at Speechmatics.

Our mission is to deliver Speech Intelligence for the AI era, leveraging foundational speech technology and cutting-edge AI.

In 2023, we launched a series of Capabilities that do more with the spoken word. Moving beyond transcription, we now offer powerful functionality that interprets and analyses the spoken word, making it more useful and valuable than ever before. So far, we've released Translation, Summaries, Sentiment, Chapters and Topics, but our journey has only just begun.

· 21 min read
Andrew Innes

Not everyone is able to write funky fused operators to make ML models run faster on GPUs using clever quantisation tricks. However, lots of developers work with algorithms that feel like they should be able to leverage the thousands of cores in a GPU to run faster than on the dozens of cores of a server CPU. To see what is possible and what is involved, I revisited the first problem I ever considered trying to accelerate with a GPU. What is unusual about my chosen problem is that it is officially pointless, so you ought not to be able to find any library that will accelerate it, because it isn't worth writing one! That makes it an interesting proxy for algorithms which aren't catered for by high-performance libraries written by experts, but can be structured to run thousands of threads in parallel.

· 8 min read
Theo Clark
Ellena Reid

As machine learning engineers increasingly adopt the Bitter Lesson and models grow in size, the cost of training them is also on the rise. A significant portion of the overall compute budget is frequently spent on hyperparameter tuning before launching a final training run. MuP makes it possible to transfer hyperparameters from a much smaller 'toy' model, leading to a substantial reduction in overall training cost.
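For a sense of the workflow, here's a minimal sketch using the open-source `mup` package. Treat it as a sketch: the widths, layer shapes and learning rate are arbitrary, and the full recipe (including the package's init helpers) is in its documentation.

```python
# Minimal sketch of muP-style hyperparameter transfer with the open-source
# `mup` package (pip install mup). Widths and learning rate are arbitrary.
import torch.nn as nn
from mup import MuAdam, MuReadout, set_base_shapes

def make_model(width: int) -> nn.Module:
    # MuReadout stands in for the final nn.Linear so the output layer
    # is rescaled correctly as width grows.
    return nn.Sequential(nn.Linear(256, width), nn.ReLU(), MuReadout(width, 10))

base = make_model(width=64)     # the small 'toy' proxy model
model = make_model(width=4096)  # the full-size model

# Record how each parameter dimension scales relative to the base model,
# so muP can apply per-layer learning-rate and initialisation scaling.
set_base_shapes(model, base)

# A learning rate tuned on the toy model can now be reused directly.
optimizer = MuAdam(model.parameters(), lr=3e-4)
```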

· 9 min read
Anartz Nuin

At Speechmatics, we wanted to present our real-time translation product in a straightforward yet impactful way, and you can experience the result firsthand on our website. Beyond showcasing real-time transcription and translation, our live demo addresses diverse user needs. For those who have hearing impairments, or who find themselves in environments where audio isn't an option, our streaming server provides a text-based alternative, ensuring that no one is left out. And where immediate translation is crucial, the service bridges language barriers effortlessly.

· 11 min read
Andre Mansikkaniemi

Speaker diarization often complements automatic speech recognition (ASR) by determining "Who spoke when?". One intriguing advancement in the field is the adoption of Self-Supervised Learning (SSL). By harnessing vast amounts of unlabelled audio data, SSL improves multiple downstream tasks, including ASR and diarization, using the same pre-trained model. As we explore in this blog, the synergy between SSL and traditional methods not only boosts ASR accuracy but also improves speaker diarization results.

· 28 min read
Tudor Evans

Here at Speechmatics, audio is the lifeblood of everything we do, from training our models right through to crafting effective demos of our technology. One of the best examples of this is our Portal translation demo, which lets users see their speech translated into a number of languages in real time. However, accessing media devices through the browser isn't straightforward: browsers require the user to explicitly permit access to the media device, and to make things even more complicated, each browser engine has its own quirks that have to be handled. In this article, I'll walk through how we provide a consistent and straightforward microphone-access experience for our demos across all the major browsers and devices.