
KaniTTS: Ultra Fast and Expressive Speech Generation Model


We're excited to introduce KaniTTS, our new Text-to-Speech (TTS) model designed for high-speed, high-fidelity audio generation.

KaniTTS is built on a novel architecture that combines a powerful language model with a highly efficient audio codec, enabling it to deliver exceptional performance for real-time applications.


Architectural Breakdown

KaniTTS operates on a two-stage pipeline, leveraging a large foundation model for token generation and a compact, efficient codec for waveform synthesis.

1. LiquidAI LFM2-350M Backbone: Semantic and Acoustic Tokenization

The first stage uses LiquidAI's LFM2-350M as the backbone. It is responsible for converting input text into a sequence of compressed audio tokens. The model is trained on a vast corpus of paired text and audio (~50k hours). Its primary function is to produce a high-level representation of the speech in a latent space.

2. NVIDIA NanoCodec: High-Fidelity Waveform Synthesis

The second stage of the pipeline is NVIDIA's NanoCodec, which serves as the vocoder. This highly optimized model takes the audio tokens produced by the backbone and converts them into a continuous, high-fidelity audio waveform.
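
To make the data flow concrete, here is a minimal sketch of the two-stage pipeline in Python. The loader and method names (load_backbone, load_codec, generate_audio_tokens, decode_to_waveform) are hypothetical placeholders rather than the actual KaniTTS API; see the GitHub repo for the real interface.

```python
# Hedged sketch of the two-stage KaniTTS pipeline.
# All loader/method names below are hypothetical placeholders.

import soundfile as sf  # pip install soundfile

def synthesize(text: str, backbone, codec, sample_rate: int = 22050):
    # Stage 1: the LFM2-350M backbone converts input text into a sequence
    # of compressed audio tokens (a latent representation of the speech).
    audio_tokens = backbone.generate_audio_tokens(text)

    # Stage 2: the NanoCodec vocoder expands the token sequence into a
    # continuous, high-fidelity waveform.
    waveform = codec.decode_to_waveform(audio_tokens)
    return waveform, sample_rate

# Example usage (assuming such loaders exist in the released package):
# backbone = load_backbone("path/to/kanitts-backbone")  # hypothetical
# codec = load_codec("path/to/nanocodec")               # hypothetical
# wav, sr = synthesize("Hello from KaniTTS!", backbone, codec)
# sf.write("output.wav", wav, sr)
```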


Performance and Latency

The two-stage design of KaniTTS provides a significant advantage in terms of speed and efficiency. The backbone LLM generates a compressed token representation, which is then rapidly expanded into an audio waveform by the NanoCodec. This architecture bypasses the computational overhead associated with generating waveforms directly from large-scale language models, resulting in extremely low latency.

Processing time is dominated by the initial token generation, which is highly parallelizable; the subsequent decoding by the NanoCodec is near-instantaneous.

This approach makes KaniTTS particularly suitable for applications where real-time responsiveness is critical, such as interactive voice assistants, gaming, and live content generation. The combination of a powerful, token-generating backbone and a highly efficient vocoder marks a new direction in high-performance TTS system design.
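
One way to quantify this in practice is the real-time factor (RTF): wall-clock generation time divided by the duration of the generated audio, where a value below 1.0 means audio is produced faster than it plays back. A small sketch, reusing the hypothetical synthesize helper from the architecture section:

```python
import time

def real_time_factor(text: str, backbone, codec) -> float:
    # RTF = generation time / duration of the generated audio.
    # RTF < 1.0 is the regime required for interactive, real-time use.
    start = time.perf_counter()
    waveform, sr = synthesize(text, backbone, codec)
    elapsed = time.perf_counter() - start
    return elapsed / (len(waveform) / sr)
```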

Features

The model was trained primarily on English for robust core capabilities, while the tokenizer supports English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The base model can be continually pretrained on multilingual datasets, producing high-fidelity audio at a 22 kHz sample rate.

This model powers voice interactions in modern agentic systems, enabling seamless, human-like conversations.


Recommended Uses


Limitations

Check out our GitHub repo for more info.


Inference on NVIDIA RTX 5080:

This performance makes KaniTTS suitable for real-time conversational AI applications and low-latency voice synthesis.
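
No official numbers are reproduced here; as a rough guide, a benchmark along these lines (with warmup runs and explicit CUDA synchronization so asynchronous kernel launches are not undercounted) can be used to measure end-to-end synthesis latency on your own GPU. It again reuses the hypothetical synthesize helper from the architecture section.

```python
import time
import torch

def benchmark_synthesis(text: str, backbone, codec, warmup: int = 3, runs: int = 10) -> float:
    # Warmup runs amortize one-time costs (CUDA context, kernel autotuning).
    for _ in range(warmup):
        synthesize(text, backbone, codec)
    torch.cuda.synchronize()

    # Timed runs: synchronize after each call so the measurement covers
    # the full GPU workload, not just the kernel launch.
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        synthesize(text, backbone, codec)
        torch.cuda.synchronize()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)  # mean end-to-end latency in seconds
```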


Examples

Side-by-side audio comparisons of KaniTTS Beta against Cartesia Sonic 2.0, Elevenlabs V3, Inworld Max, Kokoro, Orpheus, Sesame CSM 1b, Hume AI, and Minimax 2.5 HD on the following prompts (audio samples available on the demo page):
You make my days brighter, and my wildest dreams feel like reality. How do you do that?
Anyway, um, so, um, tell me, tell me all about her. I mean, what's she like? Is she really, you know, pretty?
Great, and just a couple quick questions so we can match you with the right buyer. Is your home address still 330 East Charleston Road?
No, that does not make you a failure. No, sweetie, no. It just, uh, it just means that you're having a tough time...
Oh, yeah. I mean did you want to get a quick snack together or maybe something before you go?
I-- Oh, I am such an idiot sometimes. I'm so sorry. Um, I-I don't know where my head's at.
Got it. $300,000. I can definitely help you get a very good price for your property by selecting a realtor.
Holy fu- Oh my God! Don't you understand how dangerous it is, huh?

Beyond Open Source

Impressed by the demo? That’s just the beginning. Our commercial API will unlock ultra-low latency generation and true on-device capabilities that will redefine your product. Be the first to build with it.

Subscribe to the Waiting List for the Commercial/Pro version.


Training Data & Evaluation


Tips & Tricks


Responsible Use and Prohibited Activities

The model is designed for ethical and responsible use. The following activities are strictly prohibited:

By using this model, you agree to abide by these restrictions and all applicable laws and regulations.


Sources