
KaniTTS: Ultra-Fast and Expressive Speech Generation Model
We're excited to introduce KaniTTS, our new Text-to-Speech (TTS) model designed for high-speed, high-fidelity audio generation.
KaniTTS is built on a novel architecture that combines a powerful language model with a highly efficient audio codec, enabling it to deliver exceptional performance for real-time applications.
Architectural Breakdown
KaniTTS operates on a two-stage pipeline, leveraging a large foundation model for token generation and a compact, efficient codec for waveform synthesis.
1. LiquidAI LFM2-350M Backbone: Semantic and Acoustic Tokenization
The first stage uses LiquidAI's LFM2-350M as the backbone. It is responsible for converting input text into a sequence of compressed audio tokens. The model is trained on a vast corpus of text and corresponding audio (~50k hours). Its primary function is to produce a high-level representation of the speech in a latent space.
- Input: Raw text, including punctuation and potential prosodic markers.
- Process: The model analyzes the text for semantic meaning, syntactic structure, and prosodic cues (e.g., emphasis, pauses, intonation). It then maps this information to a sequence of discrete audio tokens. These tokens represent specific sounds, pitch contours, and rhythmic patterns.
- Output: A compact sequence of audio tokens. This tokenized representation is significantly smaller than a raw audio waveform, allowing for extremely fast processing and transfer.
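To put "significantly smaller" in perspective, here is a rough back-of-the-envelope comparison. The 50 tokens-per-second rate below is an illustrative assumption, not NanoCodec's published frame rate:

```python
# Illustrative size comparison: raw PCM vs. a discrete token stream.
# The token rate (50/s) is an assumed figure for illustration only.
sample_rate = 22_050                 # Hz, matching the model's output rate
bytes_per_sample = 2                 # 16-bit PCM
raw_bytes_per_sec = sample_rate * bytes_per_sample      # ~44 kB per second

tokens_per_sec = 50                  # hypothetical codec token rate
bytes_per_token = 2                  # token IDs fit in 16-bit integers
token_bytes_per_sec = tokens_per_sec * bytes_per_token  # 100 B per second

ratio = raw_bytes_per_sec / token_bytes_per_sec
print(f"~{ratio:.0f}x smaller than raw 22 kHz PCM")     # ~441x
```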
2. NVIDIA NanoCodec: High-Fidelity Waveform Synthesis
The second stage of the pipeline is NVIDIA's NanoCodec, which serves as the vocoder. This highly optimized model takes the audio tokens from the backbone and converts them into a continuous, high-fidelity audio waveform.
- Input: The sequence of audio tokens generated by the backbone.
- Process: The NanoCodec is a lightweight generative model specifically designed for real-time operation. It reconstructs the full audio signal from the compressed token stream. Its efficiency is a key factor in KaniTTS's low latency, as it can synthesize the audio waveform almost instantaneously from the token input.
- Output: The final raw audio waveform (e.g., WAV format).
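To make the end-to-end data flow concrete, here is a minimal, self-contained sketch of the two-stage pipeline. The function names and internals are placeholder stand-ins, not the actual KaniTTS API (see the GitHub repo for real inference code):

```python
import numpy as np

# Stage 1 (backbone): text -> discrete audio tokens.
# In KaniTTS this is LiquidAI's LFM2-350M; stubbed here so the
# sketch stays self-contained and runnable.
def backbone_generate_tokens(text: str) -> list[int]:
    # A real backbone maps semantics, syntax, and prosodic cues
    # to a compact sequence of codec token IDs.
    return [hash(word) % 4096 for word in text.split()]  # placeholder IDs

# Stage 2 (vocoder): audio tokens -> waveform.
# In KaniTTS this is NVIDIA's NanoCodec; stubbed the same way.
def codec_decode(tokens: list[int], sample_rate: int = 22_050) -> np.ndarray:
    # A real codec reconstructs the waveform from the token stream;
    # this stub returns silence of a plausible duration.
    seconds = len(tokens) / 50  # assumes ~50 tokens/s, an illustrative rate
    return np.zeros(int(seconds * sample_rate), dtype=np.float32)

text = "KaniTTS converts text to speech in two stages."
tokens = backbone_generate_tokens(text)   # compact intermediate form
waveform = codec_decode(tokens)           # final 22 kHz audio
print(len(tokens), "tokens ->", waveform.shape[0], "samples")
```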
Performance and Latency
The two-stage design of KaniTTS provides a significant advantage in terms of speed and efficiency. The backbone LLM generates a compressed token representation, which is then rapidly expanded into an audio waveform by the NanoCodec. This architecture bypasses the computational overhead associated with generating waveforms directly from large-scale language models, resulting in extremely low latency.
Processing time is dominated by the initial token generation, which is highly parallelizable; the subsequent decoding by the NanoCodec is near-instantaneous.
This approach makes KaniTTS particularly suitable for applications where real-time responsiveness is critical, such as interactive voice assistants, gaming, and live content generation. The combination of a powerful, token-generating backbone and a highly efficient vocoder marks a new direction in high-performance TTS system design.
Features
The model is trained primarily on English for robust core capabilities, and its tokenizer supports the following languages: English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The base model can be continually pretrained on multilingual datasets, producing high-fidelity audio at a 22 kHz sample rate.
The model powers voice interactions in modern agentic systems, enabling seamless, human-like conversations.
- Model Size: 450M parameters (pretrained version)
- License: Apache 2.0
Recommended Uses
- Conversational AI: Integrate into chatbots, virtual assistants, or voice-enabled apps for real-time speech output.
- Edge and Server Deployment: Optimized for low-latency inference on edge devices or affordable servers, enabling scalable, resource-efficient voice applications.
- Accessibility Tools: Support screen readers or language learning apps with expressive prosody.
- Research: Fine-tune for domain-specific voices (e.g., accents, emotions) or benchmark against other TTS systems.
Limitations
- Performance may vary with fine-tuned variants, long inputs (>2000 tokens), or rare languages/accents.
- Emotion control is basic; advanced expressivity requires fine-tuning.
- Trained on public datasets; may inherit biases in prosody or pronunciation from the training data.
Check out our GitHub repo for more info.
Inference on an NVIDIA RTX 5080:
- Latency: ~1 second to generate 15 seconds of audio
- Memory Usage: 2 GB of GPU VRAM
This performance makes KaniTTS suitable for real-time conversational AI applications and low-latency voice synthesis.
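In other words, the real-time factor (RTF) is well under 1:

```python
# Real-time factor from the RTX 5080 figures above:
# RTF = compute time / duration of the audio produced.
generation_time_s = 1.0   # ~1 second of compute
audio_duration_s = 15.0   # yields ~15 seconds of audio

rtf = generation_time_s / audio_duration_s
print(f"RTF ~ {rtf:.3f} ({audio_duration_s / generation_time_s:.0f}x faster than real time)")
```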
Examples
Audio comparisons (players embedded in the original post) pit KaniTTS Beta against Cartesia Sonic 2.0, ElevenLabs V3, Inworld Max, Kokoro, Orpheus, Sesame CSM 1b, Hume AI, and MiniMax 2.5 HD on the following sample texts:
- You make my days brighter, and my wildest dreams feel like reality. How do you do that?
- Anyway, um, so, um, tell me, tell me all about her. I mean, what's she like? Is she really, you know, pretty?
- Great, and just a couple quick questions so we can match you with the right buyer. Is your home address still 330 East Charleston Road?
- No, that does not make you a failure. No, sweetie, no. It just, uh, it just means that you're having a tough time...
- Oh, yeah. I mean did you want to get a quick snack together or maybe something before you go?
- I-- Oh, I am such an idiot sometimes. I'm so sorry. Um, I-I don't know where my head's at.
- Got it. $300,000. I can definitely help you get a very good price for your property by selecting a realtor.
- Holy fu- Oh my God! Don't you understand how dangerous it is, huh?
Beyond Open Source
Impressed by the demo? That’s just the beginning. Our commercial API will unlock ultra-low latency generation and true on-device capabilities that will redefine your product. Be the first to build with it.
Subscribe to the Waiting List for the Commercial/Pro version.
Training Data & Evaluation
- Dataset: Curated from LibriTTS, Common Voice, and Emilia (~50k hours). Pretrained mostly on English speech for robust core capabilities, with multilingual fine-tuning for the supported languages.
- Metrics: MOS (Mean Opinion Score) of 4.3/5 for naturalness; WER (Word Error Rate) < 5% on benchmark texts (a measurement sketch follows this list).
- Hardware: Pretrained on 8x NVIDIA H200 GPUs over 25 hours.
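For reference, a WER figure like the one above is typically measured by synthesizing a benchmark text, transcribing the audio with an ASR model, and comparing the transcript against the reference. A minimal sketch using the jiwer package; the synthesis-plus-transcription helper is a hypothetical placeholder:

```python
from jiwer import wer  # pip install jiwer

def synthesize_and_transcribe(text: str) -> str:
    # Hypothetical placeholder: in a real evaluation this would run
    # TTS on the text, then transcribe the audio with an ASR model.
    return text.replace("brown", "braun")  # simulate one recognition error

reference = "the quick brown fox jumps over the lazy dog"
hypothesis = synthesize_and_transcribe(reference)
print(f"WER: {wer(reference, hypothesis):.2%}")  # the model card reports < 5%
```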
Tips & Tricks
- Language Optimization: For the best results in non-English languages, continually pretrain the base model on datasets in your target languages to improve prosody, accents, and pronunciation accuracy. Additionally, fine-tune NanoCodec for the desired set of languages.
- Batch Processing: For high-throughput applications, process texts in batches of 8-16 to leverage parallel computation, reducing per-sample latency (see the sketch after this list).
- Blackwell GPU Optimization: The model runs efficiently on NVIDIA's Blackwell-architecture GPUs, delivering faster inference and lower latency in real-time applications.
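Here is a minimal batching sketch for the tip above; `synthesize_batch` is a hypothetical stand-in for whatever batched inference entry point your serving stack exposes:

```python
from typing import Callable

def synthesize_many(texts: list[str],
                    synthesize_batch: Callable[[list[str]], list[bytes]],
                    batch_size: int = 16) -> list[bytes]:
    """Run TTS over many texts in fixed-size batches to amortize
    per-call overhead and keep the GPU saturated."""
    audio: list[bytes] = []
    for start in range(0, len(texts), batch_size):
        audio.extend(synthesize_batch(texts[start:start + batch_size]))
    return audio
```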
Responsible Use and Prohibited Activities
The model is designed for ethical and responsible use. The following activities are strictly prohibited:
- The model may not be used for any illegal purposes or to create content that is harmful, threatening, defamatory, or obscene. This includes, but is not limited to, the generation of hate speech, harassment, or incitement of violence.
- You may not use the model to generate or disseminate false or misleading information. This includes creating deceptive audio content that impersonates individuals without their consent or misrepresents facts.
- The model is not to be used for any malicious activities, such as spamming, phishing, or the creation of content intended to deceive or defraud.
By using this model, you agree to abide by these restrictions and all applicable laws and regulations.
Sources
- GitHub Repo: nineninesix-ai/kani-tts
- Base Model Card on Hugging Face: nineninesix/kani-tts-450m-0.1-pt
- Fine-tuned Model Card on Hugging Face: nineninesix/kani-tts-450m-0.2-ft
- Hugging Face Space: nineninesix/KaniTTS
- Inference Example: Colab Notebook
- Finetuning Example: Colab Notebook
- Example Dataset for Fine-tuning: Expresso Conversational
- Waiting List for the Pro Version