
KaniTTS: Ultra Fast and Expressive TTS Model


We're excited to introduce KaniTTS, our new Text-to-Speech (TTS) model designed for high-speed, high-fidelity audio generation.

KaniTTS is built on a novel architecture that combines a powerful language model with a highly efficient audio codec, enabling it to deliver exceptional performance for real-time applications.


Architectural Breakdown

KaniTTS operates on a two-stage pipeline, leveraging a large foundation model for token generation and a compact, efficient codec for waveform synthesis.

1. LiquidAI LFM2-350M Backbone: Semantic and Acoustic Tokenization

The first stage uses LiquidAI's LFM2-350M as the backbone. It is responsible for converting input text into a sequence of compressed audio tokens. The model is trained on a large corpus of paired text and audio (~50k hours), and its primary function is to produce a high-level representation of the speech in a latent space.

  • Input: Raw text, including punctuation and potential prosodic markers.
  • Process: The model analyzes the text for semantic meaning, syntactic structure, and prosodic cues (e.g., emphasis, pauses, intonation). It then maps this information to a sequence of discrete audio tokens. These tokens represent specific sounds, pitch contours, and rhythmic patterns.
  • Output: A compact sequence of audio tokens. This tokenized representation is significantly smaller than a raw audio waveform, allowing for extremely fast processing and transfer.
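
For illustration, here is a minimal sketch of what stage 1 might look like behind a Hugging Face-style causal-LM interface. The checkpoint name, prompt handling, and token extraction below are assumptions made for the sketch, not the actual KaniTTS API; see the repo for the real entry points.

```python
# Minimal sketch of stage 1: text -> discrete audio tokens.
# The checkpoint name, prompt format, and token-extraction step are
# assumptions for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

BACKBONE = "nineninesix/kani-tts-backbone"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(BACKBONE)
model = AutoModelForCausalLM.from_pretrained(BACKBONE, torch_dtype=torch.bfloat16).to("cuda")

text = "Hello! KaniTTS generates speech in two stages."
inputs = tokenizer(text, return_tensors="pt").to("cuda")

with torch.inference_mode():
    # The backbone autoregressively emits discrete audio-codec tokens
    # rather than text tokens.
    output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.8)

# Strip the text prompt; what remains is the compressed audio-token sequence.
audio_token_ids = output_ids[0, inputs["input_ids"].shape[-1]:]
print(f"{audio_token_ids.shape[0]} audio tokens for {len(text)} characters of text")
```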

2. NVIDIA NanoCodec: High-Fidelity Waveform Synthesis

The second stage of the pipeline is NVIDIA's NanoCodec, which serves as the vocoder. This highly optimized model takes the audio tokens from the backbone and converts them into a continuous, high-fidelity audio waveform.

  • Input: The sequence of audio tokens generated by the backbone.
  • Process: The NanoCodec is a lightweight generative model specifically designed for real-time operation. It reconstructs the full audio signal from the compressed token stream. Its efficiency is a key factor in KaniTTS's low latency, as it can synthesize the audio waveform almost instantaneously from the token input.
  • Output: The final raw audio waveform (e.g., WAV format).
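
A corresponding sketch of stage 2 is shown below. The NeMo class, checkpoint identifier, token layout, and decode() signature are assumptions based on NeMo's generic audio-codec interface; consult the NanoCodec documentation for the exact calls.

```python
# Minimal sketch of stage 2: audio tokens -> waveform, saved as a 22 kHz WAV.
# Class name, checkpoint identifier, and decode() signature are assumptions.
import torch
import soundfile as sf
from nemo.collections.tts.models import AudioCodecModel  # assumed entry point

codec = AudioCodecModel.from_pretrained("nvidia/nemo-nano-codec")  # placeholder identifier
codec = codec.to("cuda").eval()

NUM_CODEBOOKS = 4  # assumption; use the codec's actual codebook count

# `audio_token_ids` is the flat sequence from the stage-1 sketch. Mapping it
# 1:1 onto codec codebook entries is a simplification: the real pipeline also
# strips special tokens and removes any vocabulary offsets.
tokens = audio_token_ids[: (audio_token_ids.numel() // NUM_CODEBOOKS) * NUM_CODEBOOKS]
tokens = tokens.view(1, NUM_CODEBOOKS, -1).to("cuda")
tokens_len = torch.tensor([tokens.shape[-1]], device="cuda")

with torch.inference_mode():
    waveform, waveform_len = codec.decode(tokens=tokens, tokens_len=tokens_len)

sf.write("kani_sample.wav", waveform[0, : waveform_len[0]].cpu().numpy(), samplerate=22050)
```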

Performance and Latency

The two-stage design of KaniTTS provides a significant advantage in terms of speed and efficiency. The backbone LLM generates a compressed token representation, which is then rapidly expanded into an audio waveform by the NanoCodec. This architecture bypasses the computational overhead associated with generating waveforms directly from large-scale language models, resulting in extremely low latency.

Processing time is dominated by the initial token generation, which is highly parallelizable; the subsequent decoding by NanoCodec is near-instantaneous.
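
A simple way to verify these latency claims on your own hardware is to time the full pipeline and compute the real-time factor (RTF). The harness below is a generic sketch; `synthesize` stands in for whatever function wraps the two-stage pipeline.

```python
# Small harness for measuring end-to-end latency and real-time factor (RTF)
# of any TTS callable. 22050 Hz matches the model's output sample rate.
import time
import numpy as np

def measure_latency(synthesize, text: str, sample_rate: int = 22050):
    start = time.perf_counter()
    waveform = synthesize(text)                    # expected: 1-D array of samples
    elapsed = time.perf_counter() - start
    audio_seconds = len(waveform) / sample_rate
    rtf = elapsed / audio_seconds                  # < 1.0 means faster than real time
    return elapsed, rtf

# Usage with a dummy synthesizer that returns 15 s of silence:
latency, rtf = measure_latency(lambda _: np.zeros(15 * 22050, dtype=np.float32), "Hello there.")
print(f"latency = {latency:.3f} s, RTF = {rtf:.3f}")
```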

This approach makes KaniTTS particularly suitable for applications where real-time responsiveness is critical, such as interactive voice assistants, gaming, and live content generation. The combination of a powerful, token-generating backbone and a highly efficient vocoder marks a new direction in high-performance TTS system design.


Features

The model is trained primarily on English for robust core capabilities and supports the following languages: English, Arabic, Chinese, German, Korean, and Spanish. The base model can be continually pretrained on multilingual datasets and produces high-fidelity audio at a 22 kHz sample rate.

This model powers voice interactions in modern agentic systems, enabling seamless, human-like conversations.


Recommended Uses

  • Conversational AI: Integrate into chatbots, virtual assistants, or voice-enabled apps for real-time speech output.
  • Edge and Server Deployment: Optimized for low-latency inference on edge devices or affordable servers, enabling scalable, resource-efficient voice applications.
  • Accessibility Tools: Support screen readers or language learning apps with expressive prosody.
  • Research: Fine-tune for domain-specific voices (e.g., accents, emotions) or benchmark against other TTS systems.

Limitations

  • Performance may vary with fine-tuned variants, long inputs (> 2,000 tokens), or rare languages and accents.
  • Emotion control is basic; advanced expressivity requires fine-tuning.
  • Trained on public datasets; may inherit biases in prosody or pronunciation from training data.

Check out our GitHub repo for more info.


Inference on NVIDIA RTX 5080:

  • Latency: ~1 s to generate 15 s of audio
  • Memory Usage: 2 GB of GPU VRAM
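
Taken together, these figures correspond to a real-time factor of roughly 1 / 15 ≈ 0.07: the model produces audio about 15× faster than it takes to play back.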

This performance makes KaniTTS suitable for real-time conversational AI applications and low-latency voice synthesis.


Beyond Open Source

Impressed by the demo? That’s just the beginning. Our commercial API will unlock ultra-low latency generation and true on-device capabilities that will redefine your product. Be the first to build with it.

Subscribe to the Waiting List for the Commercial/Pro version.


Training Data & Evaluation

  • Dataset: Curated from LibriTTS, Common Voice and Emilia (~80k hours). Pretrained mostly on English speech for robust core capabilities, with support for German, Arabic, Chinese, Korean and Spanish.
  • Metrics: MOS (Mean Opinion Score) 4.3/5 for naturalness; WER (Word Error Rate) < 5% on benchmark texts (a minimal WER check is sketched after this list).
  • Hardware: Pretrained on 8x H100 over 45 hours on Lambda AI.
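
The WER figure can be checked with a standard transcribe-and-compare loop: synthesize the benchmark text, transcribe the audio with an ASR system, and score the transcript. The sketch below uses the jiwer package for the metric; the ASR system and benchmark texts are not specified above, so the transcript here is only a placeholder.

```python
# Minimal WER check: compare an ASR transcript of the synthesized audio
# against the original input text. jiwer is one common choice for the metric;
# the transcript below is a placeholder, not real evaluation data.
from jiwer import wer

reference = "the quick brown fox jumps over the lazy dog"   # text sent to the TTS model
hypothesis = "the quick brown fox jumps over a lazy dog"    # ASR transcript of the audio

print(f"WER = {wer(reference, hypothesis):.2%}")  # 1 substitution out of 9 words ≈ 11.11%
```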

Tips & Tricks

  • Language Optimization: For the best results in non-English languages, continually pretrain the base model on datasets from your desired language set to improve prosody, accents, and pronunciation accuracy. Additionally, fine-tune NanoCodec for the desired set of languages.
  • Batch Processing: For high-throughput applications, process texts in batches of 8-16 to leverage parallel computation, reducing per-sample latency (see the sketch after this list).
  • Blackwell GPU Optimization: This model runs efficiently on NVIDIA's Blackwell architecture GPUs for faster inference and reduced latency in real-time applications.
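
As a rough illustration of the batching tip above, the sketch below pads a list of prompts and generates them in a single call, using the same assumed backbone interface as the stage-1 sketch in the architecture section.

```python
# Sketch of batched synthesis: pad a list of prompts and generate them in one
# call so the GPU processes the batch in parallel. Checkpoint name and
# interface are assumptions, as in the earlier sketches.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

BACKBONE = "nineninesix/kani-tts-backbone"  # hypothetical identifier
tokenizer = AutoTokenizer.from_pretrained(BACKBONE)
tokenizer.padding_side = "left"                              # decoder-only models pad on the left
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BACKBONE, torch_dtype=torch.bfloat16).to("cuda")

texts = [
    "Welcome back! How can I help you today?",
    "Your order has shipped and should arrive on Friday.",
    "Here is a quick summary of this morning's headlines.",
]

batch = tokenizer(texts, return_tensors="pt", padding=True).to("cuda")
with torch.inference_mode():
    output_ids = model.generate(**batch, max_new_tokens=512, do_sample=True, temperature=0.8)
# Each row of output_ids is then trimmed of its prompt and decoded by NanoCodec independently.
```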

Responsible Use and Prohibited Activities

The model is designed for ethical and responsible use. The following activities are strictly prohibited:

  • The model may not be used for any illegal purposes or to create content that is harmful, threatening, defamatory, or obscene. This includes, but is not limited to, the generation of hate speech, harassment, or incitement of violence.
  • You may not use the model to generate or disseminate false or misleading information. This includes creating deceptive audio content that impersonates individuals without their consent or misrepresents facts.
  • The model is not to be used for any malicious activities, such as spamming, phishing, or the creation of content intended to deceive or defraud.

By using this model, you agree to abide by these restrictions and all applicable laws and regulations.



Contact

Have a question, feedback, or need support? Please fill out our contact form and we'll get back to you as soon as possible.