

Cartesia Sonic-2 API

Low-latency, ultra-realistic voice model, served in partnership with Cartesia.


Try this model in our Playground!

Cartesia Sonic-2 API Usage

Endpoint

RUN INFERENCE

curl -X POST "https://api.together.xyz/v1/audio/generations" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  --output speech.wav \
  -d '{
    "model": "cartesia/sonic-2",
    "input": "Hello! This is a test of Cartesia Sonic-2.",
    "voice": "sweet lady",
    "response_encoding": "pcm_f32le",
    "response_format": "wav",
    "sample_rate": 44100,
    "stream": false
  }'

RUN INFERENCE

from together import Together

client = Together()

response = client.audio.speech.create(
    model="cartesia/sonic-2",
    input="Hello! This is a test of Cartesia Sonic-2.",
    voice="sweet lady",
    response_encoding="pcm_f32le",
    response_format="wav",
    sample_rate=44100,
    stream=False
)

# Save the generated audio to disk.
response.stream_to_file("speech.wav")
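Once the audio is saved, you can sanity-check that the WAV container matches the `sample_rate` you requested using Python's standard `wave` module. The sketch below synthesizes a short test tone as a stand-in for the API output (so it runs without an API key), then reads the file back and checks its parameters:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100
DURATION_S = 0.5

# Synthesize a 440 Hz tone as 16-bit mono PCM -- a stand-in for the
# speech.wav file the API call above would produce.
samples = [
    int(32767 * 0.3 * math.sin(2 * math.pi * 440 * i / SAMPLE_RATE))
    for i in range(int(SAMPLE_RATE * DURATION_S))
]
with wave.open("speech.wav", "wb") as f:
    f.setnchannels(1)           # mono
    f.setsampwidth(2)           # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(struct.pack(f"<{len(samples)}h", *samples))

# Verify the container matches the sample_rate requested from the API.
with wave.open("speech.wav", "rb") as f:
    duration = f.getnframes() / f.getframerate()
    print(f"{f.getframerate()} Hz, {duration:.2f} s")  # → 44100 Hz, 0.50 s
```

Note that the real API response here uses `pcm_f32le` (32-bit float) encoding rather than the 16-bit integer PCM used in this stand-in; the header checks work the same way either way.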

RUN INFERENCE

import Together from "together-ai";
import { createWriteStream } from "node:fs";
import { Readable } from "node:stream";

const together = new Together();

const response = await together.audio.speech.create({
  model: "cartesia/sonic-2",
  input: "Hello! This is a test of Cartesia Sonic-2.",
  voice: "sweet lady",
  response_encoding: "pcm_f32le",
  response_format: "wav",
  sample_rate: 44100,
  stream: false
});

if (response.body) {
  // Bridge the web ReadableStream into a Node stream and write it to disk.
  const nodeStream = Readable.from(response.body as ReadableStream);
  const fileStream = createWriteStream("./speech.wav");

  nodeStream.pipe(fileStream);
}


Looking for production scale? Deploy on a dedicated endpoint

Deploy Cartesia Sonic-2 on a dedicated endpoint with custom hardware configuration, as many instances as you need, and auto-scaling.

Get started