Overview
Process long text efficiently by automatically splitting it into chunks and combining the audio with pauses between segments. Ideal for long-form content like articles, books, or documentation.
Request Body
- text: The text to convert to speech. Maximum length: 20,000 characters. For text longer than this limit, split it into multiple requests or use the streaming endpoint.
- language: Language code (e.g., "en", "es", "fr")
- pause_duration: Pause duration in milliseconds between text segments (default: 800)
- api_key: Your API key
Example Request
curl -X POST https://api.gistmag.co.uk/tts/batch \
  -H "Content-Type: application/json" \
  -d '{
    "text": "This is a very long text that will be automatically split into chunks...",
    "language": "en",
    "pause_duration": 800,
    "api_key": "your_api_key_here"
  }' \
  --output output.mp3
Text Length Limits
- Maximum: 20,000 characters per request
- Recommended: For text longer than 20,000 characters, split it into multiple batch requests or use the streaming endpoint
- Processing Time: Approximately 1 second per 1,000 characters (e.g., 20,000 characters ≈ 20-30 seconds)
Requests exceeding 20,000 characters will return a 400 error. For very long content, consider splitting it into multiple requests or using the streaming endpoint for real-time playback.
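You can enforce the limit client-side before spending a request. A minimal sketch in Python (the 20,000-character limit and the roughly 1 second per 1,000 characters figure come from this section; the helper itself is illustrative):

MAX_BATCH_CHARS = 20_000  # documented per-request limit

def preflight(text: str) -> float:
    """Reject over-limit text locally and return a rough processing-time estimate in seconds."""
    if len(text) > MAX_BATCH_CHARS:
        raise ValueError(
            f"Text has {len(text):,} characters; the batch limit is {MAX_BATCH_CHARS:,}. "
            "Split it into multiple requests or use the streaming endpoint."
        )
    return len(text) / 1000  # ~1 second per 1,000 characters

print(f"Estimated processing time: ~{preflight('Some article text...'):.1f} seconds")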
How It Works
- Text Splitting: Text is automatically split at sentence boundaries (., !, ?) into chunks of up to 250 characters, keeping sentences intact (see the sketch after this list)
- Sequential Processing: Each segment is processed independently and saved to temporary files
- Memory-Efficient Combining: Audio segments are combined in batches to prevent memory exhaustion, then merged into a single file
- Combining with Pauses: Audio segments are combined sequentially with configurable pauses (default 800ms) between them for natural breaks
- Normalization: Final audio is normalized for consistent volume across all segments
- Single Output: Exported as a single high-quality MP3 file (192k bitrate)
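The splitting step described above can be approximated client-side if you want to preview how your text will be segmented. A minimal sketch of sentence-boundary chunking (the 250-character chunk size comes from this section; the regex and function are illustrative, not the server implementation):

import re

def split_into_chunks(text: str, max_chars: int = 250) -> list[str]:
    """Split text at sentence boundaries (., !, ?) into chunks of up to max_chars,
    keeping each sentence intact."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

# Small max_chars used here only to make the chunking visible.
print(split_into_chunks("First sentence. Second sentence! A third?", max_chars=20))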
When to Use Batch Processing
Use batch processing when:
- You need a single complete MP3 file for download
- Processing long-form content (articles, books, documentation, audiobooks)
- You want natural pauses between text segments
- You prefer higher quality audio (192k bitrate)
- You’re creating content for offline playback
Don’t use batch when:
- You need real-time/low latency playback (use streaming instead)
- You want audio to start playing immediately (use streaming instead)
Response
The response is a single MP3 audio file containing the complete text with natural pauses.
Content-Type: audio/mpeg
Content-Disposition: attachment; filename=chapter.mp3
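If you want to honor the filename the API suggests, you can read it from the Content-Disposition header. A minimal sketch with the requests library (the fallback name and header parsing are illustrative):

import requests

response = requests.post(
    "https://api.gistmag.co.uk/tts/batch",
    json={"text": "Long text...", "language": "en",
          "pause_duration": 800, "api_key": "your_api_key_here"}
)
response.raise_for_status()

# Fall back to a local name if the header is absent or has no filename.
filename = "output.mp3"
disposition = response.headers.get("Content-Disposition", "")
if "filename=" in disposition:
    filename = disposition.split("filename=", 1)[1].strip().strip('"')

with open(filename, "wb") as f:
    f.write(response.content)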
Processing Time
Processing time scales with text length:
- Small text (< 5,000 chars): ~5-15 seconds
- Medium text (5,000-15,000 chars): ~15-30 seconds
- Large text (15,000-20,000 chars): ~20-30 seconds
The API does not enforce request timeouts, so requests will wait as long as needed for processing to complete.
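The requests library sends no timeout by default, but if your application sets one, give long batches enough headroom. A minimal sketch (the 120-second ceiling is an illustrative assumption, not an API value):

import requests

payload = {
    "text": "Very long text here...",
    "language": "en",
    "pause_duration": 800,
    "api_key": "your_api_key_here",
}

# The API itself never times the request out, so the only limit is the client's.
response = requests.post(
    "https://api.gistmag.co.uk/tts/batch",
    json=payload,
    timeout=120,  # illustrative ceiling, well above the ~20-30 second worst case
)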
Error Responses
400 Bad Request - Text exceeds maximum length:
{
  "error": "Text is too long for batch processing. Maximum is 20,000 characters. Your text has 30,000 characters.",
  "text_length": 30000,
  "max_length": 20000,
  "suggestion": "Please split your text into smaller chunks or use the streaming endpoint for very long text."
}
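In client code you can branch on the status code and surface the structured error fields shown above. A minimal sketch with the requests library (field names match the example; the handling itself is illustrative):

import requests

long_text = "Very long text here..."
response = requests.post(
    "https://api.gistmag.co.uk/tts/batch",
    json={"text": long_text, "language": "en",
          "pause_duration": 800, "api_key": "your_api_key_here"}
)

if response.status_code == 400:
    detail = response.json()
    print(detail["error"])                        # human-readable reason
    print("Suggestion:", detail.get("suggestion"))
else:
    response.raise_for_status()                   # any other non-2xx status
    with open("output.mp3", "wb") as f:
        f.write(response.content)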
Credit Cost
1 credit per 1,000 characters, with a minimum of 1 credit for any request.
Examples:
- 10 characters = 1 credit (minimum charge)
- 500 characters = 1 credit (minimum charge)
- 1,000 characters = 1 credit
- 2,500 characters = 3 credits (rounded up)
- 5,000 characters = 5 credits
- 10,000 characters = 10 credits
- 20,000 characters = 20 credits (maximum)
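The examples above follow a simple formula: character count divided by 1,000, rounded up, with a floor of one credit. A minimal sketch of that arithmetic:

import math

def estimate_credits(text: str) -> int:
    """1 credit per 1,000 characters, rounded up, with a minimum of 1 credit."""
    return max(1, math.ceil(len(text) / 1000))

assert estimate_credits("a" * 10) == 1       # minimum charge
assert estimate_credits("a" * 2_500) == 3    # rounded up
assert estimate_credits("a" * 20_000) == 20  # maximum request size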
Example Usage
Python
import requests

# Submit the full text in a single batch request.
response = requests.post(
    "https://api.gistmag.co.uk/tts/batch",
    json={
        "text": "Very long text here...",
        "language": "en",
        "pause_duration": 800,
        "api_key": "your_api_key_here"
    }
)
response.raise_for_status()  # surfaces 400 errors (e.g. text over 20,000 characters)

# The response body is the combined MP3; write it to disk.
with open("output.mp3", "wb") as f:
    f.write(response.content)
Best Practices
- For very long content (> 20,000 characters): Split into multiple batch requests or use streaming (see the sketch after this list)
- Concurrent requests: Multiple batch requests can be processed simultaneously - no need to wait for one to complete
- Monitor progress: Use the dashboard to track processing status for long batches
- Error handling: Always check for 400 errors indicating text length limits
- No timeout: The API does not enforce request timeouts, so requests will wait as long as needed
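As mentioned above, one way to handle content over 20,000 characters is to split it and submit the pieces as parallel batch requests. A minimal sketch, assuming paragraph-separated input and the request fields shown earlier (the splitting strategy, worker count, and filenames are illustrative):

import concurrent.futures
import requests

API_URL = "https://api.gistmag.co.uk/tts/batch"
MAX_BATCH_CHARS = 20_000  # documented per-request limit

def split_long_text(text: str, limit: int = MAX_BATCH_CHARS) -> list[str]:
    """Group paragraphs into pieces under the limit (assumes no single paragraph exceeds it)."""
    pieces, current = [], ""
    for paragraph in text.split("\n\n"):
        if current and len(current) + 2 + len(paragraph) > limit:
            pieces.append(current)
            current = paragraph
        else:
            current = f"{current}\n\n{paragraph}".strip()
    if current:
        pieces.append(current)
    return pieces

def synthesize(piece: str, index: int) -> str:
    """Send one batch request and save the resulting MP3; returns the filename."""
    response = requests.post(API_URL, json={
        "text": piece,
        "language": "en",
        "pause_duration": 800,
        "api_key": "your_api_key_here",
    })
    response.raise_for_status()
    filename = f"part_{index:03d}.mp3"
    with open(filename, "wb") as f:
        f.write(response.content)
    return filename

long_text = "\n\n".join(f"Paragraph {i} of a very long document..." for i in range(2_000))
pieces = split_long_text(long_text)

# Batches can run concurrently, so submit the pieces in parallel.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(synthesize, piece, i) for i, piece in enumerate(pieces)]
    files = [future.result() for future in futures]

print(f"Saved {len(files)} files")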
Batch processing is more efficient for long text as it handles chunking and combining automatically, ensuring natural pauses between sentences. The 20,000 character limit ensures reliable processing and prevents server overload. Files are automatically cleaned up after download to save disk space. Multiple users can process batches concurrently without any restrictions.