torch.compile with dynamic=True still specializes per shape family on the
first call. The warmup ran only one text length, leaving real requests to
JIT-compile their own shapes (15-22s for the first chunk). HA freezes
because it receives no AudioChunk for up to 22 seconds.
Fix:
- Run 3 warmup passes (short/medium/long text) so torch.compile builds a
dynamic-shape graph covering the length range HA actually sends (see the
sketch below). Real requests then hit a cached compilation and synthesize
in 3-8s.
- Reduce default chunk_size from 300 to 120 chars so the first text
chunk is shorter, producing faster synthesis and earlier first audio.
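
A minimal sketch of the multi-length warmup, assuming a synthesize(text)
entry point on a model whose conv stages are already wrapped with
torch.compile(dynamic=True); the function name and warmup strings are
placeholders, not this repo's exact code:

    import time
    import torch

    # Representative lengths: short confirmation, medium status reply, long summary.
    # The strings are illustrative, not the exact warmup texts.
    WARMUP_TEXTS = [
        "Okay.",
        "The living room lights are on and the thermostat is set to 21 degrees.",
        "Good evening. The front door is locked, all windows are closed, the "
        "heating is in eco mode, and your first calendar event starts at eight thirty.",
    ]

    def warmup(model) -> None:
        """One pass per length so torch.compile(dynamic=True) records a shape
        range instead of specializing on a single short sequence."""
        for text in WARMUP_TEXTS:
            start = time.monotonic()
            with torch.inference_mode():
                model.synthesize(text)  # placeholder for the real synthesis call
            print(f"warmup ({len(text):3d} chars): {time.monotonic() - start:.1f}s")
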
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Warmup now uses a ~170-char representative sentence so torch.compile
JIT-compiles for typical token sequence lengths. Previously the literal
text "Warmup." compiled only for very short shapes, causing a full
re-compile (17s) on the first real HA request and pushing total synthesis
past 30s.
- Compile model.ve (voice encoder) in addition to s3gen — both are
convolutional and hit the MIOpen workspace=0 bug.
- Fix _patch_timing: the attribute is model.ve, not model.voice_encoder,
so the timing wrap was silently skipping the speaker embedding (see the
sketch below).
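
A rough sketch of both fixes, assuming the model exposes s3gen and ve
submodules as described above; the timing-wrapper shape and the stage list
are assumptions, not the repo's actual helper:

    import functools
    import time
    import torch

    def compile_conv_stages(model):
        """Compile both convolutional submodules so the MIOpen workspace=0
        path is exercised during warmup, not on the first real request."""
        model.s3gen = torch.compile(model.s3gen, dynamic=True)
        model.ve = torch.compile(model.ve, dynamic=True)  # previously left uncompiled
        return model

    def _patch_timing(model) -> None:
        """Wrap forward() of each stage to log its runtime. The voice encoder
        lives at model.ve (not model.voice_encoder), so the old lookup
        silently skipped the speaker embedding."""
        for name in ("s3gen", "ve"):  # stage list is illustrative
            stage = getattr(model, name, None)
            if stage is None:
                continue
            original = stage.forward

            @functools.wraps(original)
            def timed(*args, _orig=original, _name=name, **kwargs):
                start = time.monotonic()
                result = _orig(*args, **kwargs)
                print(f"{_name}.forward: {time.monotonic() - start:.2f}s")
                return result

            stage.forward = timed
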
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Warmup: run a synthesis before accepting Wyoming connections so MIOpen
benchmarks and caches all conv layer shapes. Without this, the first HA
request triggers hundreds of benchmark runs and times out.
fp16: wrap synthesis in try/except so a failed fp16 autocast pass retries
in fp32 instead of silently dropping the request.
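
A small sketch of that fallback, assuming a synthesize(text) call and a
ROCm build that exposes the GPU under the usual "cuda" device type; the
surrounding request handling is omitted:

    import torch

    def synthesize_with_fallback(model, text: str):
        """Try fp16 autocast first; if the GPU path raises, retry the same
        text in fp32 instead of silently dropping the request."""
        try:
            with torch.inference_mode(), torch.autocast("cuda", dtype=torch.float16):
                return model.synthesize(text)  # placeholder for the real call
        except RuntimeError as err:
            print(f"fp16 synthesis failed ({err}); retrying in fp32")
            with torch.inference_mode():
                return model.synthesize(text)
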
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Wyoming-only server built around the official chatterbox TTS model.
Includes ROCm/AMD GPU support, sentence-level streaming, config.yaml
management, and Gitea CI for container builds.
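
A minimal sketch of the sentence-level streaming idea, assuming a
per-sentence synthesize() call that returns PCM bytes; the splitter regex
and names are simplified placeholders, not this repo's actual code:

    import re
    from typing import Iterator

    def stream_audio(model, text: str) -> Iterator[bytes]:
        """Split the request into sentences and yield audio per sentence so
        the client hears the first chunk while later ones are still rendering."""
        sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
        for sentence in sentences:
            yield model.synthesize(sentence)  # placeholder for the real call
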
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>