Cache voice conditionals and add FP16 autocast
All checks were successful
Build ROCm Image / build (push) Successful in 3m17s
Voice conditionals (s3tokenizer + voice encoder + mel embeddings) are expensive to compute but depend only on the reference audio, not the text. Previously they ran on every synthesis chunk, so a 3-chunk request paid the cost three times. They are now computed once at startup and reused.

Also wrap generate() in torch.amp.autocast(float16) for roughly a 2x speedup on all model computation (T3 LLM, S3Gen CFM, HiFiGAN vocoder).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
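The caching idea above can be sketched in a few lines. This is a minimal illustration, not the real engine API: the class and method names (`Engine`, `prepare_voice`, `synthesize_chunk`, `_compute_conditionals`) are hypothetical stand-ins, and the expensive s3tokenizer + voice encoder work is simulated by a counter. The key point is that conditionals are keyed on the reference-audio path, so repeated chunks hit the cache.

```python
class Engine:
    """Sketch of caching voice conditionals by reference-audio path.

    All names here are illustrative; the real engine computes s3tokenizer
    tokens, voice-encoder embeddings, and mel embeddings instead.
    """

    def __init__(self):
        self._conds = {}        # reference-audio path -> precomputed conditionals
        self.compute_calls = 0  # instrumentation for this sketch only

    def _compute_conditionals(self, path: str):
        # Stand-in for the expensive per-voice work (tokenizer + encoder + mels).
        self.compute_calls += 1
        return f"conds({path})"

    def prepare_voice(self, path: str):
        # Compute once, then reuse: the conditionals depend only on the
        # reference audio, never on the text being synthesized.
        if path not in self._conds:
            self._conds[path] = self._compute_conditionals(path)
        return self._conds[path]

    def synthesize_chunk(self, text: str, path: str) -> str:
        conds = self.prepare_voice(path)  # cache hit on every chunk after the first
        return f"audio[{text} | {conds}]"
```

With this shape, a 3-chunk request triggers the expensive computation exactly once instead of three times, which is the wasted work the commit message describes.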
main.py (12 changed lines)
@@ -19,8 +19,18 @@ logger = logging.getLogger(__name__)

 def _warmup(voices: dict) -> None:
-    """Run one synthesis to populate MIOpen's in-memory kernel cache."""
+    """Pre-compute voice conditionals and populate MIOpen's kernel cache."""
     from wyoming_voices import resolve_voice
+
+    # Pre-compute conditionals for all discovered voices so the first real
+    # request doesn't pay the s3tokenizer + voice encoder cost.
+    for name, path in voices.items():
+        try:
+            engine.prepare_voice(path)
+        except Exception:
+            logger.warning(f"Failed to prepare voice '{name}' (non-fatal)", exc_info=True)
+
+    # Synthesis warmup to populate MIOpen's in-memory kernel cache.
     audio_prompt = resolve_voice(None, voices) if voices else None
     logger.info("Running warmup synthesis...")
     try:
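The diff above covers the conditional caching; the FP16 autocast half of the commit is not shown. A minimal sketch of that pattern follows, assuming PyTorch's `torch.amp.autocast` context manager. The `generate` and `synthesize` functions here are hypothetical stand-ins for the real T3/S3Gen/HiFiGAN pipeline; on the ROCm build this commit targets, `device_type` would be `"cuda"` with `float16`, while the CPU fallback below uses `bfloat16` (CPU autocast does not support float16 matmuls).

```python
import torch

def generate(text_tokens: torch.Tensor) -> torch.Tensor:
    # Stand-in for the real model forward pass (T3 LLM -> S3Gen CFM -> HiFiGAN).
    weight = torch.randn(4, text_tokens.shape[-1])
    return torch.nn.functional.linear(text_tokens, weight)

def synthesize(text_tokens: torch.Tensor) -> torch.Tensor:
    # Wrap all model computation in autocast so matmuls and convolutions run
    # in reduced precision; on ROCm/CUDA GPUs this is where the ~2x comes from.
    device_type = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device_type == "cuda" else torch.bfloat16
    with torch.amp.autocast(device_type=device_type, dtype=dtype):
        return generate(text_tokens)
```

Because autocast is a context manager around the whole call, no per-layer changes are needed; ops that are numerically sensitive stay in float32 under PyTorch's autocast policy.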