Switch to ROCm 6.1 + torch 2.5.1 to fix MIOpen workspace=0 slowness

ROCm 7.2 + PyTorch 2.11.0 has a bug where PyTorch passes workspace=0 to
MIOpen convolutions, forcing fallback to the slow GemmFwdRest solver.
This caused s3gen.inference to take 15-22s instead of <5s, making
synthesis 3-4x slower than real-time audio playback.

ROCm 6.1 allocates workspace correctly so MIOpen picks fast GEMM solvers
without needing torch.compile workarounds.
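The fallback is visible in MIOpen's log output (raise MIOPEN_LOG_LEVEL to 3+ and capture stderr): the slow path selects the GemmFwdRest solver and warns about a zero-sized workspace. A minimal sketch of a log-scanning helper for diagnosing this — the function name and the exact warning wording are assumptions, not part of this commit:

```python
import re

# Hypothetical diagnostic helper: scan captured MIOpen log text for signs of
# the workspace=0 fallback. "GemmFwdRest" is the real MIOpen solver name from
# the commit message; the warning phrasing matched below is an assumed example.
def used_slow_fallback(log_text: str) -> bool:
    # Slow path: the GemmFwdRest solver was selected...
    if "GemmFwdRest" in log_text:
        return True
    # ...or MIOpen complained about a zero workspace (e.g. "workspace = 0").
    return bool(re.search(r"workspace\s*(=|size)\s*0", log_text, re.IGNORECASE))
```

With ROCm 6.1 the same capture should instead show a GEMM solver selected with a non-zero workspace.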

Changes:
- Base image: rocm/dev-ubuntu-22.04:7.2 → 6.1
- torch 2.11.0 → 2.5.1 (rocm6.1 wheel index)
- Add pytorch_triton_rocm==3.1.0
- transformers 5.2.0 → 4.46.3, safetensors 0.5.3 → 0.4.0
- s3tokenizer unpinned → 0.3.0
- resemble-perth==1.0.1 directly (v1.0.1 is pip-installable; drop stub)
- Drop Dockerfile perth_stub steps
- Drop torch.compile and timing patches from engine.py (not needed)
- Drop multi-pass warmup from main.py (torch JIT warmup not needed)
- Drop ROCm 7.2-specific env vars from docker-compose.yml
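One way to catch pin drift after a change like this is a startup smoke check against the pinned versions. A sketch, not something this commit adds — the helper name is hypothetical, and the pin set is copied from the list above:

```python
from importlib.metadata import PackageNotFoundError, version

# Pins from this commit (distribution names as published on PyPI/the
# rocm6.1 wheel index).
PINS = {
    "torch": "2.5.1",
    "pytorch-triton-rocm": "3.1.0",
    "transformers": "4.46.3",
    "safetensors": "0.4.0",
    "s3tokenizer": "0.3.0",
    "resemble-perth": "1.0.1",
}

def check_pins(pins: dict[str, str]) -> dict[str, str]:
    """Return a map of package -> problem for any pin that doesn't hold."""
    problems = {}
    for name, want in pins.items():
        try:
            have = version(name)
        except PackageNotFoundError:
            problems[name] = "not installed"
            continue
        # ROCm wheels carry a local version tag (e.g. "2.5.1+rocm6.1"),
        # so compare only the public version part.
        if have.split("+")[0] != want:
            problems[name] = f"found {have}, expected {want}"
    return problems
```

Calling `check_pins(PINS)` at container start and logging any problems would flag a base-image or wheel-index regression before it shows up as a 15s inference call.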

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-05 17:27:21 -04:00
parent 23a0b914fa
commit 8de67c8bd9
6 changed files with 18 additions and 100 deletions


@@ -28,18 +28,10 @@ services:
       - hf_cache:/app/hf_cache
     environment:
       - HF_HUB_ENABLE_HF_TRANSFER=1
-      # Required for RX 6700 XT (gfx1031) - not natively supported in ROCm 7.2.
+      # Required for RX 6700 XT (gfx1031) - not natively supported in ROCm.
       - HSA_OVERRIDE_GFX_VERSION=10.3.0
-      # Disable MIOpen's SQLite cache — avoids crashes writing benchmark results.
-      # PyTorch's in-memory benchmark cache still applies within a container run.
-      - MIOPEN_DISABLE_CACHE=1
-      # Disable MLIR-based ImplicitGEMM solvers. These compile MLIR kernels on the
-      # fly and hit 'too many open files' during the exhaustive benchmark search.
-      - MIOPEN_DEBUG_CONV_IMPLICIT_GEMM=0
-      # Suppress MIOpen workspace=0 fallback warnings (errors still shown).
-      # Levels: 0=quiet 1=fatal 2=error 3=warning(default) 4=info 5=debug
-      - MIOPEN_LOG_LEVEL=2
       # - HF_TOKEN=your_token_here
 volumes:
   hf_cache:
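Compose's list-form `environment:` entries, as used above, are plain `KEY=VALUE` strings (the commented `HF_TOKEN` line is simply absent from the list). A small sketch, assuming list-form entries, of how they map to an environment dict:

```python
def parse_env_entries(entries: list[str]) -> dict[str, str]:
    """Map compose list-form environment entries ("KEY=VALUE") to a dict.

    A value may itself contain '=', so split only on the first occurrence.
    """
    env = {}
    for entry in entries:
        key, _, value = entry.partition("=")
        env[key] = value
    return env
```

For example, the remaining entries in this file yield `HSA_OVERRIDE_GFX_VERSION` set to `10.3.0` inside the container, which is what makes the gfx1031 card present itself as gfx1030.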