Replace upstream library with ROCm/Wyoming deployment project

Remove original Kokoro library source, demo, examples, tests, JS port,
and GitHub config. Add Dockerfile (ROCm 6.1 / PyTorch 2.5.1), Wyoming
TCP server, docker-compose with GPU passthrough, config, entrypoint,
and Gitea Actions build workflow.
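The Dockerfile itself is not shown in this excerpt; a hypothetical sketch of the kind of image the message describes (the base-image tag, port number, and file layout are assumptions, not taken from this commit — only `server.py` and `entrypoint.sh` appear in the diff below):

```dockerfile
# Sketch only: base tag is assumed from "ROCm 6.1 / PyTorch 2.5.1" in the
# commit message; check Docker Hub (rocm/pytorch) for the exact tag name.
FROM rocm/pytorch:rocm6.1_pytorch_release_2.5.1

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY server.py entrypoint.sh ./
RUN chmod +x /app/entrypoint.sh

# 10200 is the conventional Wyoming TTS port (assumed here, not confirmed
# by this commit).
EXPOSE 10200
ENTRYPOINT ["/app/entrypoint.sh"]
```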

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-08 13:30:54 -04:00
parent dfb907a02b
commit 5e8e7ad6d4
113 changed files with 490 additions and 17307 deletions

entrypoint.sh Normal file

@@ -0,0 +1,26 @@
#!/bin/bash
set -e
echo "=== Kokoro TTS Wyoming Server ==="
# Show ROCm device info if available
if command -v rocm-smi &>/dev/null; then
echo "--- ROCm Devices ---"
rocm-smi --showproductname 2>/dev/null || true
echo "--------------------"
fi
# Quick GPU availability check via Python
python3 - <<'EOF'
import torch
available = torch.cuda.is_available()
print(f"ROCm/CUDA available: {available}")
if available:
count = torch.cuda.device_count()
for i in range(count):
print(f" [{i}] {torch.cuda.get_device_name(i)}")
else:
print(" WARNING: No GPU detected — running on CPU (performance will be degraded)")
EOF
exec python3 /app/server.py "$@"
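The docker-compose GPU passthrough mentioned in the commit message typically maps the ROCm device nodes into the container rather than using an NVIDIA-style runtime. A minimal sketch of what that service definition usually looks like (service name, image, and port are assumptions, not taken from this commit; the `/dev/kfd` and `/dev/dri` mappings are standard ROCm container practice):

```yaml
# Hypothetical compose fragment for ROCm passthrough.
services:
  kokoro-tts:
    build: .
    ports:
      - "10200:10200"        # Wyoming TCP port (assumed)
    devices:
      - /dev/kfd             # ROCm compute interface
      - /dev/dri             # GPU render nodes
    group_add:
      - video                # grant the container user GPU access
    security_opt:
      - seccomp:unconfined   # commonly required for ROCm memory mapping
    restart: unless-stopped
```

With this in place, the `rocm-smi` and `torch.cuda.is_available()` checks in `entrypoint.sh` above should report the host GPU at container start.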