Replace upstream library with ROCm/Wyoming deployment project
Some checks failed
Build and Push Docker Image / build (push) Failing after 47s
Remove original Kokoro library source, demo, examples, tests, JS port, and GitHub config. Add Dockerfile (ROCm 6.1 / PyTorch 2.5.1), Wyoming TCP server, docker-compose with GPU passthrough, config, entrypoint, and Gitea Actions build workflow.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
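The commit message mentions GPU passthrough via docker-compose. As a hedged sketch of what ROCm passthrough involves outside compose (the image tag `kokoro-wyoming:rocm` and port `10200` here are illustrative assumptions, not taken from the diff): AMD GPUs are exposed to a container through the `/dev/kfd` and `/dev/dri` device nodes plus membership in the `video`/`render` groups.

```shell
#!/bin/bash
# Sketch: assemble ROCm passthrough flags for docker run.
# Assumption: standard ROCm container setup (kfd + dri devices,
# video/render group membership); image/port names are hypothetical.
rocm_flags() {
    # Emit passthrough flags only when the ROCm kernel driver is present.
    if [ -e /dev/kfd ]; then
        echo "--device=/dev/kfd --device=/dev/dri --group-add video --group-add render"
    fi
}

# Usage (illustrative):
#   docker run -d -p 10200:10200 $(rocm_flags) kokoro-wyoming:rocm
rocm_flags  # prints the flags, or nothing on a host without /dev/kfd
```

On a host without the ROCm driver the function emits nothing, so the same command line degrades gracefully to a CPU-only container.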
entrypoint.sh (new file, 26 lines)
@@ -0,0 +1,26 @@
#!/bin/bash
set -e

echo "=== Kokoro TTS Wyoming Server ==="

# Show ROCm device info if available
if command -v rocm-smi &>/dev/null; then
    echo "--- ROCm Devices ---"
    rocm-smi --showproductname 2>/dev/null || true
    echo "--------------------"
fi

# Quick GPU availability check via Python
python3 - <<'EOF'
import torch

available = torch.cuda.is_available()
print(f"ROCm/CUDA available: {available}")
if available:
    count = torch.cuda.device_count()
    for i in range(count):
        print(f"  [{i}] {torch.cuda.get_device_name(i)}")
else:
    print("  WARNING: No GPU detected — running on CPU (performance will be degraded)")
EOF

exec python3 /app/server.py "$@"
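Once `server.py` is up, the container speaks the Wyoming protocol over TCP. A minimal sketch of the wire format, under the assumption that Wyoming events are newline-terminated JSON headers and that a `describe` event asks a server for its capabilities (host/port below are hypothetical defaults, not taken from this diff):

```shell
#!/bin/bash
# Sketch: build a Wyoming event header as a single JSON line.
# Assumption: newline-delimited JSON framing per the Wyoming protocol.
build_event() {
    printf '{"type": "%s"}\n' "$1"
}

# A client would pipe this to the server's TCP port, e.g.:
#   build_event describe | nc localhost 10200
build_event describe  # prints: {"type": "describe"}
```

The server's reply to `describe` is an `info` event listing available voices, which is how Home Assistant discovers the TTS service.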