Change Wyoming port from 10200 to 10300
All checks were successful
Build and Push Docker Image / build (push) Successful in 2m20s

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-08 18:57:02 -04:00
parent ad58061b6f
commit e8a3844994
4 changed files with 7 additions and 7 deletions

Dockerfile

@@ -36,6 +36,6 @@ WORKDIR /app
 COPY server.py config.yaml entrypoint.sh ./
 RUN chmod +x entrypoint.sh
-EXPOSE 10200
+EXPOSE 10300
 ENTRYPOINT ["./entrypoint.sh"]

README.md

@@ -10,7 +10,7 @@ A Docker image running [Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M) T
 | PyTorch | 2.5.1 |
 | Target GPU | AMD RX 6700 XT (gfx1031) |
 | Kokoro model | hexgrad/Kokoro-82M |
-| Protocol | Wyoming (TCP, port 10200) |
+| Protocol | Wyoming (TCP, port 10300) |
 ## Quick start
@@ -18,13 +18,13 @@ A Docker image running [Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M) T
 docker compose up -d
 ```
-The Wyoming server will be available at `<host-ip>:10200`.
+The Wyoming server will be available at `<host-ip>:10300`.
 ## Home Assistant setup
 1. In Home Assistant, go to **Settings → Devices & Services → Add Integration**
 2. Search for **Wyoming Protocol**
-3. Enter your host IP and port `10200`
+3. Enter your host IP and port `10300`
 4. Kokoro voices will appear in your voice assistant configuration
 ## Configuration

config.yaml

@@ -1,7 +1,7 @@
 # Kokoro TTS Wyoming Server Configuration
 server:
-  uri: tcp://0.0.0.0:10200
+  uri: tcp://0.0.0.0:10300
 tts:
   device: cuda  # ROCm presents as 'cuda' to PyTorch via HIP
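As a quick sanity check (a sketch, not code from this repo), the `server.uri` value follows the standard `scheme://host:port` shape, so the bind address and port can be recovered with the Python standard library:

```python
from urllib.parse import urlsplit

# The Wyoming server URI from config.yaml after this change
uri = "tcp://0.0.0.0:10300"

parts = urlsplit(uri)
host, port = parts.hostname, parts.port
print(parts.scheme, host, port)  # tcp 0.0.0.0 10300
```

Anything reading this config (deploy scripts, monitoring) that hard-codes `10200` instead of parsing the URI will need the same update.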

docker-compose.yml

@@ -14,7 +14,7 @@ services:
       - render
     ports:
-      - "10200:10200"
+      - "10300:10300"
     volumes:
       # Persist HuggingFace model/voice cache so downloads survive container restarts
@@ -34,7 +34,7 @@ services:
       import socket
       s = socket.socket()
       s.settimeout(5)
-      s.connect(('localhost', 10200))
+      s.connect(('localhost', 10300))
       s.close()
     interval: 30s
     timeout: 10s
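The same TCP-connect logic used by the compose healthcheck can be run by hand to confirm the port change took effect. A minimal sketch — `wyoming_port_open` is a hypothetical helper name, not part of this repo:

```python
import socket

def wyoming_port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Attempt a TCP connection, mirroring the compose healthcheck:
    success means something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable
        return False

if __name__ == "__main__":
    # After `docker compose up -d`, this should report True on the Docker host
    print("Wyoming reachable on 10300:", wyoming_port_open("localhost", 10300))
```

Note this only proves the TCP port is open; it does not speak the Wyoming protocol, which is exactly the (deliberately shallow) check the healthcheck performs.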