A few exist but none quite like what you've built:
Closest things:
- Hugging Face Spaces — individual model demos exist (e.g. for Kokoro, Parler-TTS), but they're one model at a time, no parameter sliders, no side-by-side comparison, no rating system. You have to bounce between tabs to compare.
- TTS Arena (by Hugging Face) — a blind A/B listening test for ranking models, but it uses fixed samples, offers no parameter control, and you're voting on overall quality, not tuning settings.
- Coqui Studio (now defunct) — had a decent UI, but it was commercial and closed source, and it shut down.
- elevenlabs.io / play.ht — commercial, closed models, no parameter exposure.
What you have that none of them do:
- Sliders for the actual synthesis parameters, not just voice selection
- Rating system tied to specific parameter combos
- Comparison matrix of your personal top picks
- History with exact settings so you can reproduce anything
- Self-hosted, so you control the models and compute
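To make the reproducibility point concrete, here's a minimal sketch of what one history record could look like, with the rating tied to the exact parameter combo. All field names here are hypothetical, not taken from your actual schema:

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class SynthesisRecord:
    """Hypothetical history entry: every generation stores the exact
    settings so any result can be reproduced and rated later."""
    model: str
    voice: str
    text: str
    noise_scale: float   # acoustic variation
    noise_w: float       # duration-predictor noise
    length_scale: float  # overall speaking rate (higher = slower)
    rating: Optional[int] = None  # rating is tied to this exact combo

record = SynthesisRecord(
    model="en_US-lessac-medium", voice="lessac", text="Hello there",
    noise_scale=0.667, noise_w=0.8, length_scale=1.0, rating=4,
)
# Serializing to JSON gives you a portable, replayable history entry.
print(json.dumps(asdict(record)))
```

Storing the full parameter set per generation (rather than just the audio) is what makes "reproduce anything" possible: replaying a record is just re-running synthesis with the same fields.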
The gap in the market is specifically the parameter-exploration angle: most tools treat TTS as a black box and just let you pick a voice. Exposing noise_scale, noise_w, and length_scale with instant feedback is genuinely useful for anyone trying to understand what these knobs actually do, which is your stated goal.
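For readers unfamiliar with these knobs, here's a toy sketch of the VITS-style mechanics behind them (illustrative only, not Piper's actual implementation): length_scale multiplies the predicted phoneme durations, noise_w scales the randomness injected into those durations, and noise_scale analogously scales the noise in the acoustic latent:

```python
import random

def scaled_durations(base_durations, length_scale=1.0, noise_w=0.8, seed=0):
    """Toy model of a VITS-style stochastic duration predictor
    (illustrative, not real Piper code): each phoneme duration is
    multiplied by length_scale (speaking rate), then jittered by
    Gaussian noise whose spread is controlled by noise_w."""
    rng = random.Random(seed)
    return [max(1, round(d * length_scale + rng.gauss(0, noise_w)))
            for d in base_durations]

# With noise_w=0 the output is deterministic; length_scale=2.0 doubles it.
print(scaled_durations([4, 6, 8], length_scale=2.0, noise_w=0.0))  # → [8, 12, 16]
```

This is why the sliders feel so different in practice: length_scale changes pacing predictably, while the two noise parameters trade consistency for naturalness.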
If you open-sourced it with a good README, it would likely get traction in the self-hosting / local-AI communities (r/selfhosted, Hacker News, Hugging Face, etc.).