FamqBot uses a custom PyTorch model to process images from Discord channels and respond automatically. ~70ms end-to-end. Self-hosted via Docker.
$ famqbot deploy --region us-east-1
> Loading PyTorch model (95MB)...
✓ Model loaded — warm-up: 42ms
✓ Connected to Gateway v10
✓ Heartbeat active — 41.2s
✓ Watching 3 guilds, 12 channels
━━━ Events ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[16:42:01] ⚡ img_8x3k.png → xK7m2p dl=23ms ml=18ms tx=31ms 72ms
[16:42:04] ⚡ img_q9v2.png → Rn4TbW dl=19ms ml=21ms tx=28ms 68ms
[16:42:08] ⚡ img_v2k1.png → Dm8YqL dl=21ms ml=16ms tx=29ms 66ms
[16:42:12] ⚡ img_a1b3.png → v3XpNs dl=24ms ml=19ms tx=27ms 70ms
The stuff that actually matters when you're running bots at scale.
PyTorch CNN trained on our own dataset. Beam-search decoding handles distorted and rotated text. Runs inference in ~20ms per image.
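Beam-search decoding here means keeping the top few candidate strings at each character position instead of greedily taking the argmax. A minimal sketch over per-position probability rows (the charset, shapes, and beam width below are illustrative, not FamqBot's actual decoder):

```python
import math

def beam_search(probs, charset, beam_width=5):
    """Decode a string from per-position character probabilities.

    probs: T rows, each a list of len(charset) probabilities.
    Returns (best_string, log_probability).
    """
    beams = [("", 0.0)]  # (prefix, cumulative log-prob)
    for row in probs:
        candidates = []
        for prefix, score in beams:
            for ch, p in zip(charset, row):
                if p > 0:
                    candidates.append((prefix + ch, score + math.log(p)))
        # keep only the top-k prefixes after each position
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0]
```

With a beam width above 1, a low-confidence character early on doesn't lock the decoder out of a better overall string, which is what helps on distorted or rotated text.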
WebSocket connection to Discord Gateway → image download → model inference → send response. Full cycle averages around 70ms.
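That cycle could be sketched as one async handler that times each stage, mirroring the dl/ml/tx columns in the log above. Everything here is hypothetical: `download`, `infer`, and `send` stand in for the real CDN fetch, model call, and Discord API reply:

```python
import asyncio
import time

async def handle_image(url, download, infer, send):
    """Hypothetical per-event handler with per-stage timing."""
    t0 = time.perf_counter()
    data = await download(url)                    # fetch image bytes from the CDN
    t1 = time.perf_counter()
    text = await asyncio.to_thread(infer, data)   # CPU-bound inference off the event loop
    t2 = time.perf_counter()
    await send(text)                              # post the response via the API
    t3 = time.perf_counter()

    def ms(a, b):
        return round((b - a) * 1000)

    return text, {"dl": ms(t0, t1), "ml": ms(t1, t2),
                  "tx": ms(t2, t3), "total": ms(t0, t3)}
```

Running inference in a thread keeps the Gateway heartbeat responsive while the model works, since the event loop is never blocked for the ~20ms model call.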
Run multiple bot accounts from one dashboard. Each bot has its own channels, templates, and delay settings. Add or remove via web UI.
Bot tokens are AES-256 encrypted before storage. Dashboard uses JWT auth with role-based access. Nothing stored in plaintext.
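One way the encrypt-before-storage step might look, using AES-256-GCM from the `cryptography` package. FamqBot's exact mode and key management aren't specified here, so treat this as a sketch, not the shipped implementation:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_token(token: str, key: bytes) -> bytes:
    """Encrypt a bot token with AES-256-GCM; key must be 32 bytes."""
    nonce = os.urandom(12)                       # fresh nonce per encryption
    ct = AESGCM(key).encrypt(nonce, token.encode(), None)
    return nonce + ct                            # store nonce alongside ciphertext

def decrypt_token(blob: bytes, key: bytes) -> str:
    nonce, ct = blob[:12], blob[12:]             # GCM also authenticates: tampering raises
    return AESGCM(key).decrypt(nonce, ct, None).decode()
```

In practice the 32-byte key would come from an environment variable or secrets manager, never from the database next to the ciphertext.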
WebSocket-powered UI showing solve logs, latency per event, accuracy stats, and per-channel activity. Updates in real time.
Full stack runs in Docker Compose. PostgreSQL, Redis, NestJS, Next.js, Python worker — all containerized. Deploy anywhere.
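An illustrative compose file for that stack; the real service names, images, build contexts, and ports may differ:

```yaml
# Sketch of a docker-compose.yml for the stack above (names are assumptions).
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
  redis:
    image: redis:7
  api:            # NestJS backend
    build: ./api
    depends_on: [db, redis]
  web:            # Next.js dashboard
    build: ./web
    depends_on: [api]
  worker:         # Python inference worker
    build: ./worker
    depends_on: [db, redis]
```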
Three steps. Takes about 5 minutes.
Paste your Discord bot token in the dashboard. It gets encrypted and stored — you can revoke it anytime.
Choose which channels the bot should watch. Set response templates and delays per channel.
The bot connects to Discord, watches for images, runs them through the model, and responds. All automated.
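A hypothetical per-channel config for step two; every field name here is an assumption about what "templates and delay settings" could look like, not FamqBot's actual schema:

```json
{
  "channels": {
    "123456789012345678": {
      "watch": true,
      "template": "Answer: {solution}",
      "delay_ms": { "min": 500, "max": 1500 }
    }
  }
}
```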
Five services in Docker Compose. Each does one thing.
Free tier to try it out. Paid plans when you need more.
Download + inference + response takes around 70ms total. The ML model itself runs in ~20ms — the rest is network latency to Discord's CDN and API.
Yes. Everything runs in Docker Compose. You need a machine with Python 3.11+, Node 18+, PostgreSQL, and Redis. A GPU is optional — CPU inference works fine for moderate load.
Tokens are AES-256 encrypted before being written to the database. The dashboard uses JWT sessions with role-based access. We don't store plaintext tokens anywhere.
On the self-hosted plan, yes. The worker loads any PyTorch model that follows the expected input/output shape. Docs cover how to retrain on your own data.
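A sketch of what that input/output contract could amount to: a smoke test that runs a dummy batch through a candidate model and checks the output dimensions. The shapes below are placeholders, not the documented interface:

```python
import torch

def validate_model(model, input_shape=(1, 1, 64, 160), out_shape=(1, 6, 36)):
    """Check that a PyTorch model maps a batch of images to
    per-position character logits of the expected shape.
    Shapes here are illustrative assumptions."""
    model.eval()
    with torch.no_grad():
        out = model(torch.zeros(*input_shape))
    if tuple(out.shape) != out_shape:
        raise ValueError(f"expected output {out_shape}, got {tuple(out.shape)}")
    return True
```

A check like this at worker startup fails fast on a mismatched custom model instead of producing garbage decodes at runtime.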
Free tier, no credit card. Set up a bot in 5 minutes.