Now in Open Beta

Discord Bots
That See & Think

FamqBot uses a custom PyTorch model to process images from Discord channels and respond automatically. ~70ms end-to-end. Self-hosted via Docker.

Terminal — famqbot worker
$ famqbot deploy --region us-east-1
> Loading PyTorch model (95MB)...
> Model loaded — warm-up: 42ms
> Connected to Gateway v10
> Heartbeat active — 41.2s
> Watching 3 guilds, 12 channels

━━━ Events ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[16:42:01]  img_8x3k.png → xK7m2p   dl=23ms ml=18ms tx=31ms total=72ms
[16:42:04]  img_q9v2.png → Rn4TbW   dl=19ms ml=21ms tx=28ms total=68ms
[16:42:08]  img_v2k1.png → Dm8YqL   dl=21ms ml=16ms tx=29ms total=66ms
[16:42:12]  img_a1b3.png → v3XpNs   dl=24ms ml=19ms tx=27ms total=70ms
~70ms Avg Latency · 6 Tech Stack Components · 24/7 Self-Hosted Uptime

What It Does

The stuff that actually matters when you're running bots at scale.

Image Recognition

PyTorch CNN trained on our own dataset. Beam-search decoding handles distorted and rotated text. Runs inference in ~20ms per image.
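A minimal sketch of what such a model could look like in PyTorch, assuming a column-per-timestep CNN with CTC-style outputs. The real architecture, charset, and beam-search decoder aren't public, so every name and shape here is illustrative, and greedy decoding stands in for beam search to keep it short.

```python
import torch
import torch.nn as nn

# Illustrative charset; the trained model's alphabet is not published.
CHARSET = "abcdefghijklmnopqrstuvwxyz0123456789"

class TextCNN(nn.Module):
    """Toy CNN that emits one character distribution per image column."""
    def __init__(self, num_classes: int = len(CHARSET) + 1):  # +1 = CTC blank
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64 * 16, num_classes)  # assumes 64px-tall input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(x)                               # (B, 64, H/4, W/4)
        b, c, h, w = feats.shape
        seq = feats.permute(0, 3, 1, 2).reshape(b, w, c * h)  # one step per column
        return self.head(seq).log_softmax(-1)                 # (B, T, num_classes)

model = TextCNN().eval()
with torch.no_grad():                             # inference only, no autograd
    logits = model(torch.randn(1, 1, 64, 128))    # fake 64x128 grayscale image
    pred = logits.argmax(-1).squeeze(0)           # greedy decode, (T,) class ids
```

A real worker would map `pred` back through the charset and collapse CTC blanks; beam search replaces the `argmax` step.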

Fast Pipeline

WebSocket connection to Discord Gateway → image download → model inference → send response. Full cycle averages around 70ms.
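The cycle above can be sketched as one async handler with per-stage timing, matching the dl/ml/tx fields in the worker log. The stage bodies are stubs and the function names are illustrative, not FamqBot's actual API.

```python
import asyncio
import time

async def download(url: str) -> bytes:
    await asyncio.sleep(0)      # stand-in for the CDN fetch (~20ms in practice)
    return b"fake-image-bytes"

async def infer(image: bytes) -> str:
    await asyncio.sleep(0)      # stand-in for model inference (~20ms)
    return "xK7m2p"

async def respond(channel_id: int, text: str) -> None:
    await asyncio.sleep(0)      # stand-in for the Discord API call (~30ms)

async def handle_image(channel_id: int, url: str) -> dict:
    """Run the full cycle and record each stage's wall-clock time."""
    timings = {}
    t0 = time.perf_counter()
    image = await download(url)
    timings["dl"] = time.perf_counter() - t0

    t1 = time.perf_counter()
    text = await infer(image)
    timings["ml"] = time.perf_counter() - t1

    t2 = time.perf_counter()
    await respond(channel_id, text)
    timings["tx"] = time.perf_counter() - t2
    timings["total"] = time.perf_counter() - t0
    return timings

timings = asyncio.run(handle_image(123, "https://cdn.example/img.png"))
```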

Multi-Bot Support

Run multiple bot accounts from one dashboard. Each bot has its own channels, templates, and delay settings. Add or remove via web UI.

Token Security

Bot tokens are AES-256 encrypted before storage. Dashboard uses JWT auth with role-based access. Nothing stored in plaintext.
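As an illustration of the encrypt-before-store pattern, here is a minimal AES-256-GCM sketch using the `cryptography` package. Key management (env var, KMS, rotation) is out of scope, and FamqBot's actual scheme is not published, so treat this as a pattern rather than the implementation.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 32-byte key = AES-256

def encrypt_token(token: str, key: bytes) -> bytes:
    """Return nonce + ciphertext; GCM also authenticates the data."""
    aes = AESGCM(key)
    nonce = os.urandom(12)                  # must be unique per encryption
    return nonce + aes.encrypt(nonce, token.encode(), None)

def decrypt_token(blob: bytes, key: bytes) -> str:
    aes = AESGCM(key)
    return aes.decrypt(blob[:12], blob[12:], None).decode()

blob = encrypt_token("example-bot-token", key)
assert decrypt_token(blob, key) == "example-bot-token"
```

Only the `blob` ever reaches the database; decryption happens in the worker when the bot connects.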

Live Dashboard

WebSocket-powered UI showing solve logs, latency per event, accuracy stats, and per-channel activity. Updates in real time.

Docker Deployment

Full stack runs in Docker Compose. PostgreSQL, Redis, NestJS, Next.js, Python worker — all containerized. Deploy anywhere.
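An illustrative compose layout for such a stack; the service names, images, ports, and env vars are assumptions, not the project's published file.

```yaml
# Sketch only — adjust images, ports, and secrets for your environment.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
  redis:
    image: redis:7
  api:
    build: ./api          # NestJS REST API
    depends_on: [db, redis]
    ports: ["3000:3000"]
  dashboard:
    build: ./dashboard    # Next.js UI
    ports: ["3001:3000"]
  worker:
    build: ./worker       # Python + PyTorch inference
    depends_on: [redis]
```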

Setup

Three steps. Takes about 5 minutes.

1. Add Bot Token

Paste your Discord bot token in the dashboard. It gets encrypted and stored — you can revoke it anytime.

2. Pick Channels

Choose which channels the bot should watch. Set response templates and delays per channel.
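Per-channel settings might look something like this; the schema and key names are hypothetical, for illustration only.

```python
# Hypothetical per-channel config shape — not FamqBot's actual schema.
channel_settings = {
    987654321: {                       # Discord channel ID
        "template": "Answer: {text}",  # response template
        "delay_ms": 250,               # delay before replying
    },
}

def render_response(channel_id: int, solved_text: str) -> str:
    """Fill the channel's template with the model's output."""
    cfg = channel_settings[channel_id]
    return cfg["template"].format(text=solved_text)
```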

3. It Just Works

The bot connects to Discord, watches for images, runs them through the model, and responds. All automated.

How It's Built

A four-stage pipeline running across the Docker Compose stack, each stage doing one thing:

Discord Gateway → Event Router → ML Inference → Auto Response

Stack

PyTorch · ML inference
NestJS · REST API
Next.js · Dashboard
PostgreSQL · Database
Redis · Pub/Sub + cache
Docker · Deployment

Pricing

Free tier to try it out. Paid plans when you need more.

Free

$0
  • 1 Bot Account
  • 3 Channels
  • 1,000 Tasks / Month
  • Basic Dashboard
  • Community Discord
Get Started
Most Popular

Pro

$29/mo
  • 5 Bot Accounts
  • Unlimited Channels
  • 50,000 Tasks / Month
  • Full Analytics
  • Custom Templates
  • API Access
  • Priority Support
Start Free Trial

Self-Hosted

Custom
  • Run on your own servers
  • No limits
  • Full source access
  • Custom model training
  • Dedicated support
  • SLA available
Contact Us

FAQ

How fast is it?

Download + inference + response takes around 70ms total. The ML model itself runs in ~20ms — the rest is network latency to Discord's CDN and API.

Can I self-host it?

Yes. Everything runs in Docker Compose. You need a machine with Python 3.11+, Node 18+, PostgreSQL, and Redis. A GPU is optional — CPU inference works fine for moderate load.

How are bot tokens stored?

Tokens are AES-256 encrypted before being written to the database. The dashboard uses JWT sessions with role-based access. We don't store plaintext tokens anywhere.

Can I use my own model?

On the self-hosted plan, yes. The worker loads any PyTorch model that follows the expected input/output shape. Docs cover how to retrain on your own data.
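That input/output contract could be checked like this; the (1, 1, 64, 128) input and 3-D log-probability output are assumed shapes for illustration, not documented ones.

```python
import torch

def check_model(model: torch.nn.Module) -> bool:
    """True if the model accepts a (B, 1, 64, 128) image batch and
    returns per-timestep class scores of shape (B, T, num_classes)."""
    model.eval()
    with torch.no_grad():
        out = model(torch.zeros(1, 1, 64, 128))
    return out.dim() == 3 and out.shape[0] == 1

# Any drop-in replacement just has to satisfy the same contract:
custom = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(64 * 128, 32 * 37),
    torch.nn.Unflatten(1, (32, 37)),
)
assert check_model(custom)
```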

Try It Out

Free tier, no credit card. Set up a bot in 5 minutes.