- Add src/gitea.rs: axum webhook server on :9800, handles @mentions in
  issues and PRs, spawns claude -p for review, posts the result as a comment
- Add call_gitea_api tool: LLM can directly call Gitea REST API with
pre-configured admin token (noc_bot identity)
- Add Caddy to Docker image as ingress layer (subdomain/path routing)
- Config: add gitea section with token_file support for auto-provisioned token
- Update suite.md: VPS-first deployment, SubAgent architecture, Caddy role
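The @mention handling above reduces to two steps: detect the bot's handle in the event body, then build the `claude -p` invocation. A std-only sketch of those two steps, with the axum routing omitted and all helper names being illustrative assumptions:

```rust
use std::process::Command;

/// True when the issue/PR body mentions the bot by handle.
fn mentions_bot(body: &str, bot: &str) -> bool {
    let needle = format!("@{bot}");
    body.split_whitespace().any(|w| {
        // Trim trailing punctuation so "@noc_bot," still matches.
        w.trim_matches(|c: char| !(c.is_alphanumeric() || c == '@' || c == '_')) == needle
    })
}

/// Builds the `claude -p <prompt>` review invocation (not spawned here).
fn review_command(prompt: &str) -> Command {
    let mut cmd = Command::new("claude");
    cmd.arg("-p").arg(prompt);
    cmd
}

fn main() {
    assert!(mentions_bot("please review @noc_bot, thanks", "noc_bot"));
    assert!(!mentions_bot("mail noc_bot@example.com instead", "noc_bot"));
    let cmd = review_command("Review this PR diff");
    assert_eq!(cmd.get_program().to_str(), Some("claude"));
    println!("mention detection ok");
}
```

Token-wise matching (rather than a plain substring search) avoids firing on e-mail addresses or handles embedded in other words.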
- Configurable backend: claude (CLI) or openai (API), selected in config.yaml
- OpenAI streaming via SSE with conversation history in memory
- Session isolation: config name included in session UUID
- Markdown to Telegram HTML conversion (pulldown-cmark) for final messages
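The production conversion walks pulldown-cmark events; this std-only sketch shows just the mandatory escaping step for text nodes in Telegram's HTML parse mode (&, <, >) plus one tag mapping as an illustration (function names are assumptions):

```rust
/// Escape the three characters Telegram's HTML parse mode requires.
fn escape_html(s: &str) -> String {
    let mut out = String::with_capacity(s.len());
    for c in s.chars() {
        match c {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            other => out.push(other),
        }
    }
    out
}

/// Inline code maps to <code>…</code> with its contents escaped.
fn inline_code(s: &str) -> String {
    format!("<code>{}</code>", escape_html(s))
}

fn main() {
    assert_eq!(escape_html("a < b && c"), "a &lt; b &amp;&amp; c");
    assert_eq!(inline_code("Vec<u8>"), "<code>Vec&lt;u8&gt;</code>");
    println!("escaping ok");
}
```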
- Fix sendMessageDraft: use a skip cursor to preserve monotonic text
  growth and drop empty content chunks from the SSE stream
- Simplify Makefile: single deploy target
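The sendMessageDraft fix above hinges on one invariant: the displayed text may only grow. A minimal sketch of the skip-cursor idea, with illustrative names, that drops empty or non-growing updates:

```rust
/// Tracks how much accumulated text has already been sent as a draft.
/// Updates that do not strictly extend the previous text (including
/// empty SSE chunks) are skipped, keeping growth monotonic.
struct DraftCursor {
    sent: usize,
}

impl DraftCursor {
    fn new() -> Self {
        Self { sent: 0 }
    }

    /// Returns the text to send only when the accumulated output grew.
    fn next_update<'a>(&mut self, acc: &'a str) -> Option<&'a str> {
        if acc.len() > self.sent {
            self.sent = acc.len();
            Some(acc)
        } else {
            None // empty chunk or no growth: skip this tick
        }
    }
}

fn main() {
    let mut cur = DraftCursor::new();
    assert_eq!(cur.next_update("Hel"), Some("Hel"));
    assert_eq!(cur.next_update("Hel"), None); // empty chunk: skipped
    assert_eq!(cur.next_update("Hello"), Some("Hello"));
    println!("draft cursor ok");
}
```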
Telegram Bot API 9.3+ sendMessageDraft provides smooth streaming text
rendering without the flickering of repeated edits. Falls back to
editMessageText automatically if the API is unavailable (e.g. older
clients or group chats). Also reduces the edit interval from 5s to 3s
and uses a 1s interval in draft mode.
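The mode selection and fallback described above can be sketched as a pure function over a two-state enum (names are assumptions, not the actual types in the bot):

```rust
use std::time::Duration;

#[derive(Clone, Copy, PartialEq, Debug)]
enum StreamMode {
    Draft, // sendMessageDraft available
    Edit,  // repeated editMessageText
}

/// Draft mode refreshes every 1s; edit mode every 3s (down from 5s).
fn update_interval(mode: StreamMode) -> Duration {
    match mode {
        StreamMode::Draft => Duration::from_secs(1),
        StreamMode::Edit => Duration::from_secs(3),
    }
}

/// On a failed draft call (older client, group chat), drop to edit mode.
fn after_send(mode: StreamMode, draft_failed: bool) -> StreamMode {
    if draft_failed { StreamMode::Edit } else { mode }
}

fn main() {
    assert_eq!(update_interval(StreamMode::Draft), Duration::from_secs(1));
    assert_eq!(after_send(StreamMode::Draft, true), StreamMode::Edit);
    assert_eq!(after_send(StreamMode::Draft, false), StreamMode::Draft);
    println!("fallback logic ok");
}
```

Keeping the fallback one-way (Draft to Edit, never back) avoids mid-message flapping between rendering styles.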
Async Rust bot (teloxide + tokio) that:
- Authenticates users per chat with a passphrase (resets daily at 5am)
- Generates deterministic UUID v5 session IDs from chat_id + date
- Pipes messages to `claude -p --session-id/--resume <uuid>`
- Persists auth and session state to disk across restarts
- Deploys as systemd --user service via `make deploy`
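The bot derives the session id as UUID v5 (chat_id + date, with the config name mixed in for isolation); this std-only stand-in swaps UUID v5 for a stdlib hash purely to demonstrate the property that matters, that the id is a deterministic function of its inputs, so restarts resume the same session:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Deterministic stand-in for the real UUID v5 derivation: same config,
/// chat, and day always map to the same session key.
fn session_key(config: &str, chat_id: i64, date: &str) -> u64 {
    let mut h = DefaultHasher::new();
    (config, chat_id, date).hash(&mut h);
    h.finish()
}

fn main() {
    let a = session_key("prod", 42, "2025-01-01");
    let b = session_key("prod", 42, "2025-01-01");
    let c = session_key("prod", 42, "2025-01-02"); // new day, new session
    let d = session_key("dev", 42, "2025-01-01"); // other config isolated
    assert_eq!(a, b);
    assert_ne!(a, c);
    assert_ne!(a, d);
    println!("deterministic session ids ok");
}
```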