Alice
AI Local Interactive Cross-device Engine
Same AI session, anywhere. Terminal ↔ Feishu. No cloud lock-in.
Alice is a Feishu long-connection connector that turns CLI-based LLM agents into interactive bots inside Feishu. Works with OpenCode (DeepSeek V4), Codex, Claude, Gemini, Kimi.
The core idea
Your terminal agent and your Feishu bot are the same session. Start a refactor in your IDE, check progress from your phone, send the next instruction via Feishu. Alice bridges your local CLI agent to Feishu's WebSocket so you're never locked to one device.
What problem does Alice solve?
You have an LLM agent CLI installed — and it works great in your terminal — but it's tied to that one machine. Alice bridges it to Feishu:
- Connects to Feishu's WebSocket for real-time message delivery
- Routes incoming messages into `chat` (casual) or `work` (task-oriented) scenes
- Calls your configured LLM CLI backend with the right prompt, model, and permissions
- Sends progress updates, final replies, files, and images back to Feishu
- Exposes a local HTTP API for bundled skills and automation tasks
Key Features
- Multi-bot: One `alice` process, one `config.yaml`, multiple independent bots
- Scene routing: Separate `chat` and `work` modes with per-scene LLM profiles
- Five backends: OpenCode, Codex, Claude, Gemini, Kimi — switch per scene
- Session persistence: Resumable threads, session aliases, usage counters
- Live status cards: Real-time heartbeat showing backend activity and file changes
- Automation: Cron-like scheduled tasks with `send_text`, `run_llm`, and `run_workflow` actions
- Bundled skills: Extendable skill scripts that call the runtime API
- Subprocess delegation: `alice delegate` lets OpenCode agents send subtasks to other backends
- Zero cloud dependency: Everything runs on your machine
Who is Alice for?
- Teams using Feishu that want LLM agent access without building custom integrations
- Developers who already use CLI agents and want them accessible in group chats
- Operators who need scheduled automation along with interactive LLM capabilities
Navigation
| Section | For |
|---|---|
| Tutorials | New users — get Alice running in 5 minutes |
| How-To Guides | Task-focused recipes for specific goals |
| Explanation | Deep dives into concepts and design |
| Reference | Comprehensive config, API, and CLI docs |
| Development | Contributor-oriented architecture and guides |
Quick Start
npm install -g @alice_space/alice
alice setup
# edit ~/.alice/config.yaml with your Feishu credentials
alice --feishu-websocket
See the Quick Start tutorial for detailed steps.
Quick Start
Get Alice running and responding to messages in 5 minutes.
Prerequisites
- Node.js (for `npm install`) or Go 1.25+ (for source build)
- A Feishu app with bot capability and long connection enabled
- At least one LLM CLI installed and authenticated (opencode, codex, claude, gemini, or kimi)
If you haven't set up your Feishu app yet, follow the Feishu Platform Setup tutorial first.
Step 1: Install
Via npm (recommended):
npm install -g @alice_space/alice
Via installer script:
curl -fsSL https://cdn.jsdelivr.net/gh/Alice-space/alice@main/scripts/alice-installer.sh | bash -s -- install
From source:
git clone https://github.com/Alice-space/alice.git
cd alice
go build -o bin/alice ./cmd/connector
Step 2: Setup
alice setup
This creates the directory structure at ~/.alice/, writes a default config.yaml, syncs bundled skills, and (on Linux) registers a systemd user unit.
Step 3: Configure
Edit ~/.alice/config.yaml and fill in at minimum:
bots:
my_bot:
name: "Alice"
feishu_app_id: "cli_xxxxxxxx" # from Feishu Open Platform
feishu_app_secret: "your_secret" # from Feishu Open Platform
llm_profiles:
chat:
provider: "opencode"
model: "deepseek/deepseek-v4-flash"
work:
provider: "opencode"
model: "deepseek/deepseek-v4-pro"
The default config ships with OpenCode profiles targeting DeepSeek models. If you use a different LLM CLI, see Configure LLM Backends.
Step 4: Verify Backend Auth
Make sure your LLM CLI can authenticate:
opencode --version # or codex, claude, etc.
Step 5: Start
alice --feishu-websocket
You should see log output indicating the Feishu WebSocket connection and per-bot runtime initialization.
Step 6: Test with Work Mode
Most people use Alice for work mode — task-oriented engineering, debugging, and automation. Here's how:
In any Feishu group chat where your bot is present:
@Alice #work fix the login timeout in auth.go
What happens:
- Alice creates a Feishu thread for this task
- Starts the configured LLM backend (e.g. DeepSeek V4)
- Streams progress and tool calls back to the thread
- The session is persisted — you can resume from your terminal later
Try the built-in commands too:
/help — Show command list
/status — Show current session and backend info
/stop — Cancel the running task
Chat Mode (Casual)
Alice also supports a casual chat mode where the bot behaves like a persistent group participant. Just message with @Alice:
@Alice what's the weather like?
Chat mode uses the chat LLM profile (lighter model), shares one session per group, and doesn't create threads. Use /clear to reset the chat session.
Tip: The default `config.example.yaml` enables both modes. Work mode is the primary use case for most operators. If you only need work mode, set `group_scenes.chat.enabled: false`.
What's Next?
- Configure separate chat and work scenes
- Switch to a different LLM backend
- Deploy as a persistent service
- Understand the system model
Feishu Platform Setup
This tutorial walks through creating a Feishu app that Alice can connect to. Estimated time: 15 minutes.
Overview
Alice needs a Feishu app with:
- Bot capability enabled
- `im.message.receive_v1` event subscription
- Required message permissions
- Long connection mode enabled
Step 1: Log into Feishu Open Platform
Visit Feishu Open Platform and sign in with your organization account.
Lark (international) users: Visit Lark Open Platform instead. Then set `feishu_base_url: "https://open.larksuite.com"` in your bot config.
Step 2: Create an App
- Click Create App (创建应用)
- Choose Enterprise Self-built App (企业自建应用)
- Name your app (e.g., "Alice Bot") and upload an icon
- Click Create
Step 3: Enable Bot Capability
- In the left sidebar, go to Features → Bot (机器人)
- Toggle Enable Bot (启用机器人)
- Configure the bot's name, avatar, and description as desired
Step 4: Add Event Subscription
- Go to Event Subscriptions (事件订阅)
- Click Add Event (添加事件)
- Find and select Receive Message (接收消息) → `im.message.receive_v1`
- Click Confirm
This is what allows Alice to receive all messages the bot can see.
Step 5: Configure Permissions
- Go to Permissions (权限管理)
- Search for and enable these permissions:
| Permission | Why |
|---|---|
im:message | Read messages sent to the bot |
im:message:send_as_bot | Send messages as the bot |
im:message:read | Read message content |
im:resource | Download images and files |
contact:user.id:readonly | Resolve user names |
contact:group.id:readonly | Access group chat info |
- Click Save (保存)
Step 6: Enable Long Connection
- Go to Features → Event Subscriptions (事件订阅)
- Find the Connection Mode (连接方式) section
- Switch from Request URL to Long Connection (长连接)
- Save the change
This is critical. Alice uses WebSocket long connections, not HTTP webhooks. If long connection mode is not enabled, Alice cannot receive messages.
Step 7: Get Credentials
- Go to App Settings → Basic Info (基础信息)
- Copy your App ID (应用凭证 → App ID)
- Copy your App Secret (应用凭证 → App Secret)
These go into your config.yaml:
bots:
my_bot:
feishu_app_id: "cli_xxxxxxxx" # your App ID
feishu_app_secret: "your_secret" # your App Secret
Step 8: Publish and Approve
- Go to Version Management (版本管理与发布)
- Click Create Version (创建版本), fill in version info
- After creation, click Apply for Release (申请发布)
- An admin in your Feishu org must approve the release
- Once approved, users in your org can find and interact with the bot
Tip: During development, you can add individual users as App Collaborators (应用协作者) under App Settings, allowing them to test the bot before publishing.
Verification
After starting Alice with alice --feishu-websocket, check the logs:
feishu-codex connector started (long connection mode)
If you see WebSocket connection errors, double-check that long connection mode is enabled and your credentials are correct.
Next Steps
Install Alice
Three ways to install. Pick the one that fits your workflow.
npm (Recommended)
npm install -g @alice_space/alice
After installation, run the setup wizard:
alice setup
This creates ~/.alice/, writes a starter config.yaml, syncs bundled skills, registers a systemd user unit (Linux), and installs the OpenCode delegate plugin.
Requirements: Node.js 18+
Installer Script
Single-command install from GitHub Releases:
# Install latest stable release
curl -fsSL https://cdn.jsdelivr.net/gh/Alice-space/alice@main/scripts/alice-installer.sh | bash -s -- install
# Install a specific version
curl -fsSL https://cdn.jsdelivr.net/gh/Alice-space/alice@main/scripts/alice-installer.sh | bash -s -- install --version v1.2.3
# Uninstall
curl -fsSL https://cdn.jsdelivr.net/gh/Alice-space/alice@main/scripts/alice-installer.sh | bash -s -- uninstall
The installer downloads the correct binary for your platform (darwin-amd64, darwin-arm64, linux-amd64, linux-arm64, win32-x64) and verifies checksums.
After installation, run alice setup to initialize the config and skills directory.
Requirements: curl, tar
Build from Source
git clone https://github.com/Alice-space/alice.git
cd alice
go build -o bin/alice ./cmd/connector
Optionally install to your PATH:
cp bin/alice /usr/local/bin/alice
Requirements: Go 1.25+
Verify Installation
alice --version
Should print the version string. If alice setup has been run, you can also check:
ls ~/.alice/
# config.yaml skills/ log/ bots/
Runtime Home
Alice uses different default home directories depending on the build channel:
| Build | Default Home |
|---|---|
| Release (npm / installer) | ~/.alice |
| Dev (source build) | ~/.alice-dev |
Override with --alice-home or the ALICE_HOME environment variable.
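For example, to point a release build at a non-default home (the /srv/alice path below is purely illustrative):
ALICE_HOME=/srv/alice alice --feishu-websocket
# equivalent, using the flag instead of the environment variable
alice --alice-home /srv/alice --feishu-websocket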
Configure Chat & Work Scenes
Alice routes incoming group messages into one of two scenes: chat for casual conversation, and work for explicit task execution.
Scene Routing Overview
Incoming Message
├─ Built-in command? (/help, /status, /stop, /clear, /session)
│ └─ Handle directly, no LLM
├─ Matches work trigger? (@Alice #work ...)
│ └─ Route to work scene
└─ Otherwise
└─ Route to chat scene (if enabled)
Both scenes are configured under bots.<id>.group_scenes.
Chat Scene
The chat scene is for low-friction, persistent conversation. One session per chat group.
group_scenes:
chat:
enabled: true
session_scope: "per_chat"
llm_profile: "chat"
no_reply_token: "[[NO_REPLY]]"
create_feishu_thread: false
| Field | Description |
|---|---|
enabled | Set to true to activate the chat scene |
session_scope | "per_chat" — one session for the whole group. "per_thread" — one session per Feishu thread |
llm_profile | Name of the LLM profile under llm_profiles to use |
no_reply_token | If the model returns this exact string, Alice stays silent instead of replying |
create_feishu_thread | Whether to wrap replies in a Feishu thread |
Use /clear to reset the chat session and start fresh.
Work Scene
The work scene is for task-oriented execution. Each work task gets its own thread and session.
group_scenes:
work:
enabled: true
trigger_tag: "#work"
session_scope: "per_thread"
llm_profile: "work"
create_feishu_thread: true
| Field | Description |
|---|---|
enabled | Set to true to activate the work scene |
trigger_tag | The tag that must appear in a message to trigger work mode (after the @bot mention) |
session_scope | "per_thread" — each Feishu thread gets its own session. "per_chat" — shared session |
llm_profile | Name of the LLM profile to use (typically a more capable model) |
create_feishu_thread | Automatically create a Feishu thread for work replies |
Work mode usage:
@Alice #work fix the login bug → Starts work, calls LLM
@Alice #work → Creates work thread without calling LLM
@Alice #work /session <backend-session-id> → Binds thread to existing backend session
Common Patterns
Chat-Only Bot
group_scenes:
chat:
enabled: true
session_scope: "per_chat"
llm_profile: "chat"
no_reply_token: "[[NO_REPLY]]"
work:
enabled: false
Split Chat + Work
Use a lighter model for chat and a more capable one for work:
llm_profiles:
chat:
provider: "opencode"
model: "deepseek/deepseek-v4-flash"
work:
provider: "opencode"
model: "deepseek/deepseek-v4-pro"
variant: "max"
permissions:
sandbox: "danger-full-access"
ask_for_approval: "never"
group_scenes:
chat:
enabled: true
session_scope: "per_chat"
llm_profile: "chat"
no_reply_token: "[[NO_REPLY]]"
work:
enabled: true
trigger_tag: "#work"
session_scope: "per_thread"
llm_profile: "work"
create_feishu_thread: true
Legacy Trigger Mode
If both chat and work are disabled, Alice falls back to a legacy trigger system:
bots:
my_bot:
trigger_mode: "at" # at | prefix | all
trigger_prefix: "" # only used when trigger_mode is "prefix"
| Mode | Behavior |
|---|---|
at | Only @bot messages are accepted |
prefix | Only messages starting with trigger_prefix |
all | Every message is accepted (no filter) |
New deployments should prefer explicit scene routing.
Configure LLM Backends
Alice supports five LLM backends. Each scene references an llm_profile that specifies which provider, model, and settings to use.
Supported Providers
| Provider | CLI Tool | Notes |
|---|---|---|
opencode | opencode | OpenCode CLI for DeepSeek and other models |
codex | codex | OpenAI Codex CLI. Supports reasoning_effort, personality, profile |
claude | claude | Anthropic Claude Code CLI. Streaming by default |
gemini | gemini | Google Gemini CLI |
kimi | kimi | Moonshot Kimi CLI |
Each provider must be installed and authenticated separately. Alice does not manage provider authentication.
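A quick way to sanity-check this is to probe each CLI you plan to reference from a profile. This is a rough sketch: `--version` is shown elsewhere in this guide for opencode, codex, and claude; for gemini and kimi it is an assumption, so adjust to each CLI's own flags:
for cli in opencode codex claude gemini kimi; do
  if command -v "$cli" >/dev/null 2>&1; then
    "$cli" --version
  else
    echo "$cli: not found in \$PATH"
  fi
done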
Profile Configuration
Profiles are defined under bots.<id>.llm_profiles:
bots:
my_bot:
llm_profiles:
my_profile:
provider: "opencode"
model: "deepseek/deepseek-v4-pro"
variant: "max"
timeout_secs: 172800
permissions:
sandbox: "danger-full-access"
ask_for_approval: "never"
add_dirs: ["/data/corpus"]
Common Fields
| Field | All providers | Description |
|---|---|---|
provider | ✓ | Backend name: opencode, codex, claude, gemini, kimi |
command | ✓ | Path to the CLI binary. Defaults to the provider name (e.g. opencode) |
timeout_secs | ✓ | Per-run timeout in seconds. Default: 172800 (48 hours) |
model | ✓ | Model identifier (required) |
permissions.sandbox | ✓ | "read-only", "workspace-write", or "danger-full-access" |
permissions.ask_for_approval | ✓ | "untrusted", "on-request", or "never" |
permissions.add_dirs | ✓ | Extra directories accessible to the agent |
prompt_prefix | ✓ | Text prepended to every prompt |
Codex-Specific Fields
| Field | Description |
|---|---|
reasoning_effort | Thinking level: "low", "medium", "high", or "xhigh" |
personality | Named personality preset from Codex CLI config |
profile | Named sub-profile from Codex CLI config |
OpenCode-Specific Fields
| Field | Description |
|---|---|
variant | DeepSeek variant: "max", "high", "minimal" |
Custom Binary Path
If your CLI binary is outside $PATH, specify the absolute path:
llm_profiles:
work:
provider: "opencode"
command: "/usr/local/bin/opencode"
model: "deepseek/deepseek-v4-pro"
You can also extend $PATH via the env section:
bots:
my_bot:
env:
PATH: "/home/user/bin:/usr/local/bin:/usr/bin:/bin"
Per-Profile Overrides
Some backends support per-profile runner overrides via profile_overrides. This is an advanced feature used when the same provider needs different CLI configurations for different scenes.
llm_profiles:
executor:
provider: "codex"
model: "gpt-5.4-mini"
profile: "executor"
profile_overrides:
executor:
command: "/opt/bin/codex-executor"
provider_profile: "executor-v2"
timeout: 3600
exec_policy:
sandbox: "danger-full-access"
ask_for_approval: "never"
Environment Variables for Backend Processes
The env section under bots.<id> passes environment variables to every LLM subprocess:
bots:
my_bot:
env:
HTTPS_PROXY: "http://127.0.0.1:8080"
ALL_PROXY: "http://127.0.0.1:8080"
This is especially useful for proxy configuration and API key management.
Examples
OpenCode with DeepSeek (chat)
llm_profiles:
chat:
provider: "opencode"
model: "deepseek/deepseek-v4-flash"
Codex with reasoning
llm_profiles:
work:
provider: "codex"
command: "codex"
model: "gpt-5.4-mini"
reasoning_effort: "high"
permissions:
sandbox: "danger-full-access"
ask_for_approval: "never"
Claude
llm_profiles:
work:
provider: "claude"
model: "claude-sonnet-4-6"
prompt_prefix: "You are a senior software engineer. Be concise."
permissions:
sandbox: "danger-full-access"
ask_for_approval: "never"
Gemini
llm_profiles:
chat:
provider: "gemini"
model: "gemini-2.5-pro"
Kimi
llm_profiles:
chat:
provider: "kimi"
model: "kimi-model-identifier"
Customize SOUL.md Persona
Each bot can have a persona document called SOUL.md that defines its behavior, tone, and reply preferences.
What is SOUL.md?
SOUL.md is a Markdown file with YAML frontmatter. It serves two purposes:
- Persona: The Markdown body is injected into the LLM prompt for `chat` scenes, shaping the bot's tone and behavior
- Metadata: The YAML frontmatter controls machine-readable reply behavior
File Location
By default, Alice looks for SOUL.md in the bot's alice_home:
~/.alice/bots/<bot_id>/SOUL.md
You can customize the path with soul_path:
bots:
my_bot:
soul_path: "SOUL.md" # relative to alice_home (default)
# soul_path: "/path/to/custom/SOUL.md" # absolute path
If the file doesn't exist at startup, Alice writes an embedded template from prompts/SOUL.md.example.
Frontmatter Keys
---
image_refs:
- refs/avatar.png
- refs/signature.jpg
output_contract:
hidden_tags:
- reply_will
- motion
reply_will_tag: reply_will
reply_will_field: reply_will
motion_tag: motion
suppress_token: "[[NO_REPLY]]"
---
| Key | Description |
|---|---|
image_refs | List of local image paths the bot can reference. Paths are relative to the directory containing SOUL.md |
output_contract.hidden_tags | Tags in the bot's reply that Alice strips before sending to Feishu |
output_contract.reply_will_tag | Tag marking the bot's intent to reply |
output_contract.reply_will_field | Field name within the tag |
output_contract.motion_tag | Tag for motion/animation cues |
output_contract.suppress_token | If the bot outputs this token, Alice suppresses the reply entirely |
Full Example
---
image_refs:
- refs/avatar.png
output_contract:
hidden_tags:
- reply_will
- motion
reply_will_tag: reply_will
reply_will_field: reply_will
motion_tag: motion
suppress_token: "[[NO_REPLY]]"
---
# Persona
You are Alice, a helpful engineering assistant. You speak concisely in Chinese
mixed with English technical terms. You never use emoji unless explicitly asked.
## Rules
- Keep code snippets under 30 lines
- Prefer explaining the approach before showing code
- Never apologize — just fix the problem
When is SOUL.md Applied?
- Chat scene: The full body is prepended to the prompt, and frontmatter is parsed by Alice for reply control
- Work scene: SOUL.md is intentionally skipped. Work mode is for task execution, not persona roleplay
Testing Your Persona
- Edit `SOUL.md`
- Restart Alice (multi-bot mode requires restart; single-bot mode supports hot reload) — see the systemd command below
- Send a message in a chat scene — the bot should reflect the updated persona
- Use `/clear` to reset the conversation if needed
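If you deployed with systemd (see Deploy to Server), the restart step is:
systemctl --user restart alice.service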
Deploy to Server
Run Alice as a persistent background service so it survives restarts and runs reliably.
systemd (Linux)
alice setup automatically creates a systemd user unit if systemd is available.
# Start
systemctl --user start alice.service
# Enable auto-start on boot
systemctl --user enable alice.service
# Check status
systemctl --user status alice.service
# View logs
journalctl --user-unit alice.service -n 100 --no-pager
journalctl --user-unit alice.service --since "30 min ago" --no-pager
# Restart
systemctl --user restart alice.service
If you installed without alice setup, create the unit manually:
# ~/.config/systemd/user/alice.service
[Unit]
Description=Alice Feishu LLM Connector
After=network-online.target
[Service]
Type=simple
ExecStart=%h/.alice/bin/alice --feishu-websocket
Restart=on-failure
RestartSec=10
Environment=HOME=%h
[Install]
WantedBy=default.target
Then:
systemctl --user daemon-reload
systemctl --user start alice.service
macOS
On macOS, use launchd or run manually.
launchd
<!-- ~/Library/LaunchAgents/com.alice.connector.plist -->
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.alice.connector</string>
<key>ProgramArguments</key>
<array>
<string>/Users/you/.alice/bin/alice</string>
<string>--feishu-websocket</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
<key>StandardOutPath</key>
<string>/Users/you/.alice/log/stdout.log</string>
<key>StandardErrorPath</key>
<string>/Users/you/.alice/log/stderr.log</string>
</dict>
</plist>
launchctl load ~/Library/LaunchAgents/com.alice.connector.plist
Manual
alice --feishu-websocket
Use tmux or screen to keep it running after logout.
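For example, with tmux:
tmux new-session -d -s alice 'alice --feishu-websocket'
tmux attach -t alice   # re-attach later to inspect output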
Runtime-Only Mode
For deployments that only need automation and the runtime API (no Feishu WebSocket):
alice --runtime-only
In headless environments:
alice-headless --runtime-only
Important: `alice-headless` cannot start the Feishu connector. It is explicitly limited to runtime-only mode.
Logging
Alice uses structured JSON logs via zerolog with daily log rotation.
log_level: "info" # debug | info | warn | error
log_file: "" # empty = <ALICE_HOME>/log/YYYY-MM-DD.log
log_max_size_mb: 20 # rotate after 20 MB
log_max_backups: 5 # keep 5 rotated files
log_max_age_days: 7 # keep logs for 7 days
log_compress: false # gzip rotated logs
Health Check
The runtime API exposes a health endpoint:
curl http://127.0.0.1:7331/healthz
# {"status":"ok"}
Monitoring
- The `/status` command in Feishu shows usage totals and active automation tasks
- Use `journalctl` (systemd) or the log files for structured log analysis
- Session and runtime state are persisted to JSON files for inspection
Multi-Bot Deployments
One alice process can host multiple bots. All bots share the same process but each gets its own runtime directory, workspace, and queue.
bots:
engineering_bot:
feishu_app_id: "cli_11111"
# ...
support_bot:
feishu_app_id: "cli_22222"
# ...
Multi-bot mode disables config hot reload. Restart the process after configuration changes.
Use alice delegate
The alice delegate subcommand sends a one-shot prompt to any configured LLM backend from the command line.
Basic Usage
alice delegate --provider codex --prompt "Refactor the auth module to use JWT"
alice delegate --provider claude --prompt "Review this code for security issues"
alice delegate --provider opencode --prompt "Explain how DNS resolution works"
Flags
| Flag | Description |
|---|---|
--provider | LLM backend: opencode, codex, claude, gemini, kimi |
--prompt | The prompt text (required) |
--model | Override the default model |
--workspace | Override the working directory |
Piping Input
Send a diff or file content via stdin:
cat diff.patch | alice delegate --provider claude --prompt "Review this PR diff"
alice delegate --provider codex --prompt "Summarize this log" < /var/log/app.log
OpenCode Plugin Integration
alice setup writes a plugin to ~/.config/opencode/plugins/alice-delegate.js. Once present, OpenCode agents (including DeepSeek) automatically gain two extra tools:
- `codex` — delegates a subtask to Codex
- `claude` — delegates a subtask to Claude
No extra configuration is needed. OpenCode loads plugins from that directory automatically.
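To confirm the plugin is in place after running alice setup:
ls -l ~/.config/opencode/plugins/alice-delegate.js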
This is the primary use case for alice delegate: allowing an OpenCode agent to fan out parallel work or delegate specialized tasks to other LLM backends.
How It Connects
alice delegate uses the same llm_profiles configuration as the main Alice runtime. A profile named delegate under the first bot is used by default. The profile determines the model, permissions, and environment variables for the delegated run.
bots:
my_bot:
llm_profiles:
delegate:
provider: "claude"
model: "claude-sonnet-4-6"
permissions:
sandbox: "workspace-write"
ask_for_approval: "never"
Examples
Quick code review
alice delegate --provider claude --prompt "Check this function for bugs and suggest improvements" < src/auth.go
Refactoring
alice delegate --provider codex --prompt "Extract the database logic into a separate package"
Documentation generation
alice delegate --provider opencode --prompt "Generate JSDoc comments for all exported functions"
Use Built-in Commands
Alice provides several slash commands that bypass the LLM and are handled directly by the connector. All commands work in both group chats and direct messages.
/help
Displays the built-in command help card with all available commands.
/help
/status
Shows a status card with:
- Total sessions and usage counters
- Active automation tasks
- Current LLM backend and session details
/status
/clear
Resets the current chat scene session. The next message starts a fresh conversation with no prior context.
/clear
Only affects `chat` scenes. `work` scenes are thread-scoped and reset naturally when the thread ends.
/stop
Immediately cancels the currently running LLM call for the active session.
/stop
Use this when the agent is stuck in a loop or taking too long. The bot will acknowledge the stop and become available for new messages.
/session
Binds a Feishu work thread to an existing backend session. Useful for resuming long-running tasks after a restart.
/session <backend-session-id>
/session <backend-session-id> Continue the review
- Without an instruction: binds the session, no LLM call
- With an instruction: binds the session and immediately calls the LLM with the instruction
Only works in `work` scene threads.
/cd, /ls, /pwd
Inspect and change the current working directory for the active work session:
/pwd # Show current directory
/ls # List files
/ls internal/ # List files in subdirectory
/cd /tmp/build # Change directory
These commands only affect work sessions. The directory change persists for the duration of the session.
Command Precedence
When a message starts with /, Alice checks for built-in commands before routing to the LLM:
- Built-in command match → handle directly
- No match → route to scene (LLM handles it)
To force a message starting with / to go to the LLM, prefix it with a space or use the work trigger:
 /some-custom-command # Leading space before the slash → LLM path
@Alice #work /some-cmd # Work trigger → LLM path
Configure Private Chat
Alice can handle direct messages (private chats) with the same scene routing as group chats.
Private vs Group Scenes
Group chats use group_scenes. Direct messages use private_scenes. They are configured identically but under different keys:
bots:
my_bot:
group_scenes:
chat: { ... }
work: { ... }
private_scenes:
chat: { ... }
work: { ... }
Private scenes are disabled by default. Enable them explicitly.
Chat Scene (Private)
private_scenes:
chat:
enabled: true
session_scope: "per_user" # one session per DM user
llm_profile: "chat"
no_reply_token: "[[NO_REPLY]]"
create_feishu_thread: false
| Field | Description |
|---|---|
session_scope | "per_user" — all DMs from the same user share one session. "per_message" — each DM creates a new session |
llm_profile | Same profile reference as group scenes |
no_reply_token | Suppress reply token |
Typical use: a personal assistant available in DMs, maintaining context per user.
Work Scene (Private)
private_scenes:
work:
enabled: true
trigger_tag: "#work"
session_scope: "per_message" # each #work DM starts fresh
llm_profile: "work"
create_feishu_thread: true
| Field | Description |
|---|---|
session_scope | "per_message" recommended for DMs — each task is isolated |
Full Example
bots:
my_bot:
group_scenes:
chat:
enabled: true
session_scope: "per_chat"
llm_profile: "chat"
no_reply_token: "[[NO_REPLY]]"
work:
enabled: true
trigger_tag: "#work"
session_scope: "per_thread"
llm_profile: "work"
create_feishu_thread: true
private_scenes:
chat:
enabled: true
session_scope: "per_user"
llm_profile: "chat"
no_reply_token: "[[NO_REPLY]]"
work:
enabled: true
trigger_tag: "#work"
session_scope: "per_message"
llm_profile: "work"
create_feishu_thread: true
Behavior Differences from Group Chat
- Mentions are implicit — DMs don't require @bot. Every message is directed at the bot.
- user_id resolution — Alice resolves the DM user's name via Feishu API
- Thread creation — When `create_feishu_thread: true`, work replies create threads within the DM
Write a Bundled Skill
Bundled skills extend Alice with script-based tools that call the Runtime HTTP API. This guide shows you how to create one.
Skill Anatomy
A bundled skill is a directory under skills/:
skills/my-skill/
├── SKILL.md # Skill documentation
├── scripts/
│ └── my-skill.sh # Executable script
└── agents/
└── openai.yaml # OpenAI agent configuration (optional)
Step 1: Create the Directory
Create your skill under skills/ in the Alice source tree or under ${ALICE_HOME}/skills/ for local development.
Step 2: Write SKILL.md
SKILL.md documents your skill for both humans and LLM agents:
# my-skill
Sends a daily summary of active automation tasks to a specified Feishu chat.
## Usage
This skill is triggered by the automation system. It reads all active tasks
from the runtime API and sends a formatted summary card.
## Environment
Requires `ALICE_RUNTIME_API_BASE_URL` and `ALICE_RUNTIME_API_TOKEN` to be set.
Step 3: Write the Script
The script runs as a subprocess. Alice injects these environment variables:
| Variable | Description |
|---|---|
ALICE_RUNTIME_API_BASE_URL | Base URL of the runtime API (e.g. http://127.0.0.1:7331) |
ALICE_RUNTIME_API_TOKEN | Bearer token for API authentication |
ALICE_RUNTIME_BIN | Path to the alice binary |
ALICE_RECEIVE_ID_TYPE | Type of the receive target (e.g. chat_id) |
ALICE_RECEIVE_ID | ID of the receive target |
ALICE_SOURCE_MESSAGE_ID | ID of the triggering message (if applicable) |
ALICE_ACTOR_USER_ID | Feishu user ID of the person interacting |
ALICE_ACTOR_OPEN_ID | Feishu open ID of the person interacting |
ALICE_CHAT_TYPE | Chat type: group or p2p |
ALICE_SESSION_KEY | Canonical session key for the current conversation |
Example Script
#!/usr/bin/env bash
set -euo pipefail
# Get all active tasks
TASKS=$(curl -sS \
-H "Authorization: Bearer ${ALICE_RUNTIME_API_TOKEN}" \
"${ALICE_RUNTIME_API_BASE_URL}/api/v1/automation/tasks?status=active")
# Count and format
COUNT=$(echo "$TASKS" | jq '. | length')
echo "Active tasks: $COUNT"
Make it executable:
chmod +x skills/my-skill/scripts/my-skill.sh
Step 4: Register the Skill
Add your skill to the bot's allowed skills list:
bots:
my_bot:
permissions:
allowed_skills: ["alice-message", "alice-scheduler", "my-skill"]
Runtime API Endpoints Available to Skills
Skills primarily use these endpoints:
| Endpoint | Method | Purpose |
|---|---|---|
/api/v1/messages/image | POST | Send an image to the chat |
/api/v1/messages/file | POST | Send a file to the chat |
/api/v1/automation/tasks | GET | List automation tasks |
/api/v1/automation/tasks | POST | Create an automation task |
/api/v1/automation/tasks/:id | GET/PATCH/DELETE | Manage a specific task |
All requests require the Authorization: Bearer <token> header.
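As a sketch of what a skill script's API call looks like, here is an image upload using the injected environment variables. The multipart field name "file" is an assumption (the caption field is optional per the Runtime API design notes); check the Runtime API Reference for the exact form fields:
curl -sS -X POST \
  -H "Authorization: Bearer ${ALICE_RUNTIME_API_TOKEN}" \
  -F "file=@./chart.png" \
  -F "caption=Daily task summary" \
  "${ALICE_RUNTIME_API_BASE_URL}/api/v1/messages/image"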
Permissions
Skills operate under the bot's runtime permissions:
permissions:
runtime_message: true # Allow sending messages via API
runtime_automation: true # Allow managing automation tasks
If a permission is disabled, the corresponding API endpoints return 403 Forbidden.
Built-in Skills Reference
Alice ships with two bundled skills:
- alice-message: Sends rich messages and attachments via the runtime API
- alice-scheduler: Manages automation tasks from Feishu conversations
Study their source (skills/alice-message/ and skills/alice-scheduler/) for real-world examples of skill structure and API usage.
Troubleshooting
Common problems and solutions for running Alice.
Bot doesn't respond in group chats
Check scene routing:
- Verify `group_scenes.chat.enabled` is `true`
- If both scenes are disabled, check `trigger_mode` (should be `at` or `prefix`)
Check bot identity:
- The bot's `open_id` is now fetched automatically at startup — no manual `feishu_bot_open_id` config needed
- Verify `feishu_app_id` and `feishu_app_secret` are correct
Check logs:
# Look for WebSocket connection status
grep "long connection" ~/.alice/log/*.log
# Look for auth errors
grep "error" ~/.alice/log/*.log | head -20
Work mode never triggers
- Verify `group_scenes.work.enabled` is `true`
- Verify `trigger_tag` is set (e.g., `"#work"`)
- The message must contain `@BotName #work ...` — both the @mention and the trigger tag
- The trigger tag must appear after the @mention in the same message
Wrong model or reasoning level
- Check `llm_profiles` for the correct `provider`, `model`, and provider-specific fields
- Verify the scene points at the correct profile key:
  group_scenes:
    work:
      llm_profile: "work"   # must match a key under llm_profiles
- Run the provider CLI directly to verify authentication:
  codex --version
  claude --version
Skills can't send attachments or manage tasks
Check permissions:
permissions:
runtime_message: true
runtime_automation: true
Check API connectivity:
# From the machine running Alice
curl -s -H "Authorization: Bearer <token>" http://127.0.0.1:7331/healthz
# Should return {"status":"ok"}
The runtime HTTP API binds to the address in runtime_http_addr (default 127.0.0.1:7331). Multi-bot setups auto-increment the port.
Configuration changes don't apply
- Multi-bot mode: Config hot reload is disabled. Restart Alice.
- Single-bot mode: Partial hot reload is supported, but not all config keys are watched.
- Always check logs after a config change:
grep "config" ~/.alice/log/*.log | tail -5
WebSocket connection errors
If you see connection failures in the logs:
- Verify long connection mode is enabled in the Feishu Open Platform
- Check that the app has been published and approved
- Verify network connectivity to `open.feishu.cn` (or `open.larksuite.com` for Lark)
- Check that `feishu_base_url` is set correctly for Lark (international) users: `feishu_base_url: "https://open.larksuite.com"`
Provider CLI not found
Alice looks for the CLI binary in $PATH by default. If it's not found:
- Specify an absolute path:
  llm_profiles:
    chat:
      command: "/usr/local/bin/opencode"
- Or extend `$PATH` in the bot's `env`:
  env:
    PATH: "/home/user/.local/bin:/usr/local/bin:/usr/bin:/bin"
LLM runs hang indefinitely
- Check `timeout_secs` in the LLM profile (default: 48 hours)
- Use `/stop` in Feishu to cancel a running session
- Check logs for provider-specific errors:
  grep -E "timeout|cancelled|killed" ~/.alice/log/*.log
- For Codex, check the `codex_idle_timeout_secs` setting
Logs show nothing useful
Increase log level to debug:
log_level: "debug"
Restart Alice. Debug mode includes:
- Provider and agent name per run
- Thread/session IDs
- Rendered input prompts
- Observed tool activity
- Final output or error
Warning: Debug logs may contain the full rendered prompt including SOUL.md content.
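A typical sequence under a systemd deployment — restart, then follow today's log file (default path from the Logging section; adjust if log_file or ALICE_HOME is customized):
systemctl --user restart alice.service
tail -f ~/.alice/log/"$(date +%F)".log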
System Model
This page explains the fundamental concepts behind Alice: multi-bot architecture, scene routing, sessions, and startup modes. Understanding these helps you configure and troubleshoot effectively.
What Alice Is (And Isn't)
Alice is a connector, not a bot framework. It doesn't implement chat logic, NLU, or custom integrations directly. Instead, it:
- Receives messages from Feishu
- Decides which LLM backend to call and how
- Calls the LLM CLI as a subprocess
- Sends the response back to Feishu
The "intelligence" lives in the LLM backend (Codex, Claude, etc.). Alice handles the plumbing: routing, queuing, session management, attachment I/O, and progress display.
Multi-Bot Model
One alice process can host multiple independent bots from a single config.yaml:
bots:
engineering_bot:
feishu_app_id: "cli_11111"
# ...
support_bot:
feishu_app_id: "cli_22222"
# ...
Each bot has its own:
- Runtime directory (`~/.alice/bots/<bot_id>/`)
- Workspace, prompts, and SOUL.md
- Feishu credentials (App ID, App Secret)
- LLM profiles — can use different providers and models
- Scene configuration — independent chat/work routing
- Runtime API port — auto-incremented (7331, 7332, ...)
Bots share:
- The same process and worker pool
- `CODEX_HOME` by default (can be overridden per bot)
Bot Directory Layout
~/.alice/bots/<bot_id>/
├── workspace/ # Agent workspace
├── prompts/ # Prompt template overrides
├── SOUL.md # Bot persona
└── run/connector/
├── automation.db # Persistent task store (bbolt)
├── campaigns.db # Campaign index (bbolt)
├── session_state.json # Session aliases, usage counters
├── runtime_state.json # Mutable runtime state
└── resources/scopes/ # Downloaded attachments, artifacts
Scene Routing
Every incoming group message goes through a decision tree:
Incoming Message
│
├─ Is it a built-in command? (/help, /status, /stop, /clear, /session)
│ └─ Yes → Handle directly, no LLM involved
│
├─ Does it match the work trigger? (@Bot #work ...)
│ └─ Yes → Route to work scene
│
├─ Is the chat scene enabled?
│ └─ Yes → Route to chat scene
│
└─ Both scenes disabled?
└─ Fall back to legacy trigger_mode (at / prefix / all)
Scenes vs Legacy Triggers
The legacy trigger_mode (at/prefix/all) is a simple gate: it decides whether to accept a message or ignore it. If accepted, there's one LLM pipeline.
Scenes go further: they assign different LLM profiles, session scopes, thread behaviors, and SOUL.md treatment per scene. New deployments should always use scenes.
Session Management
A session is the LLM's context window. Alice decides when to start a new session vs. when to continue an existing one.
Session Keys
Alice identifies sessions with canonical keys:
| Format | Example |
|---|---|
| `{receive_id_type}:{receive_id}` | `chat_id:oc_123` |
| `{key}\|scene:{scene}` | `chat_id:oc_123\|scene:chat` |
Session Scope
session_scope controls when sessions are created and reused:
| Scope | Behavior |
|---|---|
per_chat | One session for the entire group/DM |
per_thread | One session per Feishu thread |
per_user | (DM only) One session per user |
per_message | (DM only) New session for every message |
Session Persistence
Alice persists session metadata to session_state.json:
- Provider thread ID (for resuming with the backend)
- Session aliases
- Usage counters
- Last-message timestamp
- Work-thread ID aliases
When a job comes in, Alice checks if an active session exists. If yes:
- Provider-native steer: Some backends (Codex, Claude) allow injecting new input into a running session. Alice tries this first.
- Queuing: If native steer fails and an LLM run is active, the new job is queued. A newer job supersedes an older queued job.
- New run: If no run is active, a new RunRequest is dispatched to the LLM backend.
Cancellation and Interruption
- `/stop` immediately cancels the active run via context cancellation
- A newer user message supersedes queued jobs but does not interrupt the active run
- Automation tasks can also be interrupted by user messages that acquire the session gate
Startup Modes
Alice supports two explicit startup modes:
--feishu-websocket
Full mode. Connects to Feishu WebSocket, processes live messages, runs automation, and exposes the runtime API.
--runtime-only
Local-only mode. The runtime API and automation engine run, but the Feishu connector does not start. Use for:
- Debugging and development
- Running only the automation scheduler
- Headless environments (use `alice-headless --runtime-only`)
`alice-headless` is a dedicated binary that cannot start the Feishu connector. Attempting `alice-headless --feishu-websocket` will error.
Config Hot Reload
- Single-bot mode: Limited partial hot reload is supported. Some config keys are watched for changes.
- Multi-bot mode: Hot reload is intentionally disabled. Always restart Alice after config changes.
Runtime Home
| Build Channel | Default Home |
|---|---|
| Release (npm / installer) | ~/.alice |
| Dev (source build) | ~/.alice-dev |
Override with --alice-home flag or ALICE_HOME environment variable.
Message Pipeline
This page walks through the full lifecycle of an incoming Feishu message — from WebSocket delivery to the final reply. Understanding this pipeline helps debug routing issues and tune behavior.
Overview
Feishu WebSocket
└─ App (job queue)
└─ Processor (execution)
└─ LLM Backend (subprocess)
└─ Reply Dispatcher (back to Feishu)
1. WebSocket Reception
Alice establishes a long connection to Feishu's WebSocket endpoint. When a user sends a message the bot can see, Feishu delivers an im.message.receive_v1 event over this connection.
The event contains:
- Sender identity (open_id, user_id, name)
- Message content (text, attachments, mentions)
- Chat context (chat_id, chat_type, thread_id if in a thread)
- Bot identity (which bot received this)
2. Job Creation
The raw event is normalized into a Job struct. This step:
- Extracts mentioned users
- Resolves the receive ID type (`chat_id`, `open_id`, etc.)
- Sets the bot's configured LLM profile, scene, and reply preferences
- Generates a session key and resource scope key
- Attaches a monotonic version number
3. Routing
routeIncomingJob decides what to do with the job:
Built-in Commands
If the message starts with /help, /status, /clear, /stop, /session, /cd, /ls, or /pwd, it's handled by the connector directly — no LLM call. See Use Built-in Commands.
Work Scene
If group_scenes.work.enabled and the message contains the trigger_tag (e.g., #work) after the @bot mention, the job is routed to the work scene. Work jobs use the work-scoped session key and LLM profile.
Chat Scene
If group_scenes.chat.enabled, all other messages are routed to chat. Chat jobs use the chat-scoped session key and LLM profile.
Legacy Fallback
If both scenes are disabled, Alice falls back to matching trigger_mode and trigger_prefix.
4. Queue and Serialization
Each session has a mutex that serializes execution:
- Active run exists → Try provider-native steer first (inject new input into running session)
- Native steer unavailable → Queue the new job. A newer job supersedes an older queued job.
- No active run → Accept the job and dispatch to the Processor.
The runtime store (runtime_store.go) keeps in-memory coordination state:
- Latest version per session
- Pending queued job
- Active run cancellation handle
- Per-session mutex
5. Pre-LLM Processing
Before the LLM call, the Processor:
- Loads and parses `SOUL.md` (chat only) — separates YAML frontmatter from Markdown body
- Downloads inbound attachments into the scoped resource directory
- Derives runtime environment variables for the conversation
- Prepares the rendered prompt text
Session State Check
Alice checks session_state.json:
- If a provider thread ID exists, the backend call resumes that thread
- If the session was recently active, context from the last turn is available
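You can inspect this persisted metadata directly; the path follows the bot directory layout, and the exact JSON fields are an implementation detail that may differ between versions:
jq . ~/.alice/bots/my_bot/run/connector/session_state.json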
6. LLM Execution
The Processor builds a RunRequest and dispatches it to the LLM backend:
RunRequest {
ThreadID → from session state (empty = new session)
UserText → rendered prompt
Provider → from llm_profile
Model → from llm_profile
ReasoningEffort → from llm_profile
WorkspaceDir → per-bot workspace
ExecPolicy → sandbox + approval settings
Env → per-bot + process env
OnProgress → stream progress updates to Feishu
}
The backend spawns the provider CLI as a subprocess and streams output. Progress updates are sent to Feishu as status card patches.
7. Reply Dispatch
When the LLM finishes, Alice processes the reply:
Content Processing
- If the reply matches `no_reply_token`, stay silent
- If `output_contract` is configured in SOUL.md, strip hidden tags
- Apply formatting for Feishu (rich text, @mentions)
Threading
- Work scene with `create_feishu_thread: true`: Reply is posted in a Feishu thread
- Chat scene with `create_feishu_thread: false`: Reply is posted as a top-level message
- Thread replies: When Feishu supports it, the reply is threaded. Falls back to direct reply otherwise.
Immediate Feedback
Before the LLM starts, Alice sends immediate acknowledgement:
- `immediate_feedback_mode: "reaction"` → Adds a reaction emoji to the source message
- `immediate_feedback_mode: "reply"` → Sends an explicit 收到! ("Received!") reply
8. Post-Run
- Session state is persisted to `session_state.json` (thread ID, usage counters, timestamp)
- Downloaded attachments remain in the scoped resource directory
- Runtime state is flushed periodically
Key Invariants
- At most one LLM run per session at a time — enforced by per-session mutex
- Newer messages supersede queued jobs, not the active run — `/stop` is the only way to interrupt a running LLM
- Session state is disk-backed — survives process restart
- Attachments are scoped — each conversation has its own resource directory
Prompt Assembly
How Alice constructs the prompt text sent to LLM backends.
Template System
Alice uses Go text/template with Sprig functions for prompt templating. Templates are .md.tmpl files.
Template Loading
1. Check disk: <prompt_dir>/<template>.tmpl
2. If not found, use embedded (compiled into binary)
Disk files override embedded templates, allowing per-bot customization.
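For example, to override the /help template for the bot my_bot, assuming the default prompt_dir layout described in the Configuration Manual:
mkdir -p ~/.alice/bots/my_bot/prompts/connector
"${EDITOR:-vi}" ~/.alice/bots/my_bot/prompts/connector/help.md.tmpl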
Template Files
All templates live under prompts/:
| Template | Purpose |
|---|---|
connector/bot_soul.md.tmpl | Injects SOUL.md body into the prompt |
connector/current_user_input.md.tmpl | Formats the current user message |
connector/reply_context.md.tmpl | Adds context from a replied-to message |
connector/runtime_skill_hint.md.tmpl | Describes available bundled skills |
connector/synthetic_mention.md.tmpl | Formats synthetic @mentions |
connector/help.md.tmpl | The /help command response |
llm/initial_prompt.md.tmpl | First-turn system instructions |
goals/goal_start.tmpl | Goal initialization prompt |
goals/goal_continue.tmpl | Goal continuation prompt |
goals/goal_timeout.tmpl | Goal timeout notification |
Template Variables
Templates have access to the full Job context and session metadata. Key variables include:
| Variable | Description |
|---|---|
.UserText | The user's message text |
.BotName | Display name of the responding bot |
.SenderName | Name of the user who sent the message |
.MentionedUsers | List of users @mentioned in the message |
.ReplyContext | Text of the message being replied to |
.Attachments | Inbound attachment metadata |
.Scene | "chat" or "work" |
.SessionKey | Canonical session identifier |
.SoulBody | SOUL.md body content (chat only) |
.SkillDescriptions | Descriptions of enabled bundled skills |
First Turn vs Resume
The critical difference in prompt assembly:
First Turn (No Existing Thread)
- Full initial prompt is assembled
- Includes system instructions (`initial_prompt.md.tmpl`)
- In chat scenes: SOUL.md body is prepended
- Identity hints (`Name说:` prefixes, @mention rules) unless `disable_identity_hints: true`
Resume (Existing Provider Thread)
- Only the current user's message text is sent
- Alice relies on the provider-side thread/session to hold prior context
- No system prompt, no SOUL.md, no identity hints
- This is more efficient — the backend model already has full conversation history
SOUL.md Injection
SOUL.md serves two purposes controlled by the scene:
Chat Scene
- Alice reads the file, parses the YAML frontmatter
- Frontmatter keys (`image_refs`, `output_contract`) are consumed by Alice for reply control
- The remaining Markdown body is prepended to the first-turn prompt via `bot_soul.md.tmpl`
Work Scene
SOUL.md is intentionally skipped entirely. Work mode is for task execution — persona injection would interfere with tool use and code generation.
Identity Hints
When disable_identity_hints: false (default), Alice formats messages with identity context:
张三说:fix the login timeout
When disable_identity_hints: true, the raw message text is passed through as-is:
fix the login timeout
Prompt Prefix
Each LLM profile can have a prompt_prefix:
llm_profiles:
work:
prompt_prefix: "You are a senior Go engineer. Be concise, use idiomatic patterns."
This text is prepended to every prompt for that profile, including resumed sessions.
Prompts and Debugging
With log_level: debug, Alice logs the fully rendered prompt sent to each backend. Debug traces include:
- Provider name
- Model and profile
- Thread/session ID
- The complete rendered input text
- Observed tool activity and final output
Warning: Rendered prompts may contain SOUL.md content and conversation history. Avoid sharing debug logs publicly.
Automation Subsystem
Alice's automation engine schedules and executes recurring tasks, workflows, and system maintenance.
Architecture
The automation subsystem (internal/automation/) uses a tick-based execution model with persistent storage.
Automation Engine
├─ Tick Scheduler (periodic loop)
│ ├─ Claim due tasks
│ ├─ Execute tasks (send_text / run_llm / run_workflow)
│ └─ Handle completion / failures
├─ System Task Scheduler
│ ├─ Session state flush
│ └─ Campaign reconcile
├─ Watchdog
│ └─ Alert on overdue or stuck tasks
└─ Store (bbolt)
└─ Task persistence
Task Model
Scope
A task's scope defines where it executes:
| Scope | Description |
|---|---|
user | Scoped to a specific user (DM context) |
chat | Scoped to a specific group chat |
Actions
| Action | Description |
|---|---|
send_text | Send a predetermined text message to the scope |
run_llm | Run an LLM call with a specified prompt in the scope |
run_workflow | Run a multi-step workflow combining LLM calls and actions |
Scheduling
Tasks can be scheduled in two ways:
- Cron expressions: `"0 9 * * *"` — runs at 9 AM daily
- One-shot timestamps: ISO 8601 — runs once at the specified time
Task Lifecycle
Created → Active → Claimed → Executing → Completed
↓
Failed → Active (retry) / Cancelled
- Due tasks are claimed on a periodic tick (one claim per tick)
- Claimed tasks are executed in the scope's conversation context
- Completed tasks with cron expressions are re-scheduled for the next occurrence
- Failed tasks may be retried or cancelled
- Cancelled tasks are deleted or marked inactive
Execution Model
When a task executes:
- The engine acquires the session gate for the task's scope
- The task inherits the same conversation context as interactive runs:
  - Same workspace directory
  - Same LLM profile and permissions
  - Same environment variables
- For `run_llm` and `run_workflow`, the task's prompt is sent to the LLM backend
- Replies are dispatched to the task's scope (chat or user DM)
User messages can interrupt automation tasks that have acquired the session gate.
System Tasks
Alice registers built-in system tasks during bootstrap:
| Task | Interval | Purpose |
|---|---|---|
| Session state flush | Periodic | Persist in-memory session state to session_state.json |
| Campaign reconcile | Periodic | Sync campaign repository state |
Watchdog
The watchdog monitors automation tasks for anomalies:
- Overdue tasks: Tasks past their scheduled time that haven't been claimed
- Stuck tasks: Tasks that have been executing for too long
When the watchdog detects an issue, it can:
- Log a warning
- Send an alert message to a configured chat
- Force-cancel the stuck task
Storage
Tasks are persisted in a local bbolt database:
~/.alice/bots/<bot_id>/run/connector/automation.db
This survives process restarts. The store supports:
- CRUD operations on tasks
- Querying by scope, status, and due time
- Atomic claim-and-update to prevent duplicate execution
Managing Tasks
Via Runtime API
alice runtime automation create '{
"scope_type": "chat",
"scope_id": "oc_xxxxxxxxxxxxx",
"action": "send_text",
"text": "Daily standup reminder!",
"cron": "0 10 * * 1-5"
}'
Via Bundled Skills
The alice-scheduler skill lets users create and manage tasks directly from Feishu conversations.
See the Runtime API Reference for the complete task management endpoints.
Runtime API Design
The rationale and design decisions behind Alice's local HTTP API.
Why a Local API?
Alice runs bundled skills as subprocess scripts. These scripts need to interact with the running connector — send images, manage tasks, query state. Direct Go interop isn't possible from shell scripts, so Alice exposes a local HTTP API.
Design Principles
1. Local-Only by Default
The API binds to 127.0.0.1 (configurable via runtime_http_addr). It is not designed to be exposed to the network. If you need remote access, use a reverse proxy or SSH tunnel — but this is not the intended use case.
2. Bearer Token Auth
Every request (except /healthz) requires:
Authorization: Bearer <token>
The token is auto-generated at startup and injected into skill scripts automatically via the ALICE_RUNTIME_API_TOKEN environment variable.
3. Defense in Depth
Multiple layers of protection:
- Auth rate limiting: 120 requests per minute per token
- Body size limit: 1 MB per request
- File validation: Uploaded files must be readable, non-empty regular files
- Feishu size limits: Uploads are still subject to Feishu's file size and type restrictions
4. No Text Send Endpoint
The runtime API does not have a plain text send endpoint. Why?
Text replies normally go through the main reply pipeline — they need session context, proper threading, and reply metadata. The runtime API is designed for side-effects (sending images, files, managing tasks), not for bypassing the reply pipeline.
If a skill needs to send a text message as an automation task, it creates a send_text task via the automation API. The automation engine handles the rest.
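As a sketch, a skill script could create such a one-off task using the injected environment variables. The JSON field names follow the task-creation example in the Automation docs; verify them against the Runtime API Reference:
curl -sS -X POST \
  -H "Authorization: Bearer ${ALICE_RUNTIME_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "{\"scope_type\":\"chat\",\"scope_id\":\"${ALICE_RECEIVE_ID}\",\"action\":\"send_text\",\"text\":\"Build finished.\"}" \
  "${ALICE_RUNTIME_API_BASE_URL}/api/v1/automation/tasks"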
API Surface
Messages
- `POST /api/v1/messages/image` — upload and send an image
- `POST /api/v1/messages/file` — upload and send a file
Both accept multipart/form-data with an optional caption field.
Automation
- Full CRUD for automation tasks: create, list, get, update, delete
Goal
- Goal lifecycle management: get, create, pause, resume, complete, delete
Health
- `GET /healthz` — no auth, responds 200 if the server is running
Environment Variable Injection
Skills don't need to know the API address or token. Alice injects them:
ALICE_RUNTIME_API_BASE_URL="http://127.0.0.1:7331"
ALICE_RUNTIME_API_TOKEN="<auto-generated>"
ALICE_RUNTIME_BIN="/usr/local/bin/alice"
Additional context variables:
ALICE_RECEIVE_ID_TYPE="chat_id"
ALICE_RECEIVE_ID="oc_xxxxxxxxxxxxx"
ALICE_SOURCE_MESSAGE_ID="om_xxxxxxxxxxxxx"
ALICE_ACTOR_USER_ID="ou_xxxxxxxxxxxxx"
ALICE_ACTOR_OPEN_ID="ou_xxxxxxxxxxxxx"
ALICE_CHAT_TYPE="group"
ALICE_SESSION_KEY="chat_id:oc_xxx|scene:chat"
Multi-Bot API Ports
In multi-bot mode, each bot gets its own Runtime API server on an auto-incremented port:
| Bot Index | Port |
|---|---|
| First | 7331 |
| Second | 7332 |
| Third | 7333 |
| ... | ... |
Skills target the correct bot by inheriting environment variables from the conversation scope they're triggered in.
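For example, in a two-bot deployment you can probe each bot's API (ports auto-increment from 7331 as described above):
for port in 7331 7332; do
  printf '%s: ' "$port"
  curl -s "http://127.0.0.1:${port}/healthz"
  echo
done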
Graceful Shutdown
When Alice receives a shutdown signal, the Runtime API server:
- Stops accepting new connections
- Waits up to `runtime_api_shutdown_timeout_secs` (default: 5s) for in-flight requests to complete
- Force-closes remaining connections after timeout
Extending the API
New endpoints can be added in internal/runtimeapi/. See the Contributing guide and Architecture Overview for developer guidance.
Configuration Manual
Complete reference for every configuration key in config.yaml. Structure follows the file layout.
Top-Level Keys
bots (required)
Map of bot IDs to bot configurations. Each key under bots is a bot ID used as the runtime identifier.
bots:
engineering_bot:
# bot config...
support_bot:
# bot config...
log_level
| Field | Value |
|---|---|
| Type | string |
| Default | "info" |
| Values | "debug", "info", "warn", "error" |
Structured log level for the entire process.
log_file
| Field | Value |
|---|---|
| Type | string |
| Default | "" (auto: <ALICE_HOME>/log/YYYY-MM-DD.log) |
Log file path with daily rotation. Empty uses the default.
log_max_size_mb
| Field | Value |
|---|---|
| Type | int |
| Default | 20 |
Maximum log file size in megabytes before rotation.
log_max_backups
| Field | Value |
|---|---|
| Type | int |
| Default | 5 |
Maximum number of rotated log files to retain.
log_max_age_days
| Field | Value |
|---|---|
| Type | int |
| Default | 7 |
Maximum days to keep rotated log files.
log_compress
| Field | Value |
|---|---|
| Type | bool |
| Default | false |
Whether to gzip rotated log files.
bots.<id>
Each bot is identified by its key and configured with the following fields.
name
| Field | Value |
|---|---|
| Type | string |
| Required | No |
Display name used in prompts and status cards. Defaults to the bot ID.
feishu_app_id (required)
| Field | Value |
|---|---|
| Type | string |
| Required | Yes |
Feishu Open Platform App ID (cli_...).
feishu_app_secret (required)
| Field | Value |
|---|---|
| Type | string |
| Required | Yes |
Feishu Open Platform App Secret. Keep this value secure.
feishu_base_url
| Field | Value |
|---|---|
| Type | string |
| Default | "https://open.feishu.cn" |
Feishu API base URL. Use "https://open.larksuite.com" for Lark (international edition).
Runtime Directories
alice_home
| Field | Value |
|---|---|
| Type | string |
| Default | "<ALICE_HOME>/bots/<bot_id>" |
Bot-specific runtime root directory. All per-bot state lives under this path.
workspace_dir
| Field | Value |
|---|---|
| Type | string |
| Default | "<alice_home>/workspace" |
Agent workspace directory. This is the working directory for LLM subprocesses.
prompt_dir
| Field | Value |
|---|---|
| Type | string |
| Default | "<alice_home>/prompts" |
Directory for bot-specific prompt template overrides. Files here override embedded templates.
codex_home
| Field | Value |
|---|---|
| Type | string |
| Default | "$CODEX_HOME" or "~/.codex" |
Codex configuration and authentication directory. Shared across bots by default unless overridden here.
soul_path
| Field | Value |
|---|---|
| Type | string |
| Default | "SOUL.md" (relative to alice_home) |
Path to the SOUL.md persona document. Relative paths resolve against alice_home. If the file doesn't exist at startup, Alice writes the embedded template.
Message Trigger (Legacy)
trigger_mode
| Field | Value |
|---|---|
| Type | string |
| Default | "at" |
| Values | "at", "prefix", "all" |
Legacy trigger mode. Only used when both group_scenes.chat.enabled and group_scenes.work.enabled are false.
| Value | Behavior |
|---|---|
"at" | Only @bot messages accepted |
"prefix" | Only messages starting with trigger_prefix |
"all" | Every message accepted |
trigger_prefix
| Field | Value |
|---|---|
| Type | string |
| Default | "" |
Prefix string for trigger_mode: "prefix".
llm_profiles
Map of profile names to LLM profile configurations. Each profile selects a provider, model, and settings.
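For orientation, a minimal `llm_profiles` block might look like the sketch below. The profile names, model strings, and chosen values are illustrative; each field is documented in the sections that follow.

```yaml
llm_profiles:
  chat_default:
    provider: "opencode"
    model: "deepseek/deepseek-v4-pro"
  work_codex:
    provider: "codex"
    model: "gpt-5.4-mini"
    reasoning_effort: "high"
    permissions:
      sandbox: "workspace-write"
      ask_for_approval: "never"
```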
Profile Fields
provider (required)
| Field | Value |
|---|---|
| Type | string |
| Values | "opencode", "codex", "claude", "gemini", "kimi" |
LLM backend provider.
command
| Field | Value |
|---|---|
| Type | string |
| Default | Same as provider (e.g., "opencode") |
Path or name of the CLI binary. Use an absolute path for binaries outside $PATH.
timeout_secs
| Field | Value |
|---|---|
| Type | int |
| Default | 172800 (48 hours) |
Per-run timeout in seconds.
model (required)
| Field | Value |
|---|---|
| Type | string |
Model identifier. Examples: "deepseek/deepseek-v4-pro", "gpt-5.4-mini", "claude-sonnet-4-6".
variant (OpenCode only)
| Field | Value |
|---|---|
| Type | string |
| Values | "max", "high", "minimal" |
Model variant for DeepSeek models via OpenCode.
profile (Codex only)
| Field | Value |
|---|---|
| Type | string |
Named sub-profile from Codex CLI configuration.
reasoning_effort (Codex only)
| Field | Value |
|---|---|
| Type | string |
| Values | "low", "medium", "high", "xhigh" |
Thinking intensity level.
personality (Codex only)
| Field | Value |
|---|---|
| Type | string |
Named personality preset from Codex CLI config.
prompt_prefix
| Field | Value |
|---|---|
| Type | string |
| Default | "" |
Text prepended to every prompt before sending to the model.
permissions
| Field | Value |
|---|---|
| Type | object |
Sandbox and approval settings.
permissions.sandbox
| Field | Value |
|---|---|
| Type | string |
| Default | "workspace-write" |
| Values | "read-only", "workspace-write", "danger-full-access" |
Filesystem access level for the LLM agent.
permissions.ask_for_approval
| Field | Value |
|---|---|
| Type | string |
| Default | "never" |
| Values | "untrusted", "on-request", "never" |
When the agent should ask for approval before executing tool calls.
permissions.add_dirs
| Field | Value |
|---|---|
| Type | string[] |
| Default | [] |
Extra directories accessible to the agent beyond the workspace.
profile_overrides
| Field | Value |
|---|---|
| Type | map[string]ProfileRunnerConfig |
| Default | {} |
Advanced: per-profile runner overrides. Keys are profile names. Each override can set:
- `command` — binary path override
- `timeout` — timeout override (seconds)
- `provider_profile` — provider-specific profile name
- `exec_policy` — per-override sandbox and approval settings
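A hypothetical override for a single profile might look like this; the profile name and values are illustrative.

```yaml
profile_overrides:
  work_codex:
    command: "/usr/local/bin/codex"   # use a specific binary for this profile
    timeout: 3600                     # per-run timeout override, in seconds
```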
env
| Field | Value |
|---|---|
| Type | map[string]string |
| Default | {} |
Environment variables passed to all LLM subprocesses. Useful for PATH, proxy settings (HTTPS_PROXY, ALL_PROXY), and API keys.
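For example, a profile that needs to reach its provider through a local proxy might set the following (proxy addresses illustrative):

```yaml
env:
  HTTPS_PROXY: "http://127.0.0.1:7890"
  ALL_PROXY: "socks5://127.0.0.1:7890"
```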
Reply Messages
failure_message
| Field | Value |
|---|---|
| Type | string |
| Default | "暂时不可用,请稍后重试。" |
Message shown when the LLM backend fails. The default string translates to "Temporarily unavailable, please try again later."
thinking_message
| Field | Value |
|---|---|
| Type | string |
| Default | "正在思考中..." |
Message shown in the progress card while the LLM is processing. The default string translates to "Thinking..."
Immediate Feedback
immediate_feedback_mode
| Field | Value |
|---|---|
| Type | string |
| Default | "reaction" |
| Values | "reaction", "reply" |
How Alice acknowledges a received message before the LLM responds.
immediate_feedback_reaction
| Field | Value |
|---|---|
| Type | string |
| Default | "OK" |
Feishu emoji name for the reaction feedback (e.g., "OK", "WINK", "THUMBSUP").
group_scenes
group_scenes.chat
| Field | Value |
|---|---|
| Type | object |
Chat scene configuration for group / topic-group chats.
enabled
| Field | Value |
|---|---|
| Type | bool |
| Default | true |
session_scope
| Field | Value |
|---|---|
| Type | string |
| Default | "per_chat" |
| Values | "per_chat", "per_thread" |
llm_profile
| Field | Value |
|---|---|
| Type | string |
Name of the LLM profile under llm_profiles to use.
no_reply_token
| Field | Value |
|---|---|
| Type | string |
| Default | "" |
If the model returns this exact string, Alice stays silent.
create_feishu_thread
| Field | Value |
|---|---|
| Type | bool |
| Default | false |
Whether to create a Feishu thread for replies.
group_scenes.work
| Field | Value |
|---|---|
| Type | object |
Work scene configuration for group / topic-group chats.
enabled
| Field | Value |
|---|---|
| Type | bool |
| Default | true |
trigger_tag
| Field | Value |
|---|---|
| Type | string |
| Default | "#work" |
Tag required in the message (after @bot mention) to trigger work mode.
session_scope
| Field | Value |
|---|---|
| Type | string |
| Default | "per_thread" |
| Values | "per_thread", "per_chat" |
llm_profile
| Field | Value |
|---|---|
| Type | string |
create_feishu_thread
| Field | Value |
|---|---|
| Type | bool |
| Default | true |
no_reply_token
| Field | Value |
|---|---|
| Type | string |
| Default | "" |
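Putting the scene fields together, a `group_scenes` block could look like the following sketch. The profile names refer to the illustrative `llm_profiles` example earlier in this manual; the remaining values mirror the documented defaults.

```yaml
group_scenes:
  chat:
    enabled: true
    session_scope: "per_chat"
    llm_profile: "chat_default"
  work:
    enabled: true
    trigger_tag: "#work"
    session_scope: "per_thread"
    llm_profile: "work_codex"
    create_feishu_thread: true
```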
private_scenes
Same structure as group_scenes. Both chat and work sub-sections are disabled by default.
private_scenes.chat
Additional session scope: "per_user" — all DM messages from the same user share one session.
private_scenes.work
Additional session scope: "per_message" — each DM with #work creates a fresh session.
Runtime HTTP API
runtime_http_addr
| Field | Value |
|---|---|
| Type | string |
| Default | "127.0.0.1:7331" |
Listen address for the runtime HTTP API. Multi-bot setups auto-increment the port (7332, 7333, ...).
runtime_http_token
| Field | Value |
|---|---|
| Type | string |
| Default | auto-generated |
Bearer token for API authentication. Auto-generated if empty. Set explicitly for cross-process calls.
permissions
runtime_message
| Field | Value |
|---|---|
| Type | bool |
| Default | true |
Allow bundled skills to send messages via the runtime API.
runtime_automation
| Field | Value |
|---|---|
| Type | bool |
| Default | true |
Allow bundled skills to manage automation tasks via the runtime API.
allowed_skills
| Field | Value |
|---|---|
| Type | string[] |
| Default | ["alice-message", "alice-scheduler"] |
Bundled skills enabled for this bot. Built-in skills: alice-message, alice-scheduler, alice-goal.
Worker Pool
queue_capacity
| Field | Value |
|---|---|
| Type | int |
| Default | 256 |
Maximum pending jobs. Beyond this, new messages are dropped.
worker_concurrency
| Field | Value |
|---|---|
| Type | int |
| Default | 3 |
Number of concurrent workers processing jobs.
Timeouts
All values in seconds.
automation_task_timeout_secs
| Field | Value |
|---|---|
| Type | int |
| Default | 6000 |
Outer timeout for scheduled automation and workflow runs.
auth_status_timeout_secs
| Field | Value |
|---|---|
| Type | int |
| Default | 15 |
Timeout for provider auth status checks on startup.
runtime_api_shutdown_timeout_secs
| Field | Value |
|---|---|
| Type | int |
| Default | 5 |
Grace period when shutting down the runtime HTTP API server.
local_runtime_store_open_timeout_secs
| Field | Value |
|---|---|
| Type | int |
| Default | 10 |
Timeout for opening the local BoltDB runtime store on startup.
codex_idle_timeout_secs
| Field | Value |
|---|---|
| Type | int |
| Default | 900 |
Codex idle timeout for default/medium reasoning effort.
codex_high_idle_timeout_secs
| Field | Value |
|---|---|
| Type | int |
| Default | 1800 |
Codex idle timeout for high reasoning effort.
codex_xhigh_idle_timeout_secs
| Field | Value |
|---|---|
| Type | int |
| Default | 3600 |
Codex idle timeout for xhigh reasoning effort.
Display Options
show_shell_commands
| Field | Value |
|---|---|
| Type | bool |
| Default | true |
Show recently executed shell commands in the heartbeat status card.
disable_identity_hints
| Field | Value |
|---|---|
| Type | bool |
| Default | false |
When true, messages are sent to the LLM as raw text without identity context (the `Name说:` speaker prefix, i.e. "Name says:", and @mention rules). When false (default), identity hints are included.
Runtime HTTP API
Alice exposes a local authenticated HTTP API on 127.0.0.1. Bundled skills, automation scripts, and thin runtime tools use this API.
Authentication
All endpoints (except /healthz) require a Bearer token:
Authorization: Bearer <token>
The token is from bots.<id>.runtime_http_token in config, or auto-generated if empty.
Base URL
Default: http://127.0.0.1:7331. Multi-bot setups auto-increment: 7332, 7333, etc.
Limits
- Request body: 1 MB maximum
- Auth rate limit: 120 requests per minute
- List endpoints: 200 items maximum per request
Health
GET /healthz
No authentication required.
Response 200 OK:
{"status": "ok"}
Messages
POST /api/v1/messages/image
Send an image to the current conversation.
Request multipart/form-data:
| Field | Type | Required | Description |
|---|---|---|---|
image | file | Yes | Image file to upload |
caption | string | No | Optional caption text |
Response 200 OK:
{"message_id": "om_xxxxxxxxxxxxx"}
Errors:
| Code | Description |
|---|---|
400 | Invalid or missing image file |
403 | permissions.runtime_message is disabled |
413 | Request body exceeds 1 MB |
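As a sketch, uploading an image with a caption might look like this; the file name and caption are illustrative, and `$TOKEN` stands for your `runtime_http_token`.

```bash
curl -sS -X POST "http://127.0.0.1:7331/api/v1/messages/image" \
  -H "Authorization: Bearer ${TOKEN}" \
  -F "image=@./chart.png" \
  -F "caption=Daily chart"
```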
POST /api/v1/messages/file
Send a file to the current conversation.
Request multipart/form-data:
| Field | Type | Required | Description |
|---|---|---|---|
file | file | Yes | File to upload |
filename | string | No | Display filename (default: original filename) |
caption | string | No | Optional caption text |
Response 200 OK:
{"message_id": "om_xxxxxxxxxxxxx"}
Errors:
| Code | Description |
|---|---|
400 | Invalid or missing file |
403 | permissions.runtime_message is disabled |
413 | Request body exceeds 1 MB |
Automation Tasks
GET /api/v1/automation/tasks
List automation tasks.
Query Parameters:
| Param | Type | Default | Description |
|---|---|---|---|
limit | int | 50 | Items per page (max 200) |
offset | int | 0 | Pagination offset |
status | string | — | Filter by status: active, completed, cancelled |
Response 200 OK:
[
{
"id": "task_abc123",
"scope_type": "chat",
"scope_id": "oc_xxxxxxxxxxxxx",
"action": "send_text",
"status": "active",
"cron": "0 9 * * *",
"created_at": "2025-01-15T09:00:00Z",
"updated_at": "2025-01-15T09:00:00Z"
}
]
POST /api/v1/automation/tasks
Create an automation task.
Request application/json:
| Field | Type | Required | Description |
|---|---|---|---|
scope_type | string | Yes | "chat" or "user" |
scope_id | string | Yes | Target ID (chat_id or user_id) |
action | string | Yes | "send_text", "run_llm", or "run_workflow" |
text | string | For send_text | Message text |
prompt | string | For run_llm | LLM prompt |
cron | string | No | Cron expression for recurring tasks |
run_at | string | No | ISO 8601 timestamp for one-shot tasks |
Response 201 Created:
{
"id": "task_abc123",
"scope_type": "chat",
"scope_id": "oc_xxxxxxxxxxxxx",
"action": "send_text",
"status": "active",
"cron": "0 9 * * *",
"created_at": "2025-01-15T09:00:00Z"
}
Errors:
| Code | Description |
|---|---|
400 | Invalid request body or missing required fields |
403 | permissions.runtime_automation is disabled |
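For example, a recurring `send_text` reminder could be created like this (the chat ID, text, and schedule are illustrative; `$TOKEN` stands for your `runtime_http_token`):

```bash
curl -sS -X POST "http://127.0.0.1:7331/api/v1/automation/tasks" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "scope_type": "chat",
        "scope_id": "oc_xxxxxxxxxxxxx",
        "action": "send_text",
        "text": "Stand-up starts in 10 minutes",
        "cron": "50 8 * * *"
      }'
```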
GET /api/v1/automation/tasks/:taskID
Get a single automation task.
Response 200 OK: Same schema as list item.
Errors:
| Code | Description |
|---|---|
404 | Task not found |
PATCH /api/v1/automation/tasks/:taskID
Update an automation task. Send a JSON merge-patch with fields to update.
Request application/json:
{"status": "cancelled"}
Updatable fields: status, cron, run_at, text, prompt.
Response 200 OK: Updated task object.
Errors:
| Code | Description |
|---|---|
400 | Invalid update |
403 | permissions.runtime_automation is disabled |
404 | Task not found |
DELETE /api/v1/automation/tasks/:taskID
Delete an automation task.
Response 204 No Content
Errors:
| Code | Description |
|---|---|
403 | permissions.runtime_automation is disabled |
404 | Task not found |
Goal
GET /api/v1/goal
Get the current active goal for the conversation scope.
Response 200 OK:
{
"id": "goal_xyz",
"description": "Review PR #42",
"status": "in_progress",
"created_at": "2025-01-15T10:00:00Z"
}
Response 204 No Content: No active goal.
POST /api/v1/goal
Create a new goal for the conversation scope.
Request application/json:
{"description": "Review PR #42"}
Response 201 Created: Created goal object.
POST /api/v1/goal/pause
Pause the active goal.
Response 200 OK.
POST /api/v1/goal/resume
Resume a paused goal.
Response 200 OK.
POST /api/v1/goal/complete
Mark the active goal as completed.
Response 200 OK.
DELETE /api/v1/goal
Delete the active goal.
Response 204 No Content.
Common Error Response Format
All errors follow this format:
{
"error": "Human-readable error description"
}
HTTP status codes are used conventionally: 400 for client errors, 403 for permission denied, 404 for not found, 413 for payload too large, 429 for rate limited, 500 for internal errors.
CLI Commands
Alice provides several CLI subcommands for different operations.
Main Process
alice --feishu-websocket
Start the full Feishu connector runtime. Connects to Feishu WebSocket and processes live messages.
alice --feishu-websocket
alice --runtime-only
Start in runtime-only mode. The local HTTP API and automation engine run, but the Feishu WebSocket does not start.
alice --runtime-only
alice-headless --runtime-only
Headless runtime-only binary. Explicitly cannot start the Feishu connector.
alice-headless --runtime-only
`alice-headless` will error if invoked with `--feishu-websocket`.
Global Flags
| Flag | Description |
|---|---|
--alice-home <path> | Override the default runtime home directory |
--config <path> | Path to config.yaml (default: <alice_home>/config.yaml) |
--log-level <level> | Override log level (debug, info, warn, error) |
--version | Print version and exit |
Environment variable ALICE_HOME also overrides the default home directory.
alice setup
Initialize the Alice runtime environment.
alice setup
What it does:
- Creates the directory structure under `~/.alice/`
- Writes a starter `config.yaml` (based on `config.example.yaml`)
- Syncs bundled skills to `${ALICE_HOME}/skills/`
- On Linux: registers a systemd user unit at `~/.config/systemd/user/alice.service`
- Installs the OpenCode delegate plugin at `~/.config/opencode/plugins/alice-delegate.js`
Run this once after installation.
alice delegate
Send a one-shot prompt to a configured LLM backend.
alice delegate --provider <name> --prompt "<text>"
Options
| Flag | Description |
|---|---|
--provider <name> | Backend: opencode, codex, claude, gemini, kimi |
--prompt <text> | Prompt text (required) |
--model <name> | Override the default model |
--workspace <path> | Override the working directory |
Examples
alice delegate --provider codex --prompt "Fix the null check in auth.go"
alice delegate --provider claude --prompt "Review this diff" < changes.patch
alice runtime message
Send messages via the runtime API.
alice runtime message image <path> [--caption <text>]
alice runtime message file <path> [--filename <name>] [--caption <text>]
| Subcommand | Description |
|---|---|
image <path> | Upload and send an image |
file <path> | Upload and send a file |
| Flag | Description |
|---|---|
--caption <text> | Optional caption text |
--filename <name> | Override file display name (file only) |
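Typical invocations, with illustrative file names and captions:

```bash
alice runtime message image ./build-report.png --caption "Nightly build results"
alice runtime message file ./release-notes.md --filename "RELEASE-NOTES.md" --caption "Draft notes"
```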
alice runtime automation
Manage automation tasks via the runtime API.
alice runtime automation list [--status <status>] [--limit <n>]
alice runtime automation create <payload>
alice runtime automation get <task-id>
alice runtime automation update <task-id> <payload>
alice runtime automation delete <task-id>
| Subcommand | Description |
|---|---|
list | List automation tasks |
create <json> | Create a task from JSON payload |
get <id> | Get a single task |
update <id> <json> | Update a task with JSON merge-patch |
delete <id> | Delete a task |
| Flag (list) | Description |
|---|---|
--status | Filter by status: active, completed, cancelled |
--limit | Items per page |
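For example (payload values illustrative):

```bash
alice runtime automation create '{"scope_type":"chat","scope_id":"oc_xxxxxxxxxxxxx","action":"run_llm","prompt":"Summarize the open TODOs in the workspace","cron":"0 9 * * 1-5"}'
alice runtime automation list --status active --limit 20
```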
alice runtime goal
Manage the active goal for a conversation scope.
alice runtime goal get
alice runtime goal create <description>
alice runtime goal pause
alice runtime goal resume
alice runtime goal complete
alice runtime goal delete
| Subcommand | Description |
|---|---|
get | Get the current active goal |
create <desc> | Create a new goal |
pause | Pause the active goal |
resume | Resume a paused goal |
complete | Mark the active goal as completed |
delete | Delete the active goal |
alice skills
Manage bundled skills.
alice skills sync
alice skills list
| Subcommand | Description |
|---|---|
sync | Sync embedded bundled skills to the local skills directory |
list | List installed bundled skills |
alice skills sync is also run automatically at startup.
Exit Codes
| Code | Meaning |
|---|---|
0 | Success |
1 | General error |
2 | Configuration error |
3 | Authentication error |
Architecture Overview
This is the code-first architecture reference for Alice. Package names, runtime objects, and file paths match the live code under cmd/connector, internal/, prompts/, and skills/.
Reading Paths
Start with the section that matches your goal:
| Goal | Start here |
|---|---|
| Understand the whole system | §1 Process Model → §2 Bootstrap Path → §5 Message Pipeline |
| Add a new LLM backend | §2 Bootstrap Path → §7 Prompt Assembly → Adding a New LLM Backend |
| Modify message handling | §5 Inbound Message Pipeline → §6 Session Keys → §8 Reply Dispatch |
| Add a Runtime API endpoint | §9 Runtime API |
| Add or modify automation | §10 Automation Subsystem |
| Understand configuration | §2 Bootstrap Path → §12 Configuration Model |
1. Process Model
Alice is a multi-bot runtime. One alice process can host multiple bots from one config.yaml.
At startup, the process:
- Loads `config.yaml`
- Expands `bots.*` into per-bot runtime configs
- Verifies CLI auth where needed
- Syncs embedded bundled skills into the local skill directories
- Builds one `ConnectorRuntime` per bot
- Runs all runtimes under one `RuntimeManager`
The main runtime object per bot:
ConnectorRuntime
├─ App
├─ Processor
├─ llm.MultiBackend
├─ LarkSender
├─ automation.Engine
├─ runtimeapi.Server
├─ automation.Store
└─ campaign.Store
Startup mode is explicit:
- `--feishu-websocket`: connect to Feishu and process live events
- `--runtime-only`: run automation and the local runtime API without the Feishu WebSocket
- `alice-headless`: runtime-only only; may not start the Feishu connector
2. Bootstrap Path
The process entrypoint is cmd/connector.
Key bootstrap steps:
- `cmd/connector/root.go`: CLI flags, startup mode selection, config creation, PID locking, logging, auth preflight, bundled-skill sync, and runtime manager startup.
- `internal/config`: Pure multi-bot config model, path derivation, normalization, validation, and per-bot runtime expansion.
- `internal/bootstrap`: Builds the per-bot runtime graph and wires cross-cutting features such as prompt loading, runtime API auth, campaign reconcile loops, and config hot reload.
BuildRuntimeManager expands Config into []Config via RuntimeConfigs(), then builds one ConnectorRuntime for each bot.
Current hot-reload behavior:
- Single-bot mode: partial config hot reload is supported
- Multi-bot mode: hot reload is intentionally disabled; restart the process after config changes
3. Runtime Layout And Persisted State
Each bot gets its own runtime root under:
${ALICE_HOME}/bots/<bot_id>/
Important per-bot paths:
- `workspace/` — Bot workspace
- `prompts/` — Optional prompt overrides for that bot
- `run/connector/automation.db` — Persistent automation task store (bbolt)
- `run/connector/campaigns.db` — Persistent lightweight campaign index (bbolt)
- `run/connector/session_state.json` — Session aliases, provider thread ids, usage counters, work-thread metadata
- `run/connector/runtime_state.json` — Mutable connector runtime state
- `run/connector/resources/scopes/<scope_type>/<scope_id>/` — Downloaded inbound attachments and uploadable local artifacts scoped to the current conversation
The source tree also embeds:
- `prompts/`
- `skills/`
- `config.example.yaml`
- `prompts/SOUL.md.example`
Disk files override embedded prompt files when present; embedded assets are the fallback.
4. Package Map
Core Packages
| Package | Responsibility |
|---|---|
cmd/connector | CLI entrypoint, runtime subcommands, and skills sync |
internal/bootstrap | Runtime construction, path resolution, auth checks, skill materialization, campaign reconcile bridging, and config reload |
internal/config | Config schema, validation, defaults, path derivation, and multi-bot expansion |
internal/connector | Feishu ingress, message normalization, scene routing, queueing, session serialization, native steer fallback, /stop interruption, prompt assembly, reply dispatch, attachment download, session persistence, and built-in commands |
internal/llm | Provider-agnostic Backend interface plus provider adapters for codex, claude, gemini, kimi, and opencode |
internal/prompting | Template loader with disk-first / embedded-fallback behavior, sprig helpers, and compiled-template caching |
internal/runtimeapi | Local authenticated HTTP server and client used by bundled skills and runtime-facing shell scripts |
internal/automation | Task model, persistence, claiming, execution, system-task scheduling, and workflow dispatch |
internal/statusview | Aggregates usage and automation data for /status |
internal/platform/feishu | Feishu sender implementation, attachment I/O, bot self-info lookup, message lookup, and user-name resolution helpers |
Support Packages
| Package | Responsibility |
|---|---|
internal/sessionctx | Session-context environment bridge for runtime API calls and bundled skills |
internal/runtimecfg | Helpers for scene-derived profile selection and thread-reply preference |
internal/sessionkey | Canonical session-key and visibility-key helpers |
internal/messaging | Narrow sender/uploader interfaces shared across connector and runtime API layers |
internal/storeutil | Shared bbolt helpers and string utilities |
internal/logging | Zerolog plus rotating file output configuration |
internal/buildinfo | Version reporting |
5. Inbound Message Pipeline
internal/connector.App owns the live Feishu connection and the per-bot job queue.
High-level flow:
- Feishu delivers `im.message.receive_v1` over WebSocket
- `App` normalizes the event into a `Job`
- `routeIncomingJob` decides whether the message should be ignored, treated as a built-in command, handled as `chat`, or handled as `work`
- If the same session has an active provider-native interactive run, Alice first tries to steer the new input into that run
- If native steer is unavailable, the job is queued and serialized by session; newer queued jobs supersede older queued jobs without interrupting the active LLM run
- `/stop` still interrupts the active run, and user messages can still interrupt automation tasks that acquired the session gate
- `Processor` executes the accepted job
Scene routing rules:
- Group/topic-group chats can use `group_scenes.chat` and `group_scenes.work`
- Work threads are identified by a trigger plus a stable work-scene session key
- If both scenes are disabled, Alice falls back to legacy `trigger_mode` / `trigger_prefix`
- Built-in commands such as `/help`, `/status`, `/clear`, and `/stop` bypass the LLM path
6. Session Keys, Aliases, And Serialization
Alice routes and resumes work through canonical session keys plus aliases.
Common formats:
- `{receive_id_type}:{receive_id}`
- `{receive_id_type}:{receive_id}|scene:{scene}`
- `{receive_id_type}:{receive_id}|scene:{scene}|thread:{thread_id}`
- `{receive_id_type}:{receive_id}|scene:{scene}|message:{message_id}`
Special cases:
- Work-scene seed key: `{receive_id_type}:{receive_id}|scene:work|seed:{source_message_id}`
- Chat reset alias: `{chat_key}|reset:{message_id}`
Persisted in session_state.json:
- Provider thread id
- Work-thread id alias
- Session aliases
- Usage counters
- Last-message timestamp
- Scope key for status aggregation
internal/connector/runtime_store.go keeps the live in-memory coordination state:
- Latest version per session
- Pending job per session
- Active run cancellation handle
- Per-session mutex for serialization
- Superseded-version tracking
7. Prompt Assembly And LLM Execution
internal/connector.Processor is the execution core for one accepted job.
Before an LLM call it:
- Loads and parses `SOUL.md` if needed
- Downloads inbound attachments into the scoped resource directory
- Derives runtime env vars for the current conversation
- Prepares prompt text
Current prompt assets:
- `prompts/llm/initial_prompt.md.tmpl`
- `prompts/connector/bot_soul.md.tmpl`
- `prompts/connector/current_user_input.md.tmpl`
- `prompts/connector/reply_context.md.tmpl`
- `prompts/connector/runtime_skill_hint.md.tmpl`
- `prompts/connector/synthetic_mention.md.tmpl`
Important prompt behavior:
- First-turn or non-resumed runs render the current-user-input template and may append reply context, bot soul, and runtime-skill hints
- Resumed provider threads send only the current user input; Alice relies on the provider-side thread/session to hold prior context
- `chat` runs can prepend `SOUL.md`; `work` runs intentionally skip bot-soul injection
The LLM layer is selected like this:
- Scene selects an outer `llm_profiles.<name>`
- The outer profile chooses provider / model / profile / reasoning / personality / prompt prefix
- `llm.MultiBackend` dispatches to the correct provider adapter
Currently supported providers: codex, claude, gemini, kimi, opencode
8. Reply Dispatch
Alice distinguishes between:
- Immediate acknowledgement
- Streamed progress messages from the backend
- Final replies
- File/image follow-ups
Current behavior:
- Work-scene messages usually receive an immediate reaction or a `收到!` ("Received!") acknowledgement
- Final replies are posted via the reply dispatcher
- Thread replies fall back to direct replies when Feishu does not support threaded replies for that target
internal/connector/card.go, internal/connector/outgoing_mentions.go, internal/connector/outgoing_plaintext.go, and related files own:
- Message send / reply / patch-card operations
- Reactions
- Upload of images and files
- Attachment download
- Scoped resource-root resolution
9. Runtime API And Bundled Skills
Alice exposes a local authenticated runtime API intended for bundled skills and thin runtime scripts.
Current HTTP surface:
- `POST /api/v1/messages/image`
- `POST /api/v1/messages/file`
- `GET|POST|PATCH|DELETE /api/v1/automation/tasks`
- `GET|POST /api/v1/goal` + pause/resume/complete/delete
There is no standalone text-send endpoint. Plain text is normally returned through the main reply pipeline.
Current safeguards:
- Bearer token auth
- Request-body size limit (1 MB)
- In-process auth rate limiting (120 req/min)
- Local uploads require readable, non-empty regular files and remain subject to Feishu size limits
Runtime-facing shell entrypoints:
- `alice runtime message ...`
- `alice runtime automation ...`
- `alice runtime goal ...`
Bundled skills shipped in the current tree:
- `skills/alice-message`
- `skills/alice-scheduler`
- `skills/alice-goal`
Runtime context is injected through environment variables (see Runtime API Design).
10. Automation Subsystem
internal/automation persists tasks in bbolt and executes them in-process.
Current task scopes: user, chat
Current task actions: send_text, run_llm, run_workflow
Execution model:
- Due tasks are claimed on a periodic tick
- Long-lived system tasks are scheduled separately
- Task env inherits the same conversation context bridge used for interactive runs
- Workflow tasks call the same LLM backend but with workflow-specific agent names, env vars, and workspace hints
Built-in system tasks registered during bootstrap:
- Periodic session/runtime state flush
- Periodic campaign-repo reconcile
11. Configuration Model
The config model is pure multi-bot.
Important keys:
- `bots.<id>`
- `llm_profiles`
- `group_scenes.chat`, `group_scenes.work`
- `private_scenes.chat`, `private_scenes.work`
- `permissions`
- `runtime_http_addr`
- `workspace_dir`, `prompt_dir`, `codex_home`
Behavior worth calling out:
- `RuntimeConfigs()` derives missing bot paths and increments default runtime API ports across bots
- Each outer `llm_profiles` key is a stable runtime selector
- Provider-specific profile selectors still live inside each profile via the inner `profile` field
- Runtime permissions gate bundled skills and runtime API surfaces independently
12. Observability And Debugging
Current observability surfaces:
- Structured logs via `zerolog`
- Rotating log files via `lumberjack`
- Session usage counters stored in `session_state.json`
- `/status` powered by `statusview`
- Per-run markdown debug traces when `log_level=debug`
Debug traces include, when the backend exposes them:
- Provider, agent name, thread/session id, model/profile
- Rendered input, observed tool activity, final output or error
13. Extension Boundaries
The supported extension surfaces:
- `llm` provider adapters
- Prompt templates under `prompts/`
- Bundled skills under `skills/`
- Runtime API handlers
Contributing
Contributions are welcome. This guide covers workflow, standards, and review expectations for all contributors (human and AI).
1. Branch and Change Scope
- Base daily work on the latest `dev`. Submit PRs to `dev`.
- `main` only accepts merge commits from `dev`.
- Branch naming: `feat/*`, `fix/*`, `docs/*`, `chore/*`.
- One commit, one goal. Don't mix unrelated changes in one commit.
2. Commit Message Format
Conventional Commits are enforced by a commit-msg hook:
type(scope): subject
type: subject
Allowed types: feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert.
Examples:
- `feat(connector): support codex resume thread`
- `fix: keep proxy env for codex exec`
- `docs: add configuration reference`
3. Pre-Commit Checks
First-time setup:
make precommit-install
Every commit must pass:
make check
make check runs in order:
| Gate | Command |
|---|---|
| Secret scan | secret-check |
| Shell syntax | script-check |
| Format check | fmt-check (gofmt) |
| Vet | go vet ./... |
| Unit tests | go test ./... |
| Race tests | go test -race ./internal/connector |
Do not commit until make check passes with zero failures.
For cross-cutting or concurrency changes, also run:
go test -race ./...
If formatting fails, run make fmt first.
4. Code Rules
- Use `gofmt` for all Go code.
- Files over 500 lines must be split in the same change (prevents mega-files).
- New or changed behavior must include/update tests.
- Never log sensitive information (app secrets, tokens, user content).
- CLI flag changes may be breaking but must be clearly documented with migration instructions.
5. Configuration Change Rules
- This project uses YAML config (`${ALICE_HOME}/config.yaml`), not environment variables, as the primary configuration.
- New config keys require updates to:
  - `config.example.yaml`
  - `internal/config` (defaults and validation)
  - Documentation (both English README and docs site)
- Config keys affecting session/memory behavior (e.g., `idle_summary_hours`) must have corresponding tests.
6. Documentation Sync
Any user-visible change (commands, flags, config, behavior) must sync:
- `README.md`
- `README.zh-CN.md`
- `book/src/` (docs site)
Keep English and Chinese docs consistent.
7. Merge Checklist
- `make check` passes locally
- Key path runs (at minimum one start-up test): `go run ./cmd/connector --feishu-websocket`
- Documentation synced with changes
- No unrelated files or debug content included
8. Runtime Isolation Rules
When debugging or testing with isolated runtimes:
- Use an explicit startup mode: `--feishu-websocket` or `--runtime-only`
- `alice-headless` must use `--runtime-only` only
- Never connect isolated debug runtimes to the real Feishu WebSocket
- After startup, verify logs show `runtime-only mode enabled; Feishu websocket connector disabled`
- If logs show `feishu-codex connector started` for an isolated runtime, stop it immediately
Contributing Guidelines
Contributions are welcome. This guide covers the workflow, standards, and review requirements for all contributors (human and AI).
1. Branches and Change Scope
- Base daily development on the latest `dev` branch and submit PRs to `dev`.
- `main` only accepts `dev -> main` merge commits.
- Branch naming: `feat/*`, `fix/*`, `docs/*`, `chore/*`.
- Each commit does one thing; do not mix unrelated changes.
2. Commit Message Conventions
Conventional Commits are mandatory and validated by a commit-msg hook:
type(scope): subject
type: subject
Allowed types: feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert.
Examples:
- `feat(connector): support codex resume thread`
- `fix: keep proxy env for codex exec`
3. Mandatory Pre-Commit Checks
Run once on first setup:
make precommit-install
Every commit must pass:
make check
`make check` runs: secret-check → script-check → fmt-check → go vet → go test → go test -race
Do not commit until `make check` passes.
If the format check fails, run `make fmt` first.
4. Code Rules
- Format all code with `gofmt`.
- Split any single file that exceeds 500 lines (prevents mega-file growth).
- New or changed behavior must add or update tests.
- Never log sensitive information.
- Breaking CLI flag changes are allowed but must be clearly documented.
5. Configuration Change Rules
- Use the YAML config file, not environment variables, as the primary configuration entry point.
- New config keys must be updated in sync across `config.example.yaml`, `internal/config`, and the documentation.
- Session-related config keys need corresponding tests.
6. Documentation Sync Rules
Any user-visible change must be reflected in the documentation, keeping English and Chinese docs consistent.
7. Pre-Merge Checklist
- `make check` passes locally
- At least one start-up check: `go run ./cmd/connector --feishu-websocket`
- Documentation updated in sync
- No unrelated files or debug content included
8. Runtime Isolation Rules
When debugging or testing an isolated runtime:
- Use an explicit startup mode
- `alice-headless` may only use `--runtime-only`
- Never connect an isolated runtime to the real Feishu WebSocket
- After startup, confirm the logs show `runtime-only mode enabled; Feishu websocket connector disabled`
Adding a New LLM Backend
This guide walks through adding support for a new LLM provider CLI to Alice. Follow the same pattern used by the existing backends (codex, claude, gemini, kimi, opencode).
Prerequisites
- The provider must have a CLI tool that Alice can run as a subprocess
- The CLI must accept a prompt via stdin or CLI flags
- The CLI must output results to stdout
Step 1: Understand the Backend Interface
The core interface is in internal/llm/backend.go:
type Backend interface {
Run(ctx context.Context, req RunRequest) (RunResult, error)
}
type RunRequest struct {
ThreadID string
UserText string
Model string
// ... other fields
OnProgress ProgressFunc
OnRawEvent RawEventFunc
}
type RunResult struct {
Reply string
NextThreadID string
GoalDone bool
Usage Usage
}
Your backend must:
- Build the correct CLI command from `RunRequest`
- Execute it as a subprocess
- Parse stdout/stderr into `RunResult`
- Stream intermediate progress via `OnProgress`
- Handle `ctx.Done()` for cancellation
Step 2: Create the Backend File
Create internal/llm/<provider>_backend.go. Follow the pattern in codex_backend.go:
package llm

import (
	"bufio"
	"context"
	"io"
	"os/exec"
)

type myProviderBackend struct {
	config MyProviderConfig
}

func newMyProviderBackend(cfg MyProviderConfig) *myProviderBackend {
	return &myProviderBackend{config: cfg}
}

func (b *myProviderBackend) Run(ctx context.Context, req RunRequest) (RunResult, error) {
	// 1. Build the CLI command from the request.
	args := []string{"run", "--model", req.Model}
	if req.ThreadID != "" {
		args = append(args, "--continue", req.ThreadID)
	}
	cmd := exec.CommandContext(ctx, b.config.Command, args...)
	cmd.Dir = req.WorkspaceDir
	cmd.Env = mergeEnv(b.config.Env)

	// 2. Pipe the user text to stdin.
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return RunResult{}, err
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return RunResult{}, err
	}
	go func() {
		defer stdin.Close()
		io.WriteString(stdin, req.UserText)
	}()

	// 3. Start the subprocess, then stream and parse its output.
	if err := cmd.Start(); err != nil {
		return RunResult{}, err
	}
	var (
		finalReply   string
		nextThreadID string
		usage        Usage
	)
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		// ... parse JSON-lines from stdout into finalReply / nextThreadID / usage ...
		// ... call req.OnProgress for intermediate messages ...
	}

	// 4. Wait for the subprocess to exit.
	err = cmd.Wait()

	// 5. Return the result.
	return RunResult{
		Reply:        finalReply,
		NextThreadID: nextThreadID,
		Usage:        usage,
	}, err
}
Step 3: Add Configuration
Add a config struct and provider constant in internal/llm/factory.go:
const ProviderMyProvider = "myprovider"
type MyProviderConfig struct {
Command string
Timeout time.Duration
Model string
Env map[string]string
WorkspaceDir string
ProfileOverrides map[string]ProfileRunnerConfig
}
Step 4: Register in the Factory
Add your provider to NewProvider in factory.go:
func NewProvider(cfg FactoryConfig) (Provider, error) {
provider := normalizeProvider(cfg.Provider)
switch provider {
case ProviderCodex:
return providerBundle{backend: newCodexBackend(cfg.Codex)}, nil
case ProviderClaude:
return providerBundle{backend: newClaudeBackend(cfg.Claude)}, nil
case ProviderMyProvider: // NEW
return providerBundle{backend: newMyProviderBackend(cfg.MyProvider)}, nil // NEW
default:
return nil, fmt.Errorf("unsupported llm_provider %q", provider)
}
}
Also add the field to FactoryConfig:
type FactoryConfig struct {
Provider string
Codex CodexConfig
Claude ClaudeConfig
Gemini GeminiConfig
Kimi KimiConfig
OpenCode OpenCodeConfig
MyProvider MyProviderConfig // NEW
}
Step 5: Wire Configuration from config.yaml
In internal/config, extend the LLM profile to accept the new provider. The profile config should map to your MyProviderConfig fields (Command, Timeout, Model, Env, etc.).
Step 6: Add Example Config
Add a profile example in config.example.yaml:
# Example: MyProvider profile.
# chat_myprovider:
# provider: "myprovider"
# command: "myprovider"
# model: "myprovider-model-v1"
# permissions:
# sandbox: "workspace-write"
# ask_for_approval: "never"
Step 7: Write Tests
Create internal/llm/<provider>_backend_test.go. Test at minimum:
- Command construction with different request fields
- Timeout handling
- Progress callback delivery
- Cancellation via context
- Error handling for invalid output
Use the existing test patterns in codex_backend_test.go or opencode_appserver_driver_test.go as reference.
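A minimal sketch of a command-construction test, assuming you factor argument building into a small helper; the `buildArgs` helper below is hypothetical and not part of the existing tree.

```go
package llm

import (
	"reflect"
	"testing"
)

// TestMyProviderBuildArgs checks that resumed runs pass the thread id through.
// buildArgs is a hypothetical helper extracted from the backend's Run method.
func TestMyProviderBuildArgs(t *testing.T) {
	req := RunRequest{Model: "myprovider-model-v1", ThreadID: "thread-123"}

	got := buildArgs(req)
	want := []string{"run", "--model", "myprovider-model-v1", "--continue", "thread-123"}

	if !reflect.DeepEqual(got, want) {
		t.Fatalf("buildArgs() = %v, want %v", got, want)
	}
}
```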
Step 8: Interactive Session Support (Optional)
Some backends support long-running interactive sessions where new input can be injected without restarting the subprocess. If your provider supports this:
- Implement the `InteractiveProviderSession` pattern (see `claude_stream_driver.go` or `opencode_appserver_driver.go`)
- Wire the interactive mode into the main `Run` method
- Add a `DisableStream*` escape hatch for fallback
Implementation Checklist
- `internal/llm/<provider>_backend.go` — backend implementation
- `internal/llm/factory.go` — provider constant + config struct + switch case
- `internal/config` — LLM profile config wiring
- `config.example.yaml` — example profile
- `internal/llm/<provider>_backend_test.go` — tests
- `book/src/reference/configuration.md` — update provider list
- `book/src/how-to/configure-backend.md` — add provider example
Reference Implementations
Study these existing backends for patterns:
| Backend | File | Notes |
|---|---|---|
| Codex | codex_backend.go | Full implementation with reasoning, personality, idle timeout |
| Claude | claude_stream_driver.go | Streaming interactive sessions |
| OpenCode | opencode_appserver_driver.go | Appserver mode with persistent server |
| Kimi | kimi_wire_driver.go | Wire-protocol driver |
Release Process
How Alice releases are built and published.
Branch Strategy
- Day-to-day development happens on `dev`
- Releases go through `dev → main` merge commits only
- Never push directly to `main`
CI Pipeline
On dev Push
- Run quality gate (`make check`)
- Build dev binaries
- Update the `dev-latest` prerelease
On main Merge from dev
- Run quality gate (`make check`)
- Auto-create the next `vX.Y.Z` tag
- Build release binaries for all platforms
- Publish GitHub Release
Manual v* Tags
- Pushing a `v*` tag triggers the release workflow directly
Release Artifacts
Each release publishes:
- Binary builds for: linux-amd64, linux-arm64, darwin-amd64, darwin-arm64, win32-x64
- npm package: `@alice_space/alice`
- Installer script: `scripts/alice-installer.sh`
Making a Release
- Ensure `dev` passes all checks and is ready
- Create a PR from `dev` to `main`
- Merge with a merge commit (do NOT squash or rebase)
- CI auto-creates the tag and publishes the release
- Verify the GitHub Release shows all artifacts
Version Numbers
Tags follow semver: vX.Y.Z. The CI auto-increments the patch version from the previous release tag.
Post-Release
- The installer script (`alice-installer.sh`) automatically picks up the latest release
- npm users get the update via `npm update -g @alice_space/alice`
CI Workflow Files
- `.github/workflows/ci.yml` — Dev branch quality gate and dev binaries
- `.github/workflows/main-release.yml` — Main branch release build
- `.github/workflows/release-on-tag.yml` — Manual tag release