Alice

AI Local Interactive Cross-device Engine

Same AI session, anywhere. Terminal ↔ Feishu. No cloud lock-in.

Alice is a Feishu long-connection connector that turns CLI-based LLM agents into interactive bots inside Feishu. Works with OpenCode (DeepSeek V4), Codex, Claude, Gemini, Kimi.

The core idea

Your terminal agent and your Feishu bot are the same session. Start a refactor in your IDE, check progress from your phone, send the next instruction via Feishu. Alice bridges your local CLI agent to Feishu's WebSocket so you're never locked to one device.

What problem does Alice solve?

You have an LLM agent CLI installed, and it works great in your terminal — but only in your terminal. Alice bridges it to Feishu:

  • Connects to Feishu's WebSocket for real-time message delivery
  • Routes incoming messages into chat (casual) or work (task-oriented) scenes
  • Calls your configured LLM CLI backend with the right prompt, model, and permissions
  • Sends progress updates, final replies, files, and images back to Feishu
  • Exposes a local HTTP API for bundled skills and automation tasks

Key Features

  • Multi-bot: One alice process, one config.yaml, multiple independent bots
  • Scene routing: Separate chat and work modes with per-scene LLM profiles
  • Five backends: OpenCode, Codex, Claude, Gemini, Kimi — switch per scene
  • Session persistence: Resumable threads, session aliases, usage counters
  • Live status cards: Real-time heartbeat showing backend activity and file changes
  • Automation: Cron-like scheduled tasks with send_text, run_llm, and run_workflow actions
  • Bundled skills: Extendable skill scripts that call the runtime API
  • Subprocess delegation: alice delegate lets OpenCode agents send subtasks to other backends
  • Zero cloud dependency: Everything runs on your machine

Who is Alice for?

  • Teams using Feishu that want LLM agent access without building custom integrations
  • Developers who already use CLI agents and want them accessible in group chats
  • Operators who need scheduled automation along with interactive LLM capabilities

The documentation is organized into five sections:

| Section | For |
|---|---|
| Tutorials | New users — get Alice running in 5 minutes |
| How-To Guides | Task-focused recipes for specific goals |
| Explanation | Deep dives into concepts and design |
| Reference | Comprehensive config, API, and CLI docs |
| Development | Contributor-oriented architecture and guides |

Quick Start

npm install -g @alice_space/alice
alice setup
# edit ~/.alice/config.yaml with your Feishu credentials
alice --feishu-websocket

See the Quick Start tutorial for detailed steps.


Chinese version · GitHub

Quick Start

Get Alice running and responding to messages in 5 minutes.

Prerequisites

  • Node.js (for npm install) or Go 1.25+ (for source build)
  • A Feishu app with bot capability and long connection enabled
  • At least one LLM CLI installed and authenticated (OpenCode, Codex, Claude, Gemini, or Kimi)

If you haven't set up your Feishu app yet, follow the Feishu Platform Setup tutorial first.

Step 1: Install

Via npm (recommended):

npm install -g @alice_space/alice

Via installer script:

curl -fsSL https://cdn.jsdelivr.net/gh/Alice-space/alice@main/scripts/alice-installer.sh | bash -s -- install

From source:

git clone https://github.com/Alice-space/alice.git
cd alice
go build -o bin/alice ./cmd/connector

Step 2: Setup

alice setup

This creates the directory structure at ~/.alice/, writes a default config.yaml, syncs bundled skills, and (on Linux) registers a systemd user unit.

Step 3: Configure

Edit ~/.alice/config.yaml and fill in at minimum:

bots:
  my_bot:
    name: "Alice"
    feishu_app_id: "cli_xxxxxxxx"      # from Feishu Open Platform
    feishu_app_secret: "your_secret"    # from Feishu Open Platform
    llm_profiles:
      chat:
        provider: "opencode"
        model: "deepseek/deepseek-v4-flash"
      work:
        provider: "opencode"
        model: "deepseek/deepseek-v4-pro"

The default config ships with OpenCode profiles targeting DeepSeek models. If you use a different LLM CLI, see Configure LLM Backends.

Step 4: Verify Backend Auth

Make sure your LLM CLI can authenticate:

opencode --version    # or codex, claude, etc.

Step 5: Start

alice --feishu-websocket

You should see log output indicating the Feishu WebSocket connection and per-bot runtime initialization.

Step 6: Test with Work Mode

Most people use Alice for work mode — task-oriented engineering, debugging, and automation. Here's how:

In any Feishu group chat where your bot is present:

@Alice #work fix the login timeout in auth.go

What happens:

  1. Alice creates a Feishu thread for this task
  2. Starts the configured LLM backend (e.g. DeepSeek V4)
  3. Streams progress and tool calls back to the thread
  4. The session is persisted — you can resume from your terminal later

Try the built-in commands too:

/help       — Show command list
/status     — Show current session and backend info
/stop       — Cancel the running task

Chat Mode (Casual)

Alice also supports a casual chat mode where the bot behaves like a persistent group participant. Just message with @Alice:

@Alice what's the weather like?

Chat mode uses the chat LLM profile (lighter model), shares one session per group, and doesn't create threads. Use /clear to reset the chat session.

Tip: The default config.example.yaml enables both modes. Work mode is the primary use case for most operators. If you only need work mode, set group_scenes.chat.enabled: false.

What's Next?

Feishu Platform Setup

This tutorial walks through creating a Feishu app that Alice can connect to. Estimated time: 15 minutes.

Overview

Alice needs a Feishu app with:

  1. Bot capability enabled
  2. im.message.receive_v1 event subscription
  3. Required message permissions
  4. Long connection mode enabled

Step 1: Log into Feishu Open Platform

Visit Feishu Open Platform and sign in with your organization account.

Lark (international) users: Visit Lark Open Platform instead. Then set feishu_base_url: "https://open.larksuite.com" in your bot config.

Step 2: Create an App

  1. Click Create App (创建应用)
  2. Choose Enterprise Self-built App (企业自建应用)
  3. Name your app (e.g., "Alice Bot") and upload an icon
  4. Click Create

Step 3: Enable Bot Capability

  1. In the left sidebar, go to Features → Bot (机器人)
  2. Toggle Enable Bot (启用机器人)
  3. Configure the bot's name, avatar, and description as desired

Step 4: Add Event Subscription

  1. Go to Event Subscriptions (事件订阅)
  2. Click Add Event (添加事件)
  3. Find and select Receive Message (接收消息) → im.message.receive_v1
  4. Click Confirm

This is what allows Alice to receive all messages the bot can see.

Step 5: Configure Permissions

  1. Go to Permissions (权限管理)
  2. Search for and enable these permissions:

| Permission | Why |
|---|---|
| im:message | Read messages sent to the bot |
| im:message:send_as_bot | Send messages as the bot |
| im:message:read | Read message content |
| im:resource | Download images and files |
| contact:user.id:readonly | Resolve user names |
| contact:group.id:readonly | Access group chat info |

  3. Click Save (保存)

Step 6: Enable Long Connection

  1. Go to Features → Event Subscriptions (事件订阅)
  2. Find the Connection Mode (连接方式) section
  3. Switch from Request URL to Long Connection (长连接)
  4. Save the change

This is critical. Alice uses WebSocket long connections, not HTTP webhooks. If long connection mode is not enabled, Alice cannot receive messages.

Step 7: Get Credentials

  1. Go to App Settings → Basic Info (基础信息)
  2. Copy your App ID (应用凭证 → App ID)
  3. Copy your App Secret (应用凭证 → App Secret)

These go into your config.yaml:

bots:
  my_bot:
    feishu_app_id: "cli_xxxxxxxx"      # your App ID
    feishu_app_secret: "your_secret"    # your App Secret

Step 8: Publish and Approve

  1. Go to Version Management (版本管理与发布)
  2. Click Create Version (创建版本), fill in version info
  3. After creation, click Apply for Release (申请发布)
  4. An admin in your Feishu org must approve the release
  5. Once approved, users in your org can find and interact with the bot

Tip: During development, you can add individual users as App Collaborators (应用协作者) under App Settings, allowing them to test the bot before publishing.

Verification

After starting Alice with alice --feishu-websocket, check the logs:

feishu-codex connector started (long connection mode)

If you see WebSocket connection errors, double-check that long connection mode is enabled and your credentials are correct.

Next Steps

Install Alice

Three ways to install. Pick the one that fits your workflow.

npm install -g @alice_space/alice

After installation, run the setup wizard:

alice setup

This creates ~/.alice/, writes a starter config.yaml, syncs bundled skills, registers a systemd user unit (Linux), and installs the OpenCode delegate plugin.

Requirements: Node.js 18+

Installer Script

Single-command install from GitHub Releases:

# Install latest stable release
curl -fsSL https://cdn.jsdelivr.net/gh/Alice-space/alice@main/scripts/alice-installer.sh | bash -s -- install

# Install a specific version
curl -fsSL https://cdn.jsdelivr.net/gh/Alice-space/alice@main/scripts/alice-installer.sh | bash -s -- install --version v1.2.3

# Uninstall
curl -fsSL https://cdn.jsdelivr.net/gh/Alice-space/alice@main/scripts/alice-installer.sh | bash -s -- uninstall

The installer downloads the correct binary for your platform (darwin-amd64, darwin-arm64, linux-amd64, linux-arm64, win32-x64) and verifies checksums.

After installation, run alice setup to initialize the config and skills directory.

Requirements: curl, tar

Build from Source

git clone https://github.com/Alice-space/alice.git
cd alice
go build -o bin/alice ./cmd/connector

Optionally install to your PATH:

cp bin/alice /usr/local/bin/alice

Requirements: Go 1.25+

Verify Installation

alice --version

Should print the version string. If alice setup has been run, you can also check:

ls ~/.alice/
# config.yaml  skills/  log/  bots/

Runtime Home

Alice uses different default home directories depending on the build channel:

| Build | Default Home |
|---|---|
| Release (npm / installer) | ~/.alice |
| Dev (source build) | ~/.alice-dev |

Override with --alice-home or the ALICE_HOME environment variable.
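
For example, either of the following starts Alice against a custom runtime home (the /srv/alice path is only illustrative):

# point Alice at a custom runtime home via the flag...
alice --alice-home /srv/alice --feishu-websocket

# ...or via the environment variable
ALICE_HOME=/srv/alice alice --feishu-websocket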

Configure Chat & Work Scenes

Alice routes incoming group messages into one of two scenes: chat for casual conversation, and work for explicit task execution.

Scene Routing Overview

Incoming Message
  ├─ Built-in command? (/help, /status, /stop, /clear, /session)
  │   └─ Handle directly, no LLM
  ├─ Matches work trigger? (@Alice #work ...)
  │   └─ Route to work scene
  └─ Otherwise
      └─ Route to chat scene (if enabled)

Both scenes are configured under bots.<id>.group_scenes.

Chat Scene

The chat scene is for low-friction, persistent conversation. One session per chat group.

group_scenes:
  chat:
    enabled: true
    session_scope: "per_chat"
    llm_profile: "chat"
    no_reply_token: "[[NO_REPLY]]"
    create_feishu_thread: false

| Field | Description |
|---|---|
| enabled | Set to true to activate the chat scene |
| session_scope | "per_chat" — one session for the whole group. "per_thread" — one session per Feishu thread |
| llm_profile | Name of the LLM profile under llm_profiles to use |
| no_reply_token | If the model returns this exact string, Alice stays silent instead of replying |
| create_feishu_thread | Whether to wrap replies in a Feishu thread |

Use /clear to reset the chat session and start fresh.

Work Scene

The work scene is for task-oriented execution. Each work task gets its own thread and session.

group_scenes:
  work:
    enabled: true
    trigger_tag: "#work"
    session_scope: "per_thread"
    llm_profile: "work"
    create_feishu_thread: true

| Field | Description |
|---|---|
| enabled | Set to true to activate the work scene |
| trigger_tag | The tag that must appear in a message to trigger work mode (after the @bot mention) |
| session_scope | "per_thread" — each Feishu thread gets its own session. "per_chat" — shared session |
| llm_profile | Name of the LLM profile to use (typically a more capable model) |
| create_feishu_thread | Automatically create a Feishu thread for work replies |

Work mode usage:

@Alice #work fix the login bug              → Starts work, calls LLM
@Alice #work                                 → Creates work thread without calling LLM
@Alice #work /session <backend-session-id>   → Binds thread to existing backend session

Common Patterns

Chat-Only Bot

group_scenes:
  chat:
    enabled: true
    session_scope: "per_chat"
    llm_profile: "chat"
    no_reply_token: "[[NO_REPLY]]"
  work:
    enabled: false

Split Chat + Work

Use a lighter model for chat and a more capable one for work:

llm_profiles:
  chat:
    provider: "opencode"
    model: "deepseek/deepseek-v4-flash"
  work:
    provider: "opencode"
    model: "deepseek/deepseek-v4-pro"
    variant: "max"
    permissions:
      sandbox: "danger-full-access"
      ask_for_approval: "never"

group_scenes:
  chat:
    enabled: true
    session_scope: "per_chat"
    llm_profile: "chat"
    no_reply_token: "[[NO_REPLY]]"
  work:
    enabled: true
    trigger_tag: "#work"
    session_scope: "per_thread"
    llm_profile: "work"
    create_feishu_thread: true

Legacy Trigger Mode

If both chat and work are disabled, Alice falls back to a legacy trigger system:

bots:
  my_bot:
    trigger_mode: "at"       # at | prefix | all
    trigger_prefix: ""       # only used when trigger_mode is "prefix"

| Mode | Behavior |
|---|---|
| at | Only @bot messages are accepted |
| prefix | Only messages starting with trigger_prefix |
| all | Every message is accepted (no filter) |

New deployments should prefer explicit scene routing.

Configure LLM Backends

Alice supports five LLM backends. Each scene references an llm_profile that specifies which provider, model, and settings to use.

Supported Providers

| Provider | CLI Tool | Notes |
|---|---|---|
| opencode | opencode | OpenCode CLI for DeepSeek and other models |
| codex | codex | OpenAI Codex CLI. Supports reasoning_effort, personality, profile |
| claude | claude | Anthropic Claude Code CLI. Streaming by default |
| gemini | gemini | Google Gemini CLI |
| kimi | kimi | Moonshot Kimi CLI |

Each provider must be installed and authenticated separately. Alice does not manage provider authentication.
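
A quick way to check which provider CLIs are resolvable before wiring them into profiles (authentication still has to be verified per provider):

# prints the path of each CLI found on $PATH; a non-zero exit code means at least one is missing
command -v opencode codex claude gemini kimi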

Profile Configuration

Profiles are defined under bots.<id>.llm_profiles:

bots:
  my_bot:
    llm_profiles:
      my_profile:
        provider: "opencode"
        model: "deepseek/deepseek-v4-pro"
        variant: "max"
        timeout_secs: 172800
        permissions:
          sandbox: "danger-full-access"
          ask_for_approval: "never"
          add_dirs: ["/data/corpus"]

Common Fields

| Field | Description |
|---|---|
| provider | Backend name: opencode, codex, claude, gemini, kimi |
| command | Path to the CLI binary. Defaults to the provider name (e.g. opencode) |
| timeout_secs | Per-run timeout in seconds. Default: 172800 (48 hours) |
| model | Model identifier (required) |
| permissions.sandbox | "read-only", "workspace-write", or "danger-full-access" |
| permissions.ask_for_approval | "untrusted", "on-request", or "never" |
| permissions.add_dirs | Extra directories accessible to the agent |
| prompt_prefix | Text prepended to every prompt |

Codex-Specific Fields

| Field | Description |
|---|---|
| reasoning_effort | Thinking level: "low", "medium", "high", or "xhigh" |
| personality | Named personality preset from Codex CLI config |
| profile | Named sub-profile from Codex CLI config |

OpenCode-Specific Fields

| Field | Description |
|---|---|
| variant | DeepSeek variant: "max", "high", "minimal" |

Custom Binary Path

If your CLI binary is outside $PATH, specify the absolute path:

llm_profiles:
  work:
    provider: "opencode"
    command: "/usr/local/bin/opencode"
    model: "deepseek/deepseek-v4-pro"

You can also extend $PATH via the env section:

bots:
  my_bot:
    env:
      PATH: "/home/user/bin:/usr/local/bin:/usr/bin:/bin"

Per-Profile Overrides

Some backends support per-profile runner overrides via profile_overrides. This is an advanced feature used when the same provider needs different CLI configurations for different scenes.

llm_profiles:
  executor:
    provider: "codex"
    model: "gpt-5.4-mini"
    profile: "executor"
    profile_overrides:
      executor:
        command: "/opt/bin/codex-executor"
        provider_profile: "executor-v2"
        timeout: 3600
        exec_policy:
          sandbox: "danger-full-access"
          ask_for_approval: "never"

Environment Variables for Backend Processes

The env section under bots.<id> passes environment variables to every LLM subprocess:

bots:
  my_bot:
    env:
      HTTPS_PROXY: "http://127.0.0.1:8080"
      ALL_PROXY: "http://127.0.0.1:8080"

This is especially useful for proxy configuration and API key management.

Examples

OpenCode with DeepSeek (chat)

llm_profiles:
  chat:
    provider: "opencode"
    model: "deepseek/deepseek-v4-flash"

Codex with reasoning

llm_profiles:
  work:
    provider: "codex"
    command: "codex"
    model: "gpt-5.4-mini"
    reasoning_effort: "high"
    permissions:
      sandbox: "danger-full-access"
      ask_for_approval: "never"

Claude

llm_profiles:
  work:
    provider: "claude"
    model: "claude-sonnet-4-6"
    prompt_prefix: "You are a senior software engineer. Be concise."
    permissions:
      sandbox: "danger-full-access"
      ask_for_approval: "never"

Gemini

llm_profiles:
  chat:
    provider: "gemini"
    model: "gemini-2.5-pro"

Kimi

llm_profiles:
  chat:
    provider: "kimi"
    model: "kimi-model-identifier"

Customize SOUL.md Persona

Each bot can have a persona document called SOUL.md that defines its behavior, tone, and reply preferences.

What is SOUL.md?

SOUL.md is a Markdown file with YAML frontmatter. It serves two purposes:

  1. Persona: The Markdown body is injected into the LLM prompt for chat scenes, shaping the bot's tone and behavior
  2. Metadata: The YAML frontmatter controls machine-readable reply behavior

File Location

By default, Alice looks for SOUL.md in the bot's alice_home:

~/.alice/bots/<bot_id>/SOUL.md

You can customize the path with soul_path:

bots:
  my_bot:
    soul_path: "SOUL.md"            # relative to alice_home (default)
    # soul_path: "/path/to/custom/SOUL.md"  # absolute path

If the file doesn't exist at startup, Alice writes an embedded template from prompts/SOUL.md.example.

Frontmatter Keys

---
image_refs:
  - refs/avatar.png
  - refs/signature.jpg
output_contract:
  hidden_tags:
    - reply_will
    - motion
  reply_will_tag: reply_will
  reply_will_field: reply_will
  motion_tag: motion
  suppress_token: "[[NO_REPLY]]"
---

| Key | Description |
|---|---|
| image_refs | List of local image paths the bot can reference. Paths are relative to the directory containing SOUL.md |
| output_contract.hidden_tags | Tags in the bot's reply that Alice strips before sending to Feishu |
| output_contract.reply_will_tag | Tag marking the bot's intent to reply |
| output_contract.reply_will_field | Field name within the tag |
| output_contract.motion_tag | Tag for motion/animation cues |
| output_contract.suppress_token | If the bot outputs this token, Alice suppresses the reply entirely |

Full Example

---
image_refs:
  - refs/avatar.png
output_contract:
  hidden_tags:
    - reply_will
    - motion
  reply_will_tag: reply_will
  reply_will_field: reply_will
  motion_tag: motion
  suppress_token: "[[NO_REPLY]]"
---

# Persona

You are Alice, a helpful engineering assistant. You speak concisely in Chinese
mixed with English technical terms. You never use emoji unless explicitly asked.

## Rules

- Keep code snippets under 30 lines
- Prefer explaining the approach before showing code
- Never apologize — just fix the problem

When is SOUL.md Applied?

  • Chat scene: The full body is prepended to the prompt, and frontmatter is parsed by Alice for reply control
  • Work scene: SOUL.md is intentionally skipped. Work mode is for task execution, not persona roleplay

Testing Your Persona

  1. Edit SOUL.md
  2. Restart Alice (multi-bot mode requires restart; single-bot mode supports hot reload)
  3. Send a message in a chat scene — the bot should reflect the updated persona
  4. Use /clear to reset the conversation if needed

Deploy to Server

Run Alice as a persistent background service so it survives restarts and runs reliably.

systemd (Linux)

alice setup automatically creates a systemd user unit if systemd is available.

# Start
systemctl --user start alice.service

# Enable auto-start on boot
systemctl --user enable alice.service

# Check status
systemctl --user status alice.service

# View logs
journalctl --user-unit alice.service -n 100 --no-pager
journalctl --user-unit alice.service --since "30 min ago" --no-pager

# Restart
systemctl --user restart alice.service

If you installed without alice setup, create the unit manually:

# ~/.config/systemd/user/alice.service
[Unit]
Description=Alice Feishu LLM Connector
After=network-online.target

[Service]
Type=simple
ExecStart=%h/.alice/bin/alice --feishu-websocket
Restart=on-failure
RestartSec=10
Environment=HOME=%h

[Install]
WantedBy=default.target

Then:

systemctl --user daemon-reload
systemctl --user start alice.service

macOS

On macOS, use launchd or run manually.

launchd

<!-- ~/Library/LaunchAgents/com.alice.connector.plist -->
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.alice.connector</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/you/.alice/bin/alice</string>
        <string>--feishu-websocket</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/Users/you/.alice/log/stdout.log</string>
    <key>StandardErrorPath</key>
    <string>/Users/you/.alice/log/stderr.log</string>
</dict>
</plist>

launchctl load ~/Library/LaunchAgents/com.alice.connector.plist

Manual

alice --feishu-websocket

Use tmux or screen to keep it running after logout.
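
For example, with tmux (a minimal sketch; the session name is arbitrary):

# start Alice in a detached tmux session
tmux new-session -d -s alice 'alice --feishu-websocket'

# re-attach later to check output; detach again with Ctrl-b d
tmux attach -t alice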

Runtime-Only Mode

For deployments that only need automation and the runtime API (no Feishu WebSocket):

alice --runtime-only

In headless environments:

alice-headless --runtime-only

Important: alice-headless cannot start the Feishu connector. It is explicitly limited to runtime-only mode.

Logging

Alice uses structured JSON logs via zerolog with daily log rotation.

log_level: "info"          # debug | info | warn | error
log_file: ""               # empty = <ALICE_HOME>/log/YYYY-MM-DD.log
log_max_size_mb: 20        # rotate after 20 MB
log_max_backups: 5         # keep 5 rotated files
log_max_age_days: 7        # keep logs for 7 days
log_compress: false        # gzip rotated logs

Health Check

The runtime API exposes a health endpoint:

curl http://127.0.0.1:7331/healthz
# {"status":"ok"}

Monitoring

  • /status command in Feishu shows usage totals and active automation tasks
  • journalctl (systemd) or log files for structured log analysis
  • Session and runtime state are persisted to JSON files for inspection
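
For example, to peek at a bot's persisted state files (paths follow the bot directory layout; my_bot is a placeholder bot ID, and jq is only used for pretty-printing):

# session aliases, usage counters, provider thread IDs
jq . ~/.alice/bots/my_bot/run/connector/session_state.json

# mutable runtime state
jq . ~/.alice/bots/my_bot/run/connector/runtime_state.json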

Multi-Bot Deployments

One alice process can host multiple bots. All bots share the same process but each gets its own runtime directory, workspace, and queue.

bots:
  engineering_bot:
    feishu_app_id: "cli_11111"
    # ...
  support_bot:
    feishu_app_id: "cli_22222"
    # ...

Multi-bot mode disables config hot reload. Restart the process after configuration changes.

Use alice delegate

The alice delegate subcommand sends a one-shot prompt to any configured LLM backend from the command line.

Basic Usage

alice delegate --provider codex --prompt "Refactor the auth module to use JWT"
alice delegate --provider claude --prompt "Review this code for security issues"
alice delegate --provider opencode --prompt "Explain how DNS resolution works"

Flags

FlagDescription
--providerLLM backend: opencode, codex, claude, gemini, kimi
--promptThe prompt text (required)
--modelOverride the default model
--workspaceOverride the working directory

Piping Input

Send a diff or file content via stdin:

cat diff.patch | alice delegate --provider claude --prompt "Review this PR diff"
alice delegate --provider codex --prompt "Summarize this log" < /var/log/app.log

OpenCode Plugin Integration

alice setup writes a plugin to ~/.config/opencode/plugins/alice-delegate.js. Once present, OpenCode agents (including DeepSeek) automatically gain two extra tools:

  • codex — delegates a subtask to Codex
  • claude — delegates a subtask to Claude

No extra configuration is needed. OpenCode loads plugins from that directory automatically.

This is the primary use case for alice delegate: allowing an OpenCode agent to fan out parallel work or delegate specialized tasks to other LLM backends.

How It Connects

alice delegate uses the same llm_profiles configuration as the main Alice runtime. A profile named delegate under the first bot is used by default. The profile determines the model, permissions, and environment variables for the delegated run.

bots:
  my_bot:
    llm_profiles:
      delegate:
        provider: "claude"
        model: "claude-sonnet-4-6"
        permissions:
          sandbox: "workspace-write"
          ask_for_approval: "never"

Examples

Quick code review

alice delegate --provider claude --prompt "Check this function for bugs and suggest improvements" < src/auth.go

Refactoring

alice delegate --provider codex --prompt "Extract the database logic into a separate package"

Documentation generation

alice delegate --provider opencode --prompt "Generate JSDoc comments for all exported functions"

Use Built-in Commands

Alice provides several slash commands that bypass the LLM and are handled directly by the connector. All commands work in both group chats and direct messages.

/help

Displays the built-in command help card with all available commands.

/help

/status

Shows a status card with:

  • Total sessions and usage counters
  • Active automation tasks
  • Current LLM backend and session details

/status

/clear

Resets the current chat scene session. The next message starts a fresh conversation with no prior context.

/clear

Only affects chat scenes; work scenes are thread-scoped and reset naturally when the thread ends.

/stop

Immediately cancels the currently running LLM call for the active session.

/stop

Use this when the agent is stuck in a loop or taking too long. The bot will acknowledge the stop and become available for new messages.

/session

Binds a Feishu work thread to an existing backend session. Useful for resuming long-running tasks after a restart.

/session <backend-session-id>
/session <backend-session-id> Continue the review

  • Without an instruction: binds the session, no LLM call
  • With an instruction: binds the session and immediately calls the LLM with the instruction

Only works in work scene threads.

/cd, /ls, /pwd

Inspect and change the current working directory for the active work session:

/pwd               # Show current directory
/ls                # List files
/ls internal/      # List files in subdirectory
/cd /tmp/build     # Change directory

These commands only affect work sessions. The directory change persists for the duration of the session.

Command Precedence

When a message starts with /, Alice checks for built-in commands before routing to the LLM:

  1. Built-in command match → handle directly
  2. No match → route to scene (LLM handles it)

To force a message starting with / to go to the LLM, prefix it with a space or use the work trigger:

 /some-custom-command     # Space before slash → LLM path
@Alice #work /some-cmd    # Work trigger → LLM path

Configure Private Chat

Alice can handle direct messages (private chats) with the same scene routing as group chats.

Private vs Group Scenes

Group chats use group_scenes. Direct messages use private_scenes. They are configured identically but under different keys:

bots:
  my_bot:
    group_scenes:
      chat: { ... }
      work: { ... }

    private_scenes:
      chat: { ... }
      work: { ... }

Private scenes are disabled by default. Enable them explicitly.

Chat Scene (Private)

private_scenes:
  chat:
    enabled: true
    session_scope: "per_user"        # one session per DM user
    llm_profile: "chat"
    no_reply_token: "[[NO_REPLY]]"
    create_feishu_thread: false

| Field | Description |
|---|---|
| session_scope | "per_user" — all DMs from the same user share one session. "per_message" — each DM creates a new session |
| llm_profile | Same profile reference as group scenes |
| no_reply_token | Suppress reply token |

Typical use: a personal assistant available in DMs, maintaining context per user.

Work Scene (Private)

private_scenes:
  work:
    enabled: true
    trigger_tag: "#work"
    session_scope: "per_message"     # each #work DM starts fresh
    llm_profile: "work"
    create_feishu_thread: true

| Field | Description |
|---|---|
| session_scope | "per_message" recommended for DMs — each task is isolated |

Full Example

bots:
  my_bot:
    group_scenes:
      chat:
        enabled: true
        session_scope: "per_chat"
        llm_profile: "chat"
        no_reply_token: "[[NO_REPLY]]"
      work:
        enabled: true
        trigger_tag: "#work"
        session_scope: "per_thread"
        llm_profile: "work"
        create_feishu_thread: true

    private_scenes:
      chat:
        enabled: true
        session_scope: "per_user"
        llm_profile: "chat"
        no_reply_token: "[[NO_REPLY]]"
      work:
        enabled: true
        trigger_tag: "#work"
        session_scope: "per_message"
        llm_profile: "work"
        create_feishu_thread: true

Behavior Differences from Group Chat

  • Mentions are implicit — DMs don't require @bot. Every message is directed at the bot.
  • user_id resolution — Alice resolves the DM user's name via Feishu API
  • Thread creation — When create_feishu_thread: true, work replies create threads within the DM

Write a Bundled Skill

Bundled skills extend Alice with script-based tools that call the Runtime HTTP API. This guide shows you how to create one.

Skill Anatomy

A bundled skill is a directory under skills/:

skills/my-skill/
├── SKILL.md           # Skill documentation
├── scripts/
│   └── my-skill.sh    # Executable script
└── agents/
    └── openai.yaml    # OpenAI agent configuration (optional)

Step 1: Create the Directory

Create your skill under skills/ in the Alice source tree or under ${ALICE_HOME}/skills/ for local development.
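
For example, for local development under the default release home (my-skill is a placeholder name):

# scaffold the layout shown above
mkdir -p ~/.alice/skills/my-skill/scripts
touch ~/.alice/skills/my-skill/SKILL.md ~/.alice/skills/my-skill/scripts/my-skill.sh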

Step 2: Write SKILL.md

SKILL.md documents your skill for both humans and LLM agents:

# my-skill

Sends a daily summary of active automation tasks to a specified Feishu chat.

## Usage

This skill is triggered by the automation system. It reads all active tasks
from the runtime API and sends a formatted summary card.

## Environment

Requires `ALICE_RUNTIME_API_BASE_URL` and `ALICE_RUNTIME_API_TOKEN` to be set.

Step 3: Write the Script

The script runs as a subprocess. Alice injects these environment variables:

| Variable | Description |
|---|---|
| ALICE_RUNTIME_API_BASE_URL | Base URL of the runtime API (e.g. http://127.0.0.1:7331) |
| ALICE_RUNTIME_API_TOKEN | Bearer token for API authentication |
| ALICE_RUNTIME_BIN | Path to the alice binary |
| ALICE_RECEIVE_ID_TYPE | Type of the receive target (e.g. chat_id) |
| ALICE_RECEIVE_ID | ID of the receive target |
| ALICE_SOURCE_MESSAGE_ID | ID of the triggering message (if applicable) |
| ALICE_ACTOR_USER_ID | Feishu user ID of the person interacting |
| ALICE_ACTOR_OPEN_ID | Feishu open ID of the person interacting |
| ALICE_CHAT_TYPE | Chat type: group or p2p |
| ALICE_SESSION_KEY | Canonical session key for the current conversation |

Example Script

#!/usr/bin/env bash
set -euo pipefail

# Get all active tasks
TASKS=$(curl -sS \
  -H "Authorization: Bearer ${ALICE_RUNTIME_API_TOKEN}" \
  "${ALICE_RUNTIME_API_BASE_URL}/api/v1/automation/tasks?status=active")

# Count and format
COUNT=$(echo "$TASKS" | jq '. | length')
echo "Active tasks: $COUNT"

Make it executable:

chmod +x skills/my-skill/scripts/my-skill.sh

Step 4: Register the Skill

Add your skill to the bot's allowed skills list:

bots:
  my_bot:
    permissions:
      allowed_skills: ["alice-message", "alice-scheduler", "my-skill"]

Runtime API Endpoints Available to Skills

Skills primarily use these endpoints:

| Endpoint | Method | Purpose |
|---|---|---|
| /api/v1/messages/image | POST | Send an image to the chat |
| /api/v1/messages/file | POST | Send a file to the chat |
| /api/v1/automation/tasks | GET | List automation tasks |
| /api/v1/automation/tasks | POST | Create an automation task |
| /api/v1/automation/tasks/:id | GET/PATCH/DELETE | Manage a specific task |

All requests require the Authorization: Bearer <token> header.

Permissions

Skills operate under the bot's runtime permissions:

permissions:
  runtime_message: true       # Allow sending messages via API
  runtime_automation: true    # Allow managing automation tasks

If a permission is disabled, the corresponding API endpoints return 403 Forbidden.

Built-in Skills Reference

Alice ships with two bundled skills:

  • alice-message: Sends rich messages and attachments via the runtime API
  • alice-scheduler: Manages automation tasks from Feishu conversations

Study their source (skills/alice-message/ and skills/alice-scheduler/) for real-world examples of skill structure and API usage.

Troubleshooting

Common problems and solutions for running Alice.

Bot doesn't respond in group chats

Check scene routing:

  • Verify group_scenes.chat.enabled is true
  • If both scenes are disabled, check trigger_mode (should be at or prefix)

Check bot identity:

  • Bot open_id is fetched automatically at startup now — no manual feishu_bot_open_id config needed
  • Verify feishu_app_id and feishu_app_secret are correct

Check logs:

# Look for WebSocket connection status
grep "long connection" ~/.alice/log/*.log
# Look for auth errors
grep "error" ~/.alice/log/*.log | head -20

Work mode never triggers

  • Verify group_scenes.work.enabled is true
  • Verify trigger_tag is set (e.g., "#work")
  • Message must contain @BotName #work ... — both the @mention and the trigger tag
  • The trigger tag must appear after the @mention in the same message

Wrong model or reasoning level

  • Check llm_profiles for the correct provider, model, and provider-specific fields
  • Verify the scene points at the correct profile key:
    group_scenes:
      work:
        llm_profile: "work"  # must match a key under llm_profiles
    
  • Run the provider CLI directly to verify authentication:
    codex --version
    claude --version
    

Skills can't send attachments or manage tasks

Check permissions:

permissions:
  runtime_message: true
  runtime_automation: true

Check API connectivity:

# From the machine running Alice
curl -s -H "Authorization: Bearer <token>" http://127.0.0.1:7331/healthz
# Should return {"status":"ok"}

The runtime HTTP API binds to the address in runtime_http_addr (default 127.0.0.1:7331). Multi-bot setups auto-increment the port.
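
In multi-bot setups you can probe each bot's port in turn (no auth is required for the health endpoint):

curl -s http://127.0.0.1:7331/healthz   # first bot
curl -s http://127.0.0.1:7332/healthz   # second bot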

Configuration changes don't apply

  • Multi-bot mode: Config hot reload is disabled. Restart Alice.
  • Single-bot mode: Partial hot reload is supported, but not all config keys are watched.
  • Always check logs after a config change:
    grep "config" ~/.alice/log/*.log | tail -5
    

WebSocket connection errors

If you see connection failures in the logs:

  1. Verify long connection mode is enabled in the Feishu Open Platform
  2. Check that the app has been published and approved
  3. Verify network connectivity to open.feishu.cn (or open.larksuite.com for Lark)
  4. Check that feishu_base_url is set correctly for Lark (international) users:
    feishu_base_url: "https://open.larksuite.com"
    

Provider CLI not found

Alice looks for the CLI binary in $PATH by default. If it's not found:

  1. Specify an absolute path:
    llm_profiles:
      chat:
        command: "/usr/local/bin/opencode"
    
  2. Or extend $PATH in the bot's env:
    env:
      PATH: "/home/user/.local/bin:/usr/local/bin:/usr/bin:/bin"
    

LLM runs hang indefinitely

  • Check timeout_secs in the LLM profile (default: 48 hours)
  • Use /stop in Feishu to cancel a running session
  • Check logs for provider-specific errors:
    grep -E "timeout|cancelled|killed" ~/.alice/log/*.log
    
  • For Codex, check codex_idle_timeout_secs settings

Logs show nothing useful

Increase log level to debug:

log_level: "debug"

Restart Alice. Debug mode includes:

  • Provider and agent name per run
  • Thread/session IDs
  • Rendered input prompts
  • Observed tool activity
  • Final output or error

Warning: Debug logs may contain the full rendered prompt including SOUL.md content.

System Model

This page explains the fundamental concepts behind Alice: multi-bot architecture, scene routing, sessions, and startup modes. Understanding these helps you configure and troubleshoot effectively.

What Alice Is (And Isn't)

Alice is a connector, not a bot framework. It doesn't implement chat logic, NLU, or custom integrations directly. Instead, it:

  1. Receives messages from Feishu
  2. Decides which LLM backend to call and how
  3. Calls the LLM CLI as a subprocess
  4. Sends the response back to Feishu

The "intelligence" lives in the LLM backend (Codex, Claude, etc.). Alice handles the plumbing: routing, queuing, session management, attachment I/O, and progress display.

Multi-Bot Model

One alice process can host multiple independent bots from a single config.yaml:

bots:
  engineering_bot:
    feishu_app_id: "cli_11111"
    # ...
  support_bot:
    feishu_app_id: "cli_22222"
    # ...

Each bot has its own:

  • Runtime directory (~/.alice/bots/<bot_id>/)
  • Workspace, prompts, and SOUL.md
  • Feishu credentials (App ID, App Secret)
  • LLM profiles — can use different providers and models
  • Scene configuration — independent chat/work routing
  • Runtime API port — auto-incremented (7331, 7332, ...)

Bots share:

  • The same process and worker pool
  • CODEX_HOME by default (can be overridden per bot)

Bot Directory Layout

~/.alice/bots/<bot_id>/
├── workspace/                        # Agent workspace
├── prompts/                          # Prompt template overrides
├── SOUL.md                           # Bot persona
└── run/connector/
    ├── automation.db                 # Persistent task store (bbolt)
    ├── campaigns.db                  # Campaign index (bbolt)
    ├── session_state.json            # Session aliases, usage counters
    ├── runtime_state.json            # Mutable runtime state
    └── resources/scopes/             # Downloaded attachments, artifacts

Scene Routing

Every incoming group message goes through a decision tree:

Incoming Message
  │
  ├─ Is it a built-in command? (/help, /status, /stop, /clear, /session)
  │   └─ Yes → Handle directly, no LLM involved
  │
  ├─ Does it match the work trigger? (@Bot #work ...)
  │   └─ Yes → Route to work scene
  │
  ├─ Is the chat scene enabled?
  │   └─ Yes → Route to chat scene
  │
  └─ Both scenes disabled?
      └─ Fall back to legacy trigger_mode (at / prefix / all)

Scenes vs Legacy Triggers

The legacy trigger_mode (at/prefix/all) is a simple gate: it decides whether to accept a message or ignore it. If accepted, there's one LLM pipeline.

Scenes go further: they assign different LLM profiles, session scopes, thread behaviors, and SOUL.md treatment per scene. New deployments should always use scenes.

Session Management

A session is the LLM's context window. Alice decides when to start a new session vs. when to continue an existing one.

Session Keys

Alice identifies sessions with canonical keys:

  • {receive_id_type}:{receive_id} (e.g. chat_id:oc_123)
  • {key}|scene:{scene} for scene-scoped sessions (e.g. chat_id:oc_123|scene:chat)

Session Scope

session_scope controls when sessions are created and reused:

| Scope | Behavior |
|---|---|
| per_chat | One session for the entire group/DM |
| per_thread | One session per Feishu thread |
| per_user | (DM only) One session per user |
| per_message | (DM only) New session for every message |

Session Persistence

Alice persists session metadata to session_state.json:

  • Provider thread ID (for resuming with the backend)
  • Session aliases
  • Usage counters
  • Last-message timestamp
  • Work-thread ID aliases

When a job comes in, Alice checks if an active session exists. If yes:

  • Provider-native steer: Some backends (Codex, Claude) allow injecting new input into a running session. Alice tries this first.
  • Queuing: If native steer fails and an LLM run is active, the new job is queued. A newer job supersedes an older queued job.
  • New run: If no run is active, a new RunRequest is dispatched to the LLM backend.

Cancellation and Interruption

  • /stop immediately cancels the active run via context cancellation
  • A newer user message supersedes queued jobs but does not interrupt the active run
  • Automation tasks can also be interrupted by user messages that acquire the session gate

Startup Modes

Alice supports two explicit startup modes:

--feishu-websocket

Full mode. Connects to Feishu WebSocket, processes live messages, runs automation, and exposes the runtime API.

--runtime-only

Local-only mode. The runtime API and automation engine run, but the Feishu connector does not start. Use for:

  • Debugging and development
  • Running only the automation scheduler
  • Headless environments (use alice-headless --runtime-only)

alice-headless is a dedicated binary that cannot start the Feishu connector. Attempting alice-headless --feishu-websocket will error.

Config Hot Reload

  • Single-bot mode: Limited partial hot reload is supported. Some config keys are watched for changes.
  • Multi-bot mode: Hot reload is intentionally disabled. Always restart Alice after config changes.

Runtime Home

| Build Channel | Default Home |
|---|---|
| Release (npm / installer) | ~/.alice |
| Dev (source build) | ~/.alice-dev |

Override with --alice-home flag or ALICE_HOME environment variable.

Message Pipeline

This page walks through the full lifecycle of an incoming Feishu message — from WebSocket delivery to the final reply. Understanding this pipeline helps debug routing issues and tune behavior.

Overview

Feishu WebSocket
  └─ App (job queue)
      └─ Processor (execution)
          └─ LLM Backend (subprocess)
              └─ Reply Dispatcher (back to Feishu)

1. WebSocket Reception

Alice establishes a long connection to Feishu's WebSocket endpoint. When a user sends a message the bot can see, Feishu delivers an im.message.receive_v1 event over this connection.

The event contains:

  • Sender identity (open_id, user_id, name)
  • Message content (text, attachments, mentions)
  • Chat context (chat_id, chat_type, thread_id if in a thread)
  • Bot identity (which bot received this)

2. Job Creation

The raw event is normalized into a Job struct. This step:

  • Extracts mentioned users
  • Resolves the receive ID type (chat_id, open_id, etc.)
  • Sets the bot's configured LLM profile, scene, and reply preferences
  • Generates a session key and resource scope key
  • Attaches a monotonic version number

3. Routing

routeIncomingJob decides what to do with the job:

Built-in Commands

If the message starts with /help, /status, /clear, /stop, /session, /cd, /ls, or /pwd, it's handled by the connector directly — no LLM call. See Use Built-in Commands.

Work Scene

If group_scenes.work.enabled and the message contains the trigger_tag (e.g., #work) after the @bot mention, the job is routed to the work scene. Work jobs use the work-scoped session key and LLM profile.

Chat Scene

If group_scenes.chat.enabled, all other messages are routed to chat. Chat jobs use the chat-scoped session key and LLM profile.

Legacy Fallback

If both scenes are disabled, Alice falls back to matching trigger_mode and trigger_prefix.

4. Queue and Serialization

Each session has a mutex that serializes execution:

  • Active run exists → Try provider-native steer first (inject new input into running session)
  • Native steer unavailable → Queue the new job. A newer job supersedes an older queued job.
  • No active run → Accept the job and dispatch to the Processor.

The runtime store (runtime_store.go) keeps in-memory coordination state:

  • Latest version per session
  • Pending queued job
  • Active run cancellation handle
  • Per-session mutex

5. Pre-LLM Processing

Before the LLM call, the Processor:

  1. Loads and parses SOUL.md (chat only) — separates YAML frontmatter from Markdown body
  2. Downloads inbound attachments into the scoped resource directory
  3. Derives runtime environment variables for the conversation
  4. Prepares the rendered prompt text

Session State Check

Alice checks session_state.json:

  • If a provider thread ID exists, the backend call resumes that thread
  • If the session was recently active, context from the last turn is available

6. LLM Execution

The Processor builds a RunRequest and dispatches it to the LLM backend:

RunRequest {
    ThreadID       → from session state (empty = new session)
    UserText       → rendered prompt
    Provider       → from llm_profile
    Model          → from llm_profile
    ReasoningEffort → from llm_profile
    WorkspaceDir   → per-bot workspace
    ExecPolicy     → sandbox + approval settings
    Env            → per-bot + process env
    OnProgress     → stream progress updates to Feishu
}

The backend spawns the provider CLI as a subprocess and streams output. Progress updates are sent to Feishu as status card patches.

7. Reply Dispatch

When the LLM finishes, Alice processes the reply:

Content Processing

  • If the reply matches no_reply_token, stay silent
  • If output_contract is configured in SOUL.md, strip hidden tags
  • Apply formatting for Feishu (rich text, @mentions)

Threading

  • Work scene with create_feishu_thread: true: Reply is posted in a Feishu thread
  • Chat scene with create_feishu_thread: false: Reply is posted as a top-level message
  • Thread replies: When Feishu supports it, reply is threaded. Falls back to direct reply otherwise.

Immediate Feedback

Before the LLM starts, Alice sends immediate acknowledgement:

  • immediate_feedback_mode: "reaction" → Adds a reaction emoji to the source message
  • immediate_feedback_mode: "reply" → Sends explicit 收到! reply

8. Post-Run

  • Session state is persisted to session_state.json (thread ID, usage counters, timestamp)
  • Downloaded attachments remain in the scoped resource directory
  • Runtime state is flushed periodically

Key Invariants

  1. At most one LLM run per session at a time — enforced by per-session mutex
  2. Newer messages supersede queued jobs, not active runs; /stop is the only way to interrupt a running LLM
  3. Session state is disk-backed — survives process restart
  4. Attachments are scoped — each conversation has its own resource directory

Prompt Assembly

How Alice constructs the prompt text sent to LLM backends.

Template System

Alice uses Go text/template with Sprig functions for prompt templating. Templates are .md.tmpl files.

Template Loading

1. Check disk: <prompt_dir>/<template>.tmpl
2. If not found, use embedded (compiled into binary)

Disk files override embedded templates, allowing per-bot customization.
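
For example, to override the /help response for one bot (a sketch assuming the default release home, a bot ID of my_bot, and your own my-help.md.tmpl file; template names are listed below):

# place the override where the loader looks first
mkdir -p ~/.alice/bots/my_bot/prompts/connector
cp my-help.md.tmpl ~/.alice/bots/my_bot/prompts/connector/help.md.tmpl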

Template Files

All templates live under prompts/:

| Template | Purpose |
|---|---|
| connector/bot_soul.md.tmpl | Injects SOUL.md body into the prompt |
| connector/current_user_input.md.tmpl | Formats the current user message |
| connector/reply_context.md.tmpl | Adds context from a replied-to message |
| connector/runtime_skill_hint.md.tmpl | Describes available bundled skills |
| connector/synthetic_mention.md.tmpl | Formats synthetic @mentions |
| connector/help.md.tmpl | The /help command response |
| llm/initial_prompt.md.tmpl | First-turn system instructions |
| goals/goal_start.tmpl | Goal initialization prompt |
| goals/goal_continue.tmpl | Goal continuation prompt |
| goals/goal_timeout.tmpl | Goal timeout notification |

Template Variables

Templates have access to the full Job context and session metadata. Key variables include:

| Variable | Description |
|---|---|
| .UserText | The user's message text |
| .BotName | Display name of the responding bot |
| .SenderName | Name of the user who sent the message |
| .MentionedUsers | List of users @mentioned in the message |
| .ReplyContext | Text of the message being replied to |
| .Attachments | Inbound attachment metadata |
| .Scene | "chat" or "work" |
| .SessionKey | Canonical session identifier |
| .SoulBody | SOUL.md body content (chat only) |
| .SkillDescriptions | Descriptions of enabled bundled skills |

First Turn vs Resume

The critical difference in prompt assembly:

First Turn (No Existing Thread)

  • Full initial prompt is assembled
  • Includes system instructions (initial_prompt.md.tmpl)
  • In chat scenes: SOUL.md body is prepended
  • Identity hints (Name说:, @mention rules) unless disable_identity_hints: true

Resume (Existing Provider Thread)

  • Only the current user's message text is sent
  • Alice relies on the provider-side thread/session to hold prior context
  • No system prompt, no SOUL.md, no identity hints
  • This is more efficient — the backend model already has full conversation history

SOUL.md Injection

SOUL.md serves two purposes controlled by the scene:

Chat Scene

  1. Alice reads the file, parses the YAML frontmatter
  2. Frontmatter keys (image_refs, output_contract) are consumed by Alice for reply control
  3. The remaining Markdown body is prepended to the first-turn prompt via bot_soul.md.tmpl

Work Scene

SOUL.md is intentionally skipped entirely. Work mode is for task execution — persona injection would interfere with tool use and code generation.

Identity Hints

When disable_identity_hints: false (default), Alice formats messages with identity context ("<Name>说:", i.e. "<Name> says:"):

张三说:fix the login timeout

When disable_identity_hints: true, the raw message text is passed through as-is:

fix the login timeout

Prompt Prefix

Each LLM profile can have a prompt_prefix:

llm_profiles:
  work:
    prompt_prefix: "You are a senior Go engineer. Be concise, use idiomatic patterns."

This text is prepended to every prompt for that profile, including resumed sessions.

Prompts and Debugging

With log_level: debug, Alice logs the fully rendered prompt sent to each backend. Debug traces include:

  • Provider name
  • Model and profile
  • Thread/session ID
  • The complete rendered input text
  • Observed tool activity and final output

Warning: Rendered prompts may contain SOUL.md content and conversation history. Avoid sharing debug logs publicly.

Automation Subsystem

Alice's automation engine schedules and executes recurring tasks, workflows, and system maintenance.

Architecture

The automation subsystem (internal/automation/) uses a tick-based execution model with persistent storage.

Automation Engine
  ├─ Tick Scheduler (periodic loop)
  │   ├─ Claim due tasks
  │   ├─ Execute tasks (send_text / run_llm / run_workflow)
  │   └─ Handle completion / failures
  ├─ System Task Scheduler
  │   ├─ Session state flush
  │   └─ Campaign reconcile
  ├─ Watchdog
  │   └─ Alert on overdue or stuck tasks
  └─ Store (bbolt)
      └─ Task persistence

Task Model

Scope

A task's scope defines where it executes:

| Scope | Description |
|---|---|
| user | Scoped to a specific user (DM context) |
| chat | Scoped to a specific group chat |

Actions

| Action | Description |
|---|---|
| send_text | Send a predetermined text message to the scope |
| run_llm | Run an LLM call with a specified prompt in the scope |
| run_workflow | Run a multi-step workflow combining LLM calls and actions |

Scheduling

Tasks can be scheduled in two ways:

  • Cron expressions: "0 9 * * *" — runs at 9 AM daily
  • One-shot timestamps: ISO 8601 — runs once at the specified time

Task Lifecycle

Created → Active → Claimed → Executing → Completed
                                  ↓
                              Failed → Active (retry) / Cancelled

  • Due tasks are claimed on a periodic tick (one claim per tick)
  • Claimed tasks are executed in the scope's conversation context
  • Completed tasks with cron expressions are re-scheduled for the next occurrence
  • Failed tasks may be retried or cancelled
  • Cancelled tasks are deleted or marked inactive

Execution Model

When a task executes:

  1. The engine acquires the session gate for the task's scope
  2. The task inherits the same conversation context as interactive runs:
    • Same workspace directory
    • Same LLM profile and permissions
    • Same environment variables
  3. For run_llm and run_workflow, the task's prompt is sent to the LLM backend
  4. Replies are dispatched to the task's scope (chat or user DM)

User messages can interrupt automation tasks that have acquired the session gate.

System Tasks

Alice registers built-in system tasks during bootstrap:

| Task | Interval | Purpose |
|---|---|---|
| Session state flush | Periodic | Persist in-memory session state to session_state.json |
| Campaign reconcile | Periodic | Sync campaign repository state |

Watchdog

The watchdog monitors automation tasks for anomalies:

  • Overdue tasks: Tasks past their scheduled time that haven't been claimed
  • Stuck tasks: Tasks that have been executing for too long

When the watchdog detects an issue, it can:

  • Log a warning
  • Send an alert message to a configured chat
  • Force-cancel the stuck task

Storage

Tasks are persisted in a local bbolt database:

~/.alice/bots/<bot_id>/run/connector/automation.db

This survives process restarts. The store supports:

  • CRUD operations on tasks
  • Querying by scope, status, and due time
  • Atomic claim-and-update to prevent duplicate execution

Managing Tasks

Via Runtime API

alice runtime automation create '{
  "scope_type": "chat",
  "scope_id": "oc_xxxxxxxxxxxxx",
  "action": "send_text",
  "text": "Daily standup reminder!",
  "cron": "0 10 * * 1-5"
}'

Via Bundled Skills

The alice-scheduler skill lets users create and manage tasks directly from Feishu conversations.

See the Runtime API Reference for the complete task management endpoints.

Runtime API Design

The rationale and design decisions behind Alice's local HTTP API.

Why a Local API?

Alice runs bundled skills as subprocess scripts. These scripts need to interact with the running connector — send images, manage tasks, query state. Direct Go interop isn't possible from shell scripts, so Alice exposes a local HTTP API.

Design Principles

1. Local-Only by Default

The API binds to 127.0.0.1 (configurable via runtime_http_addr). It is not designed to be exposed to the network. If you need remote access, use a reverse proxy or SSH tunnel — but this is not the intended use case.
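
If you do need temporary remote access, an SSH tunnel keeps the API off the network (a sketch; alice-host is a placeholder):

# forward the local runtime API port to your workstation
ssh -N -L 7331:127.0.0.1:7331 user@alice-host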

2. Bearer Token Auth

Every request (except /healthz) requires:

Authorization: Bearer <token>

The token is auto-generated at startup. Environment variables (ALICE_RUNTIME_API_TOKEN) inject it into skill scripts automatically.

3. Defense in Depth

Multiple layers of protection:

  • Auth rate limiting: 120 requests per minute per token
  • Body size limit: 1 MB per request
  • File validation: Uploaded files must be readable, non-empty regular files
  • Feishu size limits: Uploads are still subject to Feishu's file size and type restrictions

4. No Text Send Endpoint

The runtime API does not have a plain text send endpoint. Why?

Text replies normally go through the main reply pipeline — they need session context, proper threading, and reply metadata. The runtime API is designed for side-effects (sending images, files, managing tasks), not for bypassing the reply pipeline.

If a skill needs to send a text message as an automation task, it creates a send_text task via the automation API. The automation engine handles the rest.

API Surface

Messages

  • POST /api/v1/messages/image — upload and send an image
  • POST /api/v1/messages/file — upload and send a file

Both accept multipart/form-data with an optional caption field.
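
A hedged sketch of an image upload from a skill script: caption is the documented optional field, while the file form-field name is an assumption, not confirmed by this page:

# NOTE: the "file" part name below is an assumption; adjust to the actual API contract
curl -sS -X POST \
  -H "Authorization: Bearer ${ALICE_RUNTIME_API_TOKEN}" \
  -F "file=@./chart.png" \
  -F "caption=Daily report" \
  "${ALICE_RUNTIME_API_BASE_URL}/api/v1/messages/image"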

Automation

  • Full CRUD for automation tasks: create, list, get, update, delete

Goal

  • Goal lifecycle management: get, create, pause, resume, complete, delete

Health

  • GET /healthz — no auth, responds 200 if the server is running

Environment Variable Injection

Skills don't need to know the API address or token. Alice injects them:

ALICE_RUNTIME_API_BASE_URL="http://127.0.0.1:7331"
ALICE_RUNTIME_API_TOKEN="<auto-generated>"
ALICE_RUNTIME_BIN="/usr/local/bin/alice"

Additional context variables:

ALICE_RECEIVE_ID_TYPE="chat_id"
ALICE_RECEIVE_ID="oc_xxxxxxxxxxxxx"
ALICE_SOURCE_MESSAGE_ID="om_xxxxxxxxxxxxx"
ALICE_ACTOR_USER_ID="ou_xxxxxxxxxxxxx"
ALICE_ACTOR_OPEN_ID="ou_xxxxxxxxxxxxx"
ALICE_CHAT_TYPE="group"
ALICE_SESSION_KEY="chat_id:oc_xxx|scene:chat"

Multi-Bot API Ports

In multi-bot mode, each bot gets its own Runtime API server on an auto-incremented port:

| Bot Index | Port |
|---|---|
| First | 7331 |
| Second | 7332 |
| Third | 7333 |
| ... | ... |

Skills target the correct bot by inheriting environment variables from the conversation scope they're triggered in.

Graceful Shutdown

When Alice receives a shutdown signal, the Runtime API server:

  1. Stops accepting new connections
  2. Waits up to runtime_api_shutdown_timeout_secs (default: 5s) for in-flight requests to complete
  3. Force-closes remaining connections after timeout

Extending the API

New endpoints can be added in internal/runtimeapi/. See the Contributing guide and Architecture Overview for developer guidance.

Configuration Manual

Complete reference for every configuration key in config.yaml. Structure follows the file layout.


Top-Level Keys

bots (required)

Map of bot IDs to bot configurations. Each key under bots is a bot ID used as the runtime identifier.

bots:
  engineering_bot:
    # bot config...
  support_bot:
    # bot config...

log_level

Type: string
Default: "info"
Values: "debug", "info", "warn", "error"

Structured log level for the entire process.

log_file

Type: string
Default: "" (auto: <ALICE_HOME>/log/YYYY-MM-DD.log)

Log file path with daily rotation. Empty uses the default.

log_max_size_mb

Type: int
Default: 20

Maximum log file size in megabytes before rotation.

log_max_backups

Type: int
Default: 5

Maximum number of rotated log files to retain.

log_max_age_days

Type: int
Default: 7

Maximum days to keep rotated log files.

log_compress

Type: bool
Default: false

Whether to gzip rotated log files.


bots.<id>

Each bot is identified by its key and configured with the following fields.

name

Type: string
Required: No

Display name used in prompts and status cards. Defaults to the bot ID.

feishu_app_id (required)

Type: string
Required: Yes

Feishu Open Platform App ID (cli_...).

feishu_app_secret (required)

Type: string
Required: Yes

Feishu Open Platform App Secret. Keep this value secure.

feishu_base_url

Type: string
Default: "https://open.feishu.cn"

Feishu API base URL. Use "https://open.larksuite.com" for Lark (international edition).


Runtime Directories

alice_home

Type: string
Default: "<ALICE_HOME>/bots/<bot_id>"

Bot-specific runtime root directory. All per-bot state lives under this path.

workspace_dir

Type: string
Default: "<alice_home>/workspace"

Agent workspace directory. This is the working directory for LLM subprocesses.

prompt_dir

Type: string
Default: "<alice_home>/prompts"

Directory for bot-specific prompt template overrides. Files here override embedded templates.

codex_home

Type: string
Default: "$CODEX_HOME" or "~/.codex"

Codex configuration and authentication directory. Shared across bots by default unless overridden here.

soul_path

Type: string
Default: "SOUL.md" (relative to alice_home)

Path to the SOUL.md persona document. Relative paths resolve against alice_home. If the file doesn't exist at startup, Alice writes the embedded template.


Message Trigger (Legacy)

trigger_mode

Type: string
Default: "at"
Values: "at", "prefix", "all"

Legacy trigger mode. Only used when both group_scenes.chat.enabled and group_scenes.work.enabled are false.

Value    | Behavior
"at"     | Only @bot messages accepted
"prefix" | Only messages starting with trigger_prefix
"all"    | Every message accepted

trigger_prefix

Type: string
Default: ""

Prefix string for trigger_mode: "prefix".


llm_profiles

Map of profile names to LLM profile configurations. Each profile selects a provider, model, and settings.

Profile Fields

provider (required)
Type: string
Values: "opencode", "codex", "claude", "gemini", "kimi"

LLM backend provider.

command
Type: string
Default: Same as provider (e.g., "opencode")

Path or name of the CLI binary. Use an absolute path for binaries outside $PATH.

timeout_secs
Type: int
Default: 172800 (48 hours)

Per-run timeout in seconds.

model (required)
Type: string

Model identifier. Examples: "deepseek/deepseek-v4-pro", "gpt-5.4-mini", "claude-sonnet-4-6".

variant (OpenCode only)
Type: string
Values: "max", "high", "minimal"

Model variant for DeepSeek models via OpenCode.

profile (Codex only)
Type: string

Named sub-profile from Codex CLI configuration.

reasoning_effort (Codex only)
Type: string
Values: "low", "medium", "high", "xhigh"

Thinking intensity level.

personality (Codex only)
Type: string

Named personality preset from Codex CLI config.

prompt_prefix
Type: string
Default: ""

Text prepended to every prompt before sending to the model.

permissions
Type: object

Sandbox and approval settings.

permissions.sandbox
Type: string
Default: "workspace-write"
Values: "read-only", "workspace-write", "danger-full-access"

Filesystem access level for the LLM agent.

permissions.ask_for_approval
Type: string
Default: "never"
Values: "untrusted", "on-request", "never"

When the agent should ask for approval before executing tool calls.

permissions.add_dirs
Type: string[]
Default: []

Extra directories accessible to the agent beyond the workspace.

profile_overrides
Type: map[string]ProfileRunnerConfig
Default: {}

Advanced: per-profile runner overrides. Keys are profile names. Each override can set:

  • command — binary path override
  • timeout — timeout override (seconds)
  • provider_profile — provider-specific profile name
  • exec_policy — per-override sandbox and approval settings

env

Type: map[string]string
Default: {}

Environment variables passed to all LLM subprocesses. Useful for PATH, proxy settings (HTTPS_PROXY, ALL_PROXY), and API keys.
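
Putting the profile fields together, a profile entry might look like the following sketch. It sits inside a bot's configuration; the profile name, model, and values are illustrative, not defaults:

llm_profiles:
  work_codex:
    provider: "codex"
    model: "gpt-5.4-mini"
    reasoning_effort: "high"
    timeout_secs: 7200
    permissions:
      sandbox: "workspace-write"
      ask_for_approval: "never"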


Reply Messages

failure_message

Type: string
Default: "暂时不可用,请稍后重试。" ("Temporarily unavailable, please try again later.")

Message shown when the LLM backend fails.

thinking_message

Type: string
Default: "正在思考中..." ("Thinking...")

Message shown in the progress card while the LLM is processing.


Immediate Feedback

immediate_feedback_mode

Type: string
Default: "reaction"
Values: "reaction", "reply"

How Alice acknowledges a received message before the LLM responds.

immediate_feedback_reaction

Type: string
Default: "OK"

Feishu emoji name for the reaction feedback (e.g., "OK", "WINK", "THUMBSUP").


group_scenes

group_scenes.chat

Type: object

Chat scene configuration for group / topic-group chats.

enabled
Type: bool
Default: true

session_scope
Type: string
Default: "per_chat"
Values: "per_chat", "per_thread"

llm_profile
Type: string

Name of the LLM profile under llm_profiles to use.

no_reply_token
Type: string
Default: ""

If the model returns this exact string, Alice stays silent.

create_feishu_thread
Type: bool
Default: false

Whether to create a Feishu thread for replies.

group_scenes.work

Type: object

Work scene configuration for group / topic-group chats.

enabled
Type: bool
Default: true

trigger_tag
Type: string
Default: "#work"

Tag required in the message (after @bot mention) to trigger work mode.

session_scope
Type: string
Default: "per_thread"
Values: "per_thread", "per_chat"

llm_profile
Type: string

create_feishu_thread
Type: bool
Default: true

no_reply_token
Type: string
Default: ""

private_scenes

Same structure as group_scenes. Both chat and work sub-sections are disabled by default.

private_scenes.chat

Additional session scope: "per_user" — all DM messages from the same user share one session.

private_scenes.work

Additional session scope: "per_message" — each DM with #work creates a fresh session.


Runtime HTTP API

runtime_http_addr

Type: string
Default: "127.0.0.1:7331"

Listen address for the runtime HTTP API. Multi-bot setups auto-increment the port (7332, 7333, ...).

runtime_http_token

Type: string
Default: auto-generated

Bearer token for API authentication. Auto-generated if empty. Set explicitly for cross-process calls.


permissions

runtime_message

Type: bool
Default: true

Allow bundled skills to send messages via the runtime API.

runtime_automation

Type: bool
Default: true

Allow bundled skills to manage automation tasks via the runtime API.

allowed_skills

Type: string[]
Default: ["alice-message", "alice-scheduler"]

Bundled skills enabled for this bot. Built-in skills: alice-message, alice-scheduler, alice-goal.


Worker Pool

queue_capacity

Type: int
Default: 256

Maximum pending jobs. Beyond this, new messages are dropped.

worker_concurrency

Type: int
Default: 3

Number of concurrent workers processing jobs.


Timeouts

All values in seconds.

automation_task_timeout_secs

Type: int
Default: 6000

Outer timeout for scheduled automation and workflow runs.

auth_status_timeout_secs

Type: int
Default: 15

Timeout for provider auth status checks on startup.

runtime_api_shutdown_timeout_secs

Type: int
Default: 5

Grace period when shutting down the runtime HTTP API server.

local_runtime_store_open_timeout_secs

Type: int
Default: 10

Timeout for opening the local BoltDB runtime store on startup.

codex_idle_timeout_secs

Type: int
Default: 900

Codex idle timeout for default/medium reasoning effort.

codex_high_idle_timeout_secs

Type: int
Default: 1800

Codex idle timeout for high reasoning effort.

codex_xhigh_idle_timeout_secs

Type: int
Default: 3600

Codex idle timeout for xhigh reasoning effort.


Display Options

show_shell_commands

Type: bool
Default: true

Show recently executed shell commands in the heartbeat status card.

disable_identity_hints

Type: bool
Default: false

When true, messages are sent to the LLM as raw text without identity context (Name说:, @mention rules). When false (default), identity hints are included.

Runtime HTTP API

Alice exposes a local authenticated HTTP API on 127.0.0.1. Bundled skills, automation scripts, and thin runtime tools use this API.

Authentication

All endpoints (except /healthz) require a Bearer token:

Authorization: Bearer <token>

The token comes from bots.<id>.runtime_http_token in the config, or is auto-generated if that key is empty.

Base URL

Default: http://127.0.0.1:7331. Multi-bot setups auto-increment: 7332, 7333, etc.

Limits

  • Request body: 1 MB maximum
  • Auth rate limit: 120 requests per minute
  • List endpoints: 200 items maximum per request

Health

GET /healthz

No authentication required.

Response 200 OK:

{"status": "ok"}

Messages

POST /api/v1/messages/image

Send an image to the current conversation.

Request multipart/form-data:

Field   | Type   | Required | Description
image   | file   | Yes      | Image file to upload
caption | string | No       | Optional caption text

Response 200 OK:

{"message_id": "om_xxxxxxxxxxxxx"}

Errors:

Code | Description
400  | Invalid or missing image file
403  | permissions.runtime_message is disabled
413  | Request body exceeds 1 MB

POST /api/v1/messages/file

Send a file to the current conversation.

Request multipart/form-data:

Field    | Type   | Required | Description
file     | file   | Yes      | File to upload
filename | string | No       | Display filename (default: original filename)
caption  | string | No       | Optional caption text

Response 200 OK:

{"message_id": "om_xxxxxxxxxxxxx"}

Errors:

Code | Description
400  | Invalid or missing file
403  | permissions.runtime_message is disabled
413  | Request body exceeds 1 MB
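
A curl sketch of a file upload, assuming ALICE_RUNTIME_API_TOKEN holds the bearer token and ./build.log is a placeholder path:

curl -sS -X POST "http://127.0.0.1:7331/api/v1/messages/file" \
  -H "Authorization: Bearer $ALICE_RUNTIME_API_TOKEN" \
  -F "file=@./build.log" \
  -F "filename=build.log" \
  -F "caption=Latest CI log"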

Automation Tasks

GET /api/v1/automation/tasks

List automation tasks.

Query Parameters:

Param  | Type   | Default | Description
limit  | int    | 50      | Items per page (max 200)
offset | int    | 0       | Pagination offset
status | string |         | Filter by status: active, completed, cancelled

Response 200 OK:

[
  {
    "id": "task_abc123",
    "scope_type": "chat",
    "scope_id": "oc_xxxxxxxxxxxxx",
    "action": "send_text",
    "status": "active",
    "cron": "0 9 * * *",
    "created_at": "2025-01-15T09:00:00Z",
    "updated_at": "2025-01-15T09:00:00Z"
  }
]

POST /api/v1/automation/tasks

Create an automation task.

Request application/json:

Field      | Type   | Required      | Description
scope_type | string | Yes           | "chat" or "user"
scope_id   | string | Yes           | Target ID (chat_id or user_id)
action     | string | Yes           | "send_text", "run_llm", or "run_workflow"
text       | string | For send_text | Message text
prompt     | string | For run_llm   | LLM prompt
cron       | string | No            | Cron expression for recurring tasks
run_at     | string | No            | ISO 8601 timestamp for one-shot tasks

Response 201 Created:

{
  "id": "task_abc123",
  "scope_type": "chat",
  "scope_id": "oc_xxxxxxxxxxxxx",
  "action": "send_text",
  "status": "active",
  "cron": "0 9 * * *",
  "created_at": "2025-01-15T09:00:00Z"
}

Errors:

Code | Description
400  | Invalid request body or missing required fields
403  | permissions.runtime_automation is disabled
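
For example, a recurring run_llm task could be created like this (the chat ID, prompt, and cron schedule are placeholders):

curl -sS -X POST "http://127.0.0.1:7331/api/v1/automation/tasks" \
  -H "Authorization: Bearer $ALICE_RUNTIME_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "scope_type": "chat",
        "scope_id": "oc_xxxxxxxxxxxxx",
        "action": "run_llm",
        "prompt": "Summarize open issues in this repo.",
        "cron": "0 9 * * 1-5"
      }'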

GET /api/v1/automation/tasks/:taskID

Get a single automation task.

Response 200 OK: Same schema as list item.

Errors:

Code | Description
404  | Task not found

PATCH /api/v1/automation/tasks/:taskID

Update an automation task. Send a JSON merge-patch with fields to update.

Request application/json:

{"status": "cancelled"}

Updatable fields: status, cron, run_at, text, prompt.

Response 200 OK: Updated task object.

Errors:

Code | Description
400  | Invalid update
403  | permissions.runtime_automation is disabled
404  | Task not found

DELETE /api/v1/automation/tasks/:taskID

Delete an automation task.

Response 204 No Content

Errors:

Code | Description
403  | permissions.runtime_automation is disabled
404  | Task not found

Goal

GET /api/v1/goal

Get the current active goal for the conversation scope.

Response 200 OK:

{
  "id": "goal_xyz",
  "description": "Review PR #42",
  "status": "in_progress",
  "created_at": "2025-01-15T10:00:00Z"
}

Response 204 No Content: No active goal.

POST /api/v1/goal

Create a new goal for the conversation scope.

Request application/json:

{"description": "Review PR #42"}

Response 201 Created: Created goal object.

POST /api/v1/goal/pause

Pause the active goal.

Response 200 OK.

POST /api/v1/goal/resume

Resume a paused goal.

Response 200 OK.

POST /api/v1/goal/complete

Mark the active goal as completed.

Response 200 OK.

DELETE /api/v1/goal

Delete the active goal.

Response 204 No Content.
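
A typical lifecycle, sketched with curl (token and port as in the examples above):

# Create a goal for the current conversation scope...
curl -sS -X POST "http://127.0.0.1:7331/api/v1/goal" \
  -H "Authorization: Bearer $ALICE_RUNTIME_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"description": "Review PR #42"}'

# ...and mark it completed when done.
curl -sS -X POST "http://127.0.0.1:7331/api/v1/goal/complete" \
  -H "Authorization: Bearer $ALICE_RUNTIME_API_TOKEN"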


Common Error Response Format

All errors follow this format:

{
  "error": "Human-readable error description"
}

HTTP status codes are used conventionally: 400 for client errors, 403 for permission denied, 404 for not found, 413 for payload too large, 429 for rate limited, 500 for internal errors.

CLI Commands

Alice provides several CLI subcommands for different operations.


Main Process

alice --feishu-websocket

Start the full Feishu connector runtime. Connects to Feishu WebSocket and processes live messages.

alice --feishu-websocket

alice --runtime-only

Start in runtime-only mode. The local HTTP API and automation engine run, but the Feishu WebSocket does not start.

alice --runtime-only

alice-headless --runtime-only

Headless runtime-only binary. Explicitly cannot start the Feishu connector.

alice-headless --runtime-only

alice-headless will error if invoked with --feishu-websocket.


Global Flags

Flag                | Description
--alice-home <path> | Override the default runtime home directory
--config <path>     | Path to config.yaml (default: <alice_home>/config.yaml)
--log-level <level> | Override log level (debug, info, warn, error)
--version           | Print version and exit

Environment variable ALICE_HOME also overrides the default home directory.
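
For example, to start the connector with an alternate home directory and verbose logging (the path is a placeholder):

alice --alice-home /srv/alice --log-level debug --feishu-websocket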


alice setup

Initialize the Alice runtime environment.

alice setup

What it does:

  1. Creates the directory structure under ~/.alice/
  2. Writes a starter config.yaml (based on config.example.yaml)
  3. Syncs bundled skills to ${ALICE_HOME}/skills/
  4. On Linux: registers a systemd user unit at ~/.config/systemd/user/alice.service
  5. Installs the OpenCode delegate plugin at ~/.config/opencode/plugins/alice-delegate.js

Run this once after installation.


alice delegate

Send a one-shot prompt to a configured LLM backend.

alice delegate --provider <name> --prompt "<text>"

Options

Flag               | Description
--provider <name>  | Backend: opencode, codex, claude, gemini, kimi
--prompt <text>    | Prompt text (required)
--model <name>     | Override the default model
--workspace <path> | Override the working directory

Examples

alice delegate --provider codex --prompt "Fix the null check in auth.go"
alice delegate --provider claude --prompt "Review this diff" < changes.patch

alice runtime message

Send messages via the runtime API.

alice runtime message image <path> [--caption <text>]
alice runtime message file <path> [--filename <name>] [--caption <text>]

Subcommand   | Description
image <path> | Upload and send an image
file <path>  | Upload and send a file

Flag              | Description
--caption <text>  | Optional caption text
--filename <name> | Override file display name (file only)
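
For example (the image path and caption are placeholders):

alice runtime message image ./chart.png --caption "Latest benchmark results"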

alice runtime automation

Manage automation tasks via the runtime API.

alice runtime automation list [--status <status>] [--limit <n>]
alice runtime automation create <payload>
alice runtime automation get <task-id>
alice runtime automation update <task-id> <payload>
alice runtime automation delete <task-id>

Subcommand         | Description
list               | List automation tasks
create <json>      | Create a task from JSON payload
get <id>           | Get a single task
update <id> <json> | Update a task with JSON merge-patch
delete <id>        | Delete a task

Flag (list) | Description
--status    | Filter by status: active, completed, cancelled
--limit     | Items per page

alice runtime goal

Manage the active goal for a conversation scope.

alice runtime goal get
alice runtime goal create <description>
alice runtime goal pause
alice runtime goal resume
alice runtime goal complete
alice runtime goal delete

Subcommand    | Description
get           | Get the current active goal
create <desc> | Create a new goal
pause         | Pause the active goal
resume        | Resume a paused goal
complete      | Mark the active goal as completed
delete        | Delete the active goal

alice skills

Manage bundled skills.

alice skills sync
alice skills list

Subcommand | Description
sync       | Sync embedded bundled skills to the local skills directory
list       | List installed bundled skills

alice skills sync is also run automatically at startup.


Exit Codes

Code | Meaning
0    | Success
1    | General error
2    | Configuration error
3    | Authentication error

Architecture Overview

This is the code-first architecture reference for Alice. Package names, runtime objects, and file paths match the live code under cmd/connector, internal/, prompts/, and skills/.

Reading Paths

Start with the section that matches your goal:

Goal                        | Start here
Understand the whole system | §1 Process Model → §2 Bootstrap Path → §5 Message Pipeline
Add a new LLM backend       | §2 Bootstrap Path → §7 Prompt Assembly → Adding a New LLM Backend
Modify message handling     | §5 Inbound Message Pipeline → §6 Session Keys → §8 Reply Dispatch
Add a Runtime API endpoint  | §9 Runtime API
Add or modify automation    | §10 Automation Subsystem
Understand configuration    | §2 Bootstrap Path → §11 Configuration Model

1. Process Model

Alice is a multi-bot runtime. One alice process can host multiple bots from one config.yaml.

At startup, the process:

  1. Loads config.yaml
  2. Expands bots.* into per-bot runtime configs
  3. Verifies CLI auth where needed
  4. Syncs embedded bundled skills into the local skill directories
  5. Builds one ConnectorRuntime per bot
  6. Runs all runtimes under one RuntimeManager

The main runtime object per bot:

ConnectorRuntime
  ├─ App
  ├─ Processor
  ├─ llm.MultiBackend
  ├─ LarkSender
  ├─ automation.Engine
  ├─ runtimeapi.Server
  ├─ automation.Store
  └─ campaign.Store

Startup mode is explicit:

  • --feishu-websocket: connect to Feishu and process live events
  • --runtime-only: run automation and the local runtime API without the Feishu WebSocket
  • alice-headless: always runtime-only; cannot start the Feishu connector

2. Bootstrap Path

The process entrypoint is cmd/connector.

Key bootstrap steps:

  • cmd/connector/root.go: CLI flags, startup mode selection, config creation, PID locking, logging, auth preflight, bundled-skill sync, and runtime manager startup.
  • internal/config: Pure multi-bot config model, path derivation, normalization, validation, and per-bot runtime expansion.
  • internal/bootstrap: Builds the per-bot runtime graph and wires cross-cutting features such as prompt loading, runtime API auth, campaign reconcile loops, and config hot reload.

BuildRuntimeManager expands Config into []Config via RuntimeConfigs(), then builds one ConnectorRuntime for each bot.

Current hot-reload behavior:

  • Single-bot mode: partial config hot reload is supported
  • Multi-bot mode: hot reload is intentionally disabled; restart the process after config changes

3. Runtime Layout And Persisted State

Each bot gets its own runtime root under:

${ALICE_HOME}/bots/<bot_id>/

Important per-bot paths:

  • workspace/ — Bot workspace
  • prompts/ — Optional prompt overrides for that bot
  • run/connector/automation.db — Persistent automation task store (bbolt)
  • run/connector/campaigns.db — Persistent lightweight campaign index (bbolt)
  • run/connector/session_state.json — Session aliases, provider thread ids, usage counters, work-thread metadata
  • run/connector/runtime_state.json — Mutable connector runtime state
  • run/connector/resources/scopes/<scope_type>/<scope_id>/ — Downloaded inbound attachments and uploadable local artifacts scoped to the current conversation

The source tree also embeds:

  • prompts/
  • skills/
  • config.example.yaml
  • prompts/SOUL.md.example

Disk files override embedded prompt files when present; embedded assets are the fallback.

4. Package Map

Core Packages

Package                  | Responsibility
cmd/connector            | CLI entrypoint, runtime subcommands, and skills sync
internal/bootstrap       | Runtime construction, path resolution, auth checks, skill materialization, campaign reconcile bridging, and config reload
internal/config          | Config schema, validation, defaults, path derivation, and multi-bot expansion
internal/connector       | Feishu ingress, message normalization, scene routing, queueing, session serialization, native steer fallback, /stop interruption, prompt assembly, reply dispatch, attachment download, session persistence, and built-in commands
internal/llm             | Provider-agnostic Backend interface plus provider adapters for codex, claude, gemini, kimi, and opencode
internal/prompting       | Template loader with disk-first / embedded-fallback behavior, sprig helpers, and compiled-template caching
internal/runtimeapi      | Local authenticated HTTP server and client used by bundled skills and runtime-facing shell scripts
internal/automation      | Task model, persistence, claiming, execution, system-task scheduling, and workflow dispatch
internal/statusview      | Aggregates usage and automation data for /status
internal/platform/feishu | Feishu sender implementation, attachment I/O, bot self-info lookup, message lookup, and user-name resolution helpers

Support Packages

Package             | Responsibility
internal/sessionctx | Session-context environment bridge for runtime API calls and bundled skills
internal/runtimecfg | Helpers for scene-derived profile selection and thread-reply preference
internal/sessionkey | Canonical session-key and visibility-key helpers
internal/messaging  | Narrow sender/uploader interfaces shared across connector and runtime API layers
internal/storeutil  | Shared bbolt helpers and string utilities
internal/logging    | Zerolog plus rotating file output configuration
internal/buildinfo  | Version reporting

5. Inbound Message Pipeline

internal/connector.App owns the live Feishu connection and the per-bot job queue.

High-level flow:

  1. Feishu delivers im.message.receive_v1 over WebSocket
  2. App normalizes the event into a Job
  3. routeIncomingJob decides whether the message should be ignored, treated as a built-in command, handled as chat, or handled as work
  4. If the same session has an active provider-native interactive run, Alice first tries to steer the new input into that run
  5. If native steer is unavailable, the job is queued and serialized by session; newer queued jobs supersede older queued jobs without interrupting the active LLM run
  6. /stop still interrupts the active run, and user messages can still interrupt automation tasks that acquired the session gate
  7. Processor executes the accepted job

Scene routing rules:

  • Group/topic-group chats can use group_scenes.chat and group_scenes.work
  • Work threads are identified by a trigger plus a stable work-scene session key
  • If both scenes are disabled, Alice falls back to legacy trigger_mode / trigger_prefix
  • Built-in commands such as /help, /status, /clear, and /stop bypass the LLM path

6. Session Keys, Aliases, And Serialization

Alice routes and resumes work through canonical session keys plus aliases.

Common formats:

  • {receive_id_type}:{receive_id}
  • {receive_id_type}:{receive_id}|scene:{scene}
  • {receive_id_type}:{receive_id}|scene:{scene}|thread:{thread_id}
  • {receive_id_type}:{receive_id}|scene:{scene}|message:{message_id}

Special cases:

  • Work-scene seed key: {receive_id_type}:{receive_id}|scene:work|seed:{source_message_id}
  • Chat reset alias: {chat_key}|reset:{message_id}

Persisted in session_state.json:

  • Provider thread id
  • Work-thread id alias
  • Session aliases
  • Usage counters
  • Last-message timestamp
  • Scope key for status aggregation

internal/connector/runtime_store.go keeps the live in-memory coordination state:

  • Latest version per session
  • Pending job per session
  • Active run cancellation handle
  • Per-session mutex for serialization
  • Superseded-version tracking

7. Prompt Assembly And LLM Execution

internal/connector.Processor is the execution core for one accepted job.

Before an LLM call it:

  • Loads and parses SOUL.md if needed
  • Downloads inbound attachments into the scoped resource directory
  • Derives runtime env vars for the current conversation
  • Prepares prompt text

Current prompt assets:

  • prompts/llm/initial_prompt.md.tmpl
  • prompts/connector/bot_soul.md.tmpl
  • prompts/connector/current_user_input.md.tmpl
  • prompts/connector/reply_context.md.tmpl
  • prompts/connector/runtime_skill_hint.md.tmpl
  • prompts/connector/synthetic_mention.md.tmpl

Important prompt behavior:

  • First-turn or non-resumed runs render the current-user-input template and may append reply context, bot soul, and runtime-skill hints
  • Resumed provider threads send only the current user input; Alice relies on the provider-side thread/session to hold prior context
  • chat runs can prepend SOUL.md; work runs intentionally skip bot-soul injection

The LLM layer is selected like this:

  1. Scene selects an outer llm_profiles.<name>
  2. The outer profile chooses provider / model / profile / reasoning / personality / prompt prefix
  3. llm.MultiBackend dispatches to the correct provider adapter

Currently supported providers: codex, claude, gemini, kimi, opencode

8. Reply Dispatch

Alice distinguishes between:

  • Immediate acknowledgement
  • Streamed progress messages from the backend
  • Final replies
  • File/image follow-ups

Current behavior:

  • Work-scene messages usually receive an immediate reaction or a "收到" ("Received") reply
  • Backend progress messages are sent as threaded replies when possible
  • Final replies are posted via the reply dispatcher
  • Thread replies fall back to direct replies when Feishu does not support threaded replies for that target

internal/connector/card.go, internal/connector/outgoing_mentions.go, internal/connector/outgoing_plaintext.go, and related files own:

  • Message send / reply / patch-card operations
  • Reactions
  • Upload of images and files
  • Attachment download
  • Scoped resource-root resolution

9. Runtime API And Bundled Skills

Alice exposes a local authenticated runtime API intended for bundled skills and thin runtime scripts.

Current HTTP surface:

  • POST /api/v1/messages/image
  • POST /api/v1/messages/file
  • GET|POST|PATCH|DELETE /api/v1/automation/tasks
  • GET|POST /api/v1/goal + pause/resume/complete/delete

There is no standalone text-send endpoint. Plain text is normally returned through the main reply pipeline.

Current safeguards:

  • Bearer token auth
  • Request-body size limit (1 MB)
  • In-process auth rate limiting (120 req/min)
  • Local uploads require readable, non-empty regular files and remain subject to Feishu size limits

Runtime-facing shell entrypoints:

  • alice runtime message ...
  • alice runtime automation ...
  • alice runtime goal ...

Bundled skills shipped in the current tree:

  • skills/alice-message
  • skills/alice-scheduler
  • skills/alice-goal

Runtime context is injected through environment variables (see Runtime API Design).

10. Automation Subsystem

internal/automation persists tasks in bbolt and executes them in-process.

Current task scopes: user, chat
Current task actions: send_text, run_llm, run_workflow

Execution model:

  • Due tasks are claimed on a periodic tick
  • Long-lived system tasks are scheduled separately
  • Task env inherits the same conversation context bridge used for interactive runs
  • Workflow tasks call the same LLM backend but with workflow-specific agent names, env vars, and workspace hints

Built-in system tasks registered during bootstrap:

  • Periodic session/runtime state flush
  • Periodic campaign-repo reconcile

11. Configuration Model

The config model is pure multi-bot.

Important keys:

  • bots.<id>
  • llm_profiles
  • group_scenes.chat, group_scenes.work
  • private_scenes.chat, private_scenes.work
  • permissions
  • runtime_http_addr
  • workspace_dir, prompt_dir, codex_home

Behavior worth calling out:

  • RuntimeConfigs() derives missing bot paths and increments default runtime API ports across bots
  • Each outer llm_profiles key is a stable runtime selector
  • Provider-specific profile selectors still live inside each profile via the inner profile field
  • Runtime permissions gate bundled skills and runtime API surfaces independently

12. Observability And Debugging

Current observability surfaces:

  • Structured logs via zerolog
  • Rotating log files via lumberjack
  • Session usage counters stored in session_state.json
  • /status powered by statusview
  • Per-run markdown debug traces when log_level=debug

Debug traces include, when the backend exposes them:

  • Provider, agent name, thread/session id, model/profile
  • Rendered input, observed tool activity, final output or error

13. Extension Boundaries

The supported extension surfaces:

  • llm provider adapters
  • Prompt templates under prompts/
  • Bundled skills under skills/
  • Runtime API handlers

Contributing

Contributions are welcome. This guide covers workflow, standards, and review expectations for all contributors (human and AI).

中文版见下方 (Chinese version below).

1. Branch and Change Scope

  • Base daily work on latest dev. Submit PRs to dev.
  • main only accepts merge commits from dev.
  • Branch naming: feat/*, fix/*, docs/*, chore/*.
  • One commit, one goal. Don't mix unrelated changes in one commit.

2. Commit Message Format

Conventional Commits are enforced by a commit-msg hook:

type(scope): subject
type: subject

Allowed types: feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert.

Examples:

  • feat(connector): support codex resume thread
  • fix: keep proxy env for codex exec
  • docs: add configuration reference

3. Pre-Commit Checks

First-time setup:

make precommit-install

Every commit must pass:

make check

make check runs in order:

Gate         | Command
Secret scan  | secret-check
Shell syntax | script-check
Format check | fmt-check (gofmt)
Vet          | go vet ./...
Unit tests   | go test ./...
Race tests   | go test -race ./internal/connector

Do not commit until make check passes with zero failures.

For cross-cutting or concurrency changes, also run:

go test -race ./...

If formatting fails, run make fmt first.

4. Code Rules

  • Use gofmt for all Go code.
  • Files over 500 lines must be split in the same change (prevent mega-files).
  • New or changed behavior must include/update tests.
  • Never log sensitive information (app secrets, tokens, user content).
  • CLI flag changes may be breaking but must be clearly documented with migration instructions.

5. Configuration Change Rules

  • This project uses YAML config (${ALICE_HOME}/config.yaml), not environment variables as primary config.
  • New config keys require updates to:
    • config.example.yaml
    • internal/config (defaults and validation)
    • Documentation (both English README and docs site)
  • Config keys affecting session/memory behavior (e.g., idle_summary_hours) must have corresponding tests.

6. Documentation Sync

Any user-visible change (commands, flags, config, behavior) must sync:

  • README.md
  • README.zh-CN.md
  • book/src/ (docs site)

Keep English and Chinese docs consistent.

7. Merge Checklist

  • make check passes locally
  • Key path runs (at minimum one start-up test):
    go run ./cmd/connector --feishu-websocket
    
  • Documentation synced with changes
  • No unrelated files or debug content included

8. Runtime Isolation Rules

When debugging or testing with isolated runtimes:

  • Use explicit startup mode: --feishu-websocket or --runtime-only
  • alice-headless must use --runtime-only only
  • Never connect isolated debug runtimes to the real Feishu WebSocket
  • After startup, verify logs show runtime-only mode enabled; Feishu websocket connector disabled
  • If logs show feishu-codex connector started for an isolated runtime, stop it immediately

贡献指南

欢迎贡献。本指南涵盖所有贡献者(人类和 AI)的工作流、标准和评审要求。

1. 分支与变更范围

  • 日常开发基于最新 dev 分支,提交到 dev
  • main 只接受 dev -> main 的合并提交。
  • 分支命名:feat/*fix/*docs/*chore/*
  • 每次提交只做一件事,避免无关改动混在一起。

2. 提交信息规范

强制使用 Conventional Commits,由 commit-msg hook 校验:

type(scope): subject
type: subject

允许的 type:featfixdocsstylerefactorperftestbuildcichorerevert

示例:

  • feat(connector): support codex resume thread
  • fix: keep proxy env for codex exec

3. 提交前必须检查

首次执行:

make precommit-install

每次提交前必须通过:

make check

make check 包含:secret-check → script-check → fmt-check → go vet → go test → go test -race

未通过 make check 不得提交。

格式不通过时先执行 make fmt

4. 代码规则

  • 统一使用 gofmt 格式化代码。
  • 单文件超过 500 行必须拆分(防止巨型文件增长)。
  • 新增或修改行为必须补充/更新测试。
  • 不要在日志中输出敏感信息。
  • CLI 参数变更允许破坏性调整,但必须明确说明。

5. 配置变更规则

  • 使用 YAML 配置文件,不用环境变量作主配置入口。
  • 新增配置项必须同步更新:config.example.yamlinternal/config、文档。
  • 会话相关配置项需补充对应测试。

6. 文档同步规则

任何用户可见变更必须同步更新文档,保持中英文一致。

7. 合并前自检清单

  • 本地 make check 通过
  • 至少验证一次启动:go run ./cmd/connector --feishu-websocket
  • 文档已同步更新
  • 不包含无关文件或调试内容

8. 运行时隔离规则

调试或测试隔离 runtime 时:

  • 使用显式启动模式
  • alice-headless 只能用 --runtime-only
  • 不允许隔离 runtime 连接真实飞书 WebSocket
  • 启动后确认日志显示 runtime-only mode enabled; Feishu websocket connector disabled

Adding a New LLM Backend

This guide walks through adding support for a new LLM provider CLI to Alice. Follow the same pattern used by the existing backends (codex, claude, gemini, kimi, opencode).

Prerequisites

  • The provider must have a CLI tool that Alice can run as a subprocess
  • The CLI must accept a prompt via stdin or CLI flags
  • The CLI must output results to stdout

Step 1: Understand the Backend Interface

The core interface is in internal/llm/backend.go:

type Backend interface {
    Run(ctx context.Context, req RunRequest) (RunResult, error)
}

type RunRequest struct {
    ThreadID        string
    UserText        string
    Model           string
    // ... other fields
    OnProgress      ProgressFunc
    OnRawEvent      RawEventFunc
}

type RunResult struct {
    Reply        string
    NextThreadID string
    GoalDone     bool
    Usage        Usage
}

Your backend must:

  1. Build the correct CLI command from RunRequest
  2. Execute it as a subprocess
  3. Parse stdout/stderr into RunResult
  4. Stream intermediate progress via OnProgress
  5. Handle ctx.Done() for cancellation

Step 2: Create the Backend File

Create internal/llm/<provider>_backend.go. Follow the pattern in codex_backend.go:

package llm

import (
    "bufio"
    "context"
    "io"
    "os/exec"
)

type myProviderBackend struct {
    config MyProviderConfig
}

func newMyProviderBackend(cfg MyProviderConfig) *myProviderBackend {
    return &myProviderBackend{config: cfg}
}

func (b *myProviderBackend) Run(ctx context.Context, req RunRequest) (RunResult, error) {
    // 1. Build command
    args := []string{"run", "--model", req.Model}
    if req.ThreadID != "" {
        args = append(args, "--continue", req.ThreadID)
    }
    cmd := exec.CommandContext(ctx, b.config.Command, args...)
    cmd.Dir = req.WorkspaceDir
    cmd.Env = mergeEnv(b.config.Env)

    // 2. Pipe user text to stdin
    stdin, _ := cmd.StdinPipe()
    go func() {
        defer stdin.Close()
        io.WriteString(stdin, req.UserText)
    }()

    // 3. Stream and parse output
    var finalReply, nextThreadID string
    var usage Usage
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        return RunResult{}, err
    }

    // 4. Run: start the subprocess, consume its output, then wait
    if err := cmd.Start(); err != nil {
        return RunResult{}, err
    }
    scanner := bufio.NewScanner(stdout)
    for scanner.Scan() {
        // Each line is an event emitted by the provider CLI.
        // ... decode scanner.Text() and update finalReply / nextThreadID / usage ...
        // ... call req.OnProgress for intermediate messages ...
    }
    err = cmd.Wait()

    // 5. Return result
    return RunResult{
        Reply:        finalReply,
        NextThreadID: nextThreadID,
        Usage:        usage,
    }, err
}

Step 3: Add Configuration

Add a config struct and provider constant in internal/llm/factory.go:

const ProviderMyProvider = "myprovider"

type MyProviderConfig struct {
    Command      string
    Timeout      time.Duration
    Model        string
    Env          map[string]string
    WorkspaceDir string
    ProfileOverrides map[string]ProfileRunnerConfig
}

Step 4: Register in the Factory

Add your provider to NewProvider in factory.go:

func NewProvider(cfg FactoryConfig) (Provider, error) {
    provider := normalizeProvider(cfg.Provider)
    switch provider {
    case ProviderCodex:
        return providerBundle{backend: newCodexBackend(cfg.Codex)}, nil
    case ProviderClaude:
        return providerBundle{backend: newClaudeBackend(cfg.Claude)}, nil
    case ProviderMyProvider:                                // NEW
        return providerBundle{backend: newMyProviderBackend(cfg.MyProvider)}, nil  // NEW
    default:
        return nil, fmt.Errorf("unsupported llm_provider %q", provider)
    }
}

Also add the field to FactoryConfig:

type FactoryConfig struct {
    Provider   string
    Codex      CodexConfig
    Claude     ClaudeConfig
    Gemini     GeminiConfig
    Kimi       KimiConfig
    OpenCode   OpenCodeConfig
    MyProvider MyProviderConfig   // NEW
}

Step 5: Wire Configuration from config.yaml

In internal/config, extend the LLM profile to accept the new provider. The profile config should map to your MyProviderConfig fields (Command, Timeout, Model, Env, etc.).

Step 6: Add Example Config

Add a profile example in config.example.yaml:

# Example: MyProvider profile.
# chat_myprovider:
#   provider: "myprovider"
#   command: "myprovider"
#   model: "myprovider-model-v1"
#   permissions:
#     sandbox: "workspace-write"
#     ask_for_approval: "never"

Step 7: Write Tests

Create internal/llm/<provider>_backend_test.go. Test at minimum:

  • Command construction with different request fields
  • Timeout handling
  • Progress callback delivery
  • Cancellation via context
  • Error handling for invalid output

Use the existing test patterns in codex_backend_test.go or opencode_appserver_driver_test.go as reference.

Step 8: Interactive Session Support (Optional)

Some backends support long-running interactive sessions where new input can be injected without restarting the subprocess. If your provider supports this:

  1. Implement the InteractiveProviderSession pattern (see claude_stream_driver.go or opencode_appserver_driver.go)
  2. Wire the interactive mode into the main Run method
  3. Add a DisableStream* escape hatch for fallback

Implementation Checklist

  • internal/llm/<provider>_backend.go — backend implementation
  • internal/llm/factory.go — provider constant + config struct + switch case
  • internal/config — LLM profile config wiring
  • config.example.yaml — example profile
  • internal/llm/<provider>_backend_test.go — tests
  • book/src/reference/configuration.md — update provider list
  • book/src/how-to/configure-backend.md — add provider example

Reference Implementations

Study these existing backends for patterns:

Backend  | File                         | Notes
Codex    | codex_backend.go             | Full implementation with reasoning, personality, idle timeout
Claude   | claude_stream_driver.go      | Streaming interactive sessions
OpenCode | opencode_appserver_driver.go | Appserver mode with persistent server
Kimi     | kimi_wire_driver.go          | Wire-protocol driver

Release Process

How Alice releases are built and published.

Branch Strategy

  • Day-to-day development happens on dev
  • Releases go through dev → main merge commits only
  • Never push directly to main

CI Pipeline

On dev Push

  1. Run quality gate (make check)
  2. Build dev binaries
  3. Update prerelease dev-latest

On main Merge from dev

  1. Run quality gate (make check)
  2. Auto-create next vX.Y.Z tag
  3. Build release binaries for all platforms
  4. Publish GitHub Release

Manual v* Tags

  • Pushing a v* tag triggers the release workflow directly

Release Artifacts

Each release publishes:

  • Binary builds for: linux-amd64, linux-arm64, darwin-amd64, darwin-arm64, win32-x64
  • npm package: @alice_space/alice
  • Installer script: scripts/alice-installer.sh

Making a Release

  1. Ensure dev passes all checks and is ready
  2. Create a PR from dev to main
  3. Merge with merge commit (do NOT squash or rebase)
  4. CI auto-creates the tag and publishes the release
  5. Verify the GitHub Release shows all artifacts

Version Numbers

Tags follow semver: vX.Y.Z. The CI auto-increments the patch version from the previous release tag.

Post-Release

  • The installer script (alice-installer.sh) automatically picks up the latest release
  • npm users get the update via npm update -g @alice_space/alice

CI Workflow Files

  • .github/workflows/ci.yml — Dev branch quality gate and dev binaries
  • .github/workflows/main-release.yml — Main branch release build
  • .github/workflows/release-on-tag.yml — Manual tag release