llm-worker

Rusty, Efficient, and Agentic LLM Client Library

llm-worker is a Rust library for building autonomous LLM-powered systems. Define tools, register hooks, and let the Worker handle the agentic loop — tool calls are executed automatically until the task completes.

Features

  • Autonomous Execution: The Worker manages the full request-response-tool cycle. You provide tools and input; it loops until done.
  • Multi-Provider Support: Unified interface for Anthropic, Gemini, OpenAI, and Ollama.
  • Tool System: Define tools as async functions. The Worker automatically parses LLM tool calls, executes them in parallel, and feeds results back (a sketch follows this list).
  • Hook System: Intercept execution flow with before_tool_call, after_tool_call, and on_turn_end hooks for validation, logging, or self-correction (also sketched after this list).
  • Event-Driven Streaming: Subscribe to real-time events (text deltas, tool calls, usage) for responsive UIs (see the event sketch after the Quick Start).
  • Cache-Aware State Management: A type-state pattern (Mutable -> CacheLocked) ensures KV cache efficiency by protecting the conversation prefix (illustrated after the Quick Start).
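
To give a feel for the tool side, here is a minimal sketch of a calculator tool. This README does not show the crate's real trait (tools are wired up via the companion llm-worker-macros crate), so the Tool trait below and all of its method signatures are placeholder assumptions, not the actual API.

use serde_json::{json, Value};

// Placeholder trait standing in for whatever llm-worker actually defines.
#[async_trait::async_trait]
trait Tool: Send + Sync {
    fn name(&self) -> &str;
    fn description(&self) -> &str;
    async fn execute(&self, args: Value) -> Result<Value, String>;
}

struct CalculatorTool;

#[async_trait::async_trait]
impl Tool for CalculatorTool {
    fn name(&self) -> &str { "calculator" }
    fn description(&self) -> &str { "Add two integers, `a` and `b`." }
    async fn execute(&self, args: Value) -> Result<Value, String> {
        let a = args["a"].as_i64().ok_or("missing integer `a`")?;
        let b = args["b"].as_i64().ok_or("missing integer `b`")?;
        Ok(json!({ "sum": a + b }))
    }
}

Because execute is async, the Worker can drive several tool calls from a single model turn concurrently, which is what the parallel execution mentioned above refers to.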
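
Hooks follow the same spirit. Only the hook names come from this README; the ToolCall struct and HookDecision enum below are invented purely to illustrate what a validating before_tool_call hook could do.

// Hypothetical shapes; only the hook names themselves appear in this README.
struct ToolCall {
    tool_name: String,
}

enum HookDecision {
    Continue,
    Block(String), // the reason is fed back to the model for self-correction
}

// A validating before_tool_call hook might look like this: inspect the
// pending call and either let it run or veto it with an explanation.
fn guard_tools(call: &ToolCall) -> HookDecision {
    if call.tool_name == "shell" {
        HookDecision::Block("shell access is disabled in this session".into())
    } else {
        HookDecision::Continue
    }
}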

Quick Start

use llm_worker::{Worker, Message};

// Create a Worker with your LLM client
// (`client` is any configured provider client: Anthropic, Gemini, OpenAI, or Ollama)
let mut worker = Worker::new(client)
    .system_prompt("You are a helpful assistant.");

// Register tools (optional); SearchTool and CalculatorTool are your own types
worker.register_tool(SearchTool::new());
worker.register_tool(CalculatorTool::new());

// Run: the Worker executes tool calls automatically until the task completes
let history = worker.run("What is 2+2?").await?;
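
The event API is likewise not spelled out here, so the following sketch only illustrates the subscription pattern: a hypothetical WorkerEvent enum delivered over a plain tokio channel, standing in for whatever handle the Worker actually exposes.

use tokio::sync::mpsc;

// Hypothetical event type; the crate's real enum will differ.
#[derive(Debug)]
enum WorkerEvent {
    TextDelta(String),
    ToolCallStarted(String),
    Usage { input_tokens: u32, output_tokens: u32 },
}

#[tokio::main]
async fn main() {
    // Stand-in for whatever subscription handle the Worker hands out.
    let (tx, mut rx) = mpsc::unbounded_channel::<WorkerEvent>();

    // UI task: render events as they arrive.
    let ui = tokio::spawn(async move {
        while let Some(ev) = rx.recv().await {
            match ev {
                WorkerEvent::TextDelta(s) => print!("{s}"),
                WorkerEvent::ToolCallStarted(name) => println!("\n[calling {name}]"),
                WorkerEvent::Usage { input_tokens, output_tokens } => {
                    println!("\n[{input_tokens} in / {output_tokens} out]")
                }
            }
        }
    });

    // In real use the Worker would emit these during run(); simulated here.
    tx.send(WorkerEvent::TextDelta("2 + 2 is ".into())).unwrap();
    tx.send(WorkerEvent::TextDelta("4.".into())).unwrap();
    tx.send(WorkerEvent::Usage { input_tokens: 12, output_tokens: 5 }).unwrap();
    drop(tx); // closing the channel lets the UI task finish
    ui.await.unwrap();
}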
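
Finally, the Mutable and CacheLocked names in the feature list are real, but the code below is a generic illustration of the type-state pattern rather than the crate's signatures: once the cached prefix is locked, mutating it no longer type-checks, which is what keeps the provider's KV cache warm.

use std::marker::PhantomData;

// Two states as zero-sized marker types.
struct Mutable;
struct CacheLocked;

struct Conversation<State> {
    prefix: Vec<String>, // messages covered by the provider's KV cache
    _state: PhantomData<State>,
}

impl Conversation<Mutable> {
    fn new() -> Self {
        Conversation { prefix: Vec::new(), _state: PhantomData }
    }
    fn push(&mut self, msg: &str) {
        self.prefix.push(msg.to_string());
    }
    // Consuming transition: after lock(), the prefix can no longer change.
    fn lock(self) -> Conversation<CacheLocked> {
        Conversation { prefix: self.prefix, _state: PhantomData }
    }
}

impl Conversation<CacheLocked> {
    // Read-only access; no mutating methods exist in this state, so any
    // cache-invalidating edit to the prefix is a compile error.
    fn prefix(&self) -> &[String] {
        &self.prefix
    }
}

fn main() {
    let mut conv = Conversation::new();
    conv.push("system: You are a helpful assistant.");
    let locked = conv.lock();
    // locked.push("...");  // does not compile: push() only exists on Mutable
    println!("{} cached message(s)", locked.prefix().len());
}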

License

MIT