feat: OpenResponses

This commit is contained in:
Keisuke Hirata 2026-02-19 17:37:29 +09:00
parent 6c43ac9969
commit 3c62970967
15 changed files with 2066 additions and 673 deletions

View File

@ -0,0 +1,83 @@
# Worker API/DSL Implementation Plan
## Goals
- Design a Worker API that can handle the two-level Item/Part scope, assuming
  normalization conforming to [Open Responses](https://www.openresponses.org) (hereafter "OR").
- Provide a DSL that handles type differences such as Text/Thinking/Tool statically,
  while avoiding exposing everything as `worker.on_xxx`, which would bloat the API.
## Approach
- Internally, the Timeline normalizes Events and treats Item/Part/Meta
  as a single stream.
- The API gives each Item/Part type its own ctx, and the DSL
  cuts down on boilerplate.
- Build a macro_rules! version first; extend to a proc-macro if needed.
- Item/Part type parameters use the Kind types published by the crate.
## Assumptions
- An Item corresponds to an OR item (message, function_call, reasoning, etc.).
- A Part corresponds to an OR content part (output_text, reasoning_text, etc.).
- An Item always has start/stop. Multiple Parts can occur within an Item.
- Item/Part types are written as `Item<Message>` / `Part<ReasoningText>`.
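The `Item<Kind>` / `Part<Kind>` notation above can be sketched with zero-sized marker types; all names here are illustrative assumptions, not the crate's actual API:

```rust
use std::marker::PhantomData;

// Item kinds (OR items): zero-sized marker types.
pub struct Message;
pub struct FunctionCall;
pub struct Reasoning;

// Part kinds (OR content parts).
pub struct OutputText;
pub struct ReasoningText;

// Two-level scope handles, addressed as Item<Message>, Part<Message, OutputText>, ...
pub struct Item<I>(PhantomData<I>);
pub struct Part<I, P>(PhantomData<(I, P)>);

fn main() {
    let _msg: Item<Message> = Item(PhantomData);
    let _text: Part<Message, OutputText> = Part(PhantomData);
    // The markers are zero-sized, so the static typing costs nothing at runtime.
    assert_eq!(std::mem::size_of::<Item<Message>>(), 0);
}
```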
## Design Steps
### 1. Clean up the internal event model
- Organize Event into three layers: Item/Part/Meta.
- Distinguish ItemEvent / PartEvent by type parameters.
- Example: ItemEvent<Message>, PartEvent<Message, OutputText>
### 2. Two-level scopes
- Item ctx: one per Item type
- Part ctx: one per Part type
- Part events always receive both the item ctx and the part ctx.
### 3. Redefine the Handler trait
- Introduce traits that specify Item/Part by type.
- Examples:
  - `trait ItemHandler<I>`
  - `trait PartHandler<I, P>`
- A PartHandler always receives the ItemHandler's ItemCtx.
- Switch the Part ctx type via either a `PartKind::Ctx` associated type or an enum.
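Steps 2-3 could look roughly like the following trait pair. The names, signatures, and the `Vec<String>`/`String` ctx types are assumptions for illustration, using the `PartKind::Ctx`-style associated-type approach:

```rust
// Hypothetical trait pair: Part events always carry the enclosing Item's ctx.
pub trait ItemHandler<I> {
    type ItemCtx;
    fn on_item_start(&mut self) -> Self::ItemCtx;
    fn on_item_stop(&mut self, ctx: Self::ItemCtx);
}

pub trait PartHandler<I, P>: ItemHandler<I> {
    type PartCtx;
    fn on_part_start(&mut self, item: &mut Self::ItemCtx) -> Self::PartCtx;
    fn on_part_delta(&mut self, item: &mut Self::ItemCtx, part: &mut Self::PartCtx, delta: &str);
    fn on_part_stop(&mut self, item: &mut Self::ItemCtx, part: Self::PartCtx);
}

// Marker kinds and a demo handler that collects output_text deltas.
pub struct Message;
pub struct OutputText;
struct Collector;

impl ItemHandler<Message> for Collector {
    type ItemCtx = Vec<String>;
    fn on_item_start(&mut self) -> Vec<String> { Vec::new() }
    fn on_item_stop(&mut self, _ctx: Vec<String>) {}
}

impl PartHandler<Message, OutputText> for Collector {
    type PartCtx = String;
    fn on_part_start(&mut self, _item: &mut Vec<String>) -> String { String::new() }
    fn on_part_delta(&mut self, _item: &mut Vec<String>, part: &mut String, delta: &str) {
        part.push_str(delta);
    }
    fn on_part_stop(&mut self, item: &mut Vec<String>, part: String) {
        item.push(part);
    }
}

fn run() -> Vec<String> {
    let mut h = Collector;
    let mut item = h.on_item_start();
    let mut part = h.on_part_start(&mut item);
    h.on_part_delta(&mut item, &mut part, "Hel");
    h.on_part_delta(&mut item, &mut part, "lo");
    h.on_part_stop(&mut item, part);
    item
}

fn main() {
    assert_eq!(run(), vec!["Hello".to_string()]);
}
```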
### 4. Integration with Timeline
- Timeline creates an ItemCtx on ItemStart
- Creates a PartCtx on PartStart
- Routes Delta/Stop to the corresponding ctx
- Drops the ItemCtx on ItemStop
### 5. Introduce the DSL (macro_rules!)
- Provide a declarative DSL first.
- Example:
  - `handler! { Item<Message> { type ItemCtx = ...; Part<OutputText> { type PartCtx = ...; } } }`
- The DSL generates the ItemHandler / PartHandler implementations.
- Item/Part Kind types reference the types published by the crate.
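A minimal `macro_rules!` sketch of the `handler!` grammar above. The grammar and the `ItemCtx: Default` convention are assumptions; the real DSL would also emit `PartHandler` impls:

```rust
// Hypothetical ItemHandler with a Default-based ctx so the macro only
// has to emit the associated type.
pub trait ItemHandler<I> {
    type ItemCtx: Default;
    fn on_item_start(&mut self) -> Self::ItemCtx {
        Default::default()
    }
}

pub struct Message; // Kind marker, stand-in for the crate's published type.

macro_rules! handler {
    ($name:ident { $(Item<$kind:ty> { type ItemCtx = $ctx:ty; })* }) => {
        pub struct $name;
        $(
            impl ItemHandler<$kind> for $name {
                type ItemCtx = $ctx;
            }
        )*
    };
}

handler! {
    MyHandler {
        Item<Message> { type ItemCtx = Vec<String>; }
    }
}

fn run() -> Vec<String> {
    let mut h = MyHandler;
    h.on_item_start()
}

fn main() {
    assert!(run().is_empty());
}
```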
### 6. Extension points
- Make it easy to add further Parts (output_image, etc.) to the DSL.
- Migrate to a proc-macro if more syntactic freedom is needed.
## Implementation Order
1. Clean up the Event/Item/Part type definitions
2. Implement a Timeline that holds Item/Part ctx
3. Define the Handler traits and migrate existing code
4. Implement the macro_rules! DSL
5. Port existing use cases
## TODO
- Draw up the Item-to-Part type correspondence table
- Re-check the differences between OR and the existing llm_client
- Decide whether to treat tool args deltas as an OR extension
- Pin down the minimal DSL grammar expressible with macro_rules!

View File

@ -0,0 +1,80 @@
# Open Responses mapping (llm_client -> Open Responses)
This document maps the current `llm_client` event model to Open Responses items
and streaming events. It focuses on output streaming; input items are noted
where they are the closest semantic match.
## Legend
- **OR item**: Open Responses item types used in `response.output`.
- **OR event**: Open Responses streaming events (`response.*`).
- **Note**: Gaps or required adaptation decisions.
## Response lifecycle / meta events
| llm_client | Open Responses | Note |
| ------------------------ | ------------------------------------------------------------- | ---------------------------------------------------------------------------------------------- |
| `StatusEvent::Started` | `response.created`, `response.queued`, `response.in_progress` | OR has finer-grained lifecycle states; pick a subset or map Started -> `response.in_progress`. |
| `StatusEvent::Completed` | `response.completed` | |
| `StatusEvent::Failed` | `response.failed` | |
| `StatusEvent::Cancelled` | (no direct event) | Could map to `response.incomplete` or `response.failed` depending on semantics. |
| `UsageEvent` | `response.completed` payload usage | OR reports usage on the response object, not as a dedicated streaming event. |
| `ErrorEvent` | `error` event | OR has a dedicated error streaming event. |
| `PingEvent` | (no direct event) | OR does not define a heartbeat event. |
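The status rows above can be sketched as a single match. The event-name strings follow the table; the `Cancelled -> response.failed` arm is one of the open policy options, not a settled decision:

```rust
// Statuses from the llm_client side of the table.
enum StatusEvent {
    Started,
    Completed,
    Failed,
    Cancelled,
}

// Map each status to an OR streaming event name.
fn map_status(ev: &StatusEvent) -> &'static str {
    match ev {
        // OR has finer-grained lifecycle states; collapse Started to in_progress.
        StatusEvent::Started => "response.in_progress",
        StatusEvent::Completed => "response.completed",
        StatusEvent::Failed => "response.failed",
        // Policy choice: could also map to response.incomplete.
        StatusEvent::Cancelled => "response.failed",
    }
}

fn main() {
    assert_eq!(map_status(&StatusEvent::Started), "response.in_progress");
}
```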
## Output block lifecycle
### Text block
| llm_client | Open Responses | Note |
| ------------------------------------------------- | ---------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- |
| `BlockStart { block_type: Text, metadata: Text }` | `response.output_item.added` with item type `message` (assistant) | OR output items are message/function_call/reasoning. This creates the message item. |
| `BlockDelta { delta: Text(..) }` | `response.output_text.delta` | Text deltas map 1:1 to output text deltas. |
| `BlockStop { block_type: Text }` | `response.output_text.done` + `response.content_part.done` + `response.output_item.done` | OR emits separate done events for content parts and items. |
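The one-`BlockStop`-to-three-done-events fan-out in the last row could be synthesized like this (a sketch of the event sequence, not the actual adapter):

```rust
// Expand one llm_client Text block into the OR event sequence from the table.
fn text_block_events(deltas: &[&str]) -> Vec<String> {
    // BlockStart -> the assistant message output item is created.
    let mut out = vec!["response.output_item.added".to_string()];
    // Each BlockDelta maps 1:1 to an output_text delta.
    for d in deltas {
        out.push(format!("response.output_text.delta:{d}"));
    }
    // A single BlockStop fans out into three separate done events.
    out.extend([
        "response.output_text.done".to_string(),
        "response.content_part.done".to_string(),
        "response.output_item.done".to_string(),
    ]);
    out
}

fn main() {
    let evs = text_block_events(&["Hel", "lo"]);
    assert_eq!(evs.len(), 6);
    assert_eq!(evs[0], "response.output_item.added");
}
```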
### Tool use (function call)
| llm_client | Open Responses | Note |
| -------------------------------------------------------------------- | --------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
| `BlockStart { block_type: ToolUse, metadata: ToolUse { id, name } }` | `response.output_item.added` with item type `function_call` | OR uses `call_id` + `name` + `arguments` string. Map `id` -> `call_id`. |
| `BlockDelta { delta: InputJson(..) }` | `response.function_call_arguments.delta` | OR spec does not explicitly require argument deltas; treat as OpenAI-compatible extension if adopted. |
| `BlockStop { block_type: ToolUse }` | `response.function_call_arguments.done` + `response.output_item.done` | Item status can be set to `completed` or `incomplete`. |
### Tool result (function call output)
| llm_client | Open Responses | Note |
| ----------------------------------------------------------------------------- | ------------------------------------- | ---------------------------------------------------------------------------------------- |
| `BlockStart { block_type: ToolResult, metadata: ToolResult { tool_use_id } }` | **Input item** `function_call_output` | OR treats tool results as input items, not output items. This is a request-side mapping. |
| `BlockDelta` | (no direct output event) | OR does not stream tool output deltas as response events. |
| `BlockStop` | (no direct output event) | Tool output lives on the next request as an input item. |
### Thinking / reasoning
| llm_client | Open Responses | Note |
| --------------------------------------------------------- | ------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `BlockStart { block_type: Thinking, metadata: Thinking }` | `response.output_item.added` with item type `reasoning` | OR models reasoning as a separate item type. |
| `BlockDelta { delta: Thinking(..) }` | `response.reasoning.delta` | OR has dedicated reasoning delta events. |
| `BlockStop { block_type: Thinking }` | `response.reasoning.done` | OR separates reasoning summary events (`response.reasoning_summary_*`) from reasoning deltas. Decide whether Thinking maps to full reasoning or summary only. |
## Stop reasons
| llm_client `StopReason` | Open Responses | Note |
| ----------------------- | ------------------------------------------------------------------------------ | ---------------------------------------------- |
| `EndTurn` | `response.completed` + item status `completed` | |
| `MaxTokens` | `response.incomplete` + item status `incomplete` | |
| `StopSequence` | `response.completed` | |
| `ToolUse` | `response.completed` for message item, followed by `function_call` output item | OR models tool call as a separate output item. |
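The stop-reason rows can likewise be sketched as a match returning (response event, item status). The `StopSequence` item status is an assumption, since the table leaves it blank:

```rust
enum StopReason {
    EndTurn,
    MaxTokens,
    StopSequence,
    ToolUse,
}

// Returns (response-level event, item status) per the table above.
fn map_stop(reason: &StopReason) -> (&'static str, &'static str) {
    match reason {
        StopReason::EndTurn => ("response.completed", "completed"),
        StopReason::MaxTokens => ("response.incomplete", "incomplete"),
        // Item status assumed "completed"; the table does not specify it.
        StopReason::StopSequence => ("response.completed", "completed"),
        // The function_call itself is emitted as a separate output item.
        StopReason::ToolUse => ("response.completed", "completed"),
    }
}

fn main() {
    assert_eq!(
        map_stop(&StopReason::MaxTokens),
        ("response.incomplete", "incomplete")
    );
}
```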
## Gaps / open decisions
- `PingEvent` has no OR equivalent. If needed, keep as internal only.
- `Cancelled` status needs a policy: map to `response.incomplete` or
`response.failed`.
- OR has `response.refusal.delta` / `response.refusal.done`. `llm_client` has no
refusal delta type; consider adding a new block or delta variant if needed.
- OR splits _item_ and _content part_ lifecycles. `llm_client` currently has a
single block lifecycle, so mapping should decide whether to synthesize
`content_part.*` events or ignore them.
- The OR specification does not state how `function_call.arguments` is streamed
  as deltas; treat `response.function_call_arguments.*` as a compatible
  extension if required.

View File

@ -35,11 +35,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1767116409,
"narHash": "sha256-5vKw92l1GyTnjoLzEagJy5V5mDFck72LiQWZSOnSicw=",
"lastModified": 1771369470,
"narHash": "sha256-0NBlEBKkN3lufyvFegY4TYv5mCNHbi5OmBDrzihbBMQ=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "cad22e7d996aea55ecab064e84834289143e44a0",
"rev": "0182a361324364ae3f436a63005877674cf45efb",
"type": "github"
},
"original": {

View File

@ -52,7 +52,7 @@ pub enum PostToolCallResult {
#[derive(Debug, Clone)]
pub enum OnTurnEndResult {
Finish,
ContinueWithMessages(Vec<crate::Message>),
ContinueWithMessages(Vec<crate::Item>),
Paused,
}
@ -83,12 +83,12 @@ pub struct PostToolCallContext {
}
impl HookEventKind for OnPromptSubmit {
type Input = crate::Message;
type Input = crate::Item;
type Output = OnPromptSubmitResult;
}
impl HookEventKind for PreLlmRequest {
type Input = Vec<crate::Message>;
type Input = Vec<crate::Item>;
type Output = PreLlmRequestResult;
}
@ -103,7 +103,7 @@ impl HookEventKind for PostToolCall {
}
impl HookEventKind for OnTurnEnd {
type Input = Vec<crate::Message>;
type Input = Vec<crate::Item>;
type Output = OnTurnEndResult;
}

View File

@ -12,7 +12,7 @@
//! # Quick Start
//!
//! ```ignore
//! use llm_worker::{Worker, Message};
//! use llm_worker::{Worker, Item};
//!
//! // Create a Worker
//! let mut worker = Worker::new(client)
@ -47,5 +47,5 @@ pub mod subscriber;
pub mod timeline;
pub mod tool;
pub use message::{ContentPart, Message, MessageContent, Role};
pub use message::{ContentPart, Item, Message, Role};
pub use worker::{ToolRegistryError, Worker, WorkerConfig, WorkerError, WorkerResult};

View File

@ -1,15 +1,17 @@
//! Anthropic リクエスト生成
//! Anthropic Request Builder
//!
//! Converts Open Responses native Item model to Anthropic Messages API format.
use serde::Serialize;
use crate::llm_client::{
types::{ContentPart, Item, Role, ToolDefinition},
Request,
types::{ContentPart, Message, MessageContent, Role, ToolDefinition},
};
use super::AnthropicScheme;
/// Anthropic APIへのリクエストボディ
/// Anthropic API request body
#[derive(Debug, Serialize)]
pub(crate) struct AnthropicRequest {
pub model: String,
@ -30,14 +32,14 @@ pub(crate) struct AnthropicRequest {
pub stream: bool,
}
/// Anthropic メッセージ
/// Anthropic message
#[derive(Debug, Serialize)]
pub(crate) struct AnthropicMessage {
pub role: String,
pub content: AnthropicContent,
}
/// Anthropic コンテンツ
/// Anthropic content
#[derive(Debug, Serialize)]
#[serde(untagged)]
pub(crate) enum AnthropicContent {
@ -45,7 +47,7 @@ pub(crate) enum AnthropicContent {
Parts(Vec<AnthropicContentPart>),
}
/// Anthropic コンテンツパーツ
/// Anthropic content part
#[derive(Debug, Serialize)]
#[serde(tag = "type")]
pub(crate) enum AnthropicContentPart {
@ -58,13 +60,10 @@ pub(crate) enum AnthropicContentPart {
input: serde_json::Value,
},
#[serde(rename = "tool_result")]
ToolResult {
tool_use_id: String,
content: String,
},
ToolResult { tool_use_id: String, content: String },
}
/// Anthropic ツール定義
/// Anthropic tool definition
#[derive(Debug, Serialize)]
pub(crate) struct AnthropicTool {
pub name: String,
@ -74,14 +73,9 @@ pub(crate) struct AnthropicTool {
}
impl AnthropicScheme {
/// RequestからAnthropicのリクエストボディを構築
/// Build Anthropic request from Request
pub(crate) fn build_request(&self, model: &str, request: &Request) -> AnthropicRequest {
let messages = request
.messages
.iter()
.map(|m| self.convert_message(m))
.collect();
let messages = self.convert_items_to_messages(&request.items);
let tools = request.tools.iter().map(|t| self.convert_tool(t)).collect();
AnthropicRequest {
@ -98,49 +92,160 @@ impl AnthropicScheme {
}
}
fn convert_message(&self, message: &Message) -> AnthropicMessage {
let role = match message.role {
Role::User => "user",
Role::Assistant => "assistant",
};
/// Convert Open Responses Items to Anthropic Messages
///
/// Anthropic uses a message-based model where:
/// - User messages have role "user"
/// - Assistant messages have role "assistant"
/// - Tool calls are content parts within assistant messages
/// - Tool results are content parts within user messages
fn convert_items_to_messages(&self, items: &[Item]) -> Vec<AnthropicMessage> {
let mut messages = Vec::new();
let mut pending_assistant_parts: Vec<AnthropicContentPart> = Vec::new();
let mut pending_user_parts: Vec<AnthropicContentPart> = Vec::new();
let content = match &message.content {
MessageContent::Text(text) => AnthropicContent::Text(text.clone()),
MessageContent::ToolResult {
tool_use_id,
content,
} => AnthropicContent::Parts(vec![AnthropicContentPart::ToolResult {
tool_use_id: tool_use_id.clone(),
content: content.clone(),
}]),
MessageContent::Parts(parts) => {
let converted: Vec<_> = parts
.iter()
.map(|p| match p {
ContentPart::Text { text } => {
AnthropicContentPart::Text { text: text.clone() }
for item in items {
match item {
Item::Message { role, content, .. } => {
// Flush pending parts before a new message
self.flush_pending_parts(
&mut messages,
&mut pending_assistant_parts,
&mut pending_user_parts,
);
let anthropic_role = match role {
Role::User => "user",
Role::Assistant => "assistant",
Role::System => continue, // Skip system role items
};
let parts: Vec<AnthropicContentPart> = content
.iter()
.map(|p| match p {
ContentPart::InputText { text } => {
AnthropicContentPart::Text { text: text.clone() }
}
ContentPart::OutputText { text } => {
AnthropicContentPart::Text { text: text.clone() }
}
ContentPart::Refusal { refusal } => {
AnthropicContentPart::Text {
text: refusal.clone(),
}
}
})
.collect();
if parts.len() == 1 {
if let AnthropicContentPart::Text { text } = &parts[0] {
messages.push(AnthropicMessage {
role: anthropic_role.to_string(),
content: AnthropicContent::Text(text.clone()),
});
} else {
messages.push(AnthropicMessage {
role: anthropic_role.to_string(),
content: AnthropicContent::Parts(parts),
});
}
ContentPart::ToolUse { id, name, input } => AnthropicContentPart::ToolUse {
id: id.clone(),
name: name.clone(),
input: input.clone(),
},
ContentPart::ToolResult {
tool_use_id,
content,
} => AnthropicContentPart::ToolResult {
tool_use_id: tool_use_id.clone(),
content: content.clone(),
},
})
.collect();
AnthropicContent::Parts(converted)
}
};
} else {
messages.push(AnthropicMessage {
role: anthropic_role.to_string(),
content: AnthropicContent::Parts(parts),
});
}
}
AnthropicMessage {
role: role.to_string(),
content,
Item::FunctionCall {
call_id,
name,
arguments,
..
} => {
// Flush pending user parts first
if !pending_user_parts.is_empty() {
messages.push(AnthropicMessage {
role: "user".to_string(),
content: AnthropicContent::Parts(std::mem::take(
&mut pending_user_parts,
)),
});
}
// Parse arguments JSON string to Value
let input = serde_json::from_str(arguments)
.unwrap_or_else(|_| serde_json::Value::Object(serde_json::Map::new()));
pending_assistant_parts.push(AnthropicContentPart::ToolUse {
id: call_id.clone(),
name: name.clone(),
input,
});
}
Item::FunctionCallOutput { call_id, output, .. } => {
// Flush pending assistant parts first
if !pending_assistant_parts.is_empty() {
messages.push(AnthropicMessage {
role: "assistant".to_string(),
content: AnthropicContent::Parts(std::mem::take(
&mut pending_assistant_parts,
)),
});
}
pending_user_parts.push(AnthropicContentPart::ToolResult {
tool_use_id: call_id.clone(),
content: output.clone(),
});
}
Item::Reasoning { text, .. } => {
// Flush pending user parts first
if !pending_user_parts.is_empty() {
messages.push(AnthropicMessage {
role: "user".to_string(),
content: AnthropicContent::Parts(std::mem::take(
&mut pending_user_parts,
)),
});
}
// Reasoning is treated as assistant text in Anthropic
// (actual thinking blocks are handled differently in streaming)
pending_assistant_parts.push(AnthropicContentPart::Text { text: text.clone() });
}
}
}
// Flush remaining pending parts
self.flush_pending_parts(
&mut messages,
&mut pending_assistant_parts,
&mut pending_user_parts,
);
messages
}
fn flush_pending_parts(
&self,
messages: &mut Vec<AnthropicMessage>,
pending_assistant_parts: &mut Vec<AnthropicContentPart>,
pending_user_parts: &mut Vec<AnthropicContentPart>,
) {
if !pending_assistant_parts.is_empty() {
messages.push(AnthropicMessage {
role: "assistant".to_string(),
content: AnthropicContent::Parts(std::mem::take(pending_assistant_parts)),
});
}
if !pending_user_parts.is_empty() {
messages.push(AnthropicMessage {
role: "user".to_string(),
content: AnthropicContent::Parts(std::mem::take(pending_user_parts)),
});
}
}
@ -195,4 +300,24 @@ mod tests {
assert_eq!(anthropic_req.tools.len(), 1);
assert_eq!(anthropic_req.tools[0].name, "get_weather");
}
#[test]
fn test_function_call_and_output() {
let scheme = AnthropicScheme::new();
let request = Request::new()
.user("What's the weather?")
.item(Item::function_call(
"call_123",
"get_weather",
r#"{"city":"Tokyo"}"#,
))
.item(Item::function_call_output("call_123", "Sunny, 25°C"));
let anthropic_req = scheme.build_request("claude-sonnet-4-20250514", &request);
assert_eq!(anthropic_req.messages.len(), 3);
assert_eq!(anthropic_req.messages[0].role, "user");
assert_eq!(anthropic_req.messages[1].role, "assistant");
assert_eq!(anthropic_req.messages[2].role, "user");
}
}

View File

@ -1,130 +1,130 @@
//! Gemini リクエスト生成
//! Gemini Request Builder
//!
//! Google Gemini APIへのリクエストボディを構築
//! Converts Open Responses native Item model to Google Gemini API format.
use serde::Serialize;
use serde_json::Value;
use crate::llm_client::{
types::{Item, Role, ToolDefinition},
Request,
types::{ContentPart, Message, MessageContent, Role, ToolDefinition},
};
use super::GeminiScheme;
/// Gemini APIへのリクエストボディ
/// Gemini API request body
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub(crate) struct GeminiRequest {
/// コンテンツ(会話履歴)
/// Contents (conversation history)
pub contents: Vec<GeminiContent>,
/// システム指示
/// System instruction
#[serde(skip_serializing_if = "Option::is_none")]
pub system_instruction: Option<GeminiContent>,
/// ツール定義
/// Tool definitions
#[serde(skip_serializing_if = "Vec::is_empty")]
pub tools: Vec<GeminiTool>,
/// ツール設定
/// Tool config
#[serde(skip_serializing_if = "Option::is_none")]
pub tool_config: Option<GeminiToolConfig>,
/// 生成設定
/// Generation config
#[serde(skip_serializing_if = "Option::is_none")]
pub generation_config: Option<GeminiGenerationConfig>,
}
/// Gemini コンテンツ
/// Gemini content
#[derive(Debug, Serialize)]
pub(crate) struct GeminiContent {
/// ロール
/// Role
pub role: String,
/// パーツ
/// Parts
pub parts: Vec<GeminiPart>,
}
/// Gemini パーツ
/// Gemini part
#[derive(Debug, Serialize)]
#[serde(untagged)]
pub(crate) enum GeminiPart {
/// テキストパーツ
/// Text part
Text { text: String },
/// 関数呼び出しパーツ
/// Function call part
FunctionCall {
#[serde(rename = "functionCall")]
function_call: GeminiFunctionCall,
},
/// 関数レスポンスパーツ
/// Function response part
FunctionResponse {
#[serde(rename = "functionResponse")]
function_response: GeminiFunctionResponse,
},
}
/// Gemini 関数呼び出し
/// Gemini function call
#[derive(Debug, Serialize)]
pub(crate) struct GeminiFunctionCall {
pub name: String,
pub args: Value,
}
/// Gemini 関数レスポンス
/// Gemini function response
#[derive(Debug, Serialize)]
pub(crate) struct GeminiFunctionResponse {
pub name: String,
pub response: GeminiFunctionResponseContent,
}
/// Gemini 関数レスポンス内容
/// Gemini function response content
#[derive(Debug, Serialize)]
pub(crate) struct GeminiFunctionResponseContent {
pub name: String,
pub content: Value,
}
/// Gemini ツール定義
/// Gemini tool definition
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub(crate) struct GeminiTool {
/// 関数宣言
/// Function declarations
pub function_declarations: Vec<GeminiFunctionDeclaration>,
}
/// Gemini 関数宣言
/// Gemini function declaration
#[derive(Debug, Serialize)]
pub(crate) struct GeminiFunctionDeclaration {
/// 関数名
/// Function name
pub name: String,
/// 説明
/// Description
#[serde(skip_serializing_if = "Option::is_none")]
pub description: Option<String>,
/// パラメータスキーマ
/// Parameter schema
pub parameters: Value,
}
/// Gemini ツール設定
/// Gemini tool config
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub(crate) struct GeminiToolConfig {
/// 関数呼び出し設定
/// Function calling config
pub function_calling_config: GeminiFunctionCallingConfig,
}
/// Gemini 関数呼び出し設定
/// Gemini function calling config
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub(crate) struct GeminiFunctionCallingConfig {
/// モード: AUTO, ANY, NONE
/// Mode: AUTO, ANY, NONE
#[serde(skip_serializing_if = "Option::is_none")]
pub mode: Option<String>,
/// ストリーミング関数呼び出し引数を有効にするか
/// Enable streaming function call arguments
#[serde(skip_serializing_if = "Option::is_none")]
pub stream_function_call_arguments: Option<bool>,
}
/// Gemini 生成設定
/// Gemini generation config
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub(crate) struct GeminiGenerationConfig {
/// 最大出力トークン数
/// Max output tokens
#[serde(skip_serializing_if = "Option::is_none")]
pub max_output_tokens: Option<u32>,
/// Temperature
@ -136,27 +136,23 @@ pub(crate) struct GeminiGenerationConfig {
/// Top K
#[serde(skip_serializing_if = "Option::is_none")]
pub top_k: Option<u32>,
/// ストップシーケンス
/// Stop sequences
#[serde(skip_serializing_if = "Vec::is_empty")]
pub stop_sequences: Vec<String>,
}
impl GeminiScheme {
/// RequestからGeminiのリクエストボディを構築
/// Build Gemini request from Request
pub(crate) fn build_request(&self, request: &Request) -> GeminiRequest {
let mut contents = Vec::new();
let contents = self.convert_items_to_contents(&request.items);
for message in &request.messages {
contents.push(self.convert_message(message));
}
// システムプロンプト
// System prompt
let system_instruction = request.system_prompt.as_ref().map(|s| GeminiContent {
role: "user".to_string(), // for system_instruction the role is "user" or omitted
role: "user".to_string(),
parts: vec![GeminiPart::Text { text: s.clone() }],
});
// ツール
// Tools
let tools = if request.tools.is_empty() {
vec![]
} else {
@ -165,7 +161,7 @@ impl GeminiScheme {
}]
};
// ツール設定
// Tool config
let tool_config = if !request.tools.is_empty() {
Some(GeminiToolConfig {
function_calling_config: GeminiFunctionCallingConfig {
@ -181,7 +177,7 @@ impl GeminiScheme {
None
};
// 生成設定
// Generation config
let generation_config = Some(GeminiGenerationConfig {
max_output_tokens: request.config.max_tokens,
temperature: request.config.temperature,
@ -199,58 +195,126 @@ impl GeminiScheme {
}
}
fn convert_message(&self, message: &Message) -> GeminiContent {
let role = match message.role {
Role::User => "user",
Role::Assistant => "model",
};
/// Convert Open Responses Items to Gemini Contents
///
/// Gemini uses:
/// - role "user" for user messages and function responses
/// - role "model" for assistant messages and function calls
fn convert_items_to_contents(&self, items: &[Item]) -> Vec<GeminiContent> {
let mut contents = Vec::new();
let mut pending_model_parts: Vec<GeminiPart> = Vec::new();
let mut pending_user_parts: Vec<GeminiPart> = Vec::new();
let parts = match &message.content {
MessageContent::Text(text) => vec![GeminiPart::Text { text: text.clone() }],
MessageContent::ToolResult {
tool_use_id,
content,
} => {
// Map as a function response in Gemini
vec![GeminiPart::FunctionResponse {
function_response: GeminiFunctionResponse {
name: tool_use_id.clone(),
response: GeminiFunctionResponseContent {
name: tool_use_id.clone(),
content: serde_json::Value::String(content.clone()),
},
},
}]
}
MessageContent::Parts(parts) => parts
.iter()
.map(|p| match p {
ContentPart::Text { text } => GeminiPart::Text { text: text.clone() },
ContentPart::ToolUse { id: _, name, input } => GeminiPart::FunctionCall {
for item in items {
match item {
Item::Message { role, content, .. } => {
// Flush pending parts
self.flush_pending_parts(
&mut contents,
&mut pending_model_parts,
&mut pending_user_parts,
);
let gemini_role = match role {
Role::User => "user",
Role::Assistant => "model",
Role::System => continue, // Skip system role items
};
let parts: Vec<GeminiPart> = content
.iter()
.map(|p| GeminiPart::Text {
text: p.as_text().to_string(),
})
.collect();
contents.push(GeminiContent {
role: gemini_role.to_string(),
parts,
});
}
Item::FunctionCall {
name, arguments, ..
} => {
// Flush pending user parts first
if !pending_user_parts.is_empty() {
contents.push(GeminiContent {
role: "user".to_string(),
parts: std::mem::take(&mut pending_user_parts),
});
}
// Parse arguments
let args = serde_json::from_str(arguments)
.unwrap_or_else(|_| Value::Object(serde_json::Map::new()));
pending_model_parts.push(GeminiPart::FunctionCall {
function_call: GeminiFunctionCall {
name: name.clone(),
args: input.clone(),
args,
},
},
ContentPart::ToolResult {
tool_use_id,
content,
} => GeminiPart::FunctionResponse {
});
}
Item::FunctionCallOutput { call_id, output, .. } => {
// Flush pending model parts first
if !pending_model_parts.is_empty() {
contents.push(GeminiContent {
role: "model".to_string(),
parts: std::mem::take(&mut pending_model_parts),
});
}
pending_user_parts.push(GeminiPart::FunctionResponse {
function_response: GeminiFunctionResponse {
name: tool_use_id.clone(),
name: call_id.clone(),
response: GeminiFunctionResponseContent {
name: tool_use_id.clone(),
content: serde_json::Value::String(content.clone()),
name: call_id.clone(),
content: Value::String(output.clone()),
},
},
},
})
.collect(),
};
});
}
GeminiContent {
role: role.to_string(),
parts,
Item::Reasoning { text, .. } => {
// Flush pending user parts first
if !pending_user_parts.is_empty() {
contents.push(GeminiContent {
role: "user".to_string(),
parts: std::mem::take(&mut pending_user_parts),
});
}
// Reasoning is treated as model text in Gemini
pending_model_parts.push(GeminiPart::Text { text: text.clone() });
}
}
}
// Flush remaining pending parts
self.flush_pending_parts(&mut contents, &mut pending_model_parts, &mut pending_user_parts);
contents
}
fn flush_pending_parts(
&self,
contents: &mut Vec<GeminiContent>,
pending_model_parts: &mut Vec<GeminiPart>,
pending_user_parts: &mut Vec<GeminiPart>,
) {
if !pending_model_parts.is_empty() {
contents.push(GeminiContent {
role: "model".to_string(),
parts: std::mem::take(pending_model_parts),
});
}
if !pending_user_parts.is_empty() {
contents.push(GeminiContent {
role: "user".to_string(),
parts: std::mem::take(pending_user_parts),
});
}
}
@ -318,4 +382,24 @@ mod tests {
assert_eq!(gemini_req.contents[0].role, "user");
assert_eq!(gemini_req.contents[1].role, "model");
}
#[test]
fn test_function_call_and_output() {
let scheme = GeminiScheme::new();
let request = Request::new()
.user("What's the weather?")
.item(Item::function_call(
"call_123",
"get_weather",
r#"{"city":"Tokyo"}"#,
))
.item(Item::function_call_output("call_123", "Sunny, 25°C"));
let gemini_req = scheme.build_request(&request);
assert_eq!(gemini_req.contents.len(), 3);
assert_eq!(gemini_req.contents[0].role, "user");
assert_eq!(gemini_req.contents[1].role, "model");
assert_eq!(gemini_req.contents[2].role, "user");
}
}

View File

@ -1,21 +1,23 @@
//! OpenAI リクエスト生成
//! OpenAI Request Builder
//!
//! Converts Open Responses native Item model to OpenAI Chat Completions API format.
use serde::Serialize;
use serde_json::Value;
use crate::llm_client::{
types::{Item, Role, ToolDefinition},
Request,
types::{ContentPart, Message, MessageContent, Role, ToolDefinition},
};
use super::OpenAIScheme;
/// OpenAI APIへのリクエストボディ
/// OpenAI API request body
#[derive(Debug, Serialize)]
pub(crate) struct OpenAIRequest {
pub model: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub max_completion_tokens: Option<u32>, // max_tokens is deprecated for newer models, generally max_completion_tokens is preferred
pub max_completion_tokens: Option<u32>,
#[serde(skip_serializing_if = "Option::is_none")]
pub max_tokens: Option<u32>, // Legacy field for compatibility (e.g. Ollama)
#[serde(skip_serializing_if = "Option::is_none")]
@ -31,7 +33,7 @@ pub(crate) struct OpenAIRequest {
#[serde(skip_serializing_if = "Vec::is_empty")]
pub tools: Vec<OpenAITool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub tool_choice: Option<String>, // "auto", "none", or specific
pub tool_choice: Option<String>,
}
#[derive(Debug, Serialize)]
@ -39,20 +41,21 @@ pub(crate) struct StreamOptions {
pub include_usage: bool,
}
/// OpenAI メッセージ
/// OpenAI message
#[derive(Debug, Serialize)]
pub(crate) struct OpenAIMessage {
pub role: String,
pub content: Option<OpenAIContent>, // Optional for assistant tool calls
pub content: Option<OpenAIContent>,
#[serde(skip_serializing_if = "Vec::is_empty")]
pub tool_calls: Vec<OpenAIToolCall>,
#[serde(skip_serializing_if = "Option::is_none")]
pub tool_call_id: Option<String>, // For tool_result (role: tool)
pub tool_call_id: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub name: Option<String>, // Optional name
pub name: Option<String>,
}
/// OpenAI コンテンツ
/// OpenAI content
#[allow(dead_code)]
#[derive(Debug, Serialize)]
#[serde(untagged)]
pub(crate) enum OpenAIContent {
@ -60,7 +63,7 @@ pub(crate) enum OpenAIContent {
Parts(Vec<OpenAIContentPart>),
}
/// OpenAI コンテンツパーツ
/// OpenAI content part
#[allow(dead_code)]
#[derive(Debug, Serialize)]
#[serde(tag = "type")]
@ -76,7 +79,7 @@ pub(crate) struct ImageUrl {
pub url: String,
}
/// OpenAI ツール定義
/// OpenAI tool definition
#[derive(Debug, Serialize)]
pub(crate) struct OpenAITool {
pub r#type: String,
@ -91,7 +94,7 @@ pub(crate) struct OpenAIToolFunction {
pub parameters: Value,
}
/// OpenAI ツール呼び出し(メッセージ内)
/// OpenAI tool call in message
#[derive(Debug, Serialize)]
pub(crate) struct OpenAIToolCall {
pub id: String,
@ -106,10 +109,11 @@ pub(crate) struct OpenAIToolCallFunction {
}
impl OpenAIScheme {
/// RequestからOpenAIのリクエストボディを構築
/// Build OpenAI request from Request
pub(crate) fn build_request(&self, model: &str, request: &Request) -> OpenAIRequest {
let mut messages = Vec::new();
// Add system message if present
if let Some(system) = &request.system_prompt {
messages.push(OpenAIMessage {
role: "system".to_string(),
@ -120,7 +124,8 @@ impl OpenAIScheme {
});
}
messages.extend(request.messages.iter().map(|m| self.convert_message(m)));
// Convert items to messages
messages.extend(self.convert_items_to_messages(&request.items));
let tools = request.tools.iter().map(|t| self.convert_tool(t)).collect();
@ -143,107 +148,123 @@ impl OpenAIScheme {
}),
messages,
tools,
tool_choice: None, // Default to auto if tools are present? Or let API decide (which is auto)
tool_choice: None,
}
}
/// Convert Open Responses Items to OpenAI Messages
///
/// OpenAI uses a message-based model where:
/// - User messages have role "user"
/// - Assistant messages have role "assistant"
/// - Tool calls are carried inside assistant messages as a tool_calls array
/// - Tool results have role "tool" with a tool_call_id
fn convert_items_to_messages(&self, items: &[Item]) -> Vec<OpenAIMessage> {
    let mut messages = Vec::new();
    let mut pending_tool_calls: Vec<OpenAIToolCall> = Vec::new();
    let mut pending_assistant_text: Option<String> = None;
    for item in items {
        match item {
            Item::Message { role, content, .. } => {
                // Flush any pending assistant text / tool calls first
                self.flush_pending_assistant(
                    &mut messages,
                    &mut pending_tool_calls,
                    &mut pending_assistant_text,
                );
                let openai_role = match role {
                    Role::User => "user",
                    Role::Assistant => "assistant",
                    Role::System => "system",
                };
                let text_content: String = content
                    .iter()
                    .map(|p| p.as_text())
                    .collect::<Vec<_>>()
                    .join("");
                messages.push(OpenAIMessage {
                    role: openai_role.to_string(),
                    content: Some(OpenAIContent::Text(text_content)),
                    tool_calls: vec![],
                    tool_call_id: None,
                    name: None,
                });
            }
            Item::FunctionCall {
                call_id,
                name,
                arguments,
                ..
            } => {
                pending_tool_calls.push(OpenAIToolCall {
                    id: call_id.clone(),
                    r#type: "function".to_string(),
                    function: OpenAIToolCallFunction {
                        name: name.clone(),
                        arguments: arguments.clone(),
                    },
                });
            }
            Item::FunctionCallOutput { call_id, output, .. } => {
                // Flush pending tool calls before the tool result
                self.flush_pending_assistant(
                    &mut messages,
                    &mut pending_tool_calls,
                    &mut pending_assistant_text,
                );
                messages.push(OpenAIMessage {
                    role: "tool".to_string(),
                    content: Some(OpenAIContent::Text(output.clone())),
                    tool_calls: vec![],
                    tool_call_id: Some(call_id.clone()),
                    name: None,
                });
            }
            Item::Reasoning { text, .. } => {
                // Reasoning is folded into assistant text
                // (OpenAI has no native reasoning content type like Claude's)
                if let Some(ref mut existing) = pending_assistant_text {
                    existing.push_str(text);
                } else {
                    pending_assistant_text = Some(text.clone());
                }
            }
        }
    }
    // Flush remaining pending items
    self.flush_pending_assistant(
        &mut messages,
        &mut pending_tool_calls,
        &mut pending_assistant_text,
    );
    messages
}
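The flattening policy above (buffer reasoning text and tool calls, flush them into a single assistant message whenever a message or tool result arrives, and flush once more at the end) can be sketched with standalone types. `Msg`, `Ev`, `flush`, and `flatten` are illustrative stand-ins, not the crate's `OpenAIMessage`/`Item` API:

```rust
// Standalone sketch of the item -> chat-message flattening.
#[derive(Debug, PartialEq)]
struct Msg {
    role: &'static str,
    content: Option<String>,
    tool_calls: Vec<String>, // call ids only, for brevity
    tool_call_id: Option<String>,
}

enum Ev {
    Message(&'static str, String),      // (role, text)
    Reasoning(String),                  // folded into pending assistant text
    FunctionCall(String),               // call id
    FunctionCallOutput(String, String), // (call id, output)
}

// Emit one assistant message for any buffered text / tool calls.
fn flush(out: &mut Vec<Msg>, calls: &mut Vec<String>, text: &mut Option<String>) {
    if !calls.is_empty() || text.is_some() {
        out.push(Msg {
            role: "assistant",
            content: text.take(),
            tool_calls: std::mem::take(calls),
            tool_call_id: None,
        });
    }
}

fn flatten(items: Vec<Ev>) -> Vec<Msg> {
    let (mut out, mut calls, mut text) = (Vec::new(), Vec::new(), None);
    for item in items {
        match item {
            Ev::Message(role, t) => {
                flush(&mut out, &mut calls, &mut text);
                out.push(Msg { role, content: Some(t), tool_calls: vec![], tool_call_id: None });
            }
            Ev::Reasoning(t) => match &mut text {
                Some(s) => s.push_str(&t),
                None => text = Some(t),
            },
            Ev::FunctionCall(id) => calls.push(id),
            Ev::FunctionCallOutput(id, output) => {
                flush(&mut out, &mut calls, &mut text);
                out.push(Msg {
                    role: "tool",
                    content: Some(output),
                    tool_calls: vec![],
                    tool_call_id: Some(id),
                });
            }
        }
    }
    flush(&mut out, &mut calls, &mut text);
    out
}
```

Note how a reasoning item followed by a function call collapses into one assistant message carrying both the text and the tool call, matching what OpenAI expects.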
fn flush_pending_assistant(
&self,
messages: &mut Vec<OpenAIMessage>,
pending_tool_calls: &mut Vec<OpenAIToolCall>,
pending_assistant_text: &mut Option<String>,
) {
if !pending_tool_calls.is_empty() || pending_assistant_text.is_some() {
messages.push(OpenAIMessage {
role: "assistant".to_string(),
content: pending_assistant_text.take().map(OpenAIContent::Text),
tool_calls: std::mem::take(pending_tool_calls),
tool_call_id: None,
name: None,
});
}
}
fn convert_tool(&self, tool: &ToolDefinition) -> OpenAITool {
@ -274,7 +295,6 @@ mod tests {
assert_eq!(body.messages[0].role, "system");
assert_eq!(body.messages[1].role, "user");
// Check system content
if let Some(OpenAIContent::Text(text)) = &body.messages[0].content {
assert_eq!(text, "System prompt");
} else {
@ -301,20 +321,39 @@ mod tests {
let body = scheme.build_request("llama3", &request);
// max_tokens should be set, max_completion_tokens should be None
assert_eq!(body.max_tokens, Some(100));
assert!(body.max_completion_tokens.is_none());
}
#[test]
fn test_build_request_modern_max_tokens() {
let scheme = OpenAIScheme::new();
let request = Request::new().user("Hello").max_tokens(100);
let body = scheme.build_request("gpt-4o", &request);
// max_completion_tokens should be set, max_tokens should be None
assert_eq!(body.max_completion_tokens, Some(100));
assert!(body.max_tokens.is_none());
}
#[test]
fn test_function_call_and_output() {
let scheme = OpenAIScheme::new();
let request = Request::new()
.user("Check weather")
.item(Item::function_call(
"call_123",
"get_weather",
r#"{"city":"Tokyo"}"#,
))
.item(Item::function_call_output("call_123", "Sunny, 25°C"));
let body = scheme.build_request("gpt-4o", &request);
assert_eq!(body.messages.len(), 3);
assert_eq!(body.messages[0].role, "user");
assert_eq!(body.messages[1].role, "assistant");
assert_eq!(body.messages[1].tool_calls.len(), 1);
assert_eq!(body.messages[2].role, "tool");
}
}


@ -0,0 +1,494 @@
//! Open Responses Event Parser
//!
//! Parses SSE events from the Open Responses API into internal Event types.
use serde::Deserialize;
use crate::llm_client::{
event::{
BlockMetadata, BlockStart, BlockStop, DeltaContent, ErrorEvent, Event, ResponseStatus,
StatusEvent, StopReason, UsageEvent,
},
ClientError,
};
// =============================================================================
// Open Responses SSE Event Types
// =============================================================================
/// Response created event
#[derive(Debug, Deserialize)]
pub struct ResponseCreatedEvent {
pub response: ResponseObject,
}
/// Response object
#[derive(Debug, Deserialize)]
pub struct ResponseObject {
pub id: String,
pub status: String,
#[serde(default)]
pub output: Vec<OutputItem>,
pub usage: Option<UsageObject>,
}
/// Output item in response
#[derive(Debug, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum OutputItem {
Message {
id: String,
role: String,
#[serde(default)]
content: Vec<ContentPartObject>,
},
FunctionCall {
id: String,
call_id: String,
name: String,
arguments: String,
},
Reasoning {
id: String,
#[serde(default)]
text: String,
},
}
/// Content part object
#[derive(Debug, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ContentPartObject {
OutputText { text: String },
InputText { text: String },
Refusal { refusal: String },
}
/// Usage object
#[derive(Debug, Deserialize)]
pub struct UsageObject {
pub input_tokens: Option<u64>,
pub output_tokens: Option<u64>,
pub total_tokens: Option<u64>,
}
/// Output item added event
#[derive(Debug, Deserialize)]
pub struct OutputItemAddedEvent {
pub output_index: usize,
pub item: OutputItem,
}
/// Text delta event
#[derive(Debug, Deserialize)]
pub struct TextDeltaEvent {
pub output_index: usize,
pub content_index: usize,
pub delta: String,
}
/// Text done event
#[derive(Debug, Deserialize)]
pub struct TextDoneEvent {
pub output_index: usize,
pub content_index: usize,
pub text: String,
}
/// Function call arguments delta event
#[derive(Debug, Deserialize)]
pub struct FunctionCallArgumentsDeltaEvent {
pub output_index: usize,
pub call_id: String,
pub delta: String,
}
/// Function call arguments done event
#[derive(Debug, Deserialize)]
pub struct FunctionCallArgumentsDoneEvent {
pub output_index: usize,
pub call_id: String,
pub arguments: String,
}
/// Reasoning delta event
#[derive(Debug, Deserialize)]
pub struct ReasoningDeltaEvent {
pub output_index: usize,
pub delta: String,
}
/// Reasoning done event
#[derive(Debug, Deserialize)]
pub struct ReasoningDoneEvent {
pub output_index: usize,
pub text: String,
}
/// Content part done event
#[derive(Debug, Deserialize)]
pub struct ContentPartDoneEvent {
pub output_index: usize,
pub content_index: usize,
pub part: ContentPartObject,
}
/// Output item done event
#[derive(Debug, Deserialize)]
pub struct OutputItemDoneEvent {
pub output_index: usize,
pub item: OutputItem,
}
/// Response done event
#[derive(Debug, Deserialize)]
pub struct ResponseDoneEvent {
pub response: ResponseObject,
}
/// Error event from API
#[derive(Debug, Deserialize)]
pub struct ApiErrorEvent {
pub error: ApiError,
}
/// API error details
#[derive(Debug, Deserialize)]
pub struct ApiError {
pub code: Option<String>,
pub message: String,
}
// =============================================================================
// Event Parsing
// =============================================================================
/// Parse SSE event into internal Event(s)
///
/// Returns `Ok(None)` for events that should be ignored (e.g., heartbeats).
/// Returns `Ok(Some(vec))` for events that produce one or more internal Events.
pub fn parse_event(event_type: &str, data: &str) -> Result<Option<Vec<Event>>, ClientError> {
// Skip empty data
if data.is_empty() || data == "[DONE]" {
return Ok(None);
}
let events = match event_type {
// Response lifecycle
"response.created" => {
let _event: ResponseCreatedEvent = parse_json(data)?;
Some(vec![Event::Status(StatusEvent {
status: ResponseStatus::Started,
})])
}
"response.in_progress" => {
// Just a status update, no action needed
None
}
"response.completed" | "response.done" => {
let event: ResponseDoneEvent = parse_json(data)?;
let mut events = Vec::new();
// Emit usage if present
if let Some(usage) = event.response.usage {
events.push(Event::Usage(UsageEvent {
input_tokens: usage.input_tokens,
output_tokens: usage.output_tokens,
total_tokens: usage.total_tokens,
cache_read_input_tokens: None,
cache_creation_input_tokens: None,
}));
}
events.push(Event::Status(StatusEvent {
status: ResponseStatus::Completed,
}));
Some(events)
}
"response.failed" => {
// Try to parse error
if let Ok(error_event) = parse_json::<ApiErrorEvent>(data) {
Some(vec![
Event::Error(ErrorEvent {
code: error_event.error.code,
message: error_event.error.message,
}),
Event::Status(StatusEvent {
status: ResponseStatus::Failed,
}),
])
} else {
Some(vec![Event::Status(StatusEvent {
status: ResponseStatus::Failed,
})])
}
}
// Output item events
"response.output_item.added" => {
let event: OutputItemAddedEvent = parse_json(data)?;
Some(vec![convert_item_added(&event)])
}
"response.output_item.done" => {
let event: OutputItemDoneEvent = parse_json(data)?;
Some(vec![convert_item_done(&event)])
}
// Text content events
"response.output_text.delta" => {
let event: TextDeltaEvent = parse_json(data)?;
Some(vec![Event::text_delta(event.output_index, &event.delta)])
}
"response.output_text.done" => {
// Text done - we'll handle stop in output_item.done
let _event: TextDoneEvent = parse_json(data)?;
None
}
// Content part events
"response.content_part.added" => {
// Content part added - we handle this via output_item.added
None
}
"response.content_part.done" => {
// Content part done - we handle stop in output_item.done
None
}
// Function call events
"response.function_call_arguments.delta" => {
let event: FunctionCallArgumentsDeltaEvent = parse_json(data)?;
Some(vec![Event::BlockDelta(crate::llm_client::event::BlockDelta {
index: event.output_index,
delta: DeltaContent::InputJson(event.delta),
})])
}
"response.function_call_arguments.done" => {
// Arguments done - we handle stop in output_item.done
let _event: FunctionCallArgumentsDoneEvent = parse_json(data)?;
None
}
// Reasoning events
"response.reasoning.delta" | "response.reasoning_summary_text.delta" => {
let event: ReasoningDeltaEvent = parse_json(data)?;
Some(vec![Event::BlockDelta(crate::llm_client::event::BlockDelta {
index: event.output_index,
delta: DeltaContent::Thinking(event.delta),
})])
}
"response.reasoning.done" | "response.reasoning_summary_text.done" => {
// Reasoning done - we handle stop in output_item.done
let _event: ReasoningDoneEvent = parse_json(data)?;
None
}
// Error event
"error" => {
let event: ApiErrorEvent = parse_json(data)?;
Some(vec![Event::Error(ErrorEvent {
code: event.error.code,
message: event.error.message,
})])
}
// Unknown event type - ignore
_ => {
tracing::debug!(event_type = event_type, "Unknown Open Responses event type");
None
}
};
Ok(events)
}
fn parse_json<T: serde::de::DeserializeOwned>(data: &str) -> Result<T, ClientError> {
serde_json::from_str(data).map_err(|e| ClientError::Parse(e.to_string()))
}
fn convert_item_added(event: &OutputItemAddedEvent) -> Event {
    match &event.item {
        OutputItem::Message { .. } => Event::BlockStart(BlockStart {
            index: event.output_index,
            block_type: crate::llm_client::event::BlockType::Text,
            metadata: BlockMetadata::Text,
        }),
        OutputItem::FunctionCall { call_id, name, .. } => Event::BlockStart(BlockStart {
            index: event.output_index,
            block_type: crate::llm_client::event::BlockType::ToolUse,
            metadata: BlockMetadata::ToolUse {
                id: call_id.clone(),
                name: name.clone(),
            },
        }),
        OutputItem::Reasoning { .. } => Event::BlockStart(BlockStart {
            index: event.output_index,
            block_type: crate::llm_client::event::BlockType::Thinking,
            metadata: BlockMetadata::Thinking,
        }),
    }
}
fn convert_item_done(event: &OutputItemDoneEvent) -> Event {
let stop_reason = match &event.item {
OutputItem::FunctionCall { .. } => Some(StopReason::ToolUse),
_ => Some(StopReason::EndTurn),
};
Event::BlockStop(BlockStop {
index: event.output_index,
stop_reason,
})
}
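The dispatch policy that `parse_event` and the two converters above implement can be summarized in isolation: delta events stream through, `output_item.added`/`output_item.done` drive block start/stop, and the per-part `*.done` events are deliberately ignored because stops always come from `response.output_item.done`. A standalone sketch (`Dispatch` and `dispatch` are illustrative names, not the crate's API):

```rust
// Sketch of the SSE event-type dispatch policy.
#[derive(Debug, PartialEq)]
enum Dispatch {
    Status,
    BlockStart,
    BlockDelta,
    BlockStop,
    Ignore,
}

fn dispatch(event_type: &str) -> Dispatch {
    match event_type {
        // Response lifecycle events become status (and possibly usage) events.
        "response.created" | "response.completed" | "response.done" | "response.failed" => {
            Dispatch::Status
        }
        // Item boundaries drive block start/stop.
        "response.output_item.added" => Dispatch::BlockStart,
        "response.output_item.done" => Dispatch::BlockStop,
        // Deltas stream through as block deltas.
        "response.output_text.delta"
        | "response.function_call_arguments.delta"
        | "response.reasoning.delta" => Dispatch::BlockDelta,
        // Part-level "*.done" events and unknown types carry nothing new here.
        _ => Dispatch::Ignore,
    }
}
```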
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_response_created() {
let data = r#"{"response":{"id":"resp_123","status":"in_progress","output":[]}}"#;
let events = parse_event("response.created", data).unwrap().unwrap();
assert_eq!(events.len(), 1);
assert!(matches!(
events[0],
Event::Status(StatusEvent {
status: ResponseStatus::Started
})
));
}
#[test]
fn test_parse_text_delta() {
let data = r#"{"output_index":0,"content_index":0,"delta":"Hello"}"#;
let events = parse_event("response.output_text.delta", data)
.unwrap()
.unwrap();
assert_eq!(events.len(), 1);
if let Event::BlockDelta(delta) = &events[0] {
assert_eq!(delta.index, 0);
assert!(matches!(&delta.delta, DeltaContent::Text(t) if t == "Hello"));
} else {
panic!("Expected BlockDelta");
}
}
#[test]
fn test_parse_output_item_added_message() {
let data = r#"{"output_index":0,"item":{"type":"message","id":"msg_123","role":"assistant","content":[]}}"#;
let events = parse_event("response.output_item.added", data)
.unwrap()
.unwrap();
assert_eq!(events.len(), 1);
if let Event::BlockStart(start) = &events[0] {
assert_eq!(start.index, 0);
assert!(matches!(
start.block_type,
crate::llm_client::event::BlockType::Text
));
} else {
panic!("Expected BlockStart");
}
}
#[test]
fn test_parse_output_item_added_function_call() {
let data = r#"{"output_index":1,"item":{"type":"function_call","id":"fc_123","call_id":"call_456","name":"get_weather","arguments":""}}"#;
let events = parse_event("response.output_item.added", data)
.unwrap()
.unwrap();
assert_eq!(events.len(), 1);
if let Event::BlockStart(start) = &events[0] {
assert_eq!(start.index, 1);
assert!(matches!(
start.block_type,
crate::llm_client::event::BlockType::ToolUse
));
if let BlockMetadata::ToolUse { id, name } = &start.metadata {
assert_eq!(id, "call_456");
assert_eq!(name, "get_weather");
} else {
panic!("Expected ToolUse metadata");
}
} else {
panic!("Expected BlockStart");
}
}
#[test]
fn test_parse_function_call_arguments_delta() {
let data = r#"{"output_index":1,"call_id":"call_456","delta":"{\"city\":"}"#;
let events = parse_event("response.function_call_arguments.delta", data)
.unwrap()
.unwrap();
assert_eq!(events.len(), 1);
if let Event::BlockDelta(delta) = &events[0] {
assert_eq!(delta.index, 1);
assert!(matches!(
&delta.delta,
DeltaContent::InputJson(s) if s == "{\"city\":"
));
} else {
panic!("Expected BlockDelta");
}
}
#[test]
fn test_parse_response_completed() {
let data = r#"{"response":{"id":"resp_123","status":"completed","output":[],"usage":{"input_tokens":10,"output_tokens":20,"total_tokens":30}}}"#;
let events = parse_event("response.completed", data).unwrap().unwrap();
assert_eq!(events.len(), 2);
// First event should be usage
if let Event::Usage(usage) = &events[0] {
assert_eq!(usage.input_tokens, Some(10));
assert_eq!(usage.output_tokens, Some(20));
assert_eq!(usage.total_tokens, Some(30));
} else {
panic!("Expected Usage event");
}
// Second event should be status
assert!(matches!(
events[1],
Event::Status(StatusEvent {
status: ResponseStatus::Completed
})
));
}
#[test]
fn test_parse_error() {
let data = r#"{"error":{"code":"rate_limit","message":"Too many requests"}}"#;
let events = parse_event("error", data).unwrap().unwrap();
assert_eq!(events.len(), 1);
if let Event::Error(err) = &events[0] {
assert_eq!(err.code, Some("rate_limit".to_string()));
assert_eq!(err.message, "Too many requests");
} else {
panic!("Expected Error event");
}
}
#[test]
fn test_parse_unknown_event() {
let data = r#"{}"#;
let events = parse_event("some.unknown.event", data).unwrap();
assert!(events.is_none());
}
}


@ -0,0 +1,49 @@
//! Open Responses Scheme
//!
//! Handles request/response conversion for the Open Responses API.
//! Since our internal types are already Open Responses native, this scheme
//! primarily passes through data with minimal transformation.
mod events;
mod request;
use crate::llm_client::{ClientError, Request};
pub use events::*;
pub use request::*;
/// Open Responses Scheme
///
/// Handles conversion between internal types and the Open Responses wire format.
#[derive(Debug, Clone, Default)]
pub struct OpenResponsesScheme {
/// Optional model override
pub model: Option<String>,
}
impl OpenResponsesScheme {
/// Create a new OpenResponsesScheme
pub fn new() -> Self {
Self::default()
}
/// Set the model
pub fn with_model(mut self, model: impl Into<String>) -> Self {
self.model = Some(model.into());
self
}
/// Build Open Responses request from internal Request
pub fn build_request(&self, model: &str, request: &Request) -> OpenResponsesRequest {
build_request(model, request)
}
/// Parse SSE event data into internal Event(s)
pub fn parse_event(
&self,
event_type: &str,
data: &str,
) -> Result<Option<Vec<crate::llm_client::Event>>, ClientError> {
parse_event(event_type, data)
}
}


@ -0,0 +1,285 @@
//! Open Responses Request Builder
//!
//! Converts internal Request/Item types to Open Responses API format.
//! Since our internal types are already Open Responses native, this is
//! mostly a direct serialization with some field renaming.
use serde::Serialize;
use serde_json::Value;
use crate::llm_client::{types::Item, Request, ToolDefinition};
/// Open Responses API request body
#[derive(Debug, Serialize)]
pub struct OpenResponsesRequest {
/// Model identifier
pub model: String,
/// Input items (conversation history)
pub input: Vec<OpenResponsesItem>,
/// System instructions
#[serde(skip_serializing_if = "Option::is_none")]
pub instructions: Option<String>,
/// Tool definitions
#[serde(skip_serializing_if = "Vec::is_empty")]
pub tools: Vec<OpenResponsesTool>,
/// Enable streaming
pub stream: bool,
/// Maximum output tokens
#[serde(skip_serializing_if = "Option::is_none")]
pub max_output_tokens: Option<u32>,
/// Temperature
#[serde(skip_serializing_if = "Option::is_none")]
pub temperature: Option<f32>,
/// Top P (nucleus sampling)
#[serde(skip_serializing_if = "Option::is_none")]
pub top_p: Option<f32>,
}
/// Open Responses input item
#[derive(Debug, Serialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum OpenResponsesItem {
/// Message item
Message {
#[serde(skip_serializing_if = "Option::is_none")]
id: Option<String>,
role: String,
content: Vec<OpenResponsesContentPart>,
},
/// Function call item
FunctionCall {
#[serde(skip_serializing_if = "Option::is_none")]
id: Option<String>,
call_id: String,
name: String,
arguments: String,
},
/// Function call output item
FunctionCallOutput {
#[serde(skip_serializing_if = "Option::is_none")]
id: Option<String>,
call_id: String,
output: String,
},
/// Reasoning item
Reasoning {
#[serde(skip_serializing_if = "Option::is_none")]
id: Option<String>,
text: String,
},
}
/// Open Responses content part
#[derive(Debug, Serialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum OpenResponsesContentPart {
/// Input text (for user messages)
InputText { text: String },
/// Output text (for assistant messages)
OutputText { text: String },
/// Refusal
Refusal { refusal: String },
}
/// Open Responses tool definition
#[derive(Debug, Serialize)]
pub struct OpenResponsesTool {
/// Tool type (always "function")
pub r#type: String,
/// Function name
pub name: String,
/// Description
#[serde(skip_serializing_if = "Option::is_none")]
pub description: Option<String>,
/// Parameters schema
pub parameters: Value,
}
/// Build Open Responses request from internal Request
pub fn build_request(model: &str, request: &Request) -> OpenResponsesRequest {
let input = request.items.iter().map(convert_item).collect();
let tools = request.tools.iter().map(convert_tool).collect();
OpenResponsesRequest {
model: model.to_string(),
input,
instructions: request.system_prompt.clone(),
tools,
stream: true,
max_output_tokens: request.config.max_tokens,
temperature: request.config.temperature,
top_p: request.config.top_p,
}
}
fn convert_item(item: &Item) -> OpenResponsesItem {
match item {
Item::Message {
id,
role,
content,
status: _,
} => {
let role_str = match role {
crate::llm_client::types::Role::User => "user",
crate::llm_client::types::Role::Assistant => "assistant",
crate::llm_client::types::Role::System => "system",
};
let parts = content
.iter()
.map(|p| match p {
crate::llm_client::types::ContentPart::InputText { text } => {
OpenResponsesContentPart::InputText { text: text.clone() }
}
crate::llm_client::types::ContentPart::OutputText { text } => {
OpenResponsesContentPart::OutputText { text: text.clone() }
}
crate::llm_client::types::ContentPart::Refusal { refusal } => {
OpenResponsesContentPart::Refusal {
refusal: refusal.clone(),
}
}
})
.collect();
OpenResponsesItem::Message {
id: id.clone(),
role: role_str.to_string(),
content: parts,
}
}
Item::FunctionCall {
id,
call_id,
name,
arguments,
status: _,
} => OpenResponsesItem::FunctionCall {
id: id.clone(),
call_id: call_id.clone(),
name: name.clone(),
arguments: arguments.clone(),
},
Item::FunctionCallOutput {
id,
call_id,
output,
} => OpenResponsesItem::FunctionCallOutput {
id: id.clone(),
call_id: call_id.clone(),
output: output.clone(),
},
Item::Reasoning {
id,
text,
status: _,
} => OpenResponsesItem::Reasoning {
id: id.clone(),
text: text.clone(),
},
}
}
fn convert_tool(tool: &ToolDefinition) -> OpenResponsesTool {
OpenResponsesTool {
r#type: "function".to_string(),
name: tool.name.clone(),
description: tool.description.clone(),
parameters: tool.input_schema.clone(),
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::llm_client::types::Item;
#[test]
fn test_build_simple_request() {
let request = Request::new()
.system("You are a helpful assistant.")
.user("Hello!");
let or_req = build_request("gpt-4o", &request);
assert_eq!(or_req.model, "gpt-4o");
assert_eq!(
or_req.instructions,
Some("You are a helpful assistant.".to_string())
);
assert_eq!(or_req.input.len(), 1);
assert!(or_req.stream);
}
#[test]
fn test_build_request_with_tool() {
let request = Request::new().user("What's the weather?").tool(
ToolDefinition::new("get_weather")
.description("Get current weather")
.input_schema(serde_json::json!({
"type": "object",
"properties": {
"location": { "type": "string" }
},
"required": ["location"]
})),
);
let or_req = build_request("gpt-4o", &request);
assert_eq!(or_req.tools.len(), 1);
assert_eq!(or_req.tools[0].name, "get_weather");
assert_eq!(or_req.tools[0].r#type, "function");
}
#[test]
fn test_function_call_and_output() {
let request = Request::new()
.user("What's the weather?")
.item(Item::function_call(
"call_123",
"get_weather",
r#"{"city":"Tokyo"}"#,
))
.item(Item::function_call_output("call_123", "Sunny, 25°C"));
let or_req = build_request("gpt-4o", &request);
assert_eq!(or_req.input.len(), 3);
// Check function call
if let OpenResponsesItem::FunctionCall { call_id, name, .. } = &or_req.input[1] {
assert_eq!(call_id, "call_123");
assert_eq!(name, "get_weather");
} else {
panic!("Expected FunctionCall");
}
// Check function call output
if let OpenResponsesItem::FunctionCallOutput { call_id, output, .. } = &or_req.input[2] {
assert_eq!(call_id, "call_123");
assert_eq!(output, "Sunny, 25°C");
} else {
panic!("Expected FunctionCallOutput");
}
}
}


@ -1,189 +1,491 @@
//! LLM Client Common Types - Open Responses Native
//!
//! This module defines types that are natively aligned with the Open Responses specification.
//! The core abstraction is `Item` which represents different types of conversation elements:
//! - Message items (user/assistant messages with content parts)
//! - FunctionCall items (tool invocations)
//! - FunctionCallOutput items (tool results)
//! - Reasoning items (extended thinking)
use serde::{Deserialize, Serialize};
// ============================================================================
// Item - The core unit of conversation
// ============================================================================
/// Item ID type for tracking items in a conversation
pub type ItemId = String;
/// Call ID type for linking function calls to their outputs
pub type CallId = String;
/// Conversation item - the primary unit in Open Responses
///
/// Items represent discrete elements in a conversation. Unlike traditional
/// message-based APIs, Open Responses treats tool calls and reasoning as
/// first-class items rather than parts of messages.
///
/// # Examples
///
/// ```ignore
/// use llm_worker::Item;
///
/// // User message
/// let user_item = Item::user_message("Hello!");
///
/// // Assistant message
/// let assistant_item = Item::assistant_message("Hi there!");
///
/// // Function call
/// let call = Item::function_call("call_123", "get_weather", json!({"city": "Tokyo"}));
///
/// // Function call output
/// let result = Item::function_call_output("call_123", "Sunny, 25°C");
/// ```
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum Item {
/// User or assistant message with content parts
Message {
/// Optional item ID
#[serde(skip_serializing_if = "Option::is_none")]
id: Option<ItemId>,
/// Message role
role: Role,
/// Content parts
content: Vec<ContentPart>,
/// Item status
#[serde(skip_serializing_if = "Option::is_none")]
status: Option<ItemStatus>,
},
/// Function (tool) call from the assistant
FunctionCall {
/// Optional item ID
#[serde(skip_serializing_if = "Option::is_none")]
id: Option<ItemId>,
/// Call ID for linking to output
call_id: CallId,
/// Function name
name: String,
/// Function arguments as JSON string
arguments: String,
/// Item status
#[serde(skip_serializing_if = "Option::is_none")]
status: Option<ItemStatus>,
},
/// Function (tool) call output/result
FunctionCallOutput {
/// Optional item ID
#[serde(skip_serializing_if = "Option::is_none")]
id: Option<ItemId>,
/// Call ID linking to the function call
call_id: CallId,
/// Output content
output: String,
},
/// Reasoning/thinking item
Reasoning {
/// Optional item ID
#[serde(skip_serializing_if = "Option::is_none")]
id: Option<ItemId>,
/// Reasoning text
text: String,
/// Item status
#[serde(skip_serializing_if = "Option::is_none")]
status: Option<ItemStatus>,
},
}
impl Item {
// ========================================================================
// Message constructors
// ========================================================================
/// Create a user message item with text content
pub fn user_message(text: impl Into<String>) -> Self {
Self::Message {
id: None,
role: Role::User,
content: vec![ContentPart::InputText {
text: text.into(),
}],
status: None,
}
}
/// Create a user message item with multiple content parts
pub fn user_message_parts(parts: Vec<ContentPart>) -> Self {
Self::Message {
id: None,
role: Role::User,
content: parts,
status: None,
}
}
/// Create an assistant message item with text content
pub fn assistant_message(text: impl Into<String>) -> Self {
Self::Message {
id: None,
role: Role::Assistant,
content: vec![ContentPart::OutputText {
text: text.into(),
}],
status: None,
}
}
/// Create an assistant message item with multiple content parts
pub fn assistant_message_parts(parts: Vec<ContentPart>) -> Self {
Self::Message {
id: None,
role: Role::Assistant,
content: parts,
status: None,
}
}
// ========================================================================
// Function call constructors
// ========================================================================
/// Create a function call item
pub fn function_call(
call_id: impl Into<String>,
name: impl Into<String>,
arguments: impl Into<String>,
) -> Self {
Self::FunctionCall {
id: None,
call_id: call_id.into(),
name: name.into(),
arguments: arguments.into(),
status: None,
}
}
/// Create a function call item from a JSON value
pub fn function_call_json(
call_id: impl Into<String>,
name: impl Into<String>,
arguments: serde_json::Value,
) -> Self {
Self::function_call(call_id, name, arguments.to_string())
}
/// Create a function call output item
pub fn function_call_output(call_id: impl Into<String>, output: impl Into<String>) -> Self {
Self::FunctionCallOutput {
id: None,
call_id: call_id.into(),
output: output.into(),
}
}
// ========================================================================
// Reasoning constructors
// ========================================================================
/// Create a reasoning item
pub fn reasoning(text: impl Into<String>) -> Self {
Self::Reasoning {
id: None,
text: text.into(),
status: None,
}
}
// ========================================================================
// Builder methods
// ========================================================================
/// Set the item ID
pub fn with_id(mut self, id: impl Into<String>) -> Self {
match &mut self {
Self::Message { id: item_id, .. } => *item_id = Some(id.into()),
Self::FunctionCall { id: item_id, .. } => *item_id = Some(id.into()),
Self::FunctionCallOutput { id: item_id, .. } => *item_id = Some(id.into()),
Self::Reasoning { id: item_id, .. } => *item_id = Some(id.into()),
}
self
}
/// Set the item status
pub fn with_status(mut self, new_status: ItemStatus) -> Self {
match &mut self {
Self::Message { status, .. } => *status = Some(new_status),
Self::FunctionCall { status, .. } => *status = Some(new_status),
Self::FunctionCallOutput { .. } => {} // Output items don't have status
Self::Reasoning { status, .. } => *status = Some(new_status),
}
self
}
// ========================================================================
// Accessors
// ========================================================================
/// Get the item ID if set
pub fn id(&self) -> Option<&str> {
match self {
Self::Message { id, .. } => id.as_deref(),
Self::FunctionCall { id, .. } => id.as_deref(),
Self::FunctionCallOutput { id, .. } => id.as_deref(),
Self::Reasoning { id, .. } => id.as_deref(),
}
}
/// Get the item type as a string
pub fn item_type(&self) -> &'static str {
match self {
Self::Message { .. } => "message",
Self::FunctionCall { .. } => "function_call",
Self::FunctionCallOutput { .. } => "function_call_output",
Self::Reasoning { .. } => "reasoning",
}
}
/// Check if this is a user message
pub fn is_user_message(&self) -> bool {
matches!(self, Self::Message { role: Role::User, .. })
}
/// Check if this is an assistant message
pub fn is_assistant_message(&self) -> bool {
matches!(self, Self::Message { role: Role::Assistant, .. })
}
/// Check if this is a function call
pub fn is_function_call(&self) -> bool {
matches!(self, Self::FunctionCall { .. })
}
/// Check if this is a function call output
pub fn is_function_call_output(&self) -> bool {
matches!(self, Self::FunctionCallOutput { .. })
}
/// Check if this is a reasoning item
pub fn is_reasoning(&self) -> bool {
matches!(self, Self::Reasoning { .. })
}
/// Get text content if this is a simple text message
pub fn as_text(&self) -> Option<&str> {
match self {
Self::Message { content, .. } if content.len() == 1 => match &content[0] {
ContentPart::InputText { text } => Some(text),
ContentPart::OutputText { text } => Some(text),
_ => None,
},
_ => None,
}
}
}
// ============================================================================
// Content Parts - Components within message items
// ============================================================================
/// Content part within a message item
///
/// Open Responses distinguishes between input and output content types.
/// Input types are used in user messages, output types in assistant messages.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ContentPart {
/// Input text (for user messages)
InputText {
/// The text content
text: String,
},
/// Output text (for assistant messages)
OutputText {
/// The text content
text: String,
},
/// Refusal content (for assistant messages)
Refusal {
/// The refusal message
refusal: String,
},
// Future: InputAudio, OutputAudio, etc.
}
impl ContentPart {
/// Create an input text part
pub fn input_text(text: impl Into<String>) -> Self {
Self::InputText { text: text.into() }
}
/// Create an output text part
pub fn output_text(text: impl Into<String>) -> Self {
Self::OutputText { text: text.into() }
}
/// Create a refusal part
pub fn refusal(refusal: impl Into<String>) -> Self {
Self::Refusal {
refusal: refusal.into(),
}
}
/// Get the text content regardless of type
pub fn as_text(&self) -> &str {
match self {
Self::InputText { text } => text,
Self::OutputText { text } => text,
Self::Refusal { refusal } => refusal,
}
}
}
// ============================================================================
// Role and Status
// ============================================================================
/// Message role
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum Role {
/// User
User,
/// Assistant
Assistant,
/// System (for system prompts, not typically used in items)
System,
}
/// Item status
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum ItemStatus {
/// Item is being generated
InProgress,
/// Item completed successfully
Completed,
/// Item was truncated (e.g., max tokens)
Incomplete,
}
// ============================================================================
// Request Types
// ============================================================================
/// LLM Request
#[derive(Debug, Clone, Default)]
pub struct Request {
/// System prompt (instructions)
pub system_prompt: Option<String>,
/// Input items (conversation history)
pub items: Vec<Item>,
/// Tool definitions
pub tools: Vec<ToolDefinition>,
/// Request configuration
pub config: RequestConfig,
}
impl Request {
/// Create a new empty request
pub fn new() -> Self {
Self::default()
}
/// Set the system prompt
pub fn system(mut self, prompt: impl Into<String>) -> Self {
self.system_prompt = Some(prompt.into());
self
}
/// Add a user message
pub fn user(mut self, content: impl Into<String>) -> Self {
self.items.push(Item::user_message(content));
self
}
/// Add an assistant message
pub fn assistant(mut self, content: impl Into<String>) -> Self {
self.items.push(Item::assistant_message(content));
self
}
/// Add an item
pub fn item(mut self, item: Item) -> Self {
self.items.push(item);
self
}
/// Add multiple items
pub fn items(mut self, items: impl IntoIterator<Item = Item>) -> Self {
self.items.extend(items);
self
}
/// Add a tool definition
pub fn tool(mut self, tool: ToolDefinition) -> Self {
self.tools.push(tool);
self
}
/// Set the request config
pub fn config(mut self, config: RequestConfig) -> Self {
self.config = config;
self
}
/// Set max tokens
pub fn max_tokens(mut self, max_tokens: u32) -> Self {
self.config.max_tokens = Some(max_tokens);
self
}
/// Set temperature
pub fn temperature(mut self, temperature: f32) -> Self {
self.config.temperature = Some(temperature);
self
}
/// Set top_p
pub fn top_p(mut self, top_p: f32) -> Self {
self.config.top_p = Some(top_p);
self
}
/// Set top_k
pub fn top_k(mut self, top_k: u32) -> Self {
self.config.top_k = Some(top_k);
self
}
/// Add a stop sequence
pub fn stop_sequence(mut self, sequence: impl Into<String>) -> Self {
self.config.stop_sequences.push(sequence.into());
self
}
}
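The `Request` builder above relies on the consuming-builder pattern. A minimal, self-contained sketch of that pattern follows; `MiniRequest` is an illustrative stand-in, not the crate's actual type, and its `items` field holds plain strings instead of `Item` values.

```rust
// Minimal sketch of the consuming-builder pattern used by `Request`.
#[derive(Debug, Default)]
struct MiniRequest {
    system_prompt: Option<String>,
    items: Vec<String>,
    max_tokens: Option<u32>,
}

impl MiniRequest {
    fn new() -> Self {
        Self::default()
    }

    // Each setter takes `self` by value and returns it, so calls chain
    // without intermediate bindings or `&mut` borrows.
    fn system(mut self, prompt: impl Into<String>) -> Self {
        self.system_prompt = Some(prompt.into());
        self
    }

    fn user(mut self, content: impl Into<String>) -> Self {
        self.items.push(format!("user: {}", content.into()));
        self
    }

    fn max_tokens(mut self, max_tokens: u32) -> Self {
        self.max_tokens = Some(max_tokens);
        self
    }
}

fn main() {
    let req = MiniRequest::new()
        .system("You are helpful.")
        .user("Hello!")
        .max_tokens(1024);
    assert_eq!(req.system_prompt.as_deref(), Some("You are helpful."));
    assert_eq!(req.items, vec!["user: Hello!".to_string()]);
    assert_eq!(req.max_tokens, Some(1024));
}
```

Because every setter moves `self`, a half-built request cannot be accidentally shared; the trade-off is that conditional configuration requires rebinding (`req = req.max_tokens(n);`).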
// ============================================================================
// Tool Definition
// ============================================================================
/// Tool (function) definition
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ToolDefinition {
/// Tool name
pub name: String,
/// Tool description
pub description: Option<String>,
/// Input schema (JSON Schema)
pub input_schema: serde_json::Value,
}
impl ToolDefinition {
/// Create a new tool definition
pub fn new(name: impl Into<String>) -> Self {
Self {
name: name.into(),
@ -195,65 +497,69 @@ impl ToolDefinition {
}
}
/// Set the description
pub fn description(mut self, desc: impl Into<String>) -> Self {
self.description = Some(desc.into());
self
}
/// Set the input schema
pub fn input_schema(mut self, schema: serde_json::Value) -> Self {
self.input_schema = schema;
self
}
}
// ============================================================================
// Request Config
// ============================================================================
/// Request configuration
#[derive(Debug, Clone, Default)]
pub struct RequestConfig {
/// Maximum tokens to generate
pub max_tokens: Option<u32>,
/// Temperature (randomness)
pub temperature: Option<f32>,
/// Top P (nucleus sampling)
pub top_p: Option<f32>,
/// Top K
pub top_k: Option<u32>,
/// Stop sequences
pub stop_sequences: Vec<String>,
}
impl RequestConfig {
/// Create a new default config
pub fn new() -> Self {
Self::default()
}
/// Set max tokens
pub fn with_max_tokens(mut self, max_tokens: u32) -> Self {
self.max_tokens = Some(max_tokens);
self
}
/// Set temperature
pub fn with_temperature(mut self, temperature: f32) -> Self {
self.temperature = Some(temperature);
self
}
/// Set top_p
pub fn with_top_p(mut self, top_p: f32) -> Self {
self.top_p = Some(top_p);
self
}
/// Set top_k
pub fn with_top_k(mut self, top_k: u32) -> Self {
self.top_k = Some(top_k);
self
}
/// Add a stop sequence
pub fn with_stop_sequence(mut self, sequence: impl Into<String>) -> Self {
self.stop_sequences.push(sequence.into());
self

View File

@ -1,116 +1,16 @@
//! Message and Item Types
//!
//! This module provides the core types for representing conversation items
//! in the Open Responses format.
//!
//! The primary type is [`Item`], which represents different kinds of conversation
//! elements: messages, function calls, function call outputs, and reasoning.
// Re-export all types from llm_client::types
pub use crate::llm_client::types::{ContentPart, Item, Role};
/// Convenience alias for backward compatibility
///
/// In the Open Responses model, messages are just one type of Item.
/// This alias allows code that expects a "Message" type to continue working.
pub type Message = Item;
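A self-contained sketch of how this backward-compatibility alias works: once `Message` is a type alias for `Item`, code written against the `Message` name keeps compiling unchanged. The `Item` enum here is a toy stand-in, not the crate's actual type.

```rust
// Toy stand-in for the crate's `Item`.
#[derive(Debug, Clone, PartialEq)]
enum Item {
    UserMessage(String),
}

// Mirrors `pub type Message = Item;` from the module above.
type Message = Item;

// A function written against the old `Message` name still works:
// the alias and the aliased type are one and the same type.
fn first_text(m: &Message) -> &str {
    match m {
        Item::UserMessage(t) => t,
    }
}

fn main() {
    // `Message` and `Item` values are fully interchangeable.
    let m: Message = Item::UserMessage("hi".into());
    let i: Item = m.clone();
    assert_eq!(first_text(&m), "hi");
    assert_eq!(m, i);
}
```

Note that a type alias adds no newtype boundary: it cannot restrict or extend `Item`, which is exactly what makes it a zero-cost migration aid.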

View File

@ -7,23 +7,20 @@ use tokio::sync::mpsc;
use tracing::{debug, info, trace, warn};
use crate::{
Item,
hook::{
Hook, HookError, HookRegistry, OnAbort, OnPromptSubmit, OnPromptSubmitResult, OnTurnEnd,
OnTurnEndResult, PostToolCall, PostToolCallContext, PostToolCallResult, PreLlmRequest,
PreLlmRequestResult, PreToolCall, PreToolCallResult, ToolCall, ToolCallContext, ToolResult,
},
llm_client::{ClientError, ConfigWarning, LlmClient, Request, RequestConfig, ToolDefinition},
state::{CacheLocked, Mutable, WorkerState},
subscriber::{
ErrorSubscriberAdapter, StatusSubscriberAdapter, TextBlockSubscriberAdapter,
ToolUseBlockSubscriberAdapter, UsageSubscriberAdapter, WorkerSubscriber,
},
timeline::{TextBlockCollector, Timeline, ToolCallCollector},
tool::{Tool, ToolDefinition as WorkerToolDefinition, ToolError, ToolMeta},
};
// =============================================================================
@ -136,7 +133,7 @@ impl<S: WorkerSubscriber + 'static> TurnNotifier for SubscriberTurnNotifier<S> {
/// # Examples
///
/// ```ignore
/// use llm_worker::{Worker, Item};
///
/// // Create a Worker and register tools
/// let mut worker = Worker::new(client)
@ -172,8 +169,8 @@ pub struct Worker<C: LlmClient, S: WorkerState = Mutable> {
hooks: HookRegistry,
/// System prompt
system_prompt: Option<String>,
/// Item history (owned by Worker)
history: Vec<Item>,
/// History length at lock time (only meaningful in CacheLocked state)
locked_prefix_len: usize,
/// Turn count
@ -210,8 +207,8 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
) -> Result<WorkerResult, WorkerError> {
self.reset_interruption_state();
// Hook: on_prompt_submit
let mut user_item = Item::user_message(user_input);
let result = self.run_on_prompt_submit_hooks(&mut user_item).await;
let result = match result {
Ok(value) => value,
Err(err) => return self.finalize_interruption(Err(err)).await,
@ -223,7 +220,7 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
}
OnPromptSubmitResult::Continue => {}
}
self.history.push(user_item);
let result = self.run_turn_loop().await;
self.finalize_interruption(result).await
}
@ -318,7 +315,7 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
/// });
/// worker.register_tool(def)?;
/// ```
pub fn register_tool(&mut self, factory: WorkerToolDefinition) -> Result<(), ToolRegistryError> {
let (meta, instance) = factory();
if self.tools.contains_key(&meta.name) {
return Err(ToolRegistryError::DuplicateName(meta.name.clone()));
@ -330,7 +327,7 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
/// Register multiple tools
pub fn register_tools(
&mut self,
factories: impl IntoIterator<Item = WorkerToolDefinition>,
) -> Result<(), ToolRegistryError> {
for factory in factories {
self.register_tool(factory)?;
@ -378,7 +375,7 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
}
/// Get a reference to the history
pub fn history(&self) -> &[Item] {
&self.history
}
@ -510,64 +507,48 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
}
/// Generate list of ToolDefinitions for LLM from registered tools
fn build_tool_definitions(&self) -> Vec<ToolDefinition> {
self.tools
.values()
.map(|(meta, _)| {
ToolDefinition::new(&meta.name)
.description(&meta.description)
.input_schema(meta.input_schema.clone())
})
.collect()
}
/// Build assistant response items from text blocks and tool calls
fn build_assistant_items(
&self,
text_blocks: &[String],
tool_calls: &[ToolCall],
) -> Vec<Item> {
let mut items = Vec::new();
// Add text as assistant message if present
let text = text_blocks.join("");
if !text.is_empty() {
items.push(Item::assistant_message(text));
}
// Add tool calls as FunctionCall items
for call in tool_calls {
items.push(Item::function_call_json(
&call.id,
&call.name,
call.input.clone(),
));
}
items
}
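The item-building step above can be sketched in a self-contained form: one assistant message item for the joined text (if any), followed by one function-call item per tool call. `MiniItem` and `MiniCall` here are illustrative stand-ins for the crate's `Item` and `ToolCall` types.

```rust
// Toy stand-ins for the crate's `Item` and `ToolCall`.
#[derive(Debug, PartialEq)]
enum MiniItem {
    AssistantMessage(String),
    FunctionCall { call_id: String, name: String },
}

struct MiniCall {
    id: String,
    name: String,
}

// Mirrors the structure of `build_assistant_items` above: text first,
// then one FunctionCall item per collected tool call.
fn build_assistant_items(text_blocks: &[String], tool_calls: &[MiniCall]) -> Vec<MiniItem> {
    let mut items = Vec::new();
    let text = text_blocks.join("");
    if !text.is_empty() {
        items.push(MiniItem::AssistantMessage(text));
    }
    for call in tool_calls {
        items.push(MiniItem::FunctionCall {
            call_id: call.id.clone(),
            name: call.name.clone(),
        });
    }
    items
}

fn main() {
    let items = build_assistant_items(
        &["Let me check".to_string(), " the weather.".to_string()],
        &[MiniCall { id: "call_1".into(), name: "get_weather".into() }],
    );
    assert_eq!(items.len(), 2);
    assert_eq!(
        items[0],
        MiniItem::AssistantMessage("Let me check the weather.".into())
    );
}
```

Returning a `Vec` instead of the old `Option<Message>` means the empty case falls out naturally: no text and no calls simply produce an empty vector that `extend` ignores.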
/// Build a request
fn build_request(
&self,
tool_definitions: &[ToolDefinition],
context: &[Item],
) -> Request {
let mut request = Request::new();
@ -576,50 +557,8 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
request = request.system(system);
}
// Add items directly (Request now uses Items natively)
request = request.items(context.iter().cloned());
// Add tool definitions
for tool_def in tool_definitions {
@ -637,10 +576,10 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
/// Called immediately after receiving a user message in `run()` (first time only).
async fn run_on_prompt_submit_hooks(
&self,
item: &mut Item,
) -> Result<OnPromptSubmitResult, WorkerError> {
for hook in &self.hooks.on_prompt_submit {
let result = hook.call(item).await?;
match result {
OnPromptSubmitResult::Continue => continue,
OnPromptSubmitResult::Cancel(reason) => {
@ -656,7 +595,7 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
/// Called before sending an LLM request for each turn.
async fn run_pre_llm_request_hooks(
&self,
) -> Result<(PreLlmRequestResult, Vec<Item>), WorkerError> {
let mut temp_context = self.history.clone();
for hook in &self.hooks.pre_llm_request {
let result = hook.call(&mut temp_context).await?;
@ -672,13 +611,13 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
/// Hooks: on_turn_end
async fn run_on_turn_end_hooks(&self) -> Result<OnTurnEndResult, WorkerError> {
let mut temp_items = self.history.clone();
for hook in &self.hooks.on_turn_end {
let result = hook.call(&mut temp_items).await?;
match result {
OnTurnEndResult::Finish => continue,
OnTurnEndResult::ContinueWithMessages(items) => {
return Ok(OnTurnEndResult::ContinueWithMessages(items));
}
OnTurnEndResult::Paused => return Ok(OnTurnEndResult::Paused),
}
@ -719,25 +658,43 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
/// Check for pending tool calls (for resuming from Pause)
fn get_pending_tool_calls(&self) -> Option<Vec<ToolCall>> {
// Find the last FunctionCall items that don't have corresponding FunctionCallOutput
let mut pending_calls = Vec::new();
let mut answered_call_ids = std::collections::HashSet::new();
// First pass: collect all answered call IDs
for item in &self.history {
if let Item::FunctionCallOutput { call_id, .. } = item {
answered_call_ids.insert(call_id.clone());
}
}
// Second pass: find unanswered function calls
for item in &self.history {
if let Item::FunctionCall {
call_id,
name,
arguments,
..
} = item
{
if !answered_call_ids.contains(call_id) {
let input = serde_json::from_str(arguments)
.unwrap_or_else(|_| serde_json::Value::Object(serde_json::Map::new()));
pending_calls.push(ToolCall {
id: call_id.clone(),
name: name.clone(),
input,
});
}
}
}
if pending_calls.is_empty() {
None
} else {
Some(pending_calls)
}
}
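The two-pass scan above can be sketched stand-alone: pass one collects the call IDs that already have an output item, pass two keeps every function call whose ID was never answered. `MiniItem` is a toy stand-in for the crate's `Item`.

```rust
use std::collections::HashSet;

// Toy stand-in for the crate's `Item` variants that matter here.
#[derive(Debug)]
enum MiniItem {
    FunctionCall { call_id: String, name: String },
    FunctionCallOutput { call_id: String },
}

fn pending_call_names(history: &[MiniItem]) -> Vec<String> {
    // First pass: which calls already have an output?
    let answered: HashSet<&str> = history
        .iter()
        .filter_map(|item| match item {
            MiniItem::FunctionCallOutput { call_id } => Some(call_id.as_str()),
            _ => None,
        })
        .collect();
    // Second pass: keep the calls with no matching output.
    history
        .iter()
        .filter_map(|item| match item {
            MiniItem::FunctionCall { call_id, name }
                if !answered.contains(call_id.as_str()) =>
            {
                Some(name.clone())
            }
            _ => None,
        })
        .collect()
}

fn main() {
    let history = vec![
        MiniItem::FunctionCall { call_id: "c1".into(), name: "read_file".into() },
        MiniItem::FunctionCallOutput { call_id: "c1".into() },
        MiniItem::FunctionCall { call_id: "c2".into(), name: "run_tests".into() },
    ];
    // c1 is answered, c2 is still pending.
    assert_eq!(pending_call_names(&history), vec!["run_tests".to_string()]);
}
```

Matching on `call_id` rather than on the position of the last assistant message is what lets resumption work even when calls and outputs are interleaved across the history.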
/// Execute tools in parallel
@ -882,7 +839,7 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
let tool_definitions = self.build_tool_definitions();
info!(
item_count = self.history.len(),
tool_count = tool_definitions.len(),
"Starting worker run"
);
@ -898,7 +855,7 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
Ok(ToolExecutionResult::Completed(results)) => {
for result in results {
self.history
.push(Item::function_call_output(&result.tool_use_id, &result.content));
}
// Continue to loop
}
@ -945,7 +902,7 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
// Build request
let request = self.build_request(&tool_definitions, &request_context);
debug!(
item_count = request.items.len(),
tool_count = request.tools.len(),
has_system = request.system_prompt.is_some(),
"Sending request to LLM"
@ -1015,11 +972,9 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
let text_blocks = self.text_block_collector.take_collected();
let tool_calls = self.tool_call_collector.take_collected();
// Add assistant items to history
let assistant_items = self.build_assistant_items(&text_blocks, &tool_calls);
self.history.extend(assistant_items);
if tool_calls.is_empty() {
// No tool calls → determine turn end
@ -1052,7 +1007,7 @@ impl<C: LlmClient, S: WorkerState> Worker<C, S> {
Ok(ToolExecutionResult::Completed(results)) => {
for result in results {
self.history
.push(Item::function_call_output(&result.tool_use_id, &result.content));
}
}
Err(err) => {
@ -1212,35 +1167,35 @@ impl<C: LlmClient> Worker<C, Mutable> {
/// Get a mutable reference to history
///
/// Available only in Mutable state. History can be freely edited.
pub fn history_mut(&mut self) -> &mut Vec<Item> {
&mut self.history
}
/// Set history
pub fn set_history(&mut self, items: Vec<Item>) {
self.history = items;
}
/// Add an item to history (builder pattern)
pub fn with_item(mut self, item: Item) -> Self {
self.history.push(item);
self
}
/// Add an item to history
pub fn push_item(&mut self, item: Item) {
self.history.push(item);
}
/// Add multiple items to history (builder pattern)
pub fn with_items(mut self, items: impl IntoIterator<Item = Item>) -> Self {
self.history.extend(items);
self
}
/// Add multiple items to history
pub fn extend_history(&mut self, items: impl IntoIterator<Item = Item>) {
self.history.extend(items);
}
/// Clear history
@ -1279,7 +1234,6 @@ impl<C: LlmClient> Worker<C, Mutable> {
_state: PhantomData,
}
}
}
// =============================================================================

View File

@ -8,7 +8,7 @@ mod common;
use common::MockLlmClient;
use llm_worker::Worker;
use llm_worker::llm_client::event::{Event, ResponseStatus, StatusEvent};
use llm_worker::Item;
// =============================================================================
// Mutable State Tests
@ -39,12 +39,12 @@ fn test_mutable_history_manipulation() {
assert!(worker.history().is_empty());
// Add to history
worker.push_item(Item::user_message("Hello"));
worker.push_item(Item::assistant_message("Hi there!"));
assert_eq!(worker.history().len(), 2);
// Mutable access to history
worker.history_mut().push(Item::user_message("How are you?"));
assert_eq!(worker.history().len(), 3);
// Clear history
@ -52,8 +52,8 @@ fn test_mutable_history_manipulation() {
assert!(worker.history().is_empty());
// Set history
let items = vec![Item::user_message("Test"), Item::assistant_message("Response")];
worker.set_history(items);
assert_eq!(worker.history().len(), 2);
}
@ -63,29 +63,29 @@ fn test_mutable_builder_pattern() {
let client = MockLlmClient::new(vec![]);
let worker = Worker::new(client)
.system_prompt("System prompt")
.with_item(Item::user_message("Hello"))
.with_item(Item::assistant_message("Hi!"))
.with_items(vec![
Item::user_message("How are you?"),
Item::assistant_message("I'm fine!"),
]);
assert_eq!(worker.get_system_prompt(), Some("System prompt"));
assert_eq!(worker.history().len(), 4);
}
/// Verify that multiple items can be added with extend_history
#[test]
fn test_mutable_extend_history() {
let client = MockLlmClient::new(vec![]);
let mut worker = Worker::new(client);
worker.push_item(Item::user_message("First"));
worker.extend_history(vec![
Item::assistant_message("Response 1"),
Item::user_message("Second"),
Item::assistant_message("Response 2"),
]);
assert_eq!(worker.history().len(), 4);
@ -102,8 +102,8 @@ fn test_lock_transition() {
let mut worker = Worker::new(client);
worker.set_system_prompt("System");
worker.push_item(Item::user_message("Hello"));
worker.push_item(Item::assistant_message("Hi"));
// Lock
let locked_worker = worker.lock();
@ -120,14 +120,14 @@ fn test_unlock_transition() {
let client = MockLlmClient::new(vec![]);
let mut worker = Worker::new(client);
worker.push_item(Item::user_message("Hello"));
let locked_worker = worker.lock();
// Unlock
let mut worker = locked_worker.unlock();
// History operations are available again in Mutable state
worker.push_item(Item::assistant_message("Hi"));
worker.clear_history();
assert!(worker.history().is_empty());
}
@ -160,16 +160,10 @@ async fn test_mutable_run_updates_history() {
assert_eq!(history.len(), 2); // user + assistant
// User message
assert_eq!(history[0].as_text(), Some("Hi there"));
// Assistant message
assert_eq!(history[1].as_text(), Some("Hello, I'm an assistant!"));
}
/// Verify that history accumulates correctly over multiple turns in CacheLocked state
@ -201,7 +195,7 @@ async fn test_locked_multi_turn_history_accumulation() {
// Lock (after setting system prompt)
let mut locked_worker = worker.lock();
assert_eq!(locked_worker.locked_prefix_len(), 0); // No items yet
// Turn 1
let result1 = locked_worker.run("Hello!").await;
@ -217,16 +211,16 @@ async fn test_locked_multi_turn_history_accumulation() {
let history = locked_worker.history();
// Turn 1 user message
assert_eq!(history[0].as_text(), Some("Hello!"));
// Turn 1 assistant message
assert_eq!(history[1].as_text(), Some("Nice to meet you!"));
// Turn 2 user message
assert_eq!(history[2].as_text(), Some("Can you help me?"));
// Turn 2 assistant message
assert_eq!(history[3].as_text(), Some("I can help with that."));
}
/// Verify that locked_prefix_len correctly records history length at lock time
@ -253,15 +247,15 @@ async fn test_locked_prefix_len_tracking() {
let mut worker = Worker::new(client);
// Add items beforehand
worker.push_item(Item::user_message("Pre-existing message 1"));
worker.push_item(Item::assistant_message("Pre-existing response 1"));
assert_eq!(worker.history().len(), 2);
// Lock
let mut locked_worker = worker.lock();
assert_eq!(locked_worker.locked_prefix_len(), 2); // 2 items at lock time
// Execute turn
locked_worker.run("New message").await.unwrap();
@ -317,8 +311,8 @@ async fn test_unlock_edit_relock() {
]]);
let worker = Worker::new(client)
.with_item(Item::user_message("Hello"))
.with_item(Item::assistant_message("Hi"));
// Lock -> Unlock
let locked = worker.lock();
@ -328,7 +322,7 @@ async fn test_unlock_edit_relock() {
// Edit history
unlocked.clear_history();
unlocked.push_item(Item::user_message("Fresh start"));
// Re-lock
let relocked = unlocked.lock();