Quickstart
1. Pick your language
# Rust
cargo add forge-sdk --git https://github.com/l1feai/forge-rs --branch main

# TypeScript / JavaScript
npm install @l1feai/forge
# or: bun add @l1feai/forge

# Go
go get github.com/l1feai/forge-go

# Python
pip install forge-sdk

# Swift (Package.swift)
.package(url: "https://github.com/l1feai/forge-swift", from: "0.1.0")

# Kotlin (Gradle)
implementation("ai.l1fe:forge-kt:0.1.0")
2. Configure a provider
Forge has zero hard-coded provider URLs. You configure a provider once; from then on, the runtime contract is identical no matter which provider backs it.
use forge::prelude::*;
let provider = AnthropicConfig::from_env()?
.with_default_model("claude-sonnet-4-5-20250929");
let model: Arc<dyn LanguageModel> = Arc::new(AnthropicModel::new(provider)?);
import { Anthropic } from '@l1feai/forge';
const model = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY! })
.model('claude-sonnet-4-5-20250929');
The same shape works for OpenAI, Google, Bedrock, Azure, Cohere, LiteLLM, Foundry (the L1fe inference gateway), and Codex / Claude Code CLI shims.
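Because the rest of your code only depends on the LanguageModel trait object, switching providers is a construction-time change. The sketch below assumes OpenAiConfig / OpenAiModel exist and mirror the Anthropic types shown above; check the provider reference for the exact names.

use std::sync::Arc;
use forge::prelude::*;

// Only the constructor changes per provider; downstream code keeps holding
// an Arc<dyn LanguageModel>. OpenAiConfig / OpenAiModel are assumed names.
let model: Arc<dyn LanguageModel> = match std::env::var("FORGE_PROVIDER").as_deref() {
    Ok("openai") => Arc::new(OpenAiModel::new(OpenAiConfig::from_env()?)?),
    _ => Arc::new(AnthropicModel::new(AnthropicConfig::from_env()?)?),
};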
3. Bind an agent identity
Every agent gets a did:oas identity rooted in a human (HMR). Pick for_dev_only for prototyping, or pass a 32-byte seed for deterministic production identities.
use forge::identity::create_hmr_with_seed;

// FORGE_HMR_SEED holds the 32-byte seed as 64 hex characters (hex crate assumed).
let seed: [u8; 32] = hex::decode(std::env::var("FORGE_HMR_SEED")?)?
    .try_into()
    .expect("FORGE_HMR_SEED must decode to exactly 32 bytes");
let identity = create_hmr_with_seed("acme", "research-agent", &seed)?;
The seed produces an Ed25519 keypair via HKDF-SHA256 with the kind, namespace, and identifier baked into the info string — same seed + same triple = same DID, every time.
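For intuition, here is a minimal standalone sketch of that derivation, assuming the hkdf, sha2, and ed25519-dalek crates. The exact info-string layout is internal to Forge; the format below is illustrative only.

use ed25519_dalek::SigningKey;
use hkdf::Hkdf;
use sha2::Sha256;

// Sketch: expand the seed with HKDF-SHA256, with the (kind, namespace,
// identifier) triple encoded in the info string, then key an Ed25519 signer.
fn derive_signing_key(seed: &[u8; 32], kind: &str, namespace: &str, id: &str) -> SigningKey {
    let info = format!("{kind}:{namespace}:{id}"); // assumed layout, not Forge's actual format
    let hk = Hkdf::<Sha256>::new(None, seed);
    let mut okm = [0u8; 32];
    hk.expand(info.as_bytes(), &mut okm)
        .expect("32 bytes is a valid HKDF-SHA256 output length");
    SigningKey::from_bytes(&okm) // same seed + same triple => same key, same DID
}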
4. Wire the tool loop
use forge::prelude::*;
let registry = ToolRegistry::new();
registry.register(FnToolExecutor::new("get_weather", get_weather_handler));
let agent = StreamingToolLoopAgent::new(
AgentConfig::new("research", "anthropic:claude-sonnet-4-5-20250929")
.with_identity(identity)
.with_system_prompt("You are a careful research analyst.")
.with_tool_registry(&registry),
StreamingLoopConfig::default(),
model,
registry,
AutoApprove::shared(),
);
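The get_weather_handler registered above is just a plain function. Its exact signature is dictated by FnToolExecutor and is not shown in this quickstart, so the JSON-in / JSON-out shape below is an assumption for illustration; adapt it to the executor's real trait.

use serde_json::{json, Value};

// Hypothetical handler shape: tool-call arguments in as JSON, result out as JSON.
// A real handler would call a weather API instead of returning a canned answer.
async fn get_weather_handler(args: Value) -> Value {
    let city = args["city"].as_str().unwrap_or("Mountain View");
    json!({ "city": city, "conditions": "sunny", "temp_f": 68 })
}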
5. Run it
let output = agent.run("What is the weather in Mountain View?").await?;
println!("{}", output.text());
for record in output.tool_invocations() {
println!(
"{:?} {} ({}ms) status={:?}",
record.invocation_id, record.tool_name, record.duration_ms, record.status,
);
}
6. Stream it
For real per-token output, drive stream_chunks directly:
use forge::prelude::*;
use futures_util::StreamExt;
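// `messages` is the conversation history (system + user turns) the model should continue.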
let mut stream = agent
.model()
.stream_chunks(&messages, &[], &GenerateOptions::default())
.await?;
while let Some(chunk) = stream.next().await {
match chunk? {
StreamChunk::TextDelta { text } => print!("{text}"),
StreamChunk::Done { usage, .. } => {
println!("\n[{} prompt + {} completion]", usage.prompt_tokens, usage.completion_tokens);
break;
}
_ => {}
}
}
7. Drop it into a terminal
use std::sync::Arc;
use harness_sdk_forge::prelude::*;
let adapter: Arc<dyn AgentAdapter> = Arc::new(ForgeAdapter::new(Arc::new(agent)));
AgentApp::builder().adapter(adapter).build()?.run().await?;
You now have a Codex-style TUI with a working Forge agent, real per-token streaming, and tool-call rendering — all matching the ANVIL contract.
What just happened
You built an agent with:
- Identity — verifiable did:oas:acme:agent:research-agent lineage
- Capability — Arsenal-scoped tool authority (AutoApprove for dev only)
- Streaming — token-level output from the upstream wire
- Telemetry — every invocation records a ToolInvocationRecord with timings
- Portability — the same code compiles to native, WASM, or Sigil execution
Now read the Architecture overview to see how the pieces fit together, or jump to the ANVIL contract.