Connect an LLM

Give your AI agent access to a Vibengine sandbox

Vibengine sandboxes are designed to be the execution environment for AI agents. Here's how to connect your LLM to a sandbox for code execution.
Basic Pattern
The typical workflow is:

1. Your LLM generates code or commands
2. Your application sends them to a Vibengine sandbox
3. The sandbox executes them and returns the results
4. The results are fed back to the LLM
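
In code, the loop looks roughly like this. This is a sketch, not a runnable program: `generate_code` and `is_done` are hypothetical stand-ins for your LLM call and stop condition, and `execute_code` is the helper defined below (`executeCode` in the JavaScript version).

```python
# Hypothetical agent loop: generate_code() and is_done() stand in for your
# LLM calls; execute_code() is the Vibengine helper defined below.
task = "Calculate the first 10 fibonacci numbers"
history = []

while not is_done(history):
    code = generate_code(task, history)  # 1. the LLM writes code
    result = execute_code(code)          # 2-3. the sandbox runs it
    history.append((code, result))       # 4. the result goes back to the LLM
```
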
```javascript
import { Sandbox } from 'vibengine'

// Create a long-running sandbox for the agent session
const sandbox = await Sandbox.create({ timeoutMs: 600_000 }) // 10 min

async function executeCode(code) {
  // Write the code to a file
  await sandbox.files.write('/home/user/script.py', code)

  // Execute it
  const result = await sandbox.commands.run('python3 /home/user/script.py')

  return {
    stdout: result.stdout,
    stderr: result.stderr,
    exitCode: result.exitCode,
  }
}

// Example: Execute LLM-generated code
const code = `
import math
print(f"Pi is approximately {math.pi:.10f}")
`

const result = await executeCode(code)
console.log(result.stdout) // "Pi is approximately 3.1415926536"

await sandbox.kill()
```

```python
from vibengine import Sandbox

# Create a long-running sandbox for the agent session
sandbox = Sandbox(timeout=600)  # 10 min

def execute_code(code: str) -> dict:
    # Write the code to a file
    sandbox.files.write('/home/user/script.py', code)

    # Execute it
    result = sandbox.commands.run('python3 /home/user/script.py')

    return {
        'stdout': result.stdout,
        'stderr': result.stderr,
        'exit_code': result.exit_code,
    }

# Example: Execute LLM-generated code
code = """
import math
print(f"Pi is approximately {math.pi:.10f}")
"""

result = execute_code(code)
print(result['stdout'])  # "Pi is approximately 3.1415926536"

sandbox.kill()
```

Using with OpenAI

Expose code execution to the model as a tool; when the model calls it, run the generated code in the sandbox and return the output.

```javascript
import { Sandbox } from 'vibengine'
import OpenAI from 'openai'

const openai = new OpenAI()
const sandbox = await Sandbox.create({ timeoutMs: 600_000 })

const tools = [{
  type: 'function',
  function: {
    name: 'execute_code',
    description: 'Execute Python code in a sandbox',
    parameters: {
      type: 'object',
      properties: {
        code: { type: 'string', description: 'Python code to execute' }
      },
      required: ['code']
    }
  }
}]

// Chat with tool calling
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Calculate the first 10 fibonacci numbers' }],
  tools,
})

// Handle tool calls
for (const toolCall of response.choices[0].message.tool_calls || []) {
  if (toolCall.function.name === 'execute_code') {
    const { code } = JSON.parse(toolCall.function.arguments)
    await sandbox.files.write('/home/user/script.py', code)
    const result = await sandbox.commands.run('python3 /home/user/script.py')
    console.log('Output:', result.stdout)
  }
}

await sandbox.kill()
```

```python
from vibengine import Sandbox
from openai import OpenAI
import json

client = OpenAI()
sandbox = Sandbox(timeout=600)

tools = [{
    "type": "function",
    "function": {
        "name": "execute_code",
        "description": "Execute Python code in a sandbox",
        "parameters": {
            "type": "object",
            "properties": {
                "code": {"type": "string", "description": "Python code to execute"}
            },
            "required": ["code"]
        }
    }
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Calculate the first 10 fibonacci numbers"}],
    tools=tools,
)

for tool_call in response.choices[0].message.tool_calls or []:
    if tool_call.function.name == "execute_code":
        args = json.loads(tool_call.function.arguments)
        sandbox.files.write('/home/user/script.py', args['code'])
        result = sandbox.commands.run('python3 /home/user/script.py')
        print('Output:', result.stdout)
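        # To close the loop (step 4 above), feed the output back to the model
        # and call it again. Sketch only -- assumes you keep `messages` as a
        # list and have already appended the assistant message that carried
        # this tool call:
        #
        #   messages.append({
        #       "role": "tool",
        #       "tool_call_id": tool_call.id,
        #       "content": result.stdout,
        #   })
        #   followup = client.chat.completions.create(
        #       model="gpt-4o", messages=messages, tools=tools,
        #   )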

sandbox.kill()
```

The same pattern works with any LLM provider: Claude, Gemini, Llama, and so on. The key is to use Vibengine as the execution backend for tool calls.
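
For example, here's the same tool wired up for Claude. This is a sketch against the anthropic Python SDK: the model name is illustrative, and note that Anthropic's tool schema uses a top-level `name` and `input_schema` rather than OpenAI's nested `function` object.

```python
from vibengine import Sandbox
import anthropic

client = anthropic.Anthropic()
sandbox = Sandbox(timeout=600)

response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative: any tool-use-capable model works
    max_tokens=1024,
    tools=[{
        "name": "execute_code",
        "description": "Execute Python code in a sandbox",
        "input_schema": {
            "type": "object",
            "properties": {
                "code": {"type": "string", "description": "Python code to execute"}
            },
            "required": ["code"],
        },
    }],
    messages=[{"role": "user", "content": "Calculate the first 10 fibonacci numbers"}],
)

# Claude returns tool calls as tool_use content blocks
for block in response.content:
    if block.type == "tool_use" and block.name == "execute_code":
        sandbox.files.write('/home/user/script.py', block.input["code"])
        result = sandbox.commands.run('python3 /home/user/script.py')
        print('Output:', result.stdout)

sandbox.kill()
```

From here, append the tool result to the conversation and call the model again, exactly as with OpenAI.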