Quickstart
Get Maev running in your agent in under 2 minutes.
Step 1: Install the SDK
pip install maev-sdk

Step 2: Get your API key
Go to your Maev dashboard, open Settings, and copy your API key. Keys look like vl_ followed by 64 characters.
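Rather than hard-coding the key in your script, you can read it from an environment variable. A minimal sketch, assuming you export the key as MAEV_API_KEY (the variable name here is our choice, not something the SDK requires):

```python
import os
import re

def load_maev_key(var: str = "MAEV_API_KEY") -> str:
    """Fetch the API key from the environment and sanity-check its shape."""
    key = os.environ.get(var, "")
    # Keys look like "vl_" followed by 64 characters; a quick check here
    # catches a truncated copy-paste before the first API call fails.
    if not re.fullmatch(r"vl_.{64}", key):
        raise RuntimeError(f"{var} is missing or malformed")
    return key

# Usage: maev.init(api_key=load_maev_key())
```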
Step 3: Add one line to your agent
import maev
maev.init(api_key="vl_your_key_here")
# Your agent code continues unchanged below

Call maev.init() once, at the top of your script, before any LLM calls. That is all the setup required.
Step 4: Name your agent (recommended)
If you run multiple agents, give each one a name so they show up clearly in the dashboard:
import maev
maev.init(api_key="vl_your_key_here", agent_name="Sales Outreach Agent")

Full working example
import maev
from openai import OpenAI
maev.init(api_key="vl_your_key_here", agent_name="Support Agent")
client = OpenAI()
def run_agent(user_message: str):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful support agent."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

result = run_agent("How do I reset my password?")
print(result)

Maev automatically captures the LLM call, tracks the session, and closes it when the script exits. You will see this session in your dashboard within seconds.
Running in a serverless function?
If your agent runs inside a serverless function (AWS Lambda, Google Cloud Functions, Vercel, etc.), call maev.flush() before returning. The process gets frozen the moment your handler returns — without flush(), buffered telemetry is dropped and the session never closes in your dashboard.
import maev
from openai import OpenAI
maev.init(api_key="vl_your_key_here", agent_name="Support Agent")
client = OpenAI()
def handler(event, context):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": event["message"]}],
    )
    result = response.choices[0].message.content
    maev.flush()  # always call before returning in serverless
    return {"result": result}

maev.flush() is safe to call multiple times — only the first call does anything. See the Python SDK reference for full details.
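The "only the first call does anything" behavior can be pictured as a once-guard. A hedged sketch of that pattern (again, an illustration, not the SDK's actual implementation):

```python
_flushed = False

def flush() -> bool:
    """Illustrative once-guarded flush; returns True only on the first call."""
    global _flushed
    if _flushed:
        return False  # subsequent calls are no-ops
    _flushed = True
    # ...a real SDK would drain the buffered telemetry over the network here...
    return True
```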
What happens next
- Your agent appears in the Agents tab
- Each run creates a Session with a full event timeline
- If a failure is detected, it gets classified automatically
- You receive an alert via Slack or email (if configured)
Maev works with OpenAI, Anthropic, LangChain, LlamaIndex, and dozens of other LLM libraries. No extra configuration needed for any of them.