A passive, audit-grade observability layer for your AI applications. PII is redacted at source (IBAN-BE, RRN, BTW), per-request cost and tokens are tracked, and you can download a signed evidence pack for any period — exactly what AI Act article 12 and NIS2 ask for.
monsys.ai doesn't run prompts, doesn't take actions, never blocks inline. It's an observability layer — after-the-fact evidence, not a control plane.
Belgian IBANs, RRNs, BTW and KBO numbers, email addresses and phone numbers are detected (with checksum validation where the format has one) before storage and replaced with a hash token. Raw content never reaches the hub.
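The checksum step can be sketched as follows. This is a minimal illustration of the idea, not the SDK's actual detector; the `SECRET` key and the `redact` helper are hypothetical names:

```python
import hashlib
import hmac
import re

SECRET = b"per-tenant-secret"  # hypothetical; the real SDK manages its own key


def is_valid_rrn(digits: str) -> bool:
    """Belgian rijksregisternummer: 11 digits, mod-97 check digits.
    Births from 2000 onward prepend '2' before the checksum step."""
    if not re.fullmatch(r"\d{11}", digits):
        return False
    body, check = int(digits[:9]), int(digits[9:])
    return check in (97 - body % 97, 97 - int("2" + digits[:9]) % 97)


def is_valid_iban(iban: str) -> bool:
    """ISO 13616 mod-97: rotate first four chars to the end, A=10..Z=35."""
    s = iban.replace(" ", "").upper()
    if not re.fullmatch(r"BE\d{14}", s):  # Belgian IBANs are 16 chars
        return False
    rearranged = s[4:] + s[:4]
    return int("".join(str(int(c, 36)) for c in rearranged)) % 97 == 1


def redact(text: str) -> str:
    """Replace each validated RRN with a keyed hash token before storage."""
    def repl(m: re.Match) -> str:
        raw = m.group(0)
        if not is_valid_rrn(raw):
            return raw  # checksum fails -> not an RRN, leave untouched
        tok = hmac.new(SECRET, raw.encode(), hashlib.sha256).hexdigest()[:12]
        return f"[RRN:{tok}]"
    return re.sub(r"\b\d{11}\b", repl, text)
```

The checksum is what keeps false positives down: an arbitrary 11-digit number only passes roughly 1 time in 97.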
One click per period → gzipped tarball with manifest + signature. A standalone Python script verifies it offline, no monsys account needed. Built for auditors and regulators.
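The integrity half of that offline check can be sketched like this, assuming a pack layout of `manifest.json` plus the files it hashes; the real verifier additionally checks the Ed25519 signature over the manifest, which is omitted here:

```python
import hashlib
import json
import tarfile


def verify_pack(path: str) -> bool:
    """Check that every file listed in manifest.json matches its SHA-256.
    (The signature check over the manifest itself is out of scope here.)"""
    with tarfile.open(path, "r:gz") as tar:
        manifest = json.load(tar.extractfile("manifest.json"))
        for name, expected in manifest["sha256"].items():
            data = tar.extractfile(name).read()
            if hashlib.sha256(data).hexdigest() != expected:
                return False
    return True
```

A wrapper script would exit 0 on success, e.g. `sys.exit(0 if verify_pack(path) else 1)`, matching the "exit 0 → proven" flow described below.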
Common confusion: 'does this monitor our employees' Copilot or ChatGPT usage?' No. We only see what your own application sends to OpenAI/Anthropic/Mistral, and only if you place our SDK in your code.
The same place where you would put a log line next to your LLM call today. Three typical placements:
# Example placement in a Python service
def antwoord_op_klant_vraag(vraag: str) -> str:
    with tracer.trace("klantenservice.vraag") as t:  # ← here
        with t.span("openai.chat",
                    provider="openai",
                    model="gpt-4o") as s:  # ← here
            s.prompt = vraag
            resp = openai_client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": vraag}],
            )
            s.completion = resp.choices[0].message.content
            s.input_tokens = resp.usage.prompt_tokens
            s.output_tokens = resp.usage.completion_tokens
            return s.completion

For AI data sources outside your own LLM code. These are separate products alongside AI observability, with their own pricing — same Ed25519 signing chain, same verifier, same Stripe invoice.
Separate module that reads the GitHub Copilot Business / Enterprise admin API (seats, audit log) and produces Ed25519-signed evidence packs from it. For AI Act art. 26 (deployer obligations) and NIS2 employee audits. €1/seat/month. Read more →
Separate module that reads the OpenAI Platform admin API (users, projects, API keys, audit log) and produces Ed25519-signed evidence packs from it. Detects stale API keys (>90d unused) as a security baseline. €1/user/month + €5/project/month. Read more →
What we DON'T cover today but plan to. No committed ETA unless a date is shown below.
On top of OpenAI Audit: ChatGPT browser conversation metadata (counts + models + timestamps, not content), custom GPT inventory, memory state. Requires the customer to activate Compliance API access with OpenAI sales (2-4 weeks lead time). €2 extra per seat/month on top of OpenAI Audit.
Equivalent for Anthropic Console: workspace users, API keys, usage. Pricing TBD. Starts when at least 3 beta customers ask.
No commitment. Demand-driven: if Vertex AI enterprise customers concretely ask for it, we will plan it.
You run a GPT-4 bot for balance questions. The FSMA asks: prove that the bot never mentioned another customer's IBAN in an answer. Without observability: months of code audit. With monsys: one click on 'Evidence pack', your auditor runs our offline script, exit 0 → proven.
You use Claude to pre-screen CVs. A candidate files a complaint. AI Act art. 14 requires you to reconstruct every decision. Filter traces by user_session_hash, click 'unlock content' → TOTP → read the exact prompt and completion. Complaint substantiated or refuted in 5 minutes.
A dev pushes a new RAG prompt Thursday evening that accidentally passes 50KB of context. Without monsys: you only see this Friday on the invoice. With monsys: cost-spike alert (cost_per_minute > €1) fires at 22:14, ntfy push on your phone, rolled back in 10 min. Damage: €4 instead of €2,000.
OpenAI ships a model update; your system_msg no longer matches well. Refusals go from 2% to 18%. Normally you hear this 3 days later via customer support. With monsys: refusal_rate alert within 15 min, you adjust the prompt before breaking your SLA.
End user pastes their Belgian RRN in a prompt. Your app sends it to OpenAI. Under GDPR you must log the data transfer to the US. With monsys: span has pii_hits_count=1, redacted as [RRN]. Monthly report: 47 RRN mentions redacted; raw content not at monsys; OpenAI did receive the raw → DPA addendum.
Same envelope format across Python, Node and Go. No pip install, no npm dependency. Failures log, never throw.
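The never-throw contract can be sketched as a best-effort emitter. `INGEST_URL` and the envelope shape are placeholders, not the SDK's real endpoint or wire format:

```python
import json
import logging
import urllib.request

log = logging.getLogger("monsys_ai")
INGEST_URL = "https://hub.example.invalid/v1/spans"  # hypothetical endpoint


def emit(envelope: dict, token: str) -> None:
    """Best-effort delivery: any failure is logged and swallowed, never
    raised, so instrumentation can never take down the host application."""
    try:
        req = urllib.request.Request(
            INGEST_URL,
            data=json.dumps(envelope).encode(),
            headers={
                "Authorization": f"Bearer {token}",
                "Content-Type": "application/json",
            },
        )
        urllib.request.urlopen(req, timeout=2)
    except Exception as exc:  # deliberate catch-all: log, never throw
        log.warning("monsys emit failed (span dropped): %s", exc)
```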
Mint an aiv_… token via dashboard → AI → Apps → New application. Shown once. Then:
Python:

from monsys_ai import Tracer

tracer = Tracer()

with tracer.trace("rag.chat") as t:
    with t.span("openai.chat",
                provider="openai",
                model="gpt-4o") as s:
        s.prompt = user_msg
        r = openai.chat.completions.create(...)
        s.completion = r.choices[0].message.content
        s.input_tokens = r.usage.prompt_tokens
        s.output_tokens = r.usage.completion_tokens

Node:

import { Tracer } from "./monsys-ai";

const tracer = new Tracer();

await tracer.trace("rag.chat", async (t) => {
  await t.span("openai.chat",
    { provider: "openai", model: "gpt-4o" },
    async (s) => {
      const r = await openai.chat.completions.create({...});
      s.record({
        prompt: userMsg,
        completion: r.choices[0].message.content!,
        inputTokens: r.usage!.prompt_tokens,
        outputTokens: r.usage!.completion_tokens,
      });
    });
});

Go:

tracer, _ := monsysai.New(monsysai.Options{})

err := tracer.Trace(ctx, "rag.chat", func(t *monsysai.Trace) error {
    return t.Span(ctx, "openai.chat",
        monsysai.SpanOpts{Provider: "openai", Model: "gpt-4o"},
        func(s *monsysai.Span) error {
            s.Prompt = userMsg
            // call your LLM...
            s.Completion = resp
            in, out := inTok, outTok
            s.InputTokens = &in
            s.OutputTokens = &out
            return nil
        })
})

Server monitoring (€3/agent from #6) and AI observability are separate modules. First AI app free per tenant — same way your first 5 agents stay free.
Dashboard → AI → Apps → New app. The token is shown once (only its SHA-256 hash is kept).
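Keeping only the SHA-256 means a leaked database cannot reproduce tokens. A sketch of that scheme, with hypothetical helper names (the `aiv_` prefix mirrors the dashboard's token format):

```python
import hashlib
import hmac
import secrets


def mint_token() -> tuple[str, str]:
    """Return (plaintext token, SHA-256 digest). Only the digest is stored;
    the plaintext is shown to the user exactly once."""
    token = "aiv_" + secrets.token_urlsafe(32)
    return token, hashlib.sha256(token.encode()).hexdigest()


def check_token(presented: str, stored_digest: str) -> bool:
    """Hash the presented token and compare in constant time."""
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(digest, stored_digest)
```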
Python, Node or Go — ~150 LOC, no dependencies beyond the stdlib. One `with tracer.trace(...)` block per request.
Live span tree with provider/model/cost/PII hits. A month later: one click → signed evidence pack ready for the DPO.
Langfuse is a great developer tool: prompt management, evals, playground. monsys.ai takes the other slice: passive audit layer with BE/EU-specific PII detection and signed evidence. Read the honest comparison →