Monsys watches your Linux and Windows servers, scans every npm/pip/composer/go dependency against OSV.dev, fingerprints every running binary, and lays down honeypots that light up the moment someone touches them. On a typical environment we surface hundreds of CVE matches within 90 seconds — often several CRITICAL the operator didn't know were running in production. Detection to fix in minutes, not weeks.
Three differences that end up in your customer's audit report.
npm, pip, composer, go and gem lockfiles are scanned per project against OSV.dev. Container images are scanned via Trivy hub-side — no extra software on the customer host. Process DNA detects when a running binary is silently swapped.
Thresholds with duration ('CPU >85% for 1h'), per-container rules, maintenance windows for upgrades. Honeypot canaries (fake AWS keys, GPG, Kube config, GitLab runner tokens) trigger an instant Critical. No alert spam.
Hub hosted in Belgium by GoTrust BV. Detection runs on the host — only aggregated signals leave, never raw log lines. Self-hosting available: one Docker-compose stack, 5 MB Rust agent.
Generate a token in the hub, paste the line in your terminal, and the agent registers itself. No ports to open, no YAML files.
One Rust binary, four categories of signals. All processed locally; only aggregates reach the hub.
CPU, RAM, disk, network, load every 15s into TimescaleDB. 90-day retention by default. Per-NIC traffic sparklines, per-disk growth + 'days until full' projection.
Packages + services + containers + listening ports + local users + sudo rules + SSH keys (fingerprint only) + password policy from /etc/shadow + lsblk tree + DMI hardware + NICs + filesystem findings (SUID/SGID/world-writable/orphaned).
10+ canary paths with realistic credentials: AWS, SSH, Kube, Docker auth, GitLab runner, Postgres pgpass, GPG private key, Grafana token, WordPress config. One read action = instant Critical via inotify.
SHA256 fingerprint of every top-process binary. TOFU baseline on first observation; deviation = Critical. Manifest-aware rebaseline: legitimate auto-updates don't trigger false positives.
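The TOFU-plus-manifest flow can be sketched in a few lines. This is an illustrative Python sketch, not the shipped Rust agent code; the function names and return labels are assumptions.

```python
import hashlib

def sha256_of(path: str) -> str:
    """SHA256 fingerprint of a binary, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check_process(path: str, baseline: dict, manifest_hashes: set) -> str:
    """TOFU check: first sighting sets the baseline; a changed hash is
    Critical unless the new hash appears in the release manifest."""
    digest = sha256_of(path)
    known = baseline.get(path)
    if known is None:
        baseline[path] = digest      # trust on first use
        return "baseline"
    if digest == known:
        return "ok"
    if digest in manifest_hashes:    # legitimate auto-update
        baseline[path] = digest      # manifest-aware rebaseline
        return "rebaselined"
    return "critical"
```

A binary swapped outside a known release comes back "critical"; a manifest-listed hash silently rebaselines, which is why release deploys don't page anyone.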
One 5 MB statically-linked Rust binary (musl libc, zero glibc dependencies → runs on RHEL/Alma/Rocky 8+9, Fedora, Debian 11+12, Ubuntu 18.04+ through 24.04, Alpine 3.10+, SUSE/SLES, Oracle Linux, Amazon Linux). systemd service or StartServiceCtrlDispatcher (Windows). Scope-locked sudoers for emergency + self-update only. Ed25519 payload signing with TOFU pinning on the hub.
Postgres Row Level Security per tenant, explicit tenant_id WHERE on every query (defence-in-depth). Owner / Admin / Operator / Viewer roles. Every write hits audit_log with user/IP/payload (secrets scrubbed).
OS packages, container images, and application dependencies — all automatically scanned, linked to your servers, with fix version and risk score.
The agent reads installed-package inventories via apt, rpm, winget and wmic. The hub matches against NVD + EPSS (exploit likelihood) and assigns a risk score. Internet-exposed processes get a 1.5× boost.
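The scoring shape looks roughly like this. The CVSS/EPSS weighting below is an assumed illustration, not the shipped formula; only the 1.5× exposure boost is stated above.

```python
def risk_score(cvss: float, epss: float, internet_exposed: bool) -> float:
    """Illustrative risk score: CVSS base severity (0-10) weighted by
    EPSS exploit probability (0-1), boosted 1.5x when the vulnerable
    process is internet-exposed, capped at 10."""
    score = cvss * (0.5 + 0.5 * epss)  # assumed weighting, not the shipped one
    if internet_exposed:
        score *= 1.5                   # exposure boost from the text
    return round(min(score, 10.0), 2)
```

The cap keeps a boosted CRITICAL from exceeding the 0–10 scale operators already know from CVSS.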
Agents report image names. The hub runs Trivy server-side against every unique image. Customer host needs no Trivy, no root access — just `docker ps` output is enough.
package-lock.json, yarn.lock, requirements.txt, composer.lock, Gemfile.lock and go.sum are parsed. Hub batch-queries OSV.dev (free, EU-friendly) per (package, version). 14 CRITICAL and 1,984 other CVEs found in our test fleet.
We do NOT send lockfile contents to our backend. Only parsed (package, version) tuples — the same info `npm ls` gives anyone with shell access.
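Concretely, what leaves the host is a batch payload of tuples in OSV.dev's `/v1/querybatch` request format, 1,000 queries per request. A minimal sketch (the helper name is ours):

```python
def osv_batches(deps, batch_size=1000):
    """Build OSV.dev /v1/querybatch payloads from
    (ecosystem, name, version) tuples, 1,000 queries per request.
    Only these tuples are sent; lockfile contents never leave the host."""
    for i in range(0, len(deps), batch_size):
        chunk = deps[i:i + batch_size]
        yield {
            "queries": [
                {"package": {"ecosystem": eco, "name": name}, "version": ver}
                for eco, name, ver in chunk
            ]
        }
```

Each payload is POSTed as JSON to `https://api.osv.dev/v1/querybatch`; OSV returns the matching vulnerability IDs per query.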
On a recently-monitored environment (Belgian MSP, ~10 production hosts running Node.js + Go services) the pipeline went live at 16:47. Within 90 seconds OSV.dev reported back 1,998 matches — including 14 CRITICAL nobody knew about. All patched before 17:00.
T+45s: OSV.dev batch-query → 1,000 (package, version) tuples per request, severity normalised by the hub
T+5min: 10,854 dependencies across 4 ecosystems (npm / Go / Packagist / PyPI) — linked to project_path + risk_score
T+13min: Operator saw the fix version next to each CVE; one `go get` + redeploy cycle. Auto-update rolls the rest.
No six-week-old detection report. No "we'll get to it next sprint". The fix version sits next to the CVE, the playbook runs in one click.
Twelve building blocks, all already shipped. Every feature below can be turned on for your tenant today.
Thresholds with duration (CPU>85% for 1h), per-container rules, fleet aggregates (≥3 prod agents offline). Maintenance windows silence ALL alerts during upgrades. 8 quick-start templates.
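A duration threshold like "CPU >85% for 1h" fires only when the breach is sustained, so a single spike stays silent. A minimal sketch of that evaluation (illustrative, not the shipped rule engine):

```python
def breach_duration(samples, threshold, min_seconds):
    """Fire only when consecutive samples stay above the threshold for
    at least min_seconds. samples: (unix_ts, value) pairs, sorted by
    time; any sample at or below the threshold resets the clock."""
    breach_start = None
    for ts, value in samples:
        if value > threshold:
            if breach_start is None:
                breach_start = ts
            if ts - breach_start >= min_seconds:
                return True
        else:
            breach_start = None
    return False
```

With 15 s sampling, "CPU >85% for 1h" is `breach_duration(samples, 85, 3600)`; one dip below 85% anywhere in the window resets it.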
Per server: linear regression over 14 days of CPU/mem/network + per-mount disk growth. 'Days until ceiling' projection with date. No ML pretension — just honest regression.
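That "honest regression" is plain least squares. A sketch of the days-until-full projection under assumed inputs (one used-bytes sample per day, oldest first):

```python
def days_until_full(samples, capacity):
    """Least-squares fit of used bytes vs. day index; project when the
    trend line crosses capacity. Returns None when usage is flat or
    shrinking (no ceiling in sight)."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    var_x = sum((x - mean_x) ** 2 for x in xs)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    slope = cov / var_x                       # bytes gained per day
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    current = intercept + slope * (n - 1)     # fitted usage today
    return max((capacity - current) / slope, 0.0)
```

Fourteen daily samples are enough for a stable slope; the projection is read straight off the fitted line, no model training involved.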
Z-score on last 15 min vs 7-day baseline per agent per metric. |z| > 2.5 = outlier. Classic statistics, no black-box ML — the operator sees what's out of line immediately.
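The whole detector fits in a handful of lines — a sketch with assumed inputs (recent = last-15-min samples, baseline = 7-day history for the same agent and metric):

```python
from statistics import mean, stdev

def is_outlier(recent, baseline, z_limit=2.5):
    """Z-score of the recent mean against the 7-day baseline;
    |z| > 2.5 flags an outlier. A zero-variance baseline never fires."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        return False
    z = (mean(recent) - mu) / sigma
    return abs(z) > z_limit
```

Because it is a plain z-score, the operator can recompute the number by hand and see exactly why a point was flagged.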
Every POST/PUT/PATCH/DELETE captured with user/IP/method/path/resource. Read-only 'viewer' role blocks writes. Useful filter: /audit?resource_type=playbook, scoped per tenant.
JSON action specs, admin-approved, operator-triggered from an alert. Hub signs an Ed25519 emergency token → agent verifies + executes. Every run logged in /audit with the nonce.
SHA256 hash of /proc/<pid>/exe for every top process. Deviation = Critical — unless the new hash matches our auto-update manifest. False positives on release deploys eliminated.
Fake AWS keys, SSH, Kube config, Docker auth, GitLab runner token, Postgres pgpass, GPG private key, Grafana token, WordPress config. One read action = instant Critical.
Agent + hub each get a release manifest. Host-side: 6h poll, curl download, sha256 verify, sudo installer, atomic swap, systemd restart. Manifest-aware Process DNA accepts the new hash automatically.
Logo, accent colour, display name — set in /settings → Branding, customers see their own logo in the topbar. Backend validates https-only for logo URLs (no javascript: XSS).
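The https-only check is a one-liner worth getting right, since `javascript:` and scheme-relative URLs both have to fail. An illustrative sketch (the function name is ours):

```python
from urllib.parse import urlparse

def valid_logo_url(url: str) -> bool:
    """Accept only absolute https:// URLs with a host. Rejects
    javascript:, data:, plain http: and scheme-relative //host values."""
    parsed = urlparse(url.strip())
    return parsed.scheme == "https" and bool(parsed.netloc)
```

Checking the parsed scheme (rather than `url.startswith`) also catches leading-whitespace tricks some browsers tolerate.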
Browser terminal over WebSocket — no SSH or RDP port needed. Linux: `bash --restricted` as low-priv `monsys-console` user, no sudoers entry. Windows: PowerShell via ConPTY inside a JEA endpoint with ~60 whitelisted IR cmdlets (no Invoke-Expression, no Remove-Item, no privilege escalation). 15-min hard limit, TOTP + reason ≥20 chars required. Every keystroke logged immutably with SHA256 seal.
Auto-detected graph (nodes + edges + zones). Generator → PNG/SVG/PDF with 4 layout algorithms, Mermaid export. Alert overlay: red = critical, yellow = warn, on each node.
One click on alert/log — local llama3.1:8b explains what it means. NL/FR/EN replies, first reply typically <4s on CPU. No external AI vendor, so no GDPR questions.
Connect your AWS, Azure, GCP, Hetzner, Proxmox, DigitalOcean, Scaleway, OVH or IONOS account. monsys.ai discovers every resource every 4 hours, matches them to your existing agents, and flags whatever isn't being monitored.
AWS: EC2 · RDS · S3 · VPC · ELB · IAM
Azure: VMs · SQL · Storage · NSGs · VNets
GCP: GCE · Cloud SQL · GCS · Firewalls
Hetzner: Servers · Networks · Firewalls · Volumes
Proxmox (self-hosted): VMs · LXC · Storage · Nodes
DigitalOcean: Droplets · Volumes · LBs · Managed DBs
Scaleway: Instances · SGs · Volumes · RDB
OVH: Instances · Networks · Failover IPs
IONOS: Datacenters · Servers · Volumes · LANs
Honest tradeoffs — no marketing gloss. For each tool we show both when monsys is the better choice and when the opposite is true.
A local llama3.1:8b explains what the log means and whether action is needed. No external AI provider — your data stays on our backend.
No YAML jungle, no per-OS agent templates — one command per host.
Email + password. No credit card. You immediately get a tenant and a dashboard login.
In Settings → Agents type a hostname. We mint a token and give you a one-liner. On the target server: one curl command as root.
Within 30s you see CPU/RAM/disk/network of the host in the Overview screen. Alerts fire on threshold breaches from the first measurement.
Monitoring five servers costs zero euros. Forever. From your sixth server: €3 per server per month — via Stripe or PayPal, cancel any month.