# LGTM Stack vs Victoria Stack

A canonical comparison of the two leading open-source full-stack observability platforms.
## TL;DR

| | LGTM Stack | Victoria Stack |
|---|---|---|
| Philosophy | Composable microservices, object-storage-first, vendor-integrated | Performance-first, zero-dependency, single-binary simplicity |
| Best for | Large orgs with platform teams, deep cross-signal correlation needs | Cost-conscious teams wanting maximum efficiency with minimal ops |
| License | AGPL-3.0 (copyleft) | Apache 2.0 (permissive) |
## Versions & Compatibility (as of April 2026)

### Latest Versions

| Component | LGTM Version | Victoria Equivalent | Victoria Version |
|---|---|---|---|
| Metrics | Grafana Mimir 3.0.5 | VictoriaMetrics | v1.139.0 |
| Logs | Grafana Loki 3.7.1 | VictoriaLogs | v1.49.0 |
| Traces | Grafana Tempo 2.10.1 (3.0 in dev) | VictoriaTraces | v0.8.0 |
| Profiles | Grafana Pyroscope 1.20.2 | ❌ None | — |
| Visualization | Grafana 12.4.2 | Grafana + VMUI | Same Grafana |
| Collection | Grafana Alloy 1.15.0 | vmagent | v1.139.0 |
| Routing/Auth | — (Ingress-based) | vmauth | v1.139.0 |
| K8s Operator | — (Helm/Jsonnet) | vmoperator | v0.68.4 |
### Maturity Assessment

| Component | LGTM Maturity | Victoria Maturity |
|---|---|---|
| Metrics | ⭐⭐⭐⭐⭐ Production (v3.0, years of Cortex lineage) | ⭐⭐⭐⭐⭐ Production (v1.139, battle-tested at Roblox/CERN) |
| Logs | ⭐⭐⭐⭐⭐ Production (v3.7, widely adopted) | ⭐⭐⭐⭐ Production (v1.49, rapidly maturing) |
| Traces | ⭐⭐⭐⭐ Production (v2.10, v3.0 coming) | ⭐⭐⭐ Beta/Early Production (v0.8.0) |
| Profiles | ⭐⭐⭐⭐ Production (v1.20) | ❌ Not available |
## Ingestion Protocol Compatibility

| Protocol | LGTM Support | Victoria Support |
|---|---|---|
| OTLP gRPC | ✅ All backends via Alloy | ✅ VM (metrics), VL (logs), VT (traces) |
| OTLP HTTP | ✅ All backends via Alloy | ✅ VM (metrics), VL (logs), VT (traces) |
| Prometheus remote_write | ✅ Mimir | ✅ VictoriaMetrics + vmagent |
| Prometheus scrape | ✅ Alloy, Prometheus | ✅ vmagent (drop-in replacement) |
| InfluxDB line protocol | ❌ | ✅ VictoriaMetrics |
| Graphite plaintext | ❌ | ✅ VictoriaMetrics |
| Datadog API | ❌ | ✅ VictoriaMetrics (`/datadog/api/v2/series`) |
| NewRelic API | ❌ | ✅ VictoriaMetrics |
| Loki Push API | ✅ Loki | ✅ VictoriaLogs (Loki wire-compatible) |
| Elasticsearch Bulk API | ❌ | ✅ VictoriaLogs |
| Syslog | ⚠️ Via Alloy/Promtail | ✅ VictoriaLogs (native) |
| Fluentbit JSON Lines | ⚠️ Via Alloy pipeline | ✅ VictoriaLogs (native `/insert/jsonline`) |
| Jaeger (Thrift/gRPC) | ✅ Tempo | ✅ VictoriaTraces |
| Zipkin | ✅ Tempo | ✅ VictoriaTraces |

**Verdict:** Victoria accepts significantly more ingestion protocols natively (InfluxDB, Graphite, Datadog, NewRelic, ES Bulk). LGTM routes everything through Alloy/OTel Collector, which is more standardized but adds a translation layer.
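To make the protocol difference concrete, here is a minimal sketch of the same measurement encoded in two of the wire formats VictoriaMetrics ingests directly (InfluxDB line protocol, and the label set of a Prometheus remote_write series). The encodings are simplified (numeric fields only, no escaping) and the helper names are our own, not part of any library:

```python
# Sketch: one measurement in two ingestion formats VictoriaMetrics accepts
# natively. Simplified for illustration; real clients handle escaping, typed
# fields, and protobuf/snappy framing for remote_write.

def influx_line(measurement, tags, fields, ts_ns):
    """Encode one point in (simplified) InfluxDB line protocol."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

def prom_remote_write_labels(name, labels):
    """Label set of a Prometheus remote_write TimeSeries for the same data."""
    out = [{"name": "__name__", "value": name}]
    out += [{"name": k, "value": v} for k, v in sorted(labels.items())]
    return out

point = influx_line("http_requests", {"job": "api", "status": "500"},
                    {"count": 3}, 1_700_000_000_000_000_000)
print(point)
# → http_requests,job=api,status=500 count=3 1700000000000000000
print(prom_remote_write_labels("http_requests_count", {"job": "api"}))
```

VictoriaMetrics maps InfluxDB tags to Prometheus labels on ingest, so both payloads end up as the same queryable series.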
## Query API Compatibility

| API | LGTM | Victoria |
|---|---|---|
| Prometheus Query API (`/api/v1/query`) | ✅ Mimir | ✅ VictoriaMetrics (MetricsQL superset) |
| Prometheus Remote Read | ✅ Mimir | ✅ VictoriaMetrics |
| Loki Query API (`/loki/api/v1/query`) | ✅ Loki | ❌ (VL uses `/select/logsql/query`) |
| Jaeger Query API (`/api/traces`) | ⚠️ Via Jaeger DS in Grafana | ✅ VictoriaTraces (native) |
| Tempo Query API | ✅ Tempo (native) | ⚠️ VT v0.8.0 adds experimental Tempo DS support |
| TraceQL | ✅ Tempo | ❌ |
| Grafana data source | Native DS for each backend | Prometheus DS (metrics), Jaeger DS (traces), VL plugin (logs) |
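Because both backends speak the Prometheus query API, a single client only needs a different base URL. A small sketch (hostnames and the Mimir `/prometheus` prefix reflect common defaults and are assumptions, not guaranteed endpoints):

```python
# Sketch: the same Prometheus-API client call works against Mimir or
# VictoriaMetrics by swapping the base URL. Hostnames are placeholders.
from urllib.parse import urlencode

def instant_query_url(base, promql, ts=None):
    """Build a Prometheus-compatible /api/v1/query request URL."""
    params = {"query": promql}
    if ts is not None:
        params["time"] = ts
    return f"{base}/api/v1/query?{urlencode(params)}"

for base in ("http://mimir:8080/prometheus",   # Mimir's Prometheus HTTP prefix
             "http://victoriametrics:8428"):   # VM single-node default port
    print(instant_query_url(base, 'up{job="api"}'))
```

This portability is why dashboards built on the Prometheus data source move between the two stacks unchanged.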
## System Requirements

| Requirement | LGTM | Victoria |
|---|---|---|
| Kubernetes (recommended) | v1.25+ | v1.25+ |
| Docker | ✅ | ✅ |
| Bare metal / VM | ✅ (Linux) | ✅ (Linux, macOS) |
| CPU architecture | amd64, arm64 | amd64, arm64 |
| Go version | 1.25.x (as of latest releases) | 1.25.x |
| Object storage | Required (S3, GCS, Azure Blob, MinIO) | Not required (optional for backups) |
| PostgreSQL | Required for Grafana metadata | Not required |
| Redis | Recommended for Grafana sessions | Not required |
| Memcached | Strongly recommended for query performance | Not required (built-in cache) |
| Kafka | ⚠️ Mimir 3.0 preferred ingest path | Not required |
| Local SSD | Minimal (ingester WAL only) | Required for all storage nodes |
| Min RAM (dev) | ~2 GB (all-in-one grafana/otel-lgtm) | ~256 MB per component |
| Min RAM (prod, 1M series) | ~24–48 GB (across all components) | ~6–12 GB (across all components) |
## Object Storage Compatibility (LGTM Only)

| Provider | Mimir | Loki | Tempo |
|---|---|---|---|
| AWS S3 | ✅ | ✅ | ✅ |
| Google Cloud Storage | ✅ | ✅ | ✅ |
| Azure Blob Storage | ✅ | ✅ | ✅ |
| MinIO (S3-compatible) | ✅ | ✅ | ✅ |
| OpenStack Swift | ⚠️ Community | ⚠️ Community | ❌ |
## Upgrade Considerations (April 2026)

| Component | Key Upgrade Notes |
|---|---|
| Mimir 3.0 | Major architecture change — Kafka-based ingest path is now preferred; the legacy gRPC path is deprecated but still works. Requires Kafka or a Kafka-compatible service in production. |
| Loki 3.7 | Helm chart migrated to grafana-community/helm-charts. Promtail deprecated in favor of Alloy. BoltDB storage backend deprecated. |
| Tempo 3.0 (upcoming) | New block-builder + live-store architecture replacing ingesters. vParquet2 encoding removed — must migrate to vParquet3+. |
| Pyroscope 1.20 | Stable, incremental improvements. |
| VictoriaMetrics 1.139 | Stable rolling releases. LTS line (v1.122.x) available for conservative environments. |
| VictoriaLogs 1.49 | Maturing rapidly. New LogsQL functions (stddev), UI improvements. |
| VictoriaTraces 0.8 | Pre-1.0 — experimental Tempo DS API support added. |
## Component Mapping

| Signal | LGTM | Victoria | Notes |
|---|---|---|---|
| Metrics | Grafana Mimir | VictoriaMetrics | Both PromQL-compatible; VM adds MetricsQL extensions |
| Logs | Grafana Loki | VictoriaLogs | Loki: label-only index; VL: bloom filters + free-text search |
| Traces | Grafana Tempo | VictoriaTraces | Tempo: object storage + Parquet; VT: local disk + VL engine. VT v0.8+ adds experimental Tempo DS API |
| Profiles | Grafana Pyroscope | ❌ None | Victoria has no profiling component |
| Visualization | Grafana (built-in) | Grafana (external) + VMUI | Both use Grafana; VM also ships a lightweight built-in UI |
| Collection | Grafana Alloy | vmagent + OTel Collector | Alloy is an OTel distribution; vmagent is Prometheus-native |
| Alerting | Grafana Unified Alerting | vmalert + Alertmanager | Both support PromQL-based rules; Grafana has a richer UI |
| Routing/Auth | Ingress + per-component auth | vmauth (unified proxy) | vmauth routes all signals through one proxy |
| Backup | Object storage (native) | vmbackup → S3/GCS | LGTM data lives in object storage; VM requires explicit backup |
| K8s Operator | Helm charts (no operator) | vmoperator (CRDs) | VM has a mature operator; LGTM uses Helm/Jsonnet |
## Architecture Comparison

```mermaid
flowchart LR
    subgraph LGTM["LGTM Stack"]
        direction TB
        LA["Alloy<br/>(OTel Collector)"]
        LM["Mimir<br/>📊 Metrics"]
        LL["Loki<br/>📝 Logs"]
        LT["Tempo<br/>🔍 Traces"]
        LP["Pyroscope<br/>🔥 Profiles"]
        LOS["Object Storage<br/>(S3/GCS)"]
        LG["Grafana"]
        LA --> LM & LL & LT
        LM & LL & LT & LP --> LOS
        LG -.-> LM & LL & LT & LP
    end
    subgraph VM["Victoria Stack"]
        direction TB
        VA["vmagent<br/>(Prometheus-native)"]
        VAuth["vmauth<br/>(unified proxy)"]
        VMet["VictoriaMetrics<br/>📊 Metrics"]
        VL["VictoriaLogs<br/>📝 Logs"]
        VT["VictoriaTraces<br/>🔍 Traces"]
        VSSD["Local SSDs"]
        VG["Grafana"]
        VA --> VAuth
        VAuth --> VMet & VL & VT
        VMet & VL & VT --> VSSD
        VG -.-> VAuth
    end
    style LGTM fill:#1a1d2e,color:#fff
    style VM fill:#0d1117,color:#fff
```
### Key Architectural Differences

| Dimension | LGTM | Victoria |
|---|---|---|
| Storage backend | Object storage (S3/GCS/Azure) — mandatory | Local SSDs — no external deps |
| Cluster design | Microservices (7+ pod types per backend) | Shared-nothing (3 pod types per backend) |
| Component communication | gRPC + hash rings + gossip (memberlist) | Consistent hashing, no inter-storage communication |
| Caching | Memcached required for production performance | Built-in caching, no external cache needed |
| Deployment complexity | High (6+ Helm charts, object storage, PostgreSQL, Redis) | Low (1–3 binaries, SSDs, optional vmauth) |
| External dependencies | PostgreSQL, Redis, Memcached, Object Storage | None (zero external deps) |
## Feature-by-Feature Comparison

### Metrics

| Feature | Mimir (LGTM) | VictoriaMetrics |
|---|---|---|
| Query language | PromQL (standard) | MetricsQL (PromQL superset) |
| Multi-tenancy | Native (X-Scope-OrgID header) | Native (account IDs in URL path) |
| Replication | RF=3 default, ingester-level | RF configurable on vminsert |
| Deduplication | Built-in for HA Prometheus pairs | `-dedup.minScrapeInterval` flag |
| Long-term storage | Object storage (infinite) | Local disk (bounded by SSD size) |
| Downsampling | Compactor-based | Enterprise-only feature |
| Recording rules | Mimir Ruler (built-in) | vmalert (separate binary) |
| Scale ceiling | 1B+ active series documented | Billions of active series (Roblox) |
| RAM efficiency | ~8–12 GB per 1M series | ~2 GB per 1M series (4–6x better) |
| Disk efficiency | ~1.2–1.5 bytes/sample | ~0.4–1.0 bytes/sample (~2x better) |

**Verdict:** VM wins on resource efficiency. Mimir wins on infinite retention (object storage) and enterprise multi-tenancy.
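The multi-tenancy row above is worth a concrete sketch: Mimir scopes a request by the `X-Scope-OrgID` header, while a VictoriaMetrics cluster encodes the account ID in the URL path. Ports and paths below reflect common defaults and should be treated as assumptions:

```python
# Sketch: how the same tenant-scoped write request is addressed in each stack.
# Hostnames are placeholders; ports/paths follow typical defaults.

def mimir_write_request(tenant):
    """Mimir: tenant identity travels in a request header."""
    return {
        "url": "http://mimir:8080/api/v1/push",
        "headers": {"X-Scope-OrgID": tenant},
    }

def vm_cluster_write_request(account_id):
    """VM cluster: tenant identity travels in the URL path."""
    return {
        "url": f"http://vminsert:8480/insert/{account_id}/prometheus/api/v1/write",
        "headers": {},
    }

print(mimir_write_request("team-a")["headers"])
print(vm_cluster_write_request(42)["url"])
```

Header-based tenancy is easier to enforce at a proxy layer without rewriting URLs, which is one reason Mimir's isolation is considered stronger.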
### Logs

| Feature | Loki (LGTM) | VictoriaLogs |
|---|---|---|
| Indexing strategy | Label-only (no log content indexing) | Bloom filters (lightweight content matching) |
| Full-text search | ❌ Requires label selector first | ✅ Free-text search without labels |
| Query language | LogQL | LogsQL |
| Query requirement | Must start with `{label="value"}` | Can search `_time:5m AND error` directly |
| Structured log parsing | Pipeline stages (`\| json \| line_format`) | Built-in `unpack_json`, `extract` pipes |
| Multi-tenancy | Native (X-Scope-OrgID) | Cluster mode (account IDs) |
| Storage | Object storage (chunks + index) | Local disk (daily partitions) |
| Compression | 10–20:1 (Snappy/GZIP) | 10–30:1 (ZSTD, columnar) |
| RAM usage | Low | Very low (bloom filters vs index) |
| Ingestion APIs | Loki Push API, OTLP | Loki API, ES Bulk, Syslog, OTLP, JSON Lines |

**Verdict:** VictoriaLogs wins on resource efficiency and flexibility (free-text search, more ingestion APIs). Loki wins on ecosystem maturity and object storage durability.
### Traces

| Feature | Tempo (LGTM) | VictoriaTraces |
|---|---|---|
| Storage | Object storage (Parquet columnar) | Local disk (VL engine) |
| Index | None (bloom filters + Parquet columns) | Bloom filters |
| Query language | TraceQL (rich, structural queries) | Jaeger Query API + experimental Tempo API (v0.8+) |
| Tempo DS compatibility | ✅ Native | ⚠️ Experimental (v0.8+): `/tags`, `/search`, `/v2/traces/*` |
| TraceQL support | ✅ Full | ❌ Not yet (only basic search via Tempo API) |
| Span metrics | Metrics Generator (→ Mimir) | Not built-in |
| Service graph | Yes (auto-generated) | No |
| Ingestion | OTLP, Jaeger, Zipkin | OTLP, Jaeger, Zipkin |
| Visualization | Native Grafana Tempo DS | Grafana Tempo DS (experimental) or Jaeger DS |
| Maturity | Production-proven at scale | Pre-1.0, rapidly evolving |
| External deps | S3/GCS required | None |
| Drop-in for Tempo? | — | ⚠️ Partial — basic `/search`, `/tags`, `/v2/traces` work; TraceQL metrics and pipelines not yet available |

**Verdict:** Tempo wins on query power (TraceQL), span metrics, service graphs, and maturity. VictoriaTraces wins on simplicity and zero dependencies. Key development: VT v0.8.0 adds experimental Tempo datasource API support, meaning it can be used with Grafana's native Tempo datasource for basic trace lookup and search — a significant step toward drop-in compatibility. However, full TraceQL support is not yet available.
### Profiles

| Feature | Pyroscope (LGTM) | Victoria Stack |
|---|---|---|
| Continuous profiling | ✅ Full flame graph support | ❌ No profiling component |
| FlameQL | PromQL-like profile queries | N/A |
| Trace-to-profiles | Integrated with Tempo | N/A |

**Verdict:** LGTM has a clear lead — Victoria Stack has no profiling solution.
## Cross-Signal Correlation

| Correlation | LGTM | Victoria |
|---|---|---|
| Metrics → Traces (Exemplars) | ✅ Native (Mimir exemplars → Tempo) | ⚠️ Manual Grafana config only |
| Traces → Logs | ✅ Native (Tempo trace-to-logs) | ⚠️ Manual Grafana config only |
| Logs → Traces (Derived Fields) | ✅ Native (Loki derived fields → Tempo) | ⚠️ Manual Grafana config only |
| Traces → Metrics | ✅ Native (Tempo trace-to-metrics) | ❌ Not built-in |
| Traces → Profiles | ✅ Native (Tempo trace-to-profiles) | ❌ No profiling |
| Span Metrics | ✅ Automatic (Tempo Metrics Generator) | ❌ Not available |

**Verdict:** LGTM has dramatically better cross-signal correlation. The integrated data source configuration in Grafana is purpose-built for the LGTM stack. Victoria requires manual Grafana config and misses span metrics and profiling integration entirely.
## Operational Comparison

| Dimension | LGTM | Victoria |
|---|---|---|
| Minimum production pods | ~20+ (across all backends) | ~5–10 (simpler topology) |
| External dependencies | S3/GCS, PostgreSQL, Redis, Memcached | None |
| Time to production | Days–weeks (complex setup) | Hours–days (single binaries) |
| Helm charts required | 6+ (per component) | 1–3 (operator manages rest) |
| Kubernetes operator | No official operator (Helm + Jsonnet) | vmoperator with CRDs |
| Configuration complexity | High (per-component YAML, multiple Helm values) | Low (CLI flags, single vmauth config) |
| Upgrade strategy | Rolling updates per component | Rolling updates, LTS release line |
| Backup strategy | Data lives in object storage (inherent durability) | vmbackup → S3 (must be scheduled) |
| Monitoring the monitoring | Grafana mixins (mimir-mixin, loki-mixin, tempo-mixin) | Self-scrape `/metrics` endpoint |
## Ingestion Throughput

| Signal | LGTM | Victoria | Winner |
|---|---|---|---|
| Metrics (single-node) | N/A (Mimir is distributed-only) | ~1M samples/sec | Victoria |
| Metrics (cluster) | 30M+ samples/sec (Mimir) | 100M+ samples/sec | Victoria |
| Logs | 1 TB+/day (Loki) | 1 TB+/day (VictoriaLogs) | Comparable |
| Traces | 100M+ spans/day (Tempo) | High (benchmarks limited) | LGTM (proven at scale) |
| Profiles | Millions/hour (Pyroscope) | N/A | LGTM (no competitor) |
## Query Latency

| Query Type | LGTM | Victoria | Notes |
|---|---|---|---|
| Simple PromQL | < 200ms | < 100ms | VM's columnar format + no object storage latency |
| Complex aggregation (1h range) | 200ms–2s | 100ms–1s | VM benefits from local SSD; LGTM hits object storage cache |
| Long-range query (30d) | 1–10s (object storage) | 1–5s (local disk) | LGTM depends on Memcached hit rate |
| High-cardinality query (>100k series) | 2–30s | 1–15s | VM more resilient to cardinality due to lower per-series RAM |
| Log search (label-filtered) | Sub-second (Loki) | Sub-second (VL) | Comparable when labels narrow scope |
| Log search (full-text) | ❌ Not supported without labels | 1–30s (bloom filter scan) | Victoria only |
| Trace ID lookup | < 200ms (Tempo) | < 200ms (VT) | Comparable |
| Trace search (attribute query) | 1–30s (TraceQL, Parquet scan) | Bloom-filter dependent | Tempo has richer query capability |
## Resource Efficiency

| Resource | LGTM (per 1M active series) | Victoria (per 1M active series) | Ratio |
|---|---|---|---|
| RAM | ~8–12 GB | ~2 GB | VM uses 4–6x less |
| Disk (per sample) | ~1.2–1.5 bytes | ~0.4–1.0 bytes | VM uses ~2x less |
| CPU (ingestion) | Higher (microservices overhead, gRPC, hash rings) | Lower (simpler binary, less coordination) | VM uses ~2x less |
| CPU (query) | Comparable | Comparable | — |

**Verdict:** Victoria Stack consistently wins on raw resource efficiency. LGTM's overhead comes from microservices coordination, object storage round-trips, and external caching. Victoria benefits from local SSD I/O and simpler architecture.
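The per-1M-series figures above can be turned into a back-of-the-envelope capacity calculator. These are planning estimates taken from this comparison's rough ranges, not vendor sizing guarantees:

```python
# Rough sizing sketch using this document's per-1M-series figures.
# Treat the outputs as planning estimates only.

RAM_GB_PER_1M = {"lgtm": (8, 12), "victoria": (2, 2)}
BYTES_PER_SAMPLE = {"lgtm": (1.2, 1.5), "victoria": (0.4, 1.0)}

def est_ram_gb(stack, active_series):
    lo, hi = RAM_GB_PER_1M[stack]
    m = active_series / 1_000_000
    return (lo * m, hi * m)

def est_disk_tb(stack, samples_per_sec, retention_days):
    lo, hi = BYTES_PER_SAMPLE[stack]
    total_samples = samples_per_sec * 86_400 * retention_days
    return (lo * total_samples / 1e12, hi * total_samples / 1e12)

# Example: 5M active series scraped every 15s, kept for 90 days.
print(est_ram_gb("victoria", 5_000_000))        # (10.0, 10.0) GB
print(est_disk_tb("lgtm", 5_000_000 / 15, 90))  # roughly 3.1–3.9 TB
```

Running the same example for LGTM RAM gives 40–60 GB, which lines up with the "Min RAM (prod, 1M series)" row in the requirements table once per-component overhead is included.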
## Reliability Comparison

### High Availability

| Dimension | LGTM | Victoria |
|---|---|---|
| Metrics replication | RF=3 by default (ingester-level), automatic | RF configurable on vminsert (`-replicationFactor=N`), manual |
| Data durability | Object storage (11 nines durability on S3) | Local disk durability + vmbackup to S3 (manual) |
| Ingester failure (metrics) | Other ingesters cover; query from replicas | Partial results returned; may have gaps if RF=1 |
| Storage node failure | Object storage + cache — no data loss | If vmstorage node dies with RF=1, that shard's data is lost until restore |
| Network partition | Backends continue serving from object storage | Shared-nothing — each vmstorage serves its own shard independently |
| Graceful degradation | Query-frontend splits/retries; returns partial results | vmselect returns partial results transparently |
| WAL protection | Ingester WAL replays on restart | vmstorage WAL replays on restart |
### Disaster Recovery

| Dimension | LGTM | Victoria |
|---|---|---|
| RPO (Recovery Point Objective) | Near-zero (object storage is persistent) | Depends on vmbackup schedule (hours–daily) |
| RTO (Recovery Time Objective) | Minutes (spin up new pods, data in S3) | Minutes–hours (restore from vmbackup) |
| Cross-region DR | Object storage replication (S3 CRR) | vmbackup to remote S3 bucket |
| Backup complexity | None (data already in object storage) | Must schedule vmbackup on every vmstorage node |
### Failure Mode Comparison

| Scenario | LGTM Impact | Victoria Impact |
|---|---|---|
| 1 ingester dies | No data loss (RF=3 covers), brief write latency spike | Some data loss if RF=1; partial results if RF≥2 |
| Object storage outage | ⚠️ Historical queries fail, recent data still in ingesters | ✅ No impact (doesn't use object storage) |
| Cache (Memcached) down | Query latency degrades significantly | ✅ No impact (built-in caching) |
| PostgreSQL down | ⚠️ Grafana metadata unavailable (dashboards, users) | ✅ No impact (no PostgreSQL dependency) |
| Full AZ outage | Partial — data in other AZs via object storage | ⚠️ Significant — local SSD data in that AZ unavailable |

**Verdict:** LGTM wins on data durability (object storage is inherently more durable than local SSDs). Victoria wins on operational resilience (fewer external dependencies that can fail). LGTM's weakness is its dependency chain — if Memcached, PostgreSQL, or object storage has issues, the stack degrades. Victoria's weakness is local disk durability — it requires diligent vmbackup scheduling.
## Scalability Comparison

| Dimension | LGTM | Victoria |
|---|---|---|
| Horizontal scaling | Per-component (queriers, ingesters, store-gateways independently) | Per-component (vminsert, vmselect, vmstorage independently) |
| Vertical scaling | Limited (microservices prefer horizontal) | Excellent (single-node handles ~1M samples/sec) |
| Scale ceiling | 1B+ active series (Grafana Labs documented), limited only by object storage | Billions of active series (Roblox production), limited by aggregate SSD capacity |
| Storage capacity | Infinite (object storage) | Bounded by total SSD capacity across vmstorage nodes |
| Scaling granularity | Fine-grained (7+ component types per backend) | Coarser (3 component types per backend) |
| Scale-to-zero | ❌ (always needs baseline infrastructure) | Single-node can run on minimal resources |
| Tenant isolation | Strong (X-Scope-OrgID header, per-tenant limits) | Moderate (account IDs in URL path, per-tenant overrides) |
| Auto-scaling | HPA on CPU/memory per component | HPA on CPU/memory, simpler due to fewer components |
| Adding capacity | Add pods + object storage scales automatically | Add vmstorage nodes + rebalance hash ring |

**Verdict:** Both scale to billions of series. LGTM has infinite storage capacity via object storage. Victoria scales more easily (fewer moving parts) but is bounded by SSD capacity. LGTM offers finer-grained scaling but requires more operational expertise.
## Security Comparison

| Dimension | LGTM | Victoria |
|---|---|---|
| Authentication | Per-component (Mimir, Loki, Tempo each support auth headers) | Centralized (vmauth handles all auth) |
| Authorization / RBAC | Grafana Enterprise RBAC (dashboard, folder, data source level) | No built-in RBAC (relies on vmauth URL routing) |
| Multi-tenancy isolation | Strong — X-Scope-OrgID enforced at backend level | Moderate — account IDs in URL paths, enforced at vmauth |
| Encryption at rest | Object storage encryption (SSE-S3, SSE-KMS) | OS-level disk encryption (LUKS, dm-crypt) |
| Encryption in transit | mTLS between components (configurable) | mTLS between components (configurable) |
| Network policies | Kubernetes NetworkPolicies, per-component isolation | Kubernetes NetworkPolicies, simpler (fewer components) |
| SSO / OIDC | Grafana native SSO (Google, GitHub, LDAP, SAML, OIDC) | vmauth Enterprise SSO |
| Audit logging | Grafana audit logs (Enterprise) | Not built-in |
| Secrets management | Standard K8s secrets / Vault integration | Standard K8s secrets / Vault integration |

**Verdict:** LGTM wins on enterprise security features (RBAC, SSO, audit logging) thanks to Grafana's mature access control layer. Victoria's centralized vmauth is simpler but less granular.
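What "vmauth URL routing" means in practice can be sketched as a prefix-to-backend table: one proxy fronts every signal and forwards by path. The real vmauth is configured in YAML; the routes and backend addresses below are illustrative, not a real deployment:

```python
# Conceptual model of vmauth-style unified routing: one entry point maps
# URL prefixes to per-signal backends. Backend addresses are made up.

ROUTES = {
    "/api/v1/": "http://victoriametrics:8428",         # metrics (Prometheus API)
    "/select/logsql/": "http://victorialogs:9428",     # logs (LogsQL API)
    "/select/jaeger/": "http://victoriatraces:10428",  # traces (Jaeger API)
}

def route(path):
    """Forward a request path to its backend, or reject it outright."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend + path
    raise PermissionError(f"no route for {path}")

print(route("/api/v1/query"))  # forwarded to the metrics backend
```

Authorization in this model is coarse: a caller either matches a route or is rejected, which is why the comparison rates it as less granular than Grafana's per-dashboard RBAC.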
## Developer Experience Comparison

### Query Languages

| Aspect | LGTM | Victoria |
|---|---|---|
| Metrics query | PromQL (industry standard) | MetricsQL (PromQL superset — fixes extrapolation, auto-window, keep_metric_names) |
| Logs query | LogQL (mandatory label selector, pipeline stages) | LogsQL (free-text search, pipes, no mandatory selector) |
| Traces query | TraceQL (rich structural queries: `>>`, `~`, duration filters) | Jaeger Query API (basic attribute filters) |
| Profiles query | FlameQL | N/A |
| Learning curve | 4 distinct query languages to learn | 2 distinct query languages + standard Jaeger API |
| Query builder UI | Yes (Grafana visual query builder for all data sources) | Limited (VMUI for metrics, Grafana for visualization) |
### SDK & Instrumentation

| Aspect | LGTM | Victoria |
|---|---|---|
| Recommended collector | Grafana Alloy (OTel Collector distribution) | vmagent (Prometheus-native) + OTel Collector |
| Auto-instrumentation | OTel Java Agent, Python opentelemetry-instrument, eBPF | Same OTel auto-instrumentation works |
| SDK integration | All OTel SDKs → OTLP → Alloy → backends | All OTel SDKs → OTLP → backends (or vmagent for metrics) |
| Protocol support | OTLP, Prometheus, Jaeger, Zipkin | OTLP, Prometheus, InfluxDB, Graphite, Datadog, NewRelic, ES Bulk |
| Dashboard portability | Grafana JSON — fully portable | Same Grafana JSON — fully portable |
### Documentation & Onboarding

| Aspect | LGTM | Victoria |
|---|---|---|
| Documentation quality | Excellent — comprehensive, well-organized, per-component | Good — thorough CLI flag docs, fewer architectural guides |
| Getting started | `docker run grafana/otel-lgtm` (all-in-one dev image) | `docker run victoriametrics/victoria-metrics` (per-component) |
| Interactive playground | play.grafana.org | No public playground |
| Tutorials & guides | Extensive (official + community) | Moderate (growing) |
| GrafanaCON / conferences | Annual GrafanaCON with hundreds of talks | Smaller conference presence |

**Verdict:** LGTM wins on documentation, onboarding (all-in-one Docker image), and query power (TraceQL is significantly richer). Victoria wins on protocol breadth (accepts more ingestion formats) and MetricsQL's practical improvements over PromQL.
## Community & Ecosystem

| Dimension | LGTM | Victoria |
|---|---|---|
| Combined GitHub stars | ~120k+ (Grafana 73k, Loki 28k, Mimir 5k, Tempo 4k, Pyroscope 10k) | ~16.7k (monorepo) |
| Contributors | Thousands across repos | Hundreds (smaller core team) |
| Release cadence | Monthly minors, quarterly majors (per component) | Frequent releases + LTS line |
| Plugin ecosystem | 100+ Grafana plugins (data sources, panels, apps) | Works with Grafana plugins (not its own ecosystem) |
| Commercial backing | Grafana Labs ($6B+ valuation, 800+ employees) | VictoriaMetrics, Inc. (smaller, focused team) |
| Enterprise SLA | Yes (Grafana Enterprise, Grafana Cloud) | Yes (VM Enterprise, VM Cloud) |
| Community forums | community.grafana.com, Slack, Twitter/X | Slack, GitHub issues, blog |
| Hiring ease | Easy — "Grafana" is an industry keyword | Moderate — less well-known brand |
| Managed cloud | Grafana Cloud (full stack, generous free tier) | VM Cloud (metrics + logs, starting $225/mo) |
| Third-party integrations | Massive (Terraform, Ansible, Pulumi, every K8s distro) | Growing (Terraform, Helm, vmoperator) |
| Production case studies | Maersk, DHL, Salesforce, Dutch Tax Office, thousands more | Roblox, Spotify, CERN, Grammarly, DreamHost, Adidas |

**Verdict:** LGTM wins decisively on ecosystem breadth, community size, commercial backing, and hiring. Victoria wins on focused engineering velocity and responsive core maintainers.
## Scorecard Summary

| Aspect | LGTM | Victoria | Winner |
|---|---|---|---|
| Performance (throughput) | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Victoria |
| Performance (query speed) | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Victoria |
| Resource efficiency | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Victoria |
| Reliability (data durability) | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | LGTM |
| Reliability (operational resilience) | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Victoria |
| Scalability | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | LGTM |
| Cross-signal correlation | ⭐⭐⭐⭐⭐ | ⭐⭐ | LGTM |
| Security & RBAC | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | LGTM |
| Operational simplicity | ⭐⭐ | ⭐⭐⭐⭐⭐ | Victoria |
| Cost | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Victoria |
| Developer experience | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | LGTM |
| Community & ecosystem | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | LGTM |
| Licensing flexibility | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Victoria |
| Signal coverage | ⭐⭐⭐⭐⭐ (4 pillars incl. profiles) | ⭐⭐⭐⭐ (3 pillars, no profiles) | LGTM |

**LGTM: 7 wins** — durability, scalability, correlation, security, DevEx, community, signal coverage

**Victoria: 7 wins** — throughput, query speed, efficiency, operational resilience, simplicity, cost, licensing
## Cost Comparison

### At 1M Active Series + 100 GB/day Logs + 50M Spans/day

| Factor | LGTM (Self-Hosted) | Victoria (Self-Hosted) |
|---|---|---|
| Compute | ~$800–2,000/mo (20+ pods) | ~$300–800/mo (5–10 pods) |
| Object storage | ~$200–500/mo (S3/GCS) | $0 (local disk) |
| Supporting infra | ~$200–500/mo (PostgreSQL, Redis, Memcached) | $0 (no dependencies) |
| SSD storage | Minimal (mostly object storage) | ~$200–500/mo (local SSDs) |
| Network (cross-AZ) | ~$100–300/mo | ~$50–100/mo (less cross-component traffic) |
| **TOTAL** | **$1,300–3,300/mo** | **$550–1,400/mo** |

Victoria is typically ~2–3x cheaper than LGTM due to:

1. Fewer pods (lower compute)
2. No object storage costs
3. No supporting infrastructure (PostgreSQL, Redis, Memcached)
4. Lower RAM footprint per component
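The totals in the table are just the sums of the per-line ranges, which is easy to verify (LGTM's "minimal" SSD line is approximated as $0 here):

```python
# Quick check that the cost totals are the sums of the line-item ranges.
# LGTM's "Minimal" SSD cost is approximated as (0, 0).

lgtm = {"compute": (800, 2000), "object_storage": (200, 500),
        "supporting_infra": (200, 500), "ssd": (0, 0), "network": (100, 300)}
victoria = {"compute": (300, 800), "object_storage": (0, 0),
            "supporting_infra": (0, 0), "ssd": (200, 500), "network": (50, 100)}

def total(items):
    lo = sum(v[0] for v in items.values())
    hi = sum(v[1] for v in items.values())
    return lo, hi

print(total(lgtm))      # (1300, 3300)
print(total(victoria))  # (550, 1400)
```

Note that the ~2–3x ratio holds at both ends of the ranges (1300/550 ≈ 2.4, 3300/1400 ≈ 2.4).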
## Managed Service Comparison

| | Grafana Cloud Pro | VM Cloud (Cluster) |
|---|---|---|
| Starting price | $19/mo + usage | ~$1,300/mo |
| Free tier | ✅ Generous (10k series, 50 GB logs/traces) | ❌ No free tier |
| Metrics pricing | Per active series | Per instance resources |
| Logs pricing | Per GB ingested | Per instance resources |
| All signals included | ✅ (metrics, logs, traces, profiles) | ⚠️ Metrics + Logs (traces TBD) |
## Licensing Comparison

| | LGTM | Victoria |
|---|---|---|
| Core license | AGPL-3.0 | Apache 2.0 |
| SaaS implications | If you modify the source and offer it as SaaS, you must release your changes under AGPL | No restrictions — can build proprietary SaaS on top |
| Enterprise features | Grafana Enterprise (paid) | VM Enterprise (paid) |
| Collection agents | Alloy: Apache 2.0 | vmagent: Apache 2.0 |
| Impact for self-hosting | None (unmodified use is fine) | None |
| Impact for SaaS builders | ⚠️ AGPL copyleft trigger risk | ✅ No copyleft concerns |
## Migration Paths

### From LGTM → Victoria

| Signal | Migration Strategy | Difficulty |
|---|---|---|
| Metrics | Add remote_write to VM alongside Mimir, dual-write, then cut over | Easy |
| Logs | Switch Alloy/Promtail destination from Loki to VictoriaLogs (accepts Loki API) | Easy |
| Traces | Re-point OTel Collector OTLP exporter to VictoriaTraces | Easy |
| Dashboards | Grafana dashboards work unchanged (same Prometheus DS) | Zero effort |
| Alerts | Port Mimir Ruler rules to vmalert (same PromQL format) | Low effort |
| Profiles | No Victoria equivalent — keep Pyroscope or drop | N/A |
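The dual-write step in the metrics row can be sketched as a scraper config with two `remote_write` targets, so both backends hold the same data before cut-over. Rendered here from Python for illustration; hostnames are placeholders and real deployments would edit the Prometheus/vmagent YAML directly:

```python
# Sketch of the dual-write migration step: keep writing to Mimir while also
# writing to VictoriaMetrics, then drop the old target after validation.
import json

def dual_write_config(old_url, new_url):
    """Prometheus-style config fragment with two remote_write targets."""
    return {"remote_write": [{"url": old_url}, {"url": new_url}]}

cfg = dual_write_config(
    "http://mimir:8080/api/v1/push",              # existing Mimir target
    "http://victoriametrics:8428/api/v1/write",   # new VM target
)
print(json.dumps(cfg, indent=2))
# Once dashboards validate against VictoriaMetrics, remove the first entry.
```

The reverse migration (Victoria → LGTM) uses the same pattern with the URLs swapped.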
### From Victoria → LGTM

| Signal | Migration Strategy | Difficulty |
|---|---|---|
| Metrics | Add Prometheus remote_write to Mimir, dual-write | Easy |
| Logs | Switch Fluentbit/Promtail to push to Loki | Easy |
| Traces | Re-point OTel Collector to Tempo | Easy |
| Dashboards | MetricsQL-only queries need conversion to PromQL | Low–Medium |
| Alerts | vmalert rules are mostly PromQL-compatible | Low effort |
## Decision Framework

```mermaid
flowchart TB
    Start["Need full-stack observability?"]
    Start --> Prof["Do you need continuous profiling?"]
    Prof -->|Yes| LGTM["Choose LGTM<br/>(Pyroscope has no competitor)"]
    Prof -->|No| Budget
    Budget["Is infrastructure cost the top priority?"]
    Budget -->|Yes| VM["Choose Victoria Stack<br/>(2–3x cheaper)"]
    Budget -->|No| Ops
    Ops["Do you have a platform engineering team?"]
    Ops -->|"Yes, large team"| Corr
    Ops -->|"No, small team"| VM
    Corr["Is deep cross-signal correlation critical?"]
    Corr -->|Yes| LGTM
    Corr -->|"Nice-to-have"| License
    License["Does AGPL-3.0 cause compliance issues?"]
    License -->|Yes| VM
    License -->|No| Scale
    Scale["Do you need infinite retention on object storage?"]
    Scale -->|Yes| LGTM
    Scale -->|"No, SSD retention is fine"| VM
    style LGTM fill:#ff6600,color:#fff
    style VM fill:#2a7de1,color:#fff
```
### Choose LGTM When:

- You need continuous profiling (Pyroscope)
- Deep cross-signal correlation (exemplars, trace-to-logs, span metrics) is critical
- You want infinite retention via object storage
- You have a platform engineering team to manage the complexity
- Industry standardization matters (LGTM is the default in the Grafana ecosystem)
- You want a managed cloud option with a generous free tier

### Choose Victoria Stack When:

- Cost efficiency is the top priority (2–3x cheaper)
- You want operational simplicity (fewer pods, no external deps)
- Apache 2.0 licensing is important (building SaaS)
- You need extreme RAM/disk efficiency (IoT, high-cardinality workloads)
- Your team is small and wants low operational burden
- You want full-text log search without mandatory label selectors
- You want a Kubernetes operator with CRD-based management

### Hybrid Approach

Many organizations mix and match:

- VictoriaMetrics for metrics (efficiency) + Loki for logs (ecosystem) + Tempo for traces (TraceQL)
- VictoriaMetrics + VictoriaLogs + Tempo (best traces query language)
- All visualization through Grafana regardless of backend choice
## Sources

| URL | Source Kind | Authority | Date |
|---|---|---|---|
| https://docs.victoriametrics.com/ | docs | primary | 2026-04-10 |
| https://grafana.com/docs/ | docs | primary | 2026-04-10 |
| https://victoriametrics.com/case-studies/ | case study | primary | 2026-04-10 |
| https://grafana.com/blog/customers/ | case study | primary | 2026-04-10 |