Monitoring

[logging]
level = "info" # trace, debug, info, warn, error
format = "json" # "json" or "pretty"
output = "stdout" # "stdout", "stderr", or "none"
per-request-spans = false # Per-request tracing spans (~13% CPU overhead)
# file-path = "/var/log/frogdb/frogdb.log"
# [logging.rotation]
# max-size-mb = 100
# frequency = "daily" # "daily", "hourly", or "never"
# max-files = 5

Runtime change: CONFIG SET loglevel debug

Endpoint: GET http://<host>:9090/metrics

[http]
enabled = true
bind = "127.0.0.1"
port = 9090
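
The metrics endpoint is Prometheus-compatible, so scraping it only requires pointing a job at the `[http]` bind address and port. A minimal sketch (the job name is illustrative, not mandated by FrogDB):

```yaml
scrape_configs:
  - job_name: "frogdb"                 # hypothetical job name
    static_configs:
      - targets: ["127.0.0.1:9090"]    # matches the [http] bind/port above
```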

See Metrics Reference for the complete list of all exported metrics.

| Metric | Alert when |
| --- | --- |
| frogdb_memory_used_bytes / frogdb_memory_maxmemory_bytes | > 80% for 5 min |
| histogram_quantile(0.99, rate(frogdb_commands_duration_seconds_bucket[5m])) | > 100 ms for 5 min |
| frogdb_persistence_errors_total | Any increase |
| frogdb_connections_rejected_total | Any increase |
| max(frogdb_shard_keys) / avg(frogdb_shard_keys) | > 2.0 (hot shard) |
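
As a sketch, the first row of the table could be expressed as a Prometheus alerting rule; the group name, alert name, and labels below are illustrative, only the metric names come from FrogDB:

```yaml
groups:
  - name: frogdb                       # hypothetical rule group
    rules:
      - alert: FrogDBMemoryHigh        # hypothetical alert name
        expr: frogdb_memory_used_bytes / frogdb_memory_maxmemory_bytes > 0.8
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "FrogDB memory above 80% of maxmemory for 5 minutes"
```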

FrogDB supports OpenTelemetry (OTLP):

[tracing]
enabled = true
otlp-endpoint = "http://localhost:4317"
sampling-rate = 0.1
service-name = "frogdb"
Health check endpoints:

| Endpoint | Response |
| --- | --- |
| GET /health/live | 200 OK if the process is running |
| GET /health/ready | 200 OK if accepting commands |
| PING (Redis protocol) | PONG if healthy |

Kubernetes probes:

livenessProbe:
  httpGet:
    path: /health/live
    port: 9090
  initialDelaySeconds: 5
readinessProbe:
  httpGet:
    path: /health/ready
    port: 9090
  initialDelaySeconds: 5

Comprehensive server and shard status:

STATUS JSON
GET /status/json # HTTP endpoint

Per-shard traffic analysis to detect imbalanced key distribution:

DEBUG HOTSHARDS [PERIOD <seconds>]
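
The max/avg ratio from the alert table above can also be computed client-side from per-shard key counts. A minimal sketch; the sample counts are made up, and how you obtain them (e.g. from `DEBUG HOTSHARDS` output) is up to your client:

```python
def hot_shard_ratio(shard_keys):
    """Return max(keys) / avg(keys) across shards; > 2.0 suggests a hot shard."""
    if not shard_keys:
        raise ValueError("no shard data")
    avg = sum(shard_keys) / len(shard_keys)
    return max(shard_keys) / avg

# Hypothetical per-shard key counts:
counts = [10_000, 9_800, 31_000, 10_200]
ratio = hot_shard_ratio(counts)
print(f"imbalance ratio: {ratio:.2f}, hot shard: {ratio > 2.0}")
```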

Standard Redis INFO sections plus FrogDB extensions:

INFO server|clients|memory|persistence|stats|replication|cpu|keyspace|hotshards|frogdb
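
Since INFO output follows the standard Redis text format (`# Section` headers followed by `key:value` lines), it can be parsed without a client library. A sketch; the `frogdb` section fields shown are hypothetical placeholders, not documented field names:

```python
def parse_info(raw):
    """Parse Redis-style INFO text into {section: {key: value}}."""
    sections, current = {}, None
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            # Section header, e.g. "# Memory"
            current = line.lstrip("# ").lower()
            sections[current] = {}
        elif ":" in line and current is not None:
            key, _, value = line.partition(":")
            sections[current][key] = value
    return sections

# Hypothetical FrogDB extension section; field names are illustrative only.
sample = "# Frogdb\nshard_count:8\nhot_shard_ratio:1.12\n"
info = parse_info(sample)
print(info["frogdb"]["shard_count"])  # values are returned as strings
```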