
Consistency Model

Consistency guarantees for single-node operation, clustered deployments, and transactions.

A client sees its own writes on the same connection to the same node, subject to failover limitations.

| Scenario | Read-Your-Writes |
| --- | --- |
| Same connection, no failover | Guaranteed |
| Reconnect to same node | Guaranteed (data persisted) |
| Failover to replica (async) | Not guaranteed: may lose unreplicated writes |
| Failover to replica (sync) | Guaranteed if write was acknowledged |

Mitigation: For critical writes that must survive failover, use min_replicas_to_write = 1.

Once a client reads a value, it will never see an older value for that key on the same connection.

All operations on a single key are totally ordered. Concurrent writes from different clients are serialized.

Within a single internal shard, operations are linearizable.

Multi-key commands spanning multiple hash slots are rejected with a CROSSSLOT error before execution. This matches Redis Cluster behavior.

Using hash tags: wrap the common part of the key in {tag} to colocate keys on the same slot:

MSET {user:123}name Alice {user:123}email alice@example.com
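The colocation above can be checked offline. FrogDB matches Redis Cluster behavior here, where the slot is CRC16(key) mod 16384, and only the first non-empty {tag} substring is hashed when one is present. A minimal sketch:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the variant Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Slot = CRC16 of the key (or of its first non-empty {tag}) mod 16384."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end > start + 1:   # only a non-empty tag counts
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Both keys hash on "user:123", so they land in the same slot and
# a multi-key command on them will not raise CROSSSLOT.
print(hash_slot("{user:123}name") == hash_slot("{user:123}email"))  # True
```

Keys without a hash tag are hashed whole, so `user:123:name` and `user:123:email` will usually land in different slots.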

In standalone mode only, FrogDB optionally supports atomic cross-shard operations via VLL:

| Mode | Cross-Shard Behavior | Atomicity |
| --- | --- | --- |
| allow_cross_slot_standalone = false (default) | CROSSSLOT error | N/A |
| allow_cross_slot_standalone = true | VLL coordination | Atomic |
| Cluster mode (any setting) | CROSSSLOT error | N/A |

The durability mode determines when a write is acknowledged and what is lost on a crash:

| Mode | Write Acknowledged When | Data at Risk on Crash |
| --- | --- | --- |
| async | Written to memory | All unflushed writes |
| periodic | Written to memory | Up to one sync interval |
| sync | fsync completes | None |
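The trade-off between the three modes can be sketched as a toy write-ahead log writer. Names like `WalWriter` and `sync_interval` are illustrative, not FrogDB's actual internals:

```python
import os
import time

class WalWriter:
    """Toy WAL illustrating the async / periodic / sync durability modes."""

    def __init__(self, path, mode="async", sync_interval=1.0):
        self.f = open(path, "ab")
        self.mode = mode                    # "async" | "periodic" | "sync"
        self.sync_interval = sync_interval
        self.last_sync = time.monotonic()
        self.written = 0                    # bytes handed to the OS
        self.synced = 0                     # bytes known to be on disk

    def append(self, record: bytes):
        self.f.write(record)
        self.written += len(record)
        if self.mode == "sync":
            self._fsync()                   # ack only after fsync: nothing at risk
        elif self.mode == "periodic":
            if time.monotonic() - self.last_sync >= self.sync_interval:
                self._fsync()               # at risk: up to one sync interval
        # "async": acknowledge immediately; all unflushed writes at risk

    def _fsync(self):
        self.f.flush()
        os.fsync(self.f.fileno())
        self.synced = self.written
        self.last_sync = time.monotonic()

    def bytes_at_risk(self) -> int:
        return self.written - self.synced
```

In `sync` mode `bytes_at_risk()` is 0 after every append; in `async` mode it grows until the next flush, which is exactly the window the table above describes.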

  • Asynchronous replication by default
  • Replicas receive WAL stream from primary
  • Typical lag: milliseconds

Acknowledged writes eventually appear on all replicas. During failover, writes may be permanently lost (not just delayed):

| Scenario | Data Fate |
| --- | --- |
| Write replicated before failover | Preserved on new primary |
| Write acknowledged but not replicated | Lost, bounded by replication lag |
| Write to old primary during split-brain | Discarded |
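The "bounded by replication lag" row can be made concrete by comparing log positions. Offset-based accounting here is an illustration; FrogDB may track lag differently:

```python
def writes_at_risk(primary_offset: int, replica_offsets: list[int]) -> int:
    """Bytes acknowledged on the primary but present on no replica.

    With async replication, this is the minimum data loss if the primary
    crashes now and failover promotes the most caught-up replica.
    """
    most_caught_up = max(replica_offsets, default=0)
    return primary_offset - most_caught_up

# Primary at offset 100; replicas at 90 and 95. Promoting the replica
# at offset 95 loses the last 5 bytes of acknowledged writes.
print(writes_at_risk(100, [90, 95]))  # 5
```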

During failover, there is a window (up to fencing_timeout_ms, default 10s) in which both the old and the new primary may accept writes. The old primary's divergent writes are discarded once it receives the topology update demoting it.

Reducing Split-Brain Risk:

  1. Lower fencing_timeout_ms
  2. Use min_replicas_to_write = 1
  3. Client-side epoch validation
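Item 3 can be sketched as a client-side check: the client remembers the highest topology epoch it has seen and rejects acknowledgements from any node reporting an older one, which narrows the split-brain window from the client side. The epoch field and the helper names here are assumptions, not FrogDB's wire format:

```python
class EpochValidator:
    """Reject acknowledgements from a node whose topology epoch went backwards."""

    def __init__(self):
        self.highest_epoch = 0

    def accept(self, node_epoch: int) -> bool:
        if node_epoch < self.highest_epoch:
            # Node is behind a topology we have already seen:
            # likely a demoted primary still accepting writes.
            return False
        self.highest_epoch = node_epoch
        return True

v = EpochValidator()
v.accept(5)          # new primary reports epoch 5
print(v.accept(4))   # old primary still at epoch 4: rejected -> False
```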
What FrogDB does not guarantee across nodes:

  • Linearizability: cross-node operations are not linearizable
  • Snapshot isolation: no point-in-time consistency across keys
  • Causal consistency: causally related operations may be seen out of order by different clients

Guarantees:

  • All commands execute or none execute
  • Commands execute in order
  • No interleaving with other clients’ commands on same keys

Limitations:

  • Keys must hash to same shard (use hash tags)
  • Cross-shard transactions not supported

Each transaction is written as a single RocksDB WriteBatch, so it is atomic at the storage level.

| Durability Mode | Transaction Behavior |
| --- | --- |
| async | EXEC returns immediately; may be lost on crash |
| periodic | EXEC returns immediately; durable at next sync |
| sync | EXEC blocks until fsync, then returns |

WATCH provides optimistic locking. EXEC fails if watched key modified by another client. Watched keys must be on the same internal shard as transaction keys.
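The WATCH/EXEC interaction can be modeled with per-key version counters. This is a simulation of the semantics, not FrogDB's implementation:

```python
class Store:
    """In-memory model: each SET bumps a per-key version, as WATCH tracking would."""

    def __init__(self):
        self.data, self.versions = {}, {}

    def set(self, key, value):
        self.data[key] = value
        self.versions[key] = self.versions.get(key, 0) + 1

    def watch(self, *keys):
        # WATCH takes a snapshot of the watched keys' versions.
        return {k: self.versions.get(k, 0) for k in keys}

    def exec(self, snapshot, commands):
        # EXEC aborts (returns None, like a nil reply) if any watched key changed.
        if any(self.versions.get(k, 0) != v for k, v in snapshot.items()):
            return None
        for key, value in commands:
            self.set(key, value)
        return "OK"

store = Store()
snap = store.watch("balance")
store.set("balance", 90)                    # another client touches the watched key
print(store.exec(snap, [("balance", 80)]))  # None: transaction aborted
```

A client would typically wrap this in a retry loop: WATCH, read, queue commands, EXEC, and start over whenever EXEC reports an abort.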


Reads from the primary:

  • Strongly consistent, always sees the latest committed writes

Reads from replicas:

  • May return an older value than the primary
  • Staleness bounded by replication lag

Command ordering on a single connection:

  • Operations execute in the order sent
  • Pipelining preserves order
  • Responses return in order
  • No ordering guarantee between different clients

Pub/Sub:

  • Messages delivered in publish order per channel
  • No ordering across channels
  • At-most-once delivery (messages may be lost on reconnect)