KUMA
Entity-Native Storage Engine for the BEAM v0.1
One entity. One actor. One writer. No conflicts. Ever.
Event Sourced
Every state change is an immutable event. Full audit trail. Time-travel debugging. Replay from any point.
Actor Per Entity
Each entity gets its own OTP process. Single writer guarantees zero conflicts. Pure message passing.
SQLite Shards
Entities shard across multiple SQLite databases by consistent hashing. Parallel writes. No bottleneck.
GDPR Native
Entity-scoped storage makes right-to-erasure trivial. Delete one entity, all events gone. Crypto-shredding built in.
Clustering
Built on BEAM distribution. Entities auto-migrate between nodes. Self-healing. Scale by adding nodes.
Projections
Subscribe to event streams. Build read-optimized views. Eventual consistency by design. Rebuild anytime.
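Event sourcing and projections boil down to one primitive: current state is a fold over an immutable log. A minimal Python sketch (illustrative only; the event and state shapes are invented, not Kuma's API):

```python
from dataclasses import dataclass

# Hypothetical event types -- any immutable record works.
@dataclass(frozen=True)
class UserCreated:
    name: str
    role: str

@dataclass(frozen=True)
class RoleChanged:
    role: str

def apply(state: dict, event) -> dict:
    """Pure transition function: state + event -> new state."""
    if isinstance(event, UserCreated):
        return {"name": event.name, "role": event.role}
    if isinstance(event, RoleChanged):
        return {**state, "role": event.role}
    return state

def replay(events) -> dict:
    """Current state (or any projection) is a fold over the log."""
    state: dict = {}
    for e in events:
        state = apply(state, e)
    return state

log = [UserCreated("Dwighson", "architect"), RoleChanged("cto")]
current = replay(log)       # full replay -> latest state
snapshot = replay(log[:1])  # time-travel: state after the first event
```

Because `apply` is pure and the log is append-only, a projection can be rebuilt from scratch at any time by replaying, which is exactly the "rebuild anytime" property above.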
import kuma
import kuma/entity

pub fn main() {
  // Boot the storage engine
  let store = kuma.start(
    "./data",
    shards: 16,
  )

  // Spawn an entity actor
  let user = entity.spawn(
    store,
    id: "user_0x9A4F",
  )

  // Append events (single writer)
  entity.append(user, UserCreated(
    name: "Dwighson",
    role: "architect",
  ))

  // Read current state (<1ms)
  let state = entity.read(user)
  // => User("Dwighson", "architect")
}

You have a running 3-node cluster. You want to add 2 more nodes. Same task, three databases:
// You have a 3-node cluster.
// Add nodes 4 and 5.
//
// On each NEW node, point it
// at any existing peer:
kuma.start(KumaOptions(
  context: "payments",
  shards: 16,
  data_dir: "/data",
  cluster: Clustered(
    seed_nodes: ["kuma@10.0.0.1"],
  ),
))
// That's it. The new nodes
// join, the consistent-hash
// ring updates, 1/N of
// entities migrate live.
// Zero downtime, no repair,
// no nodetool, no certs.

# Cassandra: on EACH new node, edit
# /etc/cassandra/cassandra.yaml:
cluster_name: payments
listen_address: 10.0.0.4
rpc_address: 10.0.0.4
seed_provider:
  - parameters:
      - seeds: "10.0.0.1,10.0.0.2"
auto_bootstrap: true
# Bring node up — ONE AT A TIME
# (parallel = token collision)
systemctl start cassandra
nodetool status  # wait for UN (Up/Normal)
# Repeat for node 5, then:
nodetool cleanup  # on every OLD node
# Hours of streaming.
# Pray no compaction storm.
# If RF needs bumping:
# ALTER KEYSPACE payments ...
# nodetool repair -full

# CockroachDB: provision certs for new nodes
cockroach cert create-node \
10.0.0.4 localhost \
--certs-dir=certs \
--ca-key=ca.key
cockroach cert create-node \
10.0.0.5 localhost \
--certs-dir=certs \
--ca-key=ca.key
# Copy certs to each new node,
# then on each:
cockroach start \
--certs-dir=certs \
--advertise-addr=10.0.0.4 \
--join=10.0.0.1,10.0.0.2,10.0.0.3 \
--cache=.25 --max-sql-memory=.25
# Cluster auto-rebalances ranges.
# Watch it for hours:
cockroach node status \
--certs-dir=certs
# Re-check zone configs,
# lease preferences, and pay
# the ~2ms Raft tax per write

Why it's this simple: Kuma has no Raft, no Paxos, no quorum, no replication factor. Each entity has exactly one writer (its actor), and entities are routed deterministically by consistent hashing. Adding a node means reshuffling 1/N of entities, handled by kuma.migrate() with zero downtime. There's no consistency level to pick because there's no concurrent writer to disagree with.
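The live-migration claim is a direct property of consistent hashing: when a node joins the ring, only the keys that now hash to it move. A small Python model (node names reused from the example above; the vnode count and hash choice are illustrative, not Kuma internals):

```python
import hashlib
from bisect import bisect_right

VNODES = 256  # virtual nodes per physical node, smooths the ring

def _h(s: str) -> int:
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

def build_ring(nodes):
    # Each node claims VNODES points on the hash ring.
    return sorted((_h(f"{n}#{v}"), n) for n in nodes for v in range(VNODES))

def owner(ring, entity_id: str) -> str:
    # An entity belongs to the first ring point clockwise of its hash.
    keys = [h for h, _ in ring]
    i = bisect_right(keys, _h(entity_id)) % len(ring)
    return ring[i][1]

nodes = ["kuma@10.0.0.1", "kuma@10.0.0.2", "kuma@10.0.0.3"]
before = build_ring(nodes)
after = build_ring(nodes + ["kuma@10.0.0.4"])

entities = [f"user_{i}" for i in range(10_000)]
moved = sum(owner(before, e) != owner(after, e) for e in entities)
frac = moved / len(entities)  # roughly 1/4 when going from 3 to 4 nodes
```

Only entities whose successor point changed move, and every one of them moves to the new node, so going from N to N+1 nodes relocates about 1/(N+1) of the keyspace and nothing else.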
Real-Time Systems
Chat applications, live collaboration, multiplayer game state. Each user is an entity with instant reads.
Financial Ledgers
Account balances, transaction history, audit compliance. Immutable event log with crypto-shredding for privacy.
IoT & Edge
Device state management at scale. Each sensor is an entity. SQLite shards keep data local and fast.
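Crypto-shredding, as mentioned under GDPR and Financial Ledgers, means each entity's events are encrypted under a per-entity key stored outside the log; deleting the key erases the data. A toy Python sketch (XOR keystream for brevity; a real implementation would use an AEAD cipher such as AES-GCM, and every name here is invented):

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Toy counter-mode keystream from SHA-256 -- illustration only,
    # NOT production crypto.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

# One key per entity, kept separately from the event log.
entity_keys = {"user_0x9A4F": secrets.token_bytes(32)}
event = b'{"type":"UserCreated","name":"Dwighson"}'
stored = encrypt(entity_keys["user_0x9A4F"], event)

# Right to erasure: drop the key, and every stored event for that
# entity becomes unrecoverable ciphertext ("crypto-shredding").
del entity_keys["user_0x9A4F"]
```

The event log itself never needs rewriting: erasure is a single key deletion, which is what makes the right-to-erasure path trivial even across backups.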
Sustained 1M-Event Write Benchmark
8-core M-series · 16GB RAM · APFS SSD · 1,000,000 events · 200 concurrent writers · batch=100 · median of 3 runs
Key insight: Kuma is 2× faster than Cassandra and 3.5× faster than CockroachDB on a single laptop — same hardware, same workload, no cherry-picking. Three architectural decisions get us there: in-process writes (no TCP, no serialization), no distributed consensus on the write path (one actor per entity, no Raft/Paxos), and sharded SQLite with dedicated per-shard writers. Cockroach pays a ~2ms Raft tax per write, even on a single node.
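Back-of-envelope arithmetic, assuming the figures above, shows why the write path dominates: a writer that pays ~2 ms of consensus per write serializes at ~500 writes/s, while 63K ev/s spread across 16 dedicated shard writers leaves roughly 250 µs of in-process work per event:

```python
# A single serialized writer paying ~2 ms of consensus per write:
consensus_latency_s = 0.002
serialized_cap = 1 / consensus_latency_s     # 500 writes/s

# Kuma's benchmarked per-node rate across its dedicated shard writers:
kuma_node_rate = 63_000                      # ev/s/node
shards = 16
per_shard = kuma_node_rate / shards          # ev/s per shard writer
budget_us = 1e6 / per_shard                  # µs of work per event
```

The gap between a 2 ms consensus round trip and a ~250 µs in-process append is the architectural headroom the benchmark is measuring.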
Horizontal Scaling
Single-writer-per-entity means zero coordination on the write path. Adding nodes is linear — no Raft quorum tax, no ring rebalancing storms.
| Throughput | Writes/day | Kuma | Cassandra | Cockroach |
|---|---|---|---|---|
| 50K ev/s | 4.3B | 1 node $3.70/day | 3 nodes * $11.10/day | 3 nodes * $11.10/day |
| 100K ev/s | 8.6B | 2 nodes $7.40/day | 4 nodes $14.80/day | 6 nodes $22.20/day |
| 250K ev/s | 21.6B | 4 nodes $14.80/day | 9 nodes $33.30/day | 14 nodes $51.80/day |
| 500K ev/s | 43.2B | 8 nodes $29.60/day | 17 nodes $62.90/day | 28 nodes $103.60/day |
| 1M ev/s | 86.4B | 16 nodes $59.20/day | 34 nodes $125.80/day | 56 nodes $207.20/day |
Throughput math: Kuma ~63K ev/s/node · Cassandra ~30K ev/s/node · Cockroach ~18K ev/s/node, scaled linearly from the same-hardware benchmark above. * Cassandra and Cockroach require a 3-node minimum cluster for any production deployment (RF=3, quorum reads). At very large clusters (20+ nodes) Cassandra/Cockroach amortize overhead and approach ~100K / ~83K ev/s/node in published cluster benchmarks — narrowing the gap, but Kuma stays linear from one node.
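The node counts above follow from ceiling division over the per-node rates; a sketch in Python (the $3.70/node/day figure and the 3-node minimums are taken from the table and its footnote):

```python
import math

RATE = {"Kuma": 63_000, "Cassandra": 30_000, "Cockroach": 18_000}  # ev/s/node
COST_PER_NODE_DAY = 3.70   # USD/day per node, from the table above
MIN_NODES = {"Kuma": 1, "Cassandra": 3, "Cockroach": 3}  # RF=3 floor

def sizing(target_ev_s: int) -> dict:
    """Nodes needed and USD/day for each database at a target rate."""
    rows = {}
    for db, rate in RATE.items():
        nodes = max(MIN_NODES[db], math.ceil(target_ev_s / rate))
        rows[db] = (nodes, round(nodes * COST_PER_NODE_DAY, 2))
    return rows

row_250k = sizing(250_000)   # reproduces the 250K ev/s table row
row_1m = sizing(1_000_000)   # reproduces the 1M ev/s table row
```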
| Throughput | Kuma (Hetzner) | Spanner | Cockroach Cloud | DynamoDB | Keyspaces | Firestore |
|---|---|---|---|---|---|---|
| 50K ev/s | $3.70 | $108 | $365 | $5,400 | $6,264 | $7,776 |
| 100K ev/s | $7.40 | $216 | $730 | $10,800 | $12,528 | $15,552 |
| 250K ev/s | $14.80 | $540 | $1,825 | $27,000 | $31,320 | $38,880 |
| 500K ev/s | $29.60 | $1,080 | $3,650 | $54,000 | $62,640 | $77,760 |
| 1M ev/s | $59.20 | $2,160 | $7,300 | $108,000 | $125,280 | $155,520 |
All values USD/day. Per-write rates: DynamoDB on-demand $1.25/M WRU, Keyspaces $1.45/M WRU, Firestore $1.80/M (us-east-1, ≤1KB writes, list price Apr 2026). Spanner sized at ~10K writes/sec/node × $0.90/node-hr. Cockroach Cloud Dedicated estimated at ~$0.30/vCPU-hr × cluster size. At 1M ev/s, Kuma costs less in a year ($21,608) than Firestore costs in a single day ($155,520).
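The cloud columns reproduce by straight multiplication from the per-write rates cited above; a quick check in Python at the 1M ev/s row:

```python
EV_PER_DAY = 1_000_000 * 86_400      # 1M ev/s sustained = 86.4B writes/day

PER_MILLION_WRITES = {               # USD per 1M writes, list prices above
    "DynamoDB": 1.25,
    "Keyspaces": 1.45,
    "Firestore": 1.80,
}
daily = {db: EV_PER_DAY / 1e6 * rate for db, rate in PER_MILLION_WRITES.items()}

# Spanner sized by node count: ~10K writes/s/node at $0.90/node-hr.
spanner_daily = (1_000_000 // 10_000) * 0.90 * 24

# Kuma's full year at $59.20/day, vs. one day of Firestore.
kuma_yearly = round(59.20 * 365)
```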