ENTITY.PROTOCOL // 0xBEAM

KUMA

Entity-Native Storage Engine for the BEAM v0.1

One entity. One actor. One writer. No conflicts. Ever.

CLUSTER.MONITOR // 0xLIVE

[Live dashboard — events/sec · active entities · shards online (48/48) · avg latency · memory/node · cluster nodes (3/3: node-1, node-2, node-3) · uptime]
CORE.MODULES // 0xFEAT

Event Sourced

Every state change is an immutable event. Full audit trail. Time-travel debugging. Replay from any point.
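
State reconstruction from an event log is a left fold over immutable events. A minimal Python sketch (event shapes are illustrative, not Kuma's actual format):

```python
from functools import reduce

# A toy event log for one entity (shapes illustrative, not Kuma's format).
events = [
    {"type": "UserCreated", "name": "Dwighson", "role": "architect"},
    {"type": "RoleChanged", "role": "admin"},
    {"type": "NameChanged", "name": "D. Wighson"},
]

def apply(state, event):
    """Pure transition: old state + event -> new state. Events are never
    mutated, so any historical state can be recomputed on demand."""
    if event["type"] == "UserCreated":
        return {"name": event["name"], "role": event["role"]}
    if event["type"] == "RoleChanged":
        return {**state, "role": event["role"]}
    if event["type"] == "NameChanged":
        return {**state, "name": event["name"]}
    return state

# Current state is a fold over the full log...
current = reduce(apply, events, None)
# ...and time-travel debugging is the same fold over a prefix of the log.
as_of_event_2 = reduce(apply, events[:2], None)
```

Replaying a prefix yields the exact state at that point in history, which is what makes time-travel debugging and audit trails cheap.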

Actor Per Entity

Each entity gets its own OTP process. Single writer guarantees zero conflicts. Pure message passing.
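
The single-writer pattern can be sketched with an OS thread and a mailbox queue standing in for an OTP process and its mailbox (a Python illustration of the pattern, not Kuma's implementation):

```python
import queue
import threading

class EntityActor:
    """One actor per entity: a single thread owns the state and is the
    only writer, so appends need no locks and can never conflict."""

    def __init__(self):
        self.mailbox = queue.Queue()
        self.log = []  # only ever touched by the actor thread
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:                  # shutdown sentinel
                return
            kind, payload = msg
            if kind == "append":
                self.log.append(payload)     # single writer: no races
            elif kind == "read":
                payload.put(list(self.log))  # reply via a one-shot queue

    def append(self, event):
        self.mailbox.put(("append", event))

    def read(self):
        reply = queue.Queue()
        self.mailbox.put(("read", reply))
        return reply.get()

actor = EntityActor()
# Clients may send concurrently; the mailbox serializes everything.
for i in range(100):
    actor.append({"seq": i})
events = actor.read()
```

Because every mutation flows through one mailbox, ordering is total per entity and there is no write conflict to resolve.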

SQLite Shards

Entities shard across multiple SQLite databases by consistent hashing. Parallel writes. No bottleneck.
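
Deterministic shard routing can be sketched in a few lines (the file layout and hash choice here are assumptions for illustration, not Kuma's internals):

```python
import hashlib

NUM_SHARDS = 16  # e.g. the `shards: 16` option from the code sample

def shard_for(entity_id: str) -> str:
    """Stable hash -> shard index -> SQLite file. No lookup table needed:
    routing is a pure function of the entity id."""
    digest = hashlib.sha256(entity_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % NUM_SHARDS
    return f"./data/shard_{index:02d}.db"

# The same entity always routes to the same file, while different
# entities spread across all shards, enabling parallel writes.
files = {shard_for(f"user_{i}") for i in range(1_000)}
```

Each shard file gets its own dedicated writer, so write throughput scales with the shard count rather than bottlenecking on one database.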

GDPR Native

Entity-scoped storage makes right-to-erasure trivial. Delete one entity, all events gone. Crypto-shredding built in.
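
The crypto-shredding idea: encrypt each entity's events under a per-entity key, and erasure is just deleting the key. A toy Python sketch (the XOR keystream here is for illustration only — a real implementation would use an authenticated cipher such as AES-GCM):

```python
import hashlib
import os

keys = {}    # per-entity encryption keys (the only copy)
stored = {}  # encrypted events at rest

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream for illustration only; not production crypto.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def append_event(entity_id: str, event: bytes):
    key = keys.setdefault(entity_id, os.urandom(32))
    ct = bytes(a ^ b for a, b in zip(event, _keystream(key, len(event))))
    stored.setdefault(entity_id, []).append(ct)

def read_events(entity_id: str):
    key = keys[entity_id]
    return [bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))
            for ct in stored[entity_id]]

def erase(entity_id: str):
    # Right to erasure: drop the key and every stored event becomes
    # unrecoverable noise, even in backups that keep the ciphertext.
    del keys[entity_id]

append_event("user_1", b'{"type":"UserCreated"}')
assert read_events("user_1") == [b'{"type":"UserCreated"}']
erase("user_1")
```

Deleting one entity's key erases exactly that entity's history, which is what makes entity-scoped storage a natural fit for right-to-erasure requests.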

Clustering

Built on BEAM distribution. Entities auto-migrate between nodes. Self-healing. Scale by adding nodes.

Projections

Subscribe to event streams. Build read-optimized views. Eventual consistency by design. Rebuild anytime.
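
A projection is just another fold: from the event stream into a read-optimized view. A minimal sketch (event shapes illustrative):

```python
from collections import Counter

# A shared event stream (shapes illustrative, not Kuma's format).
stream = [
    {"type": "UserCreated", "id": "u1", "role": "architect"},
    {"type": "UserCreated", "id": "u2", "role": "admin"},
    {"type": "RoleChanged", "id": "u1", "role": "admin"},
]

def project_roles(events):
    """Fold the stream into a read-optimized view: users per role.
    The view is derived state, so it can be dropped and rebuilt anytime."""
    roles = {}  # id -> current role
    for e in events:
        roles[e["id"]] = e["role"]
    return Counter(roles.values())

view = project_roles(stream)
rebuilt = project_roles(stream)  # replay is deterministic
```

Because the log is the source of truth, a buggy or schema-changed projection is fixed by rewriting the fold and replaying, never by migrating data in place.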

CODE.SAMPLE // 0xGLEAM
app.gleam
import kuma
import kuma/entity

pub type User {
  User(name: String, role: String)
}

pub type UserEvent {
  UserCreated(name: String, role: String)
}

pub fn main() {
  // Boot the storage engine
  let store = kuma.start(
    "./data",
    shards: 16,
  )

  // Spawn an entity actor
  let user = entity.spawn(
    store,
    id: "user_0x9A4F",
  )

  // Append events (single writer)
  entity.append(user, UserCreated(
    name: "Dwighson",
    role: "architect",
  ))

  // Read current state (<1ms)
  let state = entity.read(user)
  // => User("Dwighson", "architect")
}
ARCHITECTURE // 0xMAP

[Interactive architecture diagram]
SCALE.OUT // 0xCLUSTER

You have a running 3-node cluster. You want to add 2 more nodes. Same task, three databases:

kuma.gleam 1 call
// You have a 3-node cluster.
// Add nodes 4 and 5.
//
// On each NEW node, point it
// at any existing peer:

kuma.start(KumaOptions(
  context: "payments",
  shards: 16,
  data_dir: "/data",
  cluster: Clustered(
    seed_nodes: ["kuma@10.0.0.1"],
  ),
))

// That's it. The new nodes
// join, the consistent-hash
// ring updates, 1/N of
// entities migrate live.
// Zero downtime, no repair,
// no nodetool, no certs.
cassandra.yaml ~30 lines + ops
# On EACH new node, edit
# /etc/cassandra/cassandra.yaml:
cluster_name: payments
listen_address: 10.0.0.4
rpc_address: 10.0.0.4
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.1,10.0.0.2"
auto_bootstrap: true

# Bring node up — ONE AT A TIME
# (parallel = token collision)
systemctl start cassandra
nodetool status     # wait UN

# Repeat for node 5, then:
nodetool cleanup    # on every
                    # OLD node
# Hours of streaming.
# Pray no compaction storm.

# If RF needs bumping:
# ALTER KEYSPACE payments ...
# nodetool repair -full
cockroach.sh 3 commands + tuning
# Provision certs for new nodes
cockroach cert create-node \
  10.0.0.4 localhost \
  --certs-dir=certs \
  --ca-key=ca.key

cockroach cert create-node \
  10.0.0.5 localhost \
  --certs-dir=certs \
  --ca-key=ca.key

# Copy certs to each new node,
# then on each:
cockroach start \
  --certs-dir=certs \
  --advertise-addr=10.0.0.4 \
  --join=10.0.0.1,10.0.0.2,10.0.0.3 \
  --cache=.25 --max-sql-memory=.25

# Cluster auto-rebalances ranges.
# Watch it for hours:
cockroach node status \
  --certs-dir=certs

# Re-check zone configs,
# lease preferences, and pay
# the ~2ms Raft tax per write

Why it's this simple: Kuma has no Raft, no Paxos, no quorum, no replication factor. Each entity has exactly one writer (its actor), and entities are routed deterministically by consistent hashing. Adding a node means reshuffling 1/N of entities — handled by kuma.migrate() with zero downtime. There's no consistency level to pick because there's no concurrent writer to disagree with.
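
The 1/N figure falls out of consistent hashing: a joining node claims points on the ring, and only keys whose successor is now one of those points move. A self-contained sketch (illustrative only; Kuma's internal ring is not public API):

```python
import bisect
import hashlib

def _h(s: str) -> int:
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

class Ring:
    """Minimal consistent-hash ring. Each node owns many virtual points
    so load spreads evenly around the ring."""
    def __init__(self, nodes, vnodes=256):
        self.points = sorted((_h(f"{n}#{v}"), n)
                             for n in nodes for v in range(vnodes))
        self.keys = [p for p, _ in self.points]

    def owner(self, entity_id: str) -> str:
        # First ring point clockwise of the entity's hash owns it.
        i = bisect.bisect(self.keys, _h(entity_id)) % len(self.keys)
        return self.points[i][1]

entities = [f"user_{i}" for i in range(10_000)]
before = Ring(["node-1", "node-2", "node-3"])
after = Ring(["node-1", "node-2", "node-3", "node-4"])

moved = [e for e in entities if before.owner(e) != after.owner(e)]
# Every moved entity lands on the NEW node; existing placements are stable.
assert all(after.owner(e) == "node-4" for e in moved)
# Roughly 1/4 of entities migrate (1/N for an N-node cluster).
share = len(moved) / len(entities)
```

Contrast with modulo hashing (`hash(id) % node_count`), where changing the node count remaps nearly every entity, not 1/N of them.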

USE.CASES // 0xAPPS

Real-Time Systems

Chat applications, live collaboration, multiplayer game state. Each user is an entity with instant reads.

Financial Ledgers

Account balances, transaction history, audit compliance. Immutable event log with crypto-shredding for privacy.

IoT & Edge

Device state management at scale. Each sensor is an entity. SQLite shards keep data local and fast.

BENCHMARKS // 0xPERF

Sustained 1M-Event Write Benchmark

8-core M-series · 16GB RAM · APFS SSD · 1,000,000 events · 200 concurrent writers · batch=100 · median of 3 runs

Kuma v0.1
63,000 events/sec
1M events in 16s · in-process · 1.00×
Cassandra 4.1
30,000 events/sec
1M events in 33s · single-node · 0.48×
CockroachDB v23.2
18,000 events/sec
1M events in 56s · single-node · 0.29×

Key insight: Kuma is 2× faster than Cassandra and 3.5× faster than CockroachDB on a single laptop — same hardware, same workload, no cherry-picking. Three architectural decisions get us there: in-process writes (no TCP, no serialization), no distributed consensus on the write path (one actor per entity, no Raft/Paxos), and sharded SQLite with dedicated per-shard writers. Cockroach pays a ~2ms Raft tax per write, even on a single node.

One €104/mo Hetzner AX102
~5.4B events/day · ~38B events/week sustained — without touching the CPU ceiling.
Same workload on managed services (5.4B writes/day, ~1KB)
Firestore ~$9,720/day
Amazon Keyspaces (Cassandra) ~$7,830/day
DynamoDB on-demand ~$6,750/day
CockroachDB Cloud Dedicated ~$460/day
Cloud Spanner ~$175/day
Kuma on Hetzner AX102 ~$3.70/day
List price, us-east-1 / us-central1, Apr 2026. Per-write rates: DynamoDB $1.25/M WRU · Keyspaces $1.45/M WRU · Firestore $1.80/M. Spanner / Cockroach are cluster-priced — sized for ~63K writes/s sustained.
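
The per-write figures above reduce to one multiplication; a sketch of the arithmetic (rates as quoted, all figures approximate):

```python
# Daily cost = (writes/day ÷ 1M) × list price per million writes.
writes_per_day = 5_400_000_000  # ~63K ev/s × 86,400 s, rounded
rate_per_million = {"DynamoDB": 1.25, "Keyspaces": 1.45, "Firestore": 1.80}

daily_cost = {db: writes_per_day / 1e6 * rate
              for db, rate in rate_per_million.items()}
# ≈ $6,750 / $7,830 / $9,720 per day — matching the figures above.
```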

Horizontal Scaling

Single-writer-per-entity means zero coordination on the write path. Adding nodes is linear — no Raft quorum tax, no ring rebalancing storms.

Self-hosted on Hetzner AX102 (€104/mo ≈ $3.70/node/day)
Throughput   Writes/day   Kuma                    Cassandra                Cockroach
50K ev/s     4.3B         1 node   · $3.70/day    3 nodes* · $11.10/day    3 nodes* · $11.10/day
100K ev/s    8.6B         2 nodes  · $7.40/day    4 nodes  · $14.80/day    6 nodes  · $22.20/day
250K ev/s    21.6B        4 nodes  · $14.80/day   9 nodes  · $33.30/day    14 nodes · $51.80/day
500K ev/s    43.2B        8 nodes  · $29.60/day   17 nodes · $62.90/day    28 nodes · $103.60/day
1M ev/s      86.4B        16 nodes · $59.20/day   34 nodes · $125.80/day   56 nodes · $207.20/day

Throughput math: Kuma ~63K ev/s/node · Cassandra ~30K ev/s/node · Cockroach ~18K ev/s/node, scaled linearly from the same-hardware benchmark above. * Cassandra and Cockroach require a 3-node minimum cluster for any production deployment (RF=3, quorum reads). At very large clusters (20+ nodes) Cassandra/Cockroach amortize overhead and approach ~100K / ~83K ev/s/node in published cluster benchmarks — narrowing the gap, but Kuma stays linear from one node.
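
The node counts above follow a single rule, sketched here (per-node rates from the same-hardware benchmark, with the 3-node production floor noted in the footnote):

```python
import math

# nodes = ceil(target throughput / per-node rate), with a 3-node floor
# for Cassandra/Cockroach (RF=3, quorum minimum).
PER_NODE = {"Kuma": 63_000, "Cassandra": 30_000, "Cockroach": 18_000}
MIN_NODES = {"Kuma": 1, "Cassandra": 3, "Cockroach": 3}
DOLLARS_PER_NODE_DAY = 3.70  # Hetzner AX102 at €104/mo

def nodes_needed(db: str, target_ev_s: int) -> int:
    return max(MIN_NODES[db], math.ceil(target_ev_s / PER_NODE[db]))

for target in (50_000, 100_000, 250_000, 500_000, 1_000_000):
    nodes = {db: nodes_needed(db, target) for db in PER_NODE}
    cost = {db: round(n * DOLLARS_PER_NODE_DAY, 2) for db, n in nodes.items()}
```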

Same throughput on managed services
Throughput   Kuma (Hetzner)   Spanner   Cockroach Cloud   DynamoDB   Keyspaces   Firestore
50K ev/s     $3.70            $108      $365              $5,400     $6,264      $7,776
100K ev/s    $7.40            $216      $730              $10,800    $12,528     $15,552
250K ev/s    $14.80           $540      $1,825            $27,000    $31,320     $38,880
500K ev/s    $29.60           $1,080    $3,650            $54,000    $62,640     $77,760
1M ev/s      $59.20           $2,160    $7,300            $108,000   $125,280    $155,520

All values USD/day. Per-write rates: DynamoDB on-demand $1.25/M WRU, Keyspaces $1.45/M WRU, Firestore $1.80/M (us-east-1, ≤1KB writes, list price Apr 2026). Spanner sized at ~10K writes/sec/node × $0.90/node-hr. Cockroach Cloud Dedicated estimated at ~$0.30/vCPU-hr × cluster size. At 1M ev/s, Kuma costs less in a year ($21,608) than Firestore costs in a single day ($155,520).

Full methodology and raw run-by-run numbers in docs/benchmarks.md · benchmark date 2026-04-08