src/sydra/alloc.zig
Purpose
Implements build-time allocator selection and provides a single `AllocatorHandle` abstraction for the rest of the runtime.
This module supports three allocator modes, selected via `build_options.allocator_mode` (a build wiring sketch follows the list):
- `default`: `std.heap.GeneralPurposeAllocator(.{})`
- `mimalloc`: a mimalloc-backed allocator vtable
- `small_pool`: a custom small-object allocator with:
  - an optional sharded slab allocator for small allocations (`slab_shard.zig`)
  - a mutex-protected bucket allocator for a fixed set of sizes
  - a fallback to a backing `GeneralPurposeAllocator` for oversize/aligned allocations
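For context, here is a hedged sketch of how these modes might be wired up in `build.zig`. The option names are assumptions; only `build_options.allocator_mode` and `build_options.allocator_shards` actually appear in this file.

```zig
// Assumed build.zig wiring: expose -Dallocator-mode / -Dallocator-shards and
// surface them to the source tree as the build_options module.
const std = @import("std");

pub fn build(b: *std.Build) void {
    const allocator_mode = b.option(
        []const u8,
        "allocator-mode",
        "default | mimalloc | small_pool",
    ) orelse "default";
    const allocator_shards = b.option(
        usize,
        "allocator-shards",
        "shard count for small_pool (0 disables sharding)",
    ) orelse 0;

    const options = b.addOptions();
    options.addOption([]const u8, "allocator_mode", allocator_mode);
    options.addOption(usize, "allocator_shards", allocator_shards);
    // ... attach `options` to the sydra module as "build_options" ...
}
```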
Public build flags
- `pub const mode = build_options.allocator_mode`
- `pub const is_mimalloc`: true when `mode` equals `"mimalloc"`
- `pub const is_small_pool`: true when `mode` equals `"small_pool"`

A `comptime` guard rejects unknown `allocator_mode` values at compile time.
AllocatorHandle (main integration point)
`pub const AllocatorHandle = ...`

`AllocatorHandle` is a compile-time-selected struct:
- If `is_small_pool`:
  - stores `pool: SmallPoolAllocator`
  - exposes: `init()`, `allocator() std.mem.Allocator`, `snapshotSmallPoolStats() SmallPoolAllocator.Stats`, `enterEpoch() ?u64`, `leaveEpoch(observed: u64) void`, `advanceEpoch() ?u64`, `deinit()`
- Else if `is_mimalloc`:
  - stores `mimalloc: MimallocAllocator`
  - exposes `init()`, `allocator()`, `deinit()` (a no-op)
- Else (`default`):
  - stores `gpa: std.heap.GeneralPurposeAllocator(.{})`
  - exposes `init()`, `allocator()`, `deinit()` (calls `gpa.deinit()`)
The rest of SydraDB typically receives a `*AllocatorHandle` and calls `handle.allocator()` to obtain a `std.mem.Allocator`, as in the sketch below.
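A minimal usage sketch: `runSubsystem` is a hypothetical consumer and the import path is illustrative; the handle API itself is as listed above.

```zig
const std = @import("std");
const alloc_mod = @import("alloc.zig");

// Hypothetical consumer: depends only on the handle, never on which
// allocator mode the build selected.
fn runSubsystem(handle: *alloc_mod.AllocatorHandle) !void {
    const a = handle.allocator();
    const buf = try a.alloc(u8, 256);
    defer a.free(buf);
    @memset(buf, 0);
}

pub fn main() !void {
    var handle = alloc_mod.AllocatorHandle.init();
    defer handle.deinit(); // per the API list above
    try runSubsystem(&handle);
}
```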
Mimalloc mode
When `allocator_mode == "mimalloc"`, `MimallocAllocator` implements a `std.mem.Allocator.VTable` backed by `mi_malloc_aligned`, `mi_realloc_aligned`, and `mi_free`.
Notes:
- `resizeFn` always returns `false` (in-place resize is unsupported); callers must remap/copy.
- `allocFn` and `remapFn` translate `std.mem.Alignment` to a byte count, using `1` for a zero-byte alignment.
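A condensed sketch of that wiring, assuming Zig 0.14-style `std.mem.Allocator.VTable` signatures and the C mimalloc API; the actual struct in alloc.zig may differ.

```zig
const std = @import("std");

extern fn mi_malloc_aligned(size: usize, alignment: usize) ?*anyopaque;
extern fn mi_realloc_aligned(p: ?*anyopaque, newsize: usize, alignment: usize) ?*anyopaque;
extern fn mi_free(p: ?*anyopaque) void;

pub const MimallocAllocator = struct {
    pub fn init() MimallocAllocator {
        return .{};
    }

    pub fn allocator(self: *MimallocAllocator) std.mem.Allocator {
        return .{ .ptr = self, .vtable = &vtable };
    }

    const vtable: std.mem.Allocator.VTable = .{
        .alloc = allocFn,
        .resize = resizeFn,
        .remap = remapFn,
        .free = freeFn,
    };

    // Translate std.mem.Alignment to bytes, substituting 1 for zero.
    fn byteAlign(alignment: std.mem.Alignment) usize {
        const bytes = alignment.toByteUnits();
        return if (bytes == 0) 1 else bytes;
    }

    fn allocFn(_: *anyopaque, len: usize, alignment: std.mem.Alignment, _: usize) ?[*]u8 {
        return @ptrCast(mi_malloc_aligned(len, byteAlign(alignment)));
    }

    // In-place resize unsupported; callers fall back to remap/copy.
    fn resizeFn(_: *anyopaque, _: []u8, _: std.mem.Alignment, _: usize, _: usize) bool {
        return false;
    }

    fn remapFn(_: *anyopaque, memory: []u8, alignment: std.mem.Alignment, new_len: usize, _: usize) ?[*]u8 {
        return @ptrCast(mi_realloc_aligned(memory.ptr, new_len, byteAlign(alignment)));
    }

    fn freeFn(_: *anyopaque, memory: []u8, _: std.mem.Alignment, _: usize) void {
        mi_free(memory.ptr);
    }
};
```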
Small pool mode (custom allocator)
High-level design
`SmallPoolAllocator` routes allocations through three strategies, in priority order (see the decision sketch after the list):

- Shard allocator (optional): `slab_shard.Shard` instances managed by a `ShardManager`.
- Bucket allocator: fixed size classes with per-bucket mutexes and slab refills.
- Fallback allocator: the backing `GeneralPurposeAllocator` for oversize or more strictly aligned allocations.
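A self-contained sketch of that priority order. `chooseStrategy` is illustrative (the real code inlines this decision); the constant values are taken from the list below.

```zig
const std = @import("std");

const Strategy = enum { shard, bucket, fallback };

// Assumed values for illustration; see "Key constants" below.
const bucket_sizes = [_]usize{ 16, 24, 32, 48, 64, 96, 128, 192, 256 };
const max_shard_size: usize = 256;

fn chooseStrategy(total_len: usize, alignment_bytes: usize, shards_enabled: bool) Strategy {
    // Stricter-than-default alignment always bypasses shards and buckets.
    if (alignment_bytes > @sizeOf(usize)) return .fallback;
    if (shards_enabled and total_len <= max_shard_size) return .shard;
    // First size class that fits wins.
    for (bucket_sizes) |class| {
        if (total_len <= class) return .bucket;
    }
    return .fallback;
}

test chooseStrategy {
    try std.testing.expectEqual(Strategy.shard, chooseStrategy(48, 8, true));
    try std.testing.expectEqual(Strategy.bucket, chooseStrategy(48, 8, false));
    try std.testing.expectEqual(Strategy.fallback, chooseStrategy(8192, 8, true));
}
```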
Key constants (small_pool)
- `default_alignment`: `@sizeOf(usize)` alignment
- `header_size`: `default_alignment.toByteUnits()`
  - bucket and shard allocators return `ptr + header_size`
- `slab_bytes: usize = 64 * 1024`
- `bucket_sizes = [16, 24, 32, 48, 64, 96, 128, 192, 256]`
- `fallback_bucket_bounds = [64, 128, 256, 512, 1024, 2048, 4096, 8192]`
- `pub const max_shard_size`: derived from the generated shard class table
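A tiny sketch of the header convention, under the assumption that the header merely offsets the user pointer (its contents are not documented here):

```zig
const std = @import("std");

const default_alignment = std.mem.Alignment.fromByteUnits(@sizeOf(usize));
const header_size: usize = default_alignment.toByteUnits();

// Bucket/shard allocations hand out raw + header_size; frees walk back to
// the raw block before returning it to a free list.
fn toUserPtr(raw: [*]u8) [*]u8 {
    return raw + header_size;
}

fn toRawPtr(user: [*]u8) [*]u8 {
    return user - header_size;
}
```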
ShardManager (optional sharded allocator)
`ShardManager` owns a `[]slab_shard.Shard` and provides:

- `currentShard()`, returning a per-thread shard:
  - uses a `threadlocal` `ThreadShardState` cache (`small_pool_tls_state`)
  - assigns shard indices via an atomic counter (round-robin)
- `freeLocal(ptr)`:
  - detects the owning shard via `slab_shard.Shard.owningShard(ptr)`
  - if the local shard owns it: calls `free`
  - otherwise: calls `freeDeferred`
- epoch helpers (reader-side usage sketched after the list):
  - `enterEpoch()` → `currentShard().currentEpoch()`
  - `leaveEpoch(observed)` → `currentShard().observeEpoch(observed)`
  - `advanceEpoch()` → `currentShard().advanceEpoch()`
- `collectGarbage()`, which calls `collectGarbage()` on all shards
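A hedged sketch of the protocol these helpers imply; `guardedRead` and `maintenanceTick` are hypothetical, and the import path is illustrative.

```zig
const std = @import("std");
const alloc_mod = @import("alloc.zig");

// Hypothetical reader: pin the current epoch so blocks freed by other
// threads stay deferred (freeDeferred) rather than reclaimed mid-read.
fn guardedRead(handle: *alloc_mod.AllocatorHandle, slot: *const u64) u64 {
    const observed = handle.enterEpoch(); // ?u64; null when shards are disabled
    defer if (observed) |e| handle.leaveEpoch(e);
    return slot.*;
}

// Hypothetical maintenance tick: advance the epoch so deferred frees become
// reclaimable by the next garbage collection pass.
fn maintenanceTick(handle: *alloc_mod.AllocatorHandle) void {
    _ = handle.advanceEpoch();
}
```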
Shard manager initialization is controlled by `build_options.allocator_shards`:

- `configured_shard_count > 0` → tries to create a manager
- failures are caught and result in "no shards" mode
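A sketch of that guarded initialization; `initShardManager` and the stub error set are assumptions standing in for the real `ShardManager` plumbing.

```zig
const std = @import("std");
const build_options = @import("build_options");

// Stub standing in for the real slab_shard-backed manager.
const ShardManager = struct {
    fn init(backing: std.mem.Allocator, count: usize) !ShardManager {
        _ = backing;
        if (count == 0) return error.InvalidShardCount;
        return .{};
    }
};

const configured_shard_count: usize = build_options.allocator_shards;

// Any init failure is swallowed: the pool simply runs in "no shards" mode.
fn initShardManager(backing: std.mem.Allocator) ?ShardManager {
    if (configured_shard_count == 0) return null;
    return ShardManager.init(backing, configured_shard_count) catch null;
}
```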
Bucket allocator
For sizes that fit a `bucket_sizes` class:

- `allocBucket` locks the bucket mutex, refills slabs when needed, and pops from a free list.
- `refillBucket` allocates a new slab from the backing allocator and builds a linked list of `FreeNode` blocks (see the sketch below).
- `freeSmall` linearly scans buckets, checking whether a pointer belongs to that bucket's slabs, and pushes the node back onto the free list.
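A self-contained sketch of the refill step; `buildFreeList` is an illustrative stand-in for the list-threading inside `refillBucket`, and the `FreeNode` layout is assumed.

```zig
const std = @import("std");

// Intrusive free-list node stored in each free block (assumed layout).
const FreeNode = struct { next: ?*FreeNode };

/// Carve a fresh slab into class_size-byte blocks and thread them into a
/// singly linked free list, returning the new head.
fn buildFreeList(slab: []align(@alignOf(FreeNode)) u8, class_size: usize) ?*FreeNode {
    std.debug.assert(class_size >= @sizeOf(FreeNode));
    var head: ?*FreeNode = null;
    var off: usize = 0;
    while (off + class_size <= slab.len) : (off += class_size) {
        const node: *FreeNode = @ptrCast(@alignCast(slab.ptr + off));
        node.next = head;
        head = node;
    }
    return head;
}

test buildFreeList {
    var storage: [64 * 1024]u8 align(@alignOf(FreeNode)) = undefined;
    var count: usize = 0;
    var node = buildFreeList(&storage, 64);
    while (node) |n| : (node = n.next) count += 1;
    try std.testing.expectEqual(@as(usize, 1024), count);
}
```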
The implementation tracks lock timing and contention:
- wait/hold times in nanoseconds
- acquisition count
- contention count (wait time over `lock_wait_threshold_ns`)
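An illustrative timing wrapper built on `std.time.Timer`; the `LockStats` layout and threshold value are assumptions.

```zig
const std = @import("std");

const LockStats = struct {
    wait_ns: u64 = 0,
    hold_ns: u64 = 0,
    acquisitions: u64 = 0,
    contended: u64 = 0,
};

const lock_wait_threshold_ns: u64 = 1_000; // assumed threshold

// Lock the bucket mutex, recording wait time and contention; returns a timer
// the caller reads on unlock to accumulate hold time.
fn lockTimed(mutex: *std.Thread.Mutex, stats: *LockStats) std.time.Timer {
    var timer = std.time.Timer.start() catch unreachable;
    mutex.lock();
    const waited = timer.read();
    stats.wait_ns += waited;
    stats.acquisitions += 1;
    if (waited > lock_wait_threshold_ns) stats.contended += 1;
    timer.reset();
    return timer;
}

fn unlockTimed(mutex: *std.Thread.Mutex, stats: *LockStats, timer: *std.time.Timer) void {
    stats.hold_ns += timer.read();
    mutex.unlock();
}
```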
Fallback tracking + stats
For allocations that bypass buckets/shards, the allocator updates counters:
- `fallback_allocs`, `fallback_frees`, `fallback_resizes`, `fallback_remaps`
- `fallback_sizes[...]`, counting allocation sizes binned by `fallback_bucket_bounds`
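A self-contained sketch of the binning, assuming one extra catch-all bin past the last bound:

```zig
const std = @import("std");

const fallback_bucket_bounds = [_]usize{ 64, 128, 256, 512, 1024, 2048, 4096, 8192 };

// One bin per bound, plus a final catch-all for anything larger.
var fallback_sizes = [_]u64{0} ** (fallback_bucket_bounds.len + 1);

fn recordFallbackSize(len: usize) void {
    for (fallback_bucket_bounds, 0..) |bound, i| {
        if (len <= bound) {
            fallback_sizes[i] += 1;
            return;
        }
    }
    fallback_sizes[fallback_bucket_bounds.len] += 1;
}

test recordFallbackSize {
    recordFallbackSize(100); // lands in the <=128 bin
    try std.testing.expectEqual(@as(u64, 1), fallback_sizes[1]);
}
```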
`pub const Stats` and `pub fn snapshotStats()` return a detailed snapshot including:

- per-bucket usage (`allocations`, `in_use`, `high_water`, `slabs`, free-list length)
- per-bucket lock stats
- fallback counters and size histogram
- shard manager summary (if enabled): shard count, alloc hit/miss counts, deferred totals, epoch info
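A hypothetical debug dump over the snapshot; the field names are taken from the counter list above and are otherwise unverified.

```zig
const std = @import("std");
const alloc_mod = @import("alloc.zig");

fn dumpAllocStats(handle: *alloc_mod.AllocatorHandle) void {
    // is_small_pool is comptime-known, so the branch compiles away in
    // other allocator modes.
    if (alloc_mod.is_small_pool) {
        const stats = handle.snapshotSmallPoolStats();
        std.debug.print("fallback: allocs={d} frees={d}\n", .{
            stats.fallback_allocs,
            stats.fallback_frees,
        });
    }
}
```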
Tests (small_pool only)
When built with `"small_pool"` mode, this file includes tests for:
- oversize allocations falling back to the backing allocator
- shard allocation hit tracking in stats
- per-thread shard assignment
- cross-thread frees being deferred and later reclaimed via epochs + garbage collection
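A condensed sketch of the first test's shape, written as it would appear inside this file; the real assertions and constants differ.

```zig
// Assumes a small_pool build; the real file compiles its tests only in
// that mode.
test "oversize allocations hit the fallback allocator" {
    var handle = AllocatorHandle.init();
    defer handle.deinit();
    const a = handle.allocator();

    // Far above max_shard_size and the largest bucket_sizes class.
    const big = try a.alloc(u8, 1 << 20);
    defer a.free(big);

    const stats = handle.snapshotSmallPoolStats();
    try std.testing.expect(stats.fallback_allocs >= 1);
}
```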
Code excerpts
```zig
const std = @import("std");
const build_options = @import("build_options");

const allocator_mode = build_options.allocator_mode;
const use_mimalloc = std.mem.eql(u8, allocator_mode, "mimalloc");
const use_small_pool = std.mem.eql(u8, allocator_mode, "small_pool");

pub const mode = allocator_mode;
pub const is_mimalloc = use_mimalloc;
pub const is_small_pool = use_small_pool;

comptime {
    if (!std.mem.eql(u8, allocator_mode, "default") and !use_mimalloc and !use_small_pool) {
        @compileError("unknown allocator-mode: " ++ allocator_mode);
    }
}

threadlocal var small_pool_tls_state: SmallPoolAllocator.ThreadShardState = .{};

pub const AllocatorHandle = if (use_small_pool) struct {
    pool: SmallPoolAllocator,

    pub fn init() AllocatorHandle {
        return .{ .pool = SmallPoolAllocator.init() };
    }

    pub fn allocator(self: *AllocatorHandle) std.mem.Allocator {
        return self.pool.allocator();
    }
} else if (use_mimalloc) struct {
    mimalloc: MimallocAllocator,

    pub fn init() AllocatorHandle {
        return .{ .mimalloc = MimallocAllocator.init() };
    }

    pub fn allocator(self: *AllocatorHandle) std.mem.Allocator {
        return self.mimalloc.allocator();
    }
} else struct {
    gpa: std.heap.GeneralPurposeAllocator(.{}),

    pub fn init() AllocatorHandle {
        return .{ .gpa = std.heap.GeneralPurposeAllocator(.{}){} };
    }

    pub fn allocator(self: *AllocatorHandle) std.mem.Allocator {
        return self.gpa.allocator();
    }
};
```