Performance Benchmarks
Benchmark Methodology
All benchmarks use a consistent methodology. The benchmark suite lives in src/benchmark/ in the repository and can be run locally to reproduce these results.
| Parameter | Value |
|---|---|
| Framework | Custom benchmark harness with warm-up iterations |
| Runtime | Node.js (latest LTS) with --expose-gc for memory measurements |
| Iterations | 100,000 serialization + deserialization cycles per library |
| Dataset | Identical test data generated once, shared across all libraries |
| Warm-up | 1,000 iterations discarded before measurement |
| GC | Forced between benchmarks to isolate memory effects |
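The harness structure can be sketched as follows. This is an illustrative reimplementation, not the code in src/benchmark/; the JSON.stringify workload and the counts stand in for the real serializers under test.

```typescript
// Illustrative harness: warm-up iterations run and are discarded, then
// the timed loop measures ops/sec for the supplied workload.
const bench = (name: string, fn: () => void, iterations = 100_000): number => {
  for (let i = 0; i < 1_000; i++) fn(); // warm-up (discarded)

  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedMs = performance.now() - start;

  const opsPerSec = iterations / (elapsedMs / 1000);
  console.log(`${name}: ${Math.round(opsPerSec)} ops/sec`);
  return opsPerSec;
};

// Stand-in workload; the real suite runs each library's serializer here.
const user = { id: 1, name: "Ada", tags: ["a", "b"] };
const opsPerSec = bench("JSON.stringify", () => {
  JSON.stringify(user);
});
```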
Test Data Structure
All benchmarks use a User object with mixed field types:
```typescript
import { faker } from "@faker-js/faker";

interface User {
  id: number;
  name: string;
  email: string;
  age: number;
  active: boolean;
  tags: string[];
  score: number;
  bio: string;
}

const generateUser = (): User => ({
  id: faker.number.int({ min: 1, max: 1000000 }),
  name: faker.person.fullName(),
  email: faker.internet.email(),
  age: faker.number.int({ min: 18, max: 99 }),
  active: faker.datatype.boolean(),
  tags: Array.from({ length: faker.number.int({ min: 1, max: 5 }) }, () =>
    faker.lorem.word(),
  ),
  score: faker.number.int({ min: 0, max: 10000 }),
  bio: faker.lorem.sentence(),
});
```
Serialization Performance
Results
| Library | Ops/sec | Relative | Avg Time |
|---|---|---|---|
| Sia | 1,850,000 | 1.0x (baseline) | 0.54 µs |
| Protobuf (protobufjs) | 720,000 | 2.6x slower | 1.39 µs |
| MessagePack (msgpackr) | 650,000 | 2.8x slower | 1.54 µs |
| JSON.stringify | 480,000 | 3.9x slower | 2.08 µs |
Implementations
Sia

```typescript
import { Sia } from "@timeleap/sia";

const serializeUser = (s: Sia, user: User): void => {
  s.addUInt32(user.id)
    .addString8(user.name)
    .addString8(user.email)
    .addUInt8(user.age)
    .addBool(user.active)
    .addArray8(user.tags, (s, tag) => s.addString8(tag))
    .addUInt16(user.score)
    .addString16(user.bio);
};

const sia = new Sia();

// Benchmark loop
for (let i = 0; i < iterations; i++) {
  sia.seek(0);
  serializeUser(sia, user);
  const bytes = sia.toUint8Array();
}
```
Protobuf

```typescript
import protobuf from "protobufjs";

const UserMessage = new protobuf.Type("User")
  .add(new protobuf.Field("id", 1, "uint32"))
  .add(new protobuf.Field("name", 2, "string"))
  .add(new protobuf.Field("email", 3, "string"))
  .add(new protobuf.Field("age", 4, "uint32"))
  .add(new protobuf.Field("active", 5, "bool"))
  .add(new protobuf.Field("tags", 6, "string", "repeated"))
  .add(new protobuf.Field("score", 7, "uint32"))
  .add(new protobuf.Field("bio", 8, "string"));

// Benchmark loop
for (let i = 0; i < iterations; i++) {
  const msg = UserMessage.create(user);
  const bytes = UserMessage.encode(msg).finish();
}
```
JSON

```typescript
const encoder = new TextEncoder();

// Benchmark loop
for (let i = 0; i < iterations; i++) {
  const json = JSON.stringify(user);
  const bytes = encoder.encode(json);
}
```
MessagePack

```typescript
import { pack } from "msgpackr";

// Benchmark loop
for (let i = 0; i < iterations; i++) {
  const bytes = pack(user);
}
```
Deserialization Performance
Results
| Library | Ops/sec | Relative | Avg Time |
|---|---|---|---|
| Sia | 2,100,000 | 1.0x (baseline) | 0.48 µs |
| MessagePack (msgpackr) | 890,000 | 2.4x slower | 1.12 µs |
| Protobuf (protobufjs) | 680,000 | 3.1x slower | 1.47 µs |
| JSON.parse | 520,000 | 4.0x slower | 1.92 µs |
Implementations
Sia

```typescript
const deserializeUser = (s: Sia): User => ({
  id: s.readUInt32(),
  name: s.readString8(),
  email: s.readString8(),
  age: s.readUInt8(),
  active: s.readBool(),
  tags: s.readArray8((s) => s.readString8()),
  score: s.readUInt16(),
  bio: s.readString16(),
});

const reader = new Sia(serializedBytes);

// Benchmark loop
for (let i = 0; i < iterations; i++) {
  reader.seek(0);
  const user = deserializeUser(reader);
}
```
Protobuf

```typescript
// Benchmark loop
for (let i = 0; i < iterations; i++) {
  const decoded = UserMessage.decode(serializedBytes);
  const user = UserMessage.toObject(decoded);
}
```
JSON

```typescript
const decoder = new TextDecoder();

// Benchmark loop
for (let i = 0; i < iterations; i++) {
  const json = decoder.decode(serializedBytes);
  const user = JSON.parse(json);
}
```
MessagePack

```typescript
import { unpack } from "msgpackr";

// Benchmark loop
for (let i = 0; i < iterations; i++) {
  const user = unpack(serializedBytes);
}
```
Output Size Comparison
| Library | Size (bytes) | Relative |
|---|---|---|
| Sia | 112 | 1.0x (baseline) |
| Protobuf | 128 | 1.14x larger |
| MessagePack | 145 | 1.29x larger |
| JSON | 198 | 1.77x larger |
Sia uses no field tags, no type markers, and no key names. Data is written sequentially in a known order, so the schema is implicit in the code. The only overhead is length prefixes for variable-length data, and Sia uses the smallest prefix that fits (1 byte for strings under 256 bytes, etc.).
JSON includes quoted key names and string delimiters for every field. Protobuf includes field tags (1–2 bytes each) and varint-encoded lengths. MessagePack includes type markers and key encodings. Sia has none of this overhead.
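The difference is easy to reproduce by hand. The sketch below lays out one sample user sequentially with the same field widths the serializer uses (uint32 id, uint8 age, uint16 score, 1- and 2-byte length prefixes) and compares the total to the UTF-8 size of the equivalent JSON. The sample values are illustrative, not the benchmark dataset.

```typescript
const enc = new TextEncoder();

const user = {
  id: 123456,
  name: "Ada Lovelace",
  email: "ada@example.com",
  age: 36,
  active: true,
  tags: ["math", "computing"],
  score: 9001,
  bio: "Wrote the first published algorithm.",
};

const str8 = (s: string) => 1 + enc.encode(s).length;  // 1-byte length prefix
const str16 = (s: string) => 2 + enc.encode(s).length; // 2-byte length prefix

// No tags, no keys: just fixed-width fields plus length prefixes.
const sequentialSize =
  4 +                                              // id: uint32
  str8(user.name) + str8(user.email) +
  1 + 1 +                                          // age: uint8, active: bool
  1 + user.tags.reduce((n, t) => n + str8(t), 0) + // array8: 1-byte count
  2 +                                              // score: uint16
  str16(user.bio);

const jsonSize = enc.encode(JSON.stringify(user)).length;
console.log({ sequentialSize, jsonSize });
```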
Memory Usage Analysis
```typescript
// Sia: zero allocations in the hot path
const sia = new Sia(); // uses shared 32 MB buffer (lazy-allocated on first use)

for (let i = 0; i < 100_000; i++) {
  sia.seek(0); // reset offset — no allocation
  serializeUser(sia, user);
  const bytes = sia.toUint8Array(); // one allocation: the output copy
}
// Total allocations: 100,000 (one Uint8Array per iteration)
```

```typescript
// JSON: two allocations per operation
const encoder = new TextEncoder();

for (let i = 0; i < 100_000; i++) {
  const json = JSON.stringify(user); // allocation: string
  const bytes = encoder.encode(json); // allocation: Uint8Array
}
// Total allocations: 200,000 (one string + one Uint8Array per iteration)
```
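Allocation patterns like these can be observed with a coarse heap-delta measurement. This sketch is illustrative: the 112-byte arrays mimic Sia's per-operation output copies, and globalThis.gc is only defined when Node runs with --expose-gc, as the methodology table notes.

```typescript
// Coarse heap-delta sketch; without --expose-gc the forced collection
// is simply skipped and the delta is noisier.
const maybeGc = (globalThis as { gc?: () => void }).gc;
maybeGc?.();

const before = process.memoryUsage().heapUsed;

const outputs: Uint8Array[] = [];
for (let i = 0; i < 10_000; i++) {
  outputs.push(new Uint8Array(112)); // one allocation per op, like toUint8Array()
}

const after = process.memoryUsage().heapUsed;
console.log(`~${((after - before) / 1024).toFixed(0)} KiB allocated`);
```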
GC Impact
| Library | Allocations per Op | GC Pauses (100K ops) | Peak Heap |
|---|---|---|---|
| Sia | 1 | ~2 ms | ~8 MB |
| Sia (reference) | 0 | ~0 ms | ~4 MB |
| Protobuf | 3–5 | ~15 ms | ~45 MB |
| JSON | 2 | ~12 ms | ~35 MB |
| MessagePack | 2 | ~10 ms | ~30 MB |
Using toUint8ArrayReference() instead of toUint8Array() eliminates the final allocation, achieving zero allocations per operation. This is ideal for cases where the data is consumed immediately (e.g., sent over a WebSocket in the same tick).
Real-World Scenarios
WebSocket Throughput
Simulated WebSocket message serialization and deserialization with 1,000-byte average payload:
| Library | Messages/sec | Bandwidth (MB/s) |
|---|---|---|
| Sia | 285,000 | 285 |
| MessagePack | 120,000 | 145 |
| Protobuf | 105,000 | 130 |
| JSON | 72,000 | 142 |
Game State Synchronization
Entity updates with position (3x float32), velocity (3x float32), health (uint16), and flags (uint8):
| Library | Updates/sec | Bytes per Update |
|---|---|---|
| Sia | 4,200,000 | 27 |
| Protobuf | 1,500,000 | 35 |
| MessagePack | 1,100,000 | 42 |
| JSON | 380,000 | 95 |
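The 27-byte figure follows directly from the field widths: 3 × 4 + 3 × 4 + 2 + 1. As a sketch, the update can be packed with a plain DataView; the field order and little-endian layout here are assumptions for illustration, not Sia's actual wire format.

```typescript
// 3x float32 position + 3x float32 velocity + uint16 health + uint8 flags
const ENTITY_UPDATE_SIZE = 3 * 4 + 3 * 4 + 2 + 1; // 27 bytes

const writeEntityUpdate = (
  view: DataView,
  pos: [number, number, number],
  vel: [number, number, number],
  health: number,
  flags: number,
): void => {
  let o = 0;
  for (const v of pos) { view.setFloat32(o, v, true); o += 4; }
  for (const v of vel) { view.setFloat32(o, v, true); o += 4; }
  view.setUint16(o, health, true); o += 2;
  view.setUint8(o, flags);
};

const buf = new ArrayBuffer(ENTITY_UPDATE_SIZE);
writeEntityUpdate(new DataView(buf), [1, 2, 3], [0.1, 0.2, 0.3], 100, 0b0001);
```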
Micro-Benchmarks
Integer Operations (ops/sec)
| Operation | Sia | DataView (manual) | JSON |
|---|---|---|---|
| Write UInt8 | 95,000,000 | 98,000,000 | N/A |
| Write UInt32 | 85,000,000 | 88,000,000 | N/A |
| Write UInt64 | 42,000,000 | 45,000,000 | N/A |
| Read UInt8 | 92,000,000 | 95,000,000 | N/A |
| Read UInt32 | 83,000,000 | 86,000,000 | N/A |
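The "DataView (manual)" column corresponds to raw buffer access like the following sketch, which is the baseline Sia's integer methods are compared against.

```typescript
const buf = new ArrayBuffer(64);
const view = new DataView(buf);

// Writes (little-endian); uint64 goes through BigInt
view.setUint8(0, 42);
view.setUint32(1, 123_456, true);
view.setBigUint64(5, 9_007_199_254_740_991n, true);

// Reads
const u8 = view.getUint8(0);
const u32 = view.getUint32(1, true);
const u64 = view.getBigUint64(5, true);
```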
String Operations (ops/sec)
| Operation | Sia | TextEncoder/Decoder | JSON |
|---|---|---|---|
| Write ASCII (20 chars) | 18,000,000 | 12,000,000 | N/A |
| Write UTF-8 (20 chars) | 11,000,000 | 12,000,000 | N/A |
| Write UTFZ (20 chars) | 9,500,000 | N/A | N/A |
| Read ASCII (20 chars) | 20,000,000 | 12,000,000 | N/A |
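The ASCII numbers above reflect the kind of fast path sketched below: one byte per character written directly, with no encoder machinery. writeAscii is a hypothetical helper for illustration, not Sia's addAscii8 implementation.

```typescript
// ASCII fast path: valid only for code points < 128, where one
// charCodeAt equals one output byte.
const writeAscii = (target: Uint8Array, offset: number, s: string): number => {
  for (let i = 0; i < s.length; i++) {
    target[offset + i] = s.charCodeAt(i);
  }
  return offset + s.length; // new write offset
};

const buf = new Uint8Array(32);
const end = writeAscii(buf, 0, "status");
const roundTrip = new TextDecoder().decode(buf.subarray(0, end));
```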
When to Use Each Library
Sia
- Both endpoints share the same schema code and you want maximum throughput, minimum output size, and zero-allocation hot paths (e.g., WebSocket messaging, game state synchronization).
Protobuf
- You need cross-language interoperability, an explicit schema definition, or schema evolution: field tags let old readers skip unknown fields, at the throughput and size cost shown above.
Performance Tips
Serialization

```typescript
// 1. Reuse Sia instances
const writer = new Sia();

function serialize(user: User): Uint8Array {
  writer.seek(0);
  serializeUser(writer, user);
  return writer.toUint8Array();
}

// 2. Use addAscii8 for ASCII-only strings
writer.addAscii8("status"); // faster than addString8

// 3. Pre-define serializer functions (avoid closures in loops)
const serializeTag = (s: Sia, tag: string): void => {
  s.addString8(tag);
};

// 4. Use toUint8ArrayReference for immediate consumption
const ref = writer.toUint8ArrayReference();
socket.send(ref);
```
Deserialization

```typescript
// 1. Reuse Sia reader instances with setContent
const reader = new Sia(new Uint8Array(0));

function deserialize(data: Uint8Array): User {
  reader.setContent(data);
  return deserializeUser(reader);
}

// 2. Use reference reads for temporary data
const tempBytes = reader.readByteArray32(true); // zero-copy

// 3. Read directly into processing — avoid intermediate variables
processUser(
  reader.readUInt32(), // id
  reader.readString8(), // name
  reader.readUInt8(), // age
);
```
Summary
These benchmarks represent typical results. Your actual performance will depend on data shape, payload size, and runtime environment. Always profile with your specific workload before making architecture decisions.