Performance Benchmarks

Detailed performance comparisons of Sia against JSON, Protobuf, and MessagePack, covering serialization speed, deserialization speed, and output size.

Benchmark Methodology

All benchmarks use a consistent methodology. The benchmark suite lives in src/benchmark/ in the repository and can be run locally to reproduce these results.

| Parameter | Value |
| --- | --- |
| Framework | Custom benchmark harness with warm-up iterations |
| Runtime | Node.js (latest LTS) with --expose-gc for memory measurements |
| Iterations | 100,000 serialization + deserialization cycles per library |
| Dataset | Identical test data generated once, shared across all libraries |
| Warm-up | 1,000 iterations discarded before measurement |
| GC | Forced between benchmarks to isolate memory effects |
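The methodology above can be sketched as a minimal harness. This is a hypothetical helper for illustration only; the repository's actual harness lives in src/benchmark/.

```typescript
// Minimal benchmark harness sketch (hypothetical; not the repo's actual code).
import { performance } from "node:perf_hooks";

interface BenchResult {
  opsPerSec: number;
  avgMicros: number;
}

const bench = (
  fn: () => void,
  iterations = 100_000,
  warmup = 1_000,
): BenchResult => {
  // Warm-up iterations are discarded so JIT compilation doesn't skew results.
  for (let i = 0; i < warmup; i++) fn();
  // With --expose-gc, force a collection to isolate memory effects.
  (globalThis as { gc?: () => void }).gc?.();
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedMs = performance.now() - start;
  return {
    opsPerSec: Math.round(iterations / (elapsedMs / 1000)),
    avgMicros: (elapsedMs * 1000) / iterations,
  };
};
```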

Test Data Structure

All benchmarks use a User object with mixed field types:

```typescript
import { faker } from "@faker-js/faker";

interface User {
  id: number;
  name: string;
  email: string;
  age: number;
  active: boolean;
  tags: string[];
  score: number;
  bio: string;
}

const generateUser = (): User => ({
  id: faker.number.int({ min: 1, max: 1000000 }),
  name: faker.person.fullName(),
  email: faker.internet.email(),
  age: faker.number.int({ min: 18, max: 99 }),
  active: faker.datatype.boolean(),
  tags: Array.from({ length: faker.number.int({ min: 1, max: 5 }) }, () =>
    faker.lorem.word(),
  ),
  score: faker.number.int({ min: 0, max: 10000 }),
  bio: faker.lorem.sentence(),
});
```

Serialization Performance

Results

| Library | Ops/sec | Relative | Avg Time |
| --- | --- | --- | --- |
| Sia | 1,850,000 | 1.0x (baseline) | 0.54 µs |
| Protobuf (protobufjs) | 720,000 | 2.6x slower | 1.39 µs |
| MessagePack (msgpackr) | 650,000 | 2.8x slower | 1.54 µs |
| JSON.stringify | 480,000 | 3.9x slower | 2.08 µs |

Implementations

```typescript
import { Sia } from "@timeleap/sia";

const serializeUser = (s: Sia, user: User): void => {
  s.addUInt32(user.id)
    .addString8(user.name)
    .addString8(user.email)
    .addUInt8(user.age)
    .addBool(user.active)
    .addArray8(user.tags, (s, tag) => s.addString8(tag))
    .addUInt16(user.score)
    .addString16(user.bio);
};

const sia = new Sia();
// Benchmark loop
for (let i = 0; i < iterations; i++) {
  sia.seek(0);
  serializeUser(sia, user);
  const bytes = sia.toUint8Array();
}
```

Deserialization Performance

Results

| Library | Ops/sec | Relative | Avg Time |
| --- | --- | --- | --- |
| Sia | 2,100,000 | 1.0x (baseline) | 0.48 µs |
| MessagePack (msgpackr) | 890,000 | 2.4x slower | 1.12 µs |
| Protobuf (protobufjs) | 680,000 | 3.1x slower | 1.47 µs |
| JSON.parse | 520,000 | 4.0x slower | 1.92 µs |

Implementations

```typescript
const deserializeUser = (s: Sia): User => ({
  id: s.readUInt32(),
  name: s.readString8(),
  email: s.readString8(),
  age: s.readUInt8(),
  active: s.readBool(),
  tags: s.readArray8((s) => s.readString8()),
  score: s.readUInt16(),
  bio: s.readString16(),
});

const reader = new Sia(serializedBytes);
// Benchmark loop
for (let i = 0; i < iterations; i++) {
  reader.seek(0);
  const user = deserializeUser(reader);
}
```

Output Size Comparison

| Library | Size (bytes) | Relative |
| --- | --- | --- |
| Sia | 112 | 1.0x (baseline) |
| Protobuf | 128 | 1.14x larger |
| MessagePack | 145 | 1.29x larger |
| JSON | 198 | 1.77x larger |
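For reference, the JSON row can be reproduced by measuring encoded byte length (byte length, not string length, since generated names can contain multi-byte UTF-8); the other rows require each library's encoder. The sample values below are illustrative, not the benchmark's generated data:

```typescript
// Measure JSON output size in bytes for a sample user (illustrative values).
const user = {
  id: 42,
  name: "Ada Lovelace",
  email: "ada@example.com",
  age: 36,
  active: true,
  tags: ["math", "pioneer"],
  score: 100,
  bio: "Wrote the first published algorithm.",
};

const jsonBytes = new TextEncoder().encode(JSON.stringify(user)).length;
console.log(jsonBytes); // exact count varies with the generated data
```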

Memory Usage Analysis

```typescript
// Sia: zero allocations in the hot path
const sia = new Sia(); // uses shared 32 MB buffer (lazy-allocated on first use)

for (let i = 0; i < 100_000; i++) {
  sia.seek(0); // reset offset — no allocation
  serializeUser(sia, user);
  const bytes = sia.toUint8Array(); // one allocation: the output copy
}
// Total allocations: 100,000 (one Uint8Array per iteration)
```

GC Impact

| Library | Allocations per Op | GC Pauses (100K ops) | Peak Heap |
| --- | --- | --- | --- |
| Sia | 1 | ~2 ms | ~8 MB |
| Sia (reference) | 0 | ~0 ms | ~4 MB |
| Protobuf | 3–5 | ~15 ms | ~45 MB |
| JSON | 2 | ~12 ms | ~35 MB |
| MessagePack | 2 | ~10 ms | ~30 MB |

Using toUint8ArrayReference() instead of toUint8Array() eliminates the final allocation, achieving zero allocations per operation. This is ideal for cases where the data is consumed immediately (e.g., sent over a WebSocket in the same tick).
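Heap figures like those above can be sampled with Node's standard process.memoryUsage() API. This is a sketch of the general measurement technique, not the suite's actual code:

```typescript
// Sketch: sample heap growth around a workload (standard Node.js API).
import process from "node:process";

const sampleHeapMB = (workload: () => void): number => {
  // Settle the heap first; gc() is only available with --expose-gc.
  (globalThis as { gc?: () => void }).gc?.();
  const before = process.memoryUsage().heapUsed;
  workload();
  const after = process.memoryUsage().heapUsed;
  // Net heap growth in MB (can be negative if a collection ran mid-workload).
  return (after - before) / (1024 * 1024);
};
```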

Real-World Scenarios

WebSocket Throughput

Simulated WebSocket message serialization and deserialization with a 1,000-byte average payload:

| Library | Messages/sec | Bandwidth (MB/s) |
| --- | --- | --- |
| Sia | 285,000 | 285 |
| MessagePack | 120,000 | 145 |
| Protobuf | 105,000 | 130 |
| JSON | 72,000 | 142 |

Game State Synchronization

Entity updates with position (3x float32), velocity (3x float32), health (uint16), and flags (uint8):

| Library | Updates/sec | Bytes per Update |
| --- | --- | --- |
| Sia | 4,200,000 | 27 |
| Protobuf | 1,500,000 | 35 |
| MessagePack | 1,100,000 | 42 |
| JSON | 380,000 | 95 |
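The 27-byte figure follows directly from the field layout: 6 × 4-byte floats + 2 + 1. With Sia this would use the library's float and integer writers; as a self-contained illustration, here is the same layout written with a plain DataView:

```typescript
interface EntityUpdate {
  position: [number, number, number]; // 3x float32
  velocity: [number, number, number]; // 3x float32
  health: number; // uint16
  flags: number; // uint8
}

// 6 * 4 bytes (float32) + 2 bytes (uint16) + 1 byte (uint8) = 27 bytes.
const encodeEntity = (e: EntityUpdate): Uint8Array => {
  const out = new Uint8Array(27);
  const view = new DataView(out.buffer);
  let off = 0;
  for (const v of [...e.position, ...e.velocity]) {
    view.setFloat32(off, v, true); // little-endian
    off += 4;
  }
  view.setUint16(off, e.health, true);
  off += 2;
  view.setUint8(off, e.flags);
  return out;
};
```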

Micro-Benchmarks

Integer Operations (ops/sec)

| Operation | Sia | DataView (manual) | JSON |
| --- | --- | --- | --- |
| Write UInt8 | 95,000,000 | 98,000,000 | N/A |
| Write UInt32 | 85,000,000 | 88,000,000 | N/A |
| Write UInt64 | 42,000,000 | 45,000,000 | N/A |
| Read UInt8 | 92,000,000 | 95,000,000 | N/A |
| Read UInt32 | 83,000,000 | 86,000,000 | N/A |

String Operations (ops/sec)

| Operation | Sia | TextEncoder/Decoder | JSON |
| --- | --- | --- | --- |
| Write ASCII (20 chars) | 18,000,000 | 12,000,000 | N/A |
| Write UTF-8 (20 chars) | 11,000,000 | 12,000,000 | N/A |
| Write UTFZ (20 chars) | 9,500,000 | N/A | N/A |
| Read ASCII (20 chars) | 20,000,000 | 12,000,000 | N/A |
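The ASCII advantage comes from skipping the encoder entirely: for strings known to be 7-bit, a charCodeAt loop can write bytes directly. This sketches the general technique, not Sia's internal implementation:

```typescript
// Write a 7-bit ASCII string directly into a buffer; returns the new offset.
const writeAscii = (out: Uint8Array, offset: number, s: string): number => {
  for (let i = 0; i < s.length; i++) {
    out[offset + i] = s.charCodeAt(i); // assumes every code point is < 128
  }
  return offset + s.length;
};
```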

When to Use Each Library

Sia

Good fit for cases where you control both ends of the wire: WebSocket protocols, game networking, internal services, and real-time streaming.

Protobuf

Good fit for cross-language interop with a formal schema: public APIs, gRPC services, and teams that need contract enforcement.

MessagePack

Good fit as a drop-in JSON replacement with smaller output. Works well for caching, logging, and schema-less use cases.

JSON

Best for debugging, human readability, and broad ecosystem compatibility. Use when performance is not a bottleneck.

Performance Tips

```typescript
// 1. Reuse Sia instances
const writer = new Sia();

function serialize(user: User): Uint8Array {
  writer.seek(0);
  serializeUser(writer, user);
  return writer.toUint8Array();
}

// 2. Use addAscii8 for ASCII-only strings
writer.addAscii8("status"); // faster than addString8

// 3. Pre-define serializer functions (avoid closures in loops)
const serializeTag = (s: Sia, tag: string): void => {
  s.addString8(tag);
};

// 4. Use toUint8ArrayReference for immediate consumption
const ref = writer.toUint8ArrayReference();
socket.send(ref);
```

Summary

In these benchmarks, Sia was 2.6-3.9x faster for serialization, 2.4-4.0x faster for deserialization, and produced 12-43% smaller output than Protobuf, MessagePack, and JSON (their outputs were 1.14-1.77x larger). Its zero-allocation design also reduces GC pressure, which matters for latency-sensitive applications.

These benchmarks represent typical results. Your actual performance will depend on data shape, payload size, and runtime environment. Always profile with your specific workload before making architecture decisions.