Zero-Copy API

The zero-copy API allows compression and decompression into pre-allocated buffers, avoiding allocations in hot paths.

Standard compression allocates memory for the output:

// Allocates new memory
const compressed = try cz.compress(.lz4, data, allocator);
defer allocator.free(compressed);

Zero-copy uses your buffer:

// Uses pre-allocated buffer
var buffer: [65536]u8 = undefined;
const compressed = try cz.compressInto(.lz4, data, &buffer, .{});
// No allocation, no free needed

Use zero-copy when:

  • Processing many small items in a loop
  • Working in memory-constrained environments
  • Latency-sensitive code paths
  • Embedded or real-time systems

Use standard API when:

  • Output size is unpredictable
  • Simplicity is more important than performance
  • Working with large data (may need to reallocate anyway)

compressInto

pub fn compressInto(
    codec: Codec,
    input: []const u8,
    output: []u8,
    options: CompressOptions,
) Error![]u8

Parameter  Type             Description
codec      Codec            Compression algorithm
input      []const u8       Data to compress
output     []u8             Pre-allocated buffer
options    CompressOptions  Compression options

Returns:

  • []u8 — Slice of output containing the compressed data

Errors:

  • error.OutputTooSmall — Buffer too small for the compressed output
  • error.UnsupportedFeature — Codec doesn’t support zero-copy

Codec    Zero-Copy Support
lz4      Yes
lz4_raw  Yes
snappy   Yes
zstd     No
gzip     No
brotli   No
Example:

const cz = @import("compressionz");
var buffer: [65536]u8 = undefined;
const compressed = try cz.compressInto(.lz4, data, &buffer, .{});
// compressed.ptr == &buffer
// compressed.len <= buffer.len

decompressInto

pub fn decompressInto(
    codec: Codec,
    input: []const u8,
    output: []u8,
) Error![]u8

Parameter  Type        Description
codec      Codec       Compression algorithm
input      []const u8  Compressed data
output     []u8        Pre-allocated buffer

Returns:

  • []u8 — Slice of output containing the decompressed data

Errors:

  • error.OutputTooSmall — Buffer too small for the decompressed output
  • error.UnsupportedFeature — Codec doesn’t support zero-copy
Example:

var buffer: [1024 * 1024]u8 = undefined; // 1 MB
const decompressed = try cz.decompressInto(.lz4, compressed, &buffer);
// decompressed.ptr == &buffer

Buffer Sizing

For compression, use maxCompressedSize to compute the worst-case output size:

const max_size = cz.maxCompressedSize(.lz4, data.len);
const buffer = try allocator.alloc(u8, max_size);
defer allocator.free(buffer);
const compressed = try cz.compressInto(.lz4, data, buffer, .{});
// compressed.len will be <= max_size

For decompression, if you know the original size:

var buffer: [known_original_size]u8 = undefined;
const decompressed = try cz.decompressInto(.lz4, compressed, &buffer);

If the size is encoded in the compressed data (LZ4 frame):

// LZ4 frame includes content size in header
const content_size = cz.lz4.frame.getContentSize(compressed) orelse {
    // Size not in header, use an estimate or return an error
    return error.SizeUnknown;
};
const buffer = try allocator.alloc(u8, content_size);
defer allocator.free(buffer);
const decompressed = try cz.decompressInto(.lz4, compressed, buffer);

Batch Processing

Zero-copy shines when processing multiple items:

const cz = @import("compressionz");

pub fn processItems(items: []const []const u8) !void {
    // Allocate the buffer once, outside the loop
    var compress_buf: [65536]u8 = undefined;

    for (items) |item| {
        // Compress into the buffer (no allocation)
        const compressed = try cz.compressInto(.lz4, item, &compress_buf, .{});
        // Send compressed data somewhere...
        try sendData(compressed);
    }
}

Compare with the standard API:

pub fn processItemsStandard(allocator: std.mem.Allocator, items: []const []const u8) !void {
    for (items) |item| {
        // Allocates for each item
        const compressed = try cz.compress(.lz4, item, allocator);
        defer allocator.free(compressed); // Freed at the end of each iteration
        try sendData(compressed);
    }
}

Error Handling

If the buffer is too small, compressInto returns error.OutputTooSmall; recover by allocating a worst-case buffer and retrying:

const result = cz.compressInto(.lz4, large_data, &small_buffer, .{}) catch |err| {
    if (err == error.OutputTooSmall) {
        // Buffer too small; allocate the worst-case size and retry
        // (caller owns and must free `bigger`)
        const size = cz.maxCompressedSize(.lz4, large_data.len);
        const bigger = try allocator.alloc(u8, size);
        return cz.compressInto(.lz4, large_data, bigger, .{});
    }
    return err;
};

If the codec doesn’t support zero-copy, fall back to the standard API:

const result = cz.compressInto(.zstd, data, &buffer, .{}) catch |err| {
    if (err == error.UnsupportedFeature) {
        // Zstd doesn't support zero-copy, fall back to standard
        return cz.compress(.zstd, data, allocator);
    }
    return err;
};
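
The same pattern works for decompression. A minimal sketch, assuming the library offers an allocating cz.decompress counterpart to cz.compress (an assumption; it is not documented on this page):

const decompressed = cz.decompressInto(.lz4, compressed, &buffer) catch |err| {
    if (err == error.OutputTooSmall) {
        // Fixed buffer too small; fall back to the assumed allocating API
        // (caller must free the result)
        return cz.decompress(.lz4, compressed, allocator);
    }
    return err;
};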

Codec-Specific Zero-Copy

The codec-specific APIs expose zero-copy variants directly.

LZ4 block:

const cz = @import("compressionz");

var buffer: [65536]u8 = undefined;

// Compress
const compressed = try cz.lz4.block.compressInto(data, &buffer);

// Decompress
const decompressed = try cz.lz4.block.decompressInto(compressed, &output_buffer);

LZ4 frame:

const compressed = try cz.lz4.frame.compressInto(data, &buffer, .{
    .level = .fast,
    .content_checksum = true,
});
const decompressed = try cz.lz4.frame.decompressInto(compressed, &output_buffer);

Snappy:

const compressed = try cz.snappy.compressInto(data, &buffer);
const decompressed = try cz.snappy.decompressInto(compressed, &output_buffer);

Benchmark: Compressing 1000 x 1 KB items

Method        Time   Allocations
Standard API  15 ms  2000
Zero-Copy     12 ms  0

The performance gain comes from avoiding allocator overhead, not from the compression itself.
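
As a rough way to verify this on your own data, the loop below times both APIs with std.time.Timer. This is a minimal sketch, not the benchmark above; the codec, buffer size, and item set are illustrative assumptions.

const std = @import("std");
const cz = @import("compressionz");

pub fn benchmarkCompress(allocator: std.mem.Allocator, items: []const []const u8) !void {
    var buf: [65536]u8 = undefined;

    // Zero-copy: one stack buffer, reused for every item
    var timer = try std.time.Timer.start();
    for (items) |item| {
        _ = try cz.compressInto(.lz4, item, &buf, .{});
    }
    const zero_copy_ns = timer.read();

    // Standard: one allocation and one free per item
    timer.reset();
    for (items) |item| {
        const compressed = try cz.compress(.lz4, item, allocator);
        allocator.free(compressed);
    }
    const standard_ns = timer.read();

    std.debug.print("zero-copy: {d} ns, standard: {d} ns\n", .{ zero_copy_ns, standard_ns });
}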


Best Practices

  1. Pre-size buffers — Use maxCompressedSize or known sizes
  2. Reuse buffers — Don’t reallocate between operations
  3. Handle errors — Always check for OutputTooSmall
  4. Fall back gracefully — Use the standard API for unsupported codecs, as in the helper below

pub fn compressWithFallback(
    codec: cz.Codec,
    data: []const u8,
    buffer: []u8,
    allocator: std.mem.Allocator,
) ![]u8 {
    return cz.compressInto(codec, data, buffer, .{}) catch |err| {
        if (err == error.UnsupportedFeature or err == error.OutputTooSmall) {
            // Fall back to allocating version
            return cz.compress(codec, data, allocator);
        }
        return err;
    };
}
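
Putting practices 1 and 2 together, one possible shape is a small worker that sizes its buffer once and reuses it across calls. A hypothetical sketch; Compressor and max_item_len are illustrative names, not part of the library:

const std = @import("std");
const cz = @import("compressionz");

pub const Compressor = struct {
    buffer: []u8,
    allocator: std.mem.Allocator,

    pub fn init(allocator: std.mem.Allocator, max_item_len: usize) !Compressor {
        // Pre-size for the worst case once, instead of per call (practice 1)
        const cap = cz.maxCompressedSize(.lz4, max_item_len);
        return .{
            .buffer = try allocator.alloc(u8, cap),
            .allocator = allocator,
        };
    }

    pub fn deinit(self: *Compressor) void {
        self.allocator.free(self.buffer);
    }

    // Reuses the same buffer for every call (practice 2). The returned slice
    // aliases self.buffer and is only valid until the next compress call.
    pub fn compress(self: *Compressor, data: []const u8) ![]u8 {
        return cz.compressInto(.lz4, data, self.buffer, .{});
    }
};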