LZ4

LZ4 is an extremely fast compression algorithm, prioritizing speed over compression ratio. compressionz provides a pure Zig implementation with SIMD optimizations.

| Property | Value |
|---|---|
| Developer | Yann Collet |
| First Release | 2011 |
| Implementation | Pure Zig with SIMD |
| License | Apache 2.0 (compressionz implementation) |
| Variant | Compress | Decompress | Ratio |
|---|---|---|---|
| LZ4 Block | 36.6 GB/s | 8.1 GB/s | 99.5% |
| LZ4 Frame | 4.8 GB/s | 3.8 GB/s | 99.3% |
| Feature | LZ4 Frame | LZ4 Block |
|---|---|---|
| Streaming | Yes | No |
| Checksum | Yes | No |
| Auto-detect | Yes | No |
| Zero-copy | Yes | Yes |
| Size in header | Yes | No |

compressionz supports two LZ4 variants:

Self-describing format with headers, checksums, and streaming support.

const cz = @import("compressionz");
// Compress
const compressed = try cz.lz4.frame.compress(data, allocator, .{});
defer allocator.free(compressed);
// Decompress - size is in frame header
const decompressed = try cz.lz4.frame.decompress(compressed, allocator, .{});
defer allocator.free(decompressed);

Raw block format for maximum speed. Requires tracking original size.

const cz = @import("compressionz");
// Compress
const compressed = try cz.lz4.block.compress(data, allocator);
const original_size = data.len; // MUST save this!
defer allocator.free(compressed);
// Decompress - MUST provide original size
const decompressed = try cz.lz4.block.decompressWithSize(compressed, original_size, allocator);
defer allocator.free(decompressed);
| Use Case | Recommended |
|---|---|
| File storage | LZ4 Frame |
| Network protocols | LZ4 Frame |
| In-memory caching | LZ4 Block |
| IPC / message passing | LZ4 Block |
| Database pages | LZ4 Block |
| Unknown recipient | LZ4 Frame |
// Fast (default for LZ4)
const fast = try cz.lz4.frame.compress(data, allocator, .{
.level = .fast,
});
// Default (slightly better ratio)
const default = try cz.lz4.frame.compress(data, allocator, .{
.level = .default,
});

LZ4 has a narrower speed/ratio trade-off than Zstd. Both levels are extremely fast.

When using LZ4 Frame, you can control:

const cz = @import("compressionz");
const compressed = try cz.lz4.frame.compress(data, allocator, .{
.level = .fast,
.content_checksum = true, // Include XXH32 checksum
.block_checksum = false, // Per-block checksum
.content_size = data.len, // Store size in header
.block_size = .max64KB, // Block size
});
// With checksum (default) - detects corruption
const safe = try cz.lz4.frame.compress(data, allocator, .{
.content_checksum = true,
});
// Without checksum - slightly faster/smaller
const fast = try cz.lz4.frame.compress(data, allocator, .{
.content_checksum = false,
});

Both variants support zero-copy for allocation-free compression:

const cz = @import("compressionz");
var compress_buf: [65536]u8 = undefined;
var decompress_buf: [65536]u8 = undefined;
// Compress into buffer
const compressed = try cz.lz4.block.compressInto(data, &compress_buf);
// Decompress into buffer
const decompressed = try cz.lz4.block.decompressInto(compressed, &decompress_buf);
// Calculate maximum compressed size
const max_block = cz.lz4.block.maxCompressedSize(data.len);
const max_frame = cz.lz4.frame.maxCompressedSize(data.len);
Streaming with the frame format:

const cz = @import("compressionz");
const std = @import("std");
// Compress to file
pub fn compressToFile(allocator: std.mem.Allocator, data: []const u8, path: []const u8) !void {
const file = try std.fs.cwd().createFile(path, .{});
defer file.close();
var comp = try cz.lz4.frame.Compressor(@TypeOf(file.writer())).init(allocator, file.writer(), .{});
defer comp.deinit();
try comp.writer().writeAll(data);
try comp.finish();
}
// Decompress from file
pub fn decompressFromFile(allocator: std.mem.Allocator, path: []const u8) ![]u8 {
const file = try std.fs.cwd().openFile(path, .{});
defer file.close();
var decomp = try cz.lz4.frame.Decompressor(@TypeOf(file.reader())).init(allocator, file.reader());
defer decomp.deinit();
return decomp.reader().readAllAlloc(allocator, 1024 * 1024 * 1024);
}
LZ4 Frame layout:

+-----------------------------------------------------+
| Magic Number (4 bytes): 0x184D2204                  |
+-----------------------------------------------------+
| Frame Descriptor                                    |
|   - FLG byte (flags)                                |
|   - BD byte (block descriptor)                      |
|   - Content Size (0-8 bytes, optional)              |
|   - Header Checksum (1 byte)                        |
+-----------------------------------------------------+
| Data Blocks                                         |
|   - Block Size (4 bytes)                            |
|   - Block Data (variable)                           |
|   - Block Checksum (4 bytes, optional)              |
|   - ... more blocks ...                             |
+-----------------------------------------------------+
| End Mark (4 bytes): 0x00000000                      |
+-----------------------------------------------------+
| Content Checksum (4 bytes, optional)                |
+-----------------------------------------------------+
LZ4 Block layout:

+-----------------------------------------------------+
| Sequence of tokens:                                 |
|                                                     |
| Token (1 byte)                                      |
|   - High 4 bits: literal length                     |
|   - Low 4 bits: match length                        |
|                                                     |
| [Extended literal length] (0+ bytes)                |
| Literals (literal_length bytes)                     |
| Offset (2 bytes, little-endian)                     |
| [Extended match length] (0+ bytes)                  |
+-----------------------------------------------------+
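The token layout above can be read back with a straightforward sequential decoder. Below is an illustrative sketch only — the hypothetical `decodeBlock` is not part of the compressionz API and omits the bounds and offset validation that a real implementation must perform:

```zig
const std = @import("std");

/// Decode one LZ4 block into `dest`; returns decompressed length.
/// Sketch only: assumes valid input, no bounds checking.
fn decodeBlock(src: []const u8, dest: []u8) usize {
    var in: usize = 0;
    var out: usize = 0;
    while (in < src.len) {
        const token = src[in];
        in += 1;

        // Literal length: high nibble, extended by 255-valued bytes.
        var lit_len: usize = token >> 4;
        if (token >> 4 == 15) {
            while (true) {
                const b = src[in];
                in += 1;
                lit_len += b;
                if (b != 255) break;
            }
        }
        @memcpy(dest[out..][0..lit_len], src[in..][0..lit_len]);
        in += lit_len;
        out += lit_len;
        if (in >= src.len) break; // final sequence carries literals only

        // Match: 2-byte little-endian offset, then low nibble + 4 length.
        const offset = std.mem.readInt(u16, src[in..][0..2], .little);
        in += 2;
        var match_len: usize = token & 0x0F;
        if (match_len == 15) {
            while (true) {
                const b = src[in];
                in += 1;
                match_len += b;
                if (b != 255) break;
            }
        }
        match_len += 4; // minimum match length is 4

        // Byte-by-byte copy is required: matches may overlap the output
        // being written (offset smaller than match length).
        var from = out - offset;
        for (0..match_len) |_| {
            dest[out] = dest[from];
            out += 1;
            from += 1;
        }
    }
    return out;
}
```

The byte-by-byte match copy is what makes runs compress well: an offset of 1 with a long match length replays a single byte repeatedly.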
// LZ4 frame magic (little-endian)
const LZ4_MAGIC: u32 = 0x184D2204;
// Detection
if (data.len >= 4 and
data[0] == 0x04 and data[1] == 0x22 and
data[2] == 0x4D and data[3] == 0x18)
{
// It's LZ4 frame
}

LZ4 is a byte-oriented LZ77 variant optimized for speed:

  1. Hash table — 4-byte sequences hashed to find matches
  2. Greedy parsing — Takes first match found (no optimal parsing)
  3. Simple encoding — Minimal overhead per token
  4. No entropy coding — Raw bytes, no Huffman/ANS
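The 4-byte hash in step 1 is conventionally a multiplicative (Fibonacci-style) hash. A sketch, where the table size is illustrative and not the value compressionz necessarily uses (the constant 2654435761 is the 32-bit Knuth multiplier):

```zig
const std = @import("std");

const HASH_LOG = 12; // 4096-entry table; illustrative size

// Hash 4 input bytes down to a hash-table index.
fn hash4(bytes: *const [4]u8) u32 {
    const v = std.mem.readInt(u32, bytes, .little);
    return (v *% 2654435761) >> (32 - HASH_LOG);
}
```

The compressor stores the position of each hashed 4-byte sequence in the table; a later lookup that hashes to the same slot is a candidate match, verified by direct comparison.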
  • Cache-friendly linear scanning
  • Minimal branching
  • Simple token format
  • SIMD-optimized match extension and copy
  • No dictionary or entropy coding overhead

Our pure Zig implementation uses explicit SIMD:

// 16-byte vectorized match extension
const v1: @Vector(16, u8) = src[pos..][0..16].*;
const v2: @Vector(16, u8) = src[match_pos..][0..16].*;
const eq = v1 == v2;
const mask = @as(u16, @bitCast(eq));
const matching = @ctz(~mask); // trailing ones = number of matching bytes
// 8-byte vectorized copy
const chunk: @Vector(8, u8) = src[offset..][0..8].*;
dest[0..8].* = @as([8]u8, chunk);

Best for:

  • Maximum throughput requirements
  • Real-time compression
  • In-memory data structures
  • Game asset compression
  • Database page compression

Not ideal for:

  • Maximum compression ratio (use Zstd/Brotli)
  • Web content delivery (use Gzip/Brotli)
  • Dictionary compression (use Zstd)
| Metric | LZ4 Block | LZ4 Frame | Zstd |
|---|---|---|---|
| Compress | 36.6 GB/s | 4.8 GB/s | 12 GB/s |
| Decompress | 8.1 GB/s | 3.8 GB/s | 11.6 GB/s |
| Ratio | 99.5% | 99.3% | 99.9% |
| Checksum | No | Yes | Yes |
| Streaming | No | Yes | Yes |