Engineering

Why We Build in Rust

Memory safety without garbage collection. Performance without compromise. How Rust enables the speed and reliability our products demand.

GrepLabs Team
May 10, 2025
9 min read

At GrepLabs, we build performance-critical software: DNS servers handling millions of queries, file indexers processing hundreds of thousands of documents, and encryption engines protecting sensitive data. We chose Rust for these systems, and it's been transformative.

Why Not [Insert Language]?

Why Not Go?

Go is excellent for many use cases, but:

  • Garbage collection causes latency spikes (unacceptable when DNS responses must stay under 1ms)
  • Less control over memory layout
  • Goroutine overhead for fine-grained parallelism

Why Not C/C++?

C++ provides the performance we need, but:

  • Memory safety issues are too easy to introduce
  • Security vulnerabilities from buffer overflows, use-after-free
  • Harder to maintain and audit

Why Not Python/Node.js?

Great for prototyping, but:

  • 10-100x slower for CPU-intensive work
  • High memory overhead
  • GIL (Python) limits parallelism

Rust's Advantages

Memory Safety Without GC

Rust's ownership system prevents entire classes of bugs at compile time:

// This won't compile - Rust prevents use-after-free
fn bad_code() {
    let data = vec![1, 2, 3];
    let reference = &data[0];
    drop(data);  // Compiler error: data still borrowed
    println!("{}", reference);
}

// This is safe - Rust ensures validity
fn good_code() {
    let data = vec![1, 2, 3];
    let reference = &data[0];
    println!("{}", reference);
    drop(data);  // OK: reference no longer used
}

Zero-Cost Abstractions

High-level code compiles to efficient machine code:

// This iterator chain...
let sum: i32 = numbers
    .iter()
    .filter(|&x| x % 2 == 0)
    .map(|x| x * 2)
    .sum();

// ...compiles to the same assembly as this loop:
let mut sum = 0;
for x in numbers {
    if x % 2 == 0 {
        sum += x * 2;
    }
}

Fearless Concurrency

The type system prevents data races:

use std::sync::{Arc, Mutex};
use std::thread;

// Shared state is explicitly marked
let counter = Arc::new(Mutex::new(0));
let mut handles = vec![];

for _ in 0..10 {
    let counter = Arc::clone(&counter);
    handles.push(thread::spawn(move || {
        let mut num = counter.lock().unwrap();
        *num += 1;
    }));
}

for handle in handles {
    handle.join().unwrap();
}
// Counter is guaranteed to be 10

Real-World Impact

Shields AI: DNS Performance

Our DNS server handles 100K+ queries per second:

// Hot path: DNS query processing
#[inline(always)]
fn process_query(packet: &[u8], blocklist: &HashSet<&str>) -> Result<QueryResult, ParseError> {
    // Parse DNS packet in place (zero-copy); `?` propagates malformed packets
    let query = DnsQuery::parse(packet)?;

    // Check blocklist (O(1) average-case lookup)
    if blocklist.contains(query.domain()) {
        return Ok(QueryResult::Blocked);
    }

    Ok(QueryResult::Forward)
}
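The blocklist check above is an ordinary `HashSet` membership test. A minimal, runnable sketch (the domains here are invented for illustration):

```rust
use std::collections::HashSet;

fn main() {
    // Hypothetical blocklist; real deployments load millions of domains
    let blocklist: HashSet<&str> =
        ["ads.example.com", "tracker.example.net"].into_iter().collect();

    // O(1) average-case membership test
    assert!(blocklist.contains("ads.example.com"));
    assert!(!blocklist.contains("example.org"));
}
```

Because the set stores `&str` slices borrowed from one interned blocklist buffer, lookups allocate nothing on the hot path.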

Results:

  • P99 latency: <500μs
  • Memory: <100MB for 5M blocked domains
  • Zero garbage collection pauses

Hippo: File Indexing

Processing 100K files requires efficiency:

// Parallel file processing
use rayon::prelude::*;

fn index_directory(path: &Path, model: &EmbeddingModel) -> Vec<Document> {
    // Parallel directory traversal
    WalkDir::new(path)
        .into_iter()
        .par_bridge()  // Parallelize the sequential walker
        .filter_map(|entry| entry.ok())
        .filter(|e| e.file_type().is_file())
        .filter_map(|entry| {
            // Skip unreadable files instead of aborting the whole index
            let content = std::fs::read_to_string(entry.path()).ok()?;
            let embedding = model.embed(&content);
            Some(Document { path: entry.path().to_owned(), embedding })
        })
        .collect()
}

Results:

  • Index 1000 files/minute
  • Memory scales linearly
  • Full CPU utilization
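rayon hides the thread management behind `par_bridge`. For intuition, the same fork/join shape can be sketched with the standard library alone (the per-chunk summing below is an illustrative stand-in for per-file embedding work):

```rust
use std::thread;

// Fan out one scoped thread per chunk, then join and combine the results
fn parallel_sum(chunks: &[Vec<i32>]) -> i32 {
    thread::scope(|s| {
        // Fan out: scoped threads may borrow `chunks` directly
        let handles: Vec<_> = chunks
            .iter()
            .map(|chunk| s.spawn(move || chunk.iter().sum::<i32>()))
            .collect();
        // Join: gather the partial sums
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let chunks = vec![vec![1, 2], vec![3, 4], vec![5]];
    assert_eq!(parallel_sum(&chunks), 15);
}
```

`thread::scope` guarantees every worker finishes before the borrow of `chunks` ends, which is exactly the data-race freedom the type system enforces.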

Chai.im: Encryption Engine

Cryptographic operations must be timing-safe:

// Constant-time comparison (prevents timing attacks)
use subtle::ConstantTimeEq;

fn verify_mac(expected: &[u8], actual: &[u8]) -> bool {
    expected.ct_eq(actual).into()
}
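For intuition, here is roughly what a constant-time comparison does under the hood. This sketch is illustrative only; production code should keep using `subtle`, since a hand-rolled version can be transformed by the optimizer in surprising ways:

```rust
// Naive sketch of constant-time equality: examine every byte,
// never exit early on the first mismatch
fn ct_eq_sketch(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    // OR together the XOR of each byte pair; any difference sets a bit
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(ct_eq_sketch(b"mac-tag", b"mac-tag"));
    assert!(!ct_eq_sketch(b"mac-tag", b"mac-taX"));
}
```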

// AEAD encryption with explicit memory management
use aes_gcm::aead::{Aead, KeyInit};
use aes_gcm::Aes256Gcm;

fn encrypt(key: &[u8; 32], nonce: &[u8; 12], plaintext: &[u8]) -> Vec<u8> {
    let cipher = Aes256Gcm::new(key.into());

    // Plaintext is never copied unnecessarily
    cipher.encrypt(nonce.into(), plaintext).expect("encryption failure")
}

Results:

  • Constant-time operations prevent timing side-channel attacks
  • Plaintext and key material are never copied unnecessarily
  • Sensitive buffers are zeroed after use

Developer Experience

Error Handling

Rust's Result type makes errors explicit:

// Errors are part of the type signature
fn read_config(path: &Path) -> Result<Config, ConfigError> {
    let content = std::fs::read_to_string(path)
        .map_err(ConfigError::IoError)?;

    let config: Config = toml::from_str(&content)
        .map_err(ConfigError::ParseError)?;

    config.validate()?;

    Ok(config)
}

// Callers must handle errors
match read_config(&path) {
    Ok(config) => start_server(config),
    Err(e) => {
        eprintln!("Failed to load config: {}", e);
        std::process::exit(1);
    }
}
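The `ConfigError` type used above isn't shown; a simplified sketch of what such an enum might look like (the parse variant's payload is reduced to a `String` here for brevity):

```rust
use std::fmt;
use std::io;

// Hypothetical error type backing the read_config example
#[derive(Debug)]
enum ConfigError {
    IoError(io::Error),
    ParseError(String),
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigError::IoError(e) => write!(f, "I/O error: {e}"),
            ConfigError::ParseError(msg) => write!(f, "parse error: {msg}"),
        }
    }
}

// A `From` impl lets `?` convert io::Error into ConfigError automatically
impl From<io::Error> for ConfigError {
    fn from(e: io::Error) -> Self {
        ConfigError::IoError(e)
    }
}

fn main() {
    let err = ConfigError::ParseError("missing field `port`".to_string());
    assert_eq!(err.to_string(), "parse error: missing field `port`");
}
```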

Testing

Property-based testing catches edge cases:

use proptest::prelude::*;

proptest! {
    #[test]
    fn encrypt_decrypt_roundtrip(plaintext: Vec<u8>) {
        let key = generate_key();
        let nonce = generate_nonce();

        let ciphertext = encrypt(&key, &nonce, &plaintext);
        let decrypted = decrypt(&key, &nonce, &ciphertext).unwrap();

        prop_assert_eq!(plaintext, decrypted);
    }
}

Documentation

Documentation is first-class:

/// Processes a DNS query and returns the appropriate response.
///
/// # Arguments
///
/// * `query` - The incoming DNS query packet
/// * `config` - Server configuration including blocklists
///
/// # Returns
///
/// A `DnsResponse` that should be sent back to the client.
///
/// # Example
///
/// ```
/// let response = process_query(&query_packet, &config);
/// socket.send_to(&response.to_bytes(), client_addr)?;
/// ```
pub fn process_query(query: &DnsQuery, config: &Config) -> DnsResponse {
    // Implementation elided
    todo!()
}

Challenges

Learning Curve

Rust's ownership system takes time to learn:

  • "Fighting the borrow checker" is real initially
  • Concepts like lifetimes are unfamiliar
  • Design patterns differ from other languages

Our approach:

  • Gradual introduction for new team members
  • Code review focused on idiomatic Rust
  • Internal documentation of common patterns

Compilation Time

Large Rust projects compile slowly:

  • Full rebuild: 2-3 minutes
  • Incremental: 10-30 seconds

Mitigations:

  • Split into smaller crates
  • Use `cargo check` during development
  • CI caches dependencies
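Crate splitting happens at the workspace level. A hypothetical layout (member names invented for illustration):

```toml
# Hypothetical workspace: each member crate rebuilds independently,
# so touching the server doesn't recompile the protocol parser
[workspace]
members = ["dns-proto", "blocklist", "dns-server"]
```

With a split like this, `cargo check -p dns-proto` type-checks a single crate in seconds, and CI can cache the compiled dependencies of each member separately.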

Ecosystem Maturity

Some libraries are young:

  • Fewer options than Python/JS
  • Some crates abandoned
  • API stability varies

We evaluate carefully:

  • Maintenance activity
  • Test coverage
  • Security audit status

Conclusion

Rust enables us to build software that's simultaneously fast, safe, and maintainable. The initial learning investment pays dividends in:

  • Fewer production bugs
  • Better performance
  • More confident refactoring
  • Easier security audits

For performance-critical, security-sensitive software, Rust is the right choice.


*Interested in our Rust code? Check out our open source repositories.*

Tags
Rust · Performance · Engineering · Systems Programming