Rillence reviews Rust codebases for memory safety assumptions, unsafe boundaries, concurrency faults, protocol correctness, and production reliability before they reach users.
We inspect the code paths where compiler guarantees end: unsafe blocks, FFI, async cancellation, shared state, serialization, cryptography, and deployment assumptions.
Validate invariants around raw pointers, lifetimes, aliasing, custom allocators, FFI, transmute usage, and unchecked assumptions.
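A minimal sketch of the kind of invariant we validate at an unsafe boundary: a raw-pointer function whose soundness rests entirely on documented caller obligations (all names here are illustrative, not from any audited codebase).

```rust
use std::slice;

/// Sums `len` u32 values starting at `ptr`.
///
/// # Safety
/// `ptr` must be non-null, aligned, valid for reads of `len` u32 values,
/// and the memory must not be mutated (or mutably aliased) during the call.
unsafe fn sum_raw(ptr: *const u32, len: usize) -> u32 {
    debug_assert!(!ptr.is_null());
    // SAFETY: the caller upholds the invariants documented above.
    let data = unsafe { slice::from_raw_parts(ptr, len) };
    data.iter().sum()
}

fn main() {
    let v = vec![1u32, 2, 3, 4];
    // SAFETY: pointer and length come from a live Vec we own and do not mutate.
    let total = unsafe { sum_raw(v.as_ptr(), v.len()) };
    assert_eq!(total, 10);
}
```

An audit checks that every `unsafe` block carries obligations like these, that callers actually satisfy them, and that no safe API lets a user violate them indirectly.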
Find deadlocks, cancellation hazards, race-prone state machines, backpressure gaps, lock ordering issues, and task lifecycle leaks.
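Lock-ordering bugs are a representative example: two threads that acquire the same pair of mutexes in opposite orders can deadlock. The sketch below (hypothetical `Account`/`transfer` names) shows the standard fix, imposing one global acquisition order so the lock graph stays acyclic.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

struct Account {
    id: u64,
    balance: Mutex<i64>,
}

fn transfer(from: &Account, to: &Account, amount: i64) {
    // Ordering rule: always lock the lower id first, regardless of direction.
    // Without this, transfer(a, b) racing transfer(b, a) can deadlock.
    let (first, second) = if from.id < to.id { (from, to) } else { (to, from) };
    let mut a = first.balance.lock().unwrap();
    let mut b = second.balance.lock().unwrap();
    // Map the ordered guards back to the logical from/to accounts.
    if from.id < to.id {
        *a -= amount;
        *b += amount;
    } else {
        *b -= amount;
        *a += amount;
    }
}

fn main() {
    let x = Arc::new(Account { id: 1, balance: Mutex::new(100) });
    let y = Arc::new(Account { id: 2, balance: Mutex::new(0) });
    let (x2, y2) = (Arc::clone(&x), Arc::clone(&y));
    let t = thread::spawn(move || transfer(&x2, &y2, 30));
    transfer(&y, &x, 10); // opposite direction, same lock order: no deadlock
    t.join().unwrap();
    assert_eq!(*x.balance.lock().unwrap(), 80);
    assert_eq!(*y.balance.lock().unwrap(), 20);
}
```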
Check parsers, state transitions, consensus or settlement logic, signature validation, replay protection, and edge-case handling.
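For parsers, the edge cases we probe are truncated input, lengths that lie about the buffer, and arithmetic that could overflow. A minimal sketch of a length-prefixed frame parser handling all three (the format and function name are illustrative):

```rust
/// Parses a frame with a 2-byte big-endian length prefix.
/// Returns (body, remaining input), or None on malformed input.
fn parse_frame(input: &[u8]) -> Option<(&[u8], &[u8])> {
    if input.len() < 2 {
        return None; // truncated length prefix
    }
    let len = u16::from_be_bytes([input[0], input[1]]) as usize;
    // Explicit checked arithmetic: never trust a declared length.
    let body_end = 2usize.checked_add(len)?;
    if input.len() < body_end {
        return None; // declared length exceeds the actual buffer
    }
    Some((&input[2..body_end], &input[body_end..]))
}

fn main() {
    assert_eq!(parse_frame(&[0, 2, b'h', b'i']), Some((&b"hi"[..], &b""[..])));
    assert_eq!(parse_frame(&[0, 5, b'h', b'i']), None); // length lies
    assert_eq!(parse_frame(&[0]), None);                // truncated prefix
}
```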
Review crate exposure, feature flags, supply-chain assumptions, unsafe transitive dependencies, build scripts, and vendored code.
Identify unbounded allocations, accidental copies, blocking work in async paths, panic surfaces, and denial-of-service vectors.
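Unbounded queues are a common exhaustion path: a producer that outruns its consumer grows memory without limit. A bounded channel converts that growth into backpressure, as in this sketch using the standard library's `sync_channel`:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Capacity 8: a fast producer blocks on `send` instead of
    // growing an unbounded in-memory queue under load.
    let (tx, rx) = mpsc::sync_channel::<u64>(8);
    let producer = thread::spawn(move || {
        for i in 0..1000 {
            tx.send(i).unwrap(); // blocks while the buffer is full
        }
    });
    let mut sum = 0u64;
    for v in rx {
        sum += v; // consumer drains at its own pace
    }
    producer.join().unwrap();
    assert_eq!(sum, 499_500); // 0 + 1 + ... + 999
}
```

In a review we look for the inverse pattern: `mpsc::channel`, `Vec` buffers, or log accumulators fed by untrusted input with no cap.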
Assess observability, failure recovery, configuration handling, secrets exposure, upgrade paths, and incident response hooks.
Engagements are scoped around concrete code, threat models, and release timelines. Each audit ends with reproducible findings, severity ratings, and remediation guidance.
A release-focused review of application logic, unsafe boundaries, async behavior, error handling, and dependency exposure before production launch.
A deep review of unsafe Rust, C/C++ interop, raw memory access, ABI assumptions, pointer ownership, and soundness invariants.
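At an FFI boundary the compiler no longer checks anything: the `extern "C"` signature is a promise about pointers, lengths, and ownership that both sides must keep. A hedged sketch of a defensively written export (the function and its contract are hypothetical; export attributes are omitted for brevity):

```rust
/// C-callable checksum over a byte buffer.
/// Contract: `ptr` is either null or valid for reads of `len` bytes.
pub extern "C" fn checksum(ptr: *const u8, len: usize) -> u32 {
    if ptr.is_null() {
        return 0; // defend the boundary rather than trust the caller
    }
    // SAFETY: per the contract, `ptr` is valid for `len` bytes here.
    let bytes = unsafe { std::slice::from_raw_parts(ptr, len) };
    // Wrapping arithmetic: no panic can unwind across the C ABI.
    bytes.iter().fold(0u32, |acc, &b| acc.wrapping_add(b as u32))
}

fn main() {
    let data = [1u8, 2, 3];
    assert_eq!(checksum(data.as_ptr(), data.len()), 6);
    assert_eq!(checksum(std::ptr::null(), 0), 0);
}
```

The review asks the same questions of every such function: who allocates, who frees, what happens on null or zero length, and whether anything can panic across the ABI.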
Review cryptographic usage, serialization formats, verification flows, replay resistance, parser behavior, and consensus-sensitive state transitions.
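Replay resistance, reduced to its core, is remembering what has already been accepted. A minimal sketch, assuming each verified message carries a unique nonce (names are illustrative):

```rust
use std::collections::HashSet;

struct ReplayGuard {
    seen: HashSet<u64>,
}

impl ReplayGuard {
    fn new() -> Self {
        Self { seen: HashSet::new() }
    }

    /// Returns true if the nonce is fresh; false if it was replayed.
    fn accept(&mut self, nonce: u64) -> bool {
        // HashSet::insert returns false when the value was already present.
        self.seen.insert(nonce)
    }
}

fn main() {
    let mut guard = ReplayGuard::new();
    assert!(guard.accept(42));  // first delivery accepted
    assert!(!guard.accept(42)); // replay rejected
    assert!(guard.accept(43));
}
```

Production systems bound this state with a sliding window or expiry; an unbounded `seen` set would itself be a resource-exhaustion finding.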
Find resource exhaustion paths, algorithmic complexity traps, unbounded queues, blocking async work, panic surfaces, and production bottlenecks.
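Algorithmic complexity traps often hide in innocuous code. A sketch of one we see frequently: deduplication via `Vec::contains` is O(n²) on attacker-sized input, while a `HashSet` keeps it near O(n) with identical output.

```rust
use std::collections::HashSet;

// O(n^2): each element triggers a linear scan of the output so far.
fn dedup_quadratic(input: &[u64]) -> Vec<u64> {
    let mut out = Vec::new();
    for &v in input {
        if !out.contains(&v) {
            out.push(v);
        }
    }
    out
}

// ~O(n): constant-time membership checks, same first-occurrence order.
fn dedup_linear(input: &[u64]) -> Vec<u64> {
    let mut seen = HashSet::new();
    input.iter().copied().filter(|v| seen.insert(*v)).collect()
}

fn main() {
    let input: Vec<u64> = (0..10_000).map(|i| i % 1_000).collect();
    assert_eq!(dedup_quadratic(&input), dedup_linear(&input));
}
```

When input size is attacker-controlled, the quadratic version is a denial-of-service vector, not just a slow path.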
The audit is designed for engineering teams that need more than a checklist. We trace behavior through the codebase and give fixes your team can implement.
We define assets, trust boundaries, attacker capabilities, deployment assumptions, and the code paths that deserve the most attention.
Automated tooling supports the work, but the core review is manual: invariants, state machines, error paths, and real exploitability.
Findings include affected code, impact, conditions, proof or reasoning, remediation options, and severity calibrated to your system.
After fixes land, we review patches and distinguish resolved risks from remaining assumptions or accepted tradeoffs.
The final report is structured for maintainers: concise executive summary, prioritized issues, and implementation-level guidance.
We discuss architecture questions, ambiguous risks, and remediation tradeoffs directly with the engineers who own the code.
Send the repository scope, release timeline, and the parts of the system that worry you most. We will propose a focused audit plan.