Software Assurance
Ship software that stands up to attackers.
Find critical flaws early. Get your remediations verified. Prove risk
reduction with KPIs.
Whether you build for Web3 or enterprise, we help you deliver software
with peace of mind.

- Attacker-mindset audits
- SDLC improvement
- Ambassador program
How it works
Four steps: Agree what “bad” is. Inspect. Test. Fix. You keep the artifacts.
Model
Agree what “bad” looks like for your business.
You get
Threat map + ranked abuse paths + starter invariant list
What we do
- Map crown-jewel flows and attacker incentives
- Rank abuse paths by business impact
- Write simple “must-always” rules (invariants)
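A "must-always" rule is most useful when it is executable from day one. Here is a minimal sketch of what one such invariant could look like in code, assuming a hypothetical L1/L2 bridge with `locked_on_l1` and `minted_on_l2` counters (all names are illustrative, not from any real codebase):

```python
# Hypothetical invariant for an L1<->L2 bridge: value minted on L2 must
# never exceed value locked on L1. Names are illustrative only.

def bridge_supply_invariant(locked_on_l1: int, minted_on_l2: int) -> bool:
    """Return True iff the 'no value out of thin air' rule holds."""
    return 0 <= minted_on_l2 <= locked_on_l1

def check_supply(locked_on_l1: int, minted_on_l2: int) -> None:
    # In a real suite this assertion would run after every state transition.
    assert bridge_supply_invariant(locked_on_l1, minted_on_l2), (
        f"invariant violated: minted {minted_on_l2} > locked {locked_on_l1}"
    )
```

Rules written this way feed directly into the Inspect and Test stages: the same predicate is walked through in review and then hammered by fuzzing and property tests.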
Inspect
Walk the design and code against those rules.
You get
Design/code gaps + telemetry plan
What we do
- Manual 4-eyes review on critical components
- Targeted static checks where it pays off
- Wire light telemetry so nothing is “hidden”
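"Light telemetry" can be as small as counting how often each guarded path runs and whether its invariant held. A sketch of the idea, using a hypothetical in-process counter (any metrics backend such as StatsD or Prometheus would do in production):

```python
# Minimal in-process telemetry: count invocations and invariant failures
# per guarded function. Hypothetical sketch; swap the Counter for your
# metrics client in production.
from collections import Counter
from functools import wraps

TELEMETRY: Counter = Counter()

def guarded(invariant):
    """Wrap a function so each call is counted and its result is checked."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            TELEMETRY[f"{fn.__name__}.calls"] += 1
            result = fn(*args, **kwargs)
            if not invariant(result):
                TELEMETRY[f"{fn.__name__}.invariant_failures"] += 1
            return result
        return wrapper
    return decorator

@guarded(lambda new_balance: new_balance >= 0)
def withdraw(balance: int, amount: int) -> int:
    # Hypothetical example: silently goes negative when amount > balance,
    # which the telemetry surfaces instead of hiding.
    return balance - amount
```

Nothing stays "hidden": a path with zero calls is a coverage gap, and a nonzero failure counter is a finding.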
Test
Try to break it, then turn breaks into tests.
You get
Findings + repro harnesses + CI-ready tests
What we do
- Coverage-guided fuzzing + property/invariant tests
- Proof-of-concepts for exploitable issues
- Failing tests that become green in CI
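Every break becomes a pinned regression test: the minimized input the fuzzer found is replayed deterministically in CI so the bug cannot quietly return. A sketch with hypothetical names (`parse_message` and the byte string stand in for a real component and a minimized corpus entry):

```python
# Turn a fuzzer crash into a permanent CI test. The byte string stands
# in for a minimized corpus entry; parse_message is hypothetical.

MINIMIZED_CRASHER = b"\x00\x01len=9999"  # illustrative minimized input

def parse_message(data: bytes) -> dict:
    """Toy parser standing in for the component under test."""
    if not data:
        raise ValueError("empty message")
    # Fixed version: slice the payload instead of trusting a declared
    # length field (the original fuzzer finding).
    return {"payload": data[2:]}

def test_regression_minimized_crasher():
    # Must not raise: the crash found by fuzzing stays fixed.
    result = parse_message(MINIMIZED_CRASHER)
    assert isinstance(result, dict)
```

The test is red while the bug exists and green after the fix, which is exactly the handoff into the Fix stage.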
Fix
Pair on fixes and verify they stay fixed.
You get
Fix-review appendix + rulepacks + attestation
What we do
- Fix review with re-tests
- Rules and shims to block the whole class
- Signed attestation for important artifacts
Systems we specialize in
Our assurance work spans every layer, from blockchain networks to classic SaaS and firmware, making it easy to zero in on what's relevant. Choose a category to explore concise, real-world examples and the outcomes we deliver.
Blockchain Protocols (L0/L2 & Bridges)
Consensus, rollups, and bridges where state roots and messages move value across domains.
On-chain Applications (Blockchain)
Smart contracts, DeFi mechanisms, and wallet policy paths where users approve, mint, and settle.
Runtimes & Sandboxes (WASM/EVM)
Execution engines and isolation boundaries where capability scoping and determinism must hold.
SaaS & APIs
Multi-tenant platforms and partner APIs where authorization and data boundaries must not bend.
AI/ML Systems
Pipelines that turn data into decisions — protected inputs, evaluated models, signed releases.
Firmware & Hardware
Boot stories and privilege boundaries that establish trust before software even runs.
OSS Components
Critical open-source you ship — safer APIs, fuzzed paths, trustworthy builds.
Don't see your exact system? One of these categories will fit — let's scope the right path. Email us.
How we work on your code
We start from a threat model so we are never auditing “blind”. Risks are ranked by impact on your business and users, and we target crown-jewel paths first. Two senior engineers pair a 4-eyes manual code review with static checks and dynamic assessment (coverage-guided fuzzing + invariants). AI-assisted review runs as a final pass to catch anything the earlier stages missed. You leave with artifacts and CI gates, not just a PDF.
- Threat-model-driven plan: ranked risks, abuse-path focus, assumptions & trust boundaries
- Holistic review: manual 4-eyes + static (Semgrep / custom checkers) + dynamic (AFL++, libAFL/GoLibAFL, Honggfuzz, ziggy) & invariant suites + AI quality assurance
- Artifacts you keep: fuzz harnesses, minimized corpora, invariant/property tests, containerized runners, GitHub Checks & PR Audit Dashboard
- Remediation support: we stay until fixes are verified in CI and gates are green
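"Gates are green" is mechanical, not a judgment call: a small check reads the invariant-suite and fuzz-campaign results and blocks the merge on any red. A sketch of such a gate, assuming a hypothetical results schema (the field names are illustrative, not from any specific runner):

```python
# Hypothetical CI gate: block the merge if any invariant test failed or
# a fuzz campaign produced a new, untriaged crash. The results schema
# is illustrative only.

def gate(results: dict) -> int:
    """Return 0 (merge allowed) or 1 (PR blocked)."""
    if results.get("invariant_failures", 0) > 0:
        return 1  # a "must-always" rule was violated
    if results.get("new_crashes", 0) > 0:
        return 1  # fuzzing found something the corpus hadn't seen
    return 0

# In CI this would typically run as the exit code of a check job, e.g.:
#   sys.exit(gate(json.loads(results_path.read_text())))
```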
How we prove risk reduction
These KPIs map to the assessment stages and answer the only questions that matter: are we finding issues quickly, covering crown-jewel paths, and closing the loop fast?
Why these metrics?
- MTTFinding (mean time to finding) shows momentum during onboarding and verification.
- Critical-asset & abuse-path coverage proves we're exercising the threat model's crown jewels.
- Invariant pass-rate ties findings to tests you own (regression-proof).
- Fix-review pass-rate & critical MTTR demonstrate remediation velocity post-report.
- PR Audit Dashboard events (GitHub) drive PR priority and review SLAs.
What good looks like
Where numbers come from
- CI checks for property/invariant tests & fuzz campaigns
- Issue tracker timestamps for MTTR & fix-review outcomes
- Threat-model registry for asset & abuse-path denominators
- SBOM & provenance jobs for freshness & attestation
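Because the numbers come from tracker timestamps, the KPI pipeline is plain arithmetic. A sketch of critical MTTR, assuming hypothetical `opened`/`closed`/`severity` fields (map them onto your tracker's export format):

```python
# Compute critical MTTR (mean time to remediation) from issue-tracker
# timestamps. Field names are illustrative; adapt to your tracker.
from datetime import datetime

def critical_mttr_days(issues: list) -> float:
    """Mean days from open to verified-fixed for closed critical issues."""
    deltas = [
        (datetime.fromisoformat(i["closed"])
         - datetime.fromisoformat(i["opened"])).days
        for i in issues
        if i["severity"] == "critical" and i.get("closed")
    ]
    return sum(deltas) / len(deltas) if deltas else 0.0
```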
Engagement options
Choose Baseline, Continuous, or Embedded. All include observable assurance.
Baseline launch readiness
Feature or pre-release audit driven by threat model + manual 4-eyes code review for logic bugs on crown-jewel paths. Clear ship/no-ship guidance with artifact-as-code.
Deliverables
- Launch readiness report (severity, difficulty, exploit path)
- Threat model & ranked abuse paths for the feature
- Manual 4-eyes code review of critical components (logic-bug hunt guided by threat model)
- Repro harnesses + coverage-guided fuzzing/property/invariant tests wired to CI
KPIs
- MTTFinding ≤ 3d
- Critical-asset coverage ≥ 90%
- Invariant pass-rate ≥ 95%
- 4-eyes coverage = 100% (critical components)
Continuous code reviews
Automated + human review on every high-risk PR. Coverage-guided fuzzing + invariant gates, targeted manual review, and GitHub Checks within minutes — prioritized via a PR Audit Dashboard (GitHub).
Deliverables
- PR Audit Dashboard (GitHub) for triage & priority tracking
- 100% PR auto-run fuzzing + invariant suites (GitHub Checks)
- Risk-based escalation to manual review on sensitive changes
- Coverage heatmap + monthly risk report
KPIs
- Time-to-signal per PR < 10 min
- Regressions blocked at PR = 100%
- False-positive rate ≤ 5%
Embed (Ambassador residency)
Work directly with your developers to fix security issues, burn down backlog, and install ownership and guardrails.
Deliverables
- Backlog triage & remediation plan with owners
- Ownership (RACI) & onboarding across squads
- Quarterly executive scorecard with risk trendlines
KPIs
- Critical MTTR < 3d
- ≥ 80% squads with security owner
- Regression re-open < 2% (90d)
Fuzz-first toolchain we use
We lead with dynamic testing and back it with lightweight static checks. Your assessment ships with harnesses, corpora, and CI wiring.
Fuzzers
- AFL++
- Honggfuzz
- libAFL / GoLibAFL
- ziggy
Static & policy
- Semgrep rulesets
- Custom static checkers
- Powerful AI models for quality assurance
CI & triage
- GitHub Checks & PR Audit Dashboard
- Reproducible harness containers & corpora
- SBOM & provenance jobs (SLSA-aligned)
We also integrate with existing runners (e.g., OSS-Fuzz) and your CI.