C · 09 / Secure Code Review & AppSec

Code review. Before your
users find the bugs.

SAST, DAST, SCA, IaC scanning, secrets scanning, manual secure-code review, threat modeling, and SDLC / CI/CD integration — delivered across Go, Rust, Python, Java, C#/.NET, TypeScript, Kotlin, Swift, C/C++, and Ruby. Every engagement ends with an executive summary, file-by-file findings, a CVSS-scored vulnerability list, a remediation branch with git patches where straightforward, and a free re-test. The goal is merged fixes, not a report that sits in a drawer.

Coverage: SAST · DAST · SCA · IaC · Secrets · Manual
Languages: 10 supported · Go · Rust · Py · Java · .NET · TS
Practice lead: SecurityX · CISM · CySA+ · DoD 8140 baseline
Methodology: STRIDE · Attack trees · OWASP ASVS L2
CI integration: GH · GL · CircleCI · Jenkins · Azure DevOps
Re-test: Included within 90 days
10
Languages covered
Go, Rust, Python, Java, C#/.NET, TypeScript/JavaScript, Kotlin, Swift, C/C++, Ruby — the mainstream server-and-client stack, with reviewers who write each language day-to-day.
5 streams
Parallel scan streams
SAST, DAST, SCA, IaC, and secrets — each with dedicated tooling, runbooks, and triage discipline. Not one scanner pretending to be five.
$0
Re-test fee
Every paid engagement includes one free re-test of the items flagged, requested within ninety days of final report delivery.
Human
Final-word review
Every finding in every report is confirmed by a human reviewer with hands on the code. AI assists. It does not sign.
01 / Why secure code review

Find it in the pull request,
not in the breach report.

The cheapest vulnerability is the one caught before it merges. Every step downstream — staging, production, incident, customer disclosure, regulator notification — costs an order of magnitude more than catching the same issue in code review. We run the scanners, and more importantly we read the code, because most of the issues that actually matter cannot be found by a scanner.

A pattern-matched SAST rule can tell you that exec(userInput) is dangerous. It cannot tell you that the authorization check two files up accepts a user ID from a cookie that a different route lets the client set — because the authorization decision reads fine in isolation, and the route reads fine in isolation, and the two files live in different directories written by two different engineers over three different sprints. That class of finding is where breaches come from, and that class of finding requires a human reviewer who reads the whole codebase, understands the business logic, and can recognize when two correct-looking pieces of code produce an insecure system.
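Here is a minimal, illustrative sketch of that class in Python (Flask-style, with hypothetical route names; not code from any engagement). Each piece reads correctly on its own; the composition is the vulnerability:

```python
from flask import Flask, request, jsonify, make_response

app = Flask(__name__)

def fetch_invoices_for(user_id):
    ...  # stand-in for a correctly tenant-scoped data-layer query

# File 1: the authorization decision. Reads fine in isolation --
# it scopes the query to the requesting user.
@app.route("/api/invoices")
def list_invoices():
    user_id = request.cookies.get("user_id")        # trusts the cookie
    return jsonify(fetch_invoices_for(user_id))

# File 2: a convenience route written sprints later, by someone else.
# Also reads fine in isolation -- it just remembers the selected account.
@app.route("/api/select-account", methods=["POST"])
def select_account():
    resp = make_response("ok")
    resp.set_cookie("user_id", request.form["account"])  # client-controlled
    return resp

# Composed: any client POSTs an arbitrary account ID, receives the cookie,
# and reads another tenant's invoices. Neither file is wrong; the system is.
# The durable fix: derive identity from the server-side session, never from
# a client-writable value.
```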

We run five parallel scan streams — SAST, DAST, SCA, IaC, secrets — because each catches a different class of issue and none of them overlap completely. We layer manual review on top because the strongest findings always come from a reviewer who has read the code. We deliver threat models using STRIDE and attack trees, because the defect you did not know existed is the one you did not model. And we integrate into the SDLC — pull-request-time checks, quality gates, signed commits, SBOM generation — because the only durable fix is the one that prevents the class of defect from recurring.

This service is priced and delivered separately from our vulnerability assessment and penetration testing work. A pen test asks what an attacker can do against your running system. A code review asks why the system is the way it is. Both matter; they answer different questions; most mature programs run both annually.

02 / The five scan streams

SAST, DAST, SCA,
IaC, secrets —
plus human review.

Each stream has its own tooling, its own tuning cadence, its own failure mode, and its own class of finding. We run all five because a single scanner is a single point of missed coverage. The sixth stream — manual secure-code review — is the one that decides whether the engagement was worth the line item. Threat modeling and SDLC integration, streams seven and eight, carry the findings forward into design and pipeline.

Stream 01 · SAST · Static

Static analysis

Source-code scanning without execution. Tooling: Semgrep for pattern-matched rules and custom policy, SonarQube for quality plus security overlap, CodeQL for deep taint analysis, and Checkmarx for enterprise environments with compliance reporting needs. Tuned per-language; noise suppressed; rules versioned.

Semgrep · SonarQube · CodeQL · Checkmarx
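As a concrete flavor of the pattern class these tools flag, a hedged Python sketch (sqlite3 as a stand-in driver): tainted input reaching a SQL sink by concatenation, next to the parameterized fix a remediation patch would ship.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")

def find_user_unsafe(name: str):
    # Flagged by any tuned SAST rule: tainted input reaches a SQL sink
    # via string concatenation. name = "' OR '1'='1" returns every row.
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver binds the value, and the engine
    # never parses it as SQL. This is the mechanical fix a patch ships.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
```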
Stream 02 · DAST · Dynamic

Dynamic analysis

Running-target scanning against staging or production-equivalent. Burp Suite Professional for authenticated web-app testing, OWASP ZAP for automated baseline and CI integration, Invicti (formerly Netsparker) for large-surface automation. Authenticated passes, unauthenticated baseline, and API-centric scans for every endpoint you expose.

Burp Pro · OWASP ZAP · Invicti
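One small slice of what an unauthenticated baseline pass checks, sketched in stdlib Python (the staging URL is hypothetical; the real tooling runs thousands of probes beyond header hygiene):

```python
import urllib.request

# Hedged sketch of one slice of a DAST baseline: response-header hygiene.
EXPECTED = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
]

def header_baseline(url: str) -> list[str]:
    with urllib.request.urlopen(url) as resp:    # plain GET, no auth
        present = {k.lower() for k in resp.headers.keys()}
    return [h for h in EXPECTED if h.lower() not in present]

if __name__ == "__main__":
    # Point this at a staging host you own; the URL here is illustrative.
    for h in header_baseline("https://staging.example.com/"):
        print(f"MISSING: {h}")
```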
Stream 03 · SCA · Dependencies

Software composition

Third-party dependency inventory, CVE triage, license risk, and supply-chain review. Snyk for open-source vulnerability depth, Dependabot for GitHub-native updates, Mend (formerly WhiteSource) for enterprise, GitHub Advanced Security where you already own the license. SBOM delivered in CycloneDX and SPDX.

Snyk · Dependabot · Mend · GitHub AS
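A hedged sketch of how the delivered SBOM gets consumed downstream: walking the CycloneDX JSON component inventory. Field names follow the published CycloneDX schema; the file path is illustrative.

```python
import json

def list_components(path: str):
    # Walk a CycloneDX JSON SBOM and print the flat component inventory.
    with open(path) as f:
        bom = json.load(f)
    assert bom.get("bomFormat") == "CycloneDX"
    for comp in bom.get("components", []):
        # purl (package URL) is the key you join against advisory feeds.
        print(comp.get("purl"), comp.get("name"), comp.get("version"))

list_components("sbom.cyclonedx.json")
```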
Stream 04 · IaC · Infrastructure

Infrastructure-as-code

Scanning Terraform, CloudFormation, Kubernetes manifests, Helm charts, and Pulumi against CIS baselines and organizational policy. Tooling: Checkov, tfsec, and Snyk IaC. Drift detection between declared state and deployed reality, and policy-as-code integration via OPA / Conftest when the stack is mature enough to support it.

Checkov · tfsec · Snyk IaC
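Drift detection can be as plain as a scheduled plan in CI. A hedged Python sketch using Terraform's documented -detailed-exitcode behavior (exit 0 means declared and deployed state agree, 2 means they do not):

```python
import subprocess

def drift_check(workdir: str) -> bool:
    # `terraform plan -detailed-exitcode` exits 0 on no changes,
    # 2 when the declared state and deployed reality disagree.
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir, capture_output=True, text=True,
    )
    if result.returncode == 2:
        print(f"DRIFT in {workdir}:\n{result.stdout}")
        return True
    result.check_returncode()   # raise on exit 1 (a real plan error)
    return False
```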
Stream 05 · Secrets · Credentials

Secrets scanning

Repository-wide and history-deep secrets detection. git-secrets as the pre-commit default, TruffleHog for deep historical scans across entire git history, GitGuardian for organization-wide continuous monitoring. Any finding triggers immediate rotation; pre-commit hooks roll out during the engagement.

git-secrets · TruffleHog · GitGuardian
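The detection primitives underneath these tools are simple; the discipline is in the rollout and the rotation. A hedged sketch of the two primitives in Python (the AWS key below is Amazon's documented example credential):

```python
import math
import re

# Two detection primitives the real tools combine: a known-prefix regex
# (AWS access-key IDs start with AKIA) and a Shannon-entropy screen for
# random-looking string literals.
AWS_KEY = re.compile(r"\bAKIA[0-9A-Z]{16}\b")
CANDIDATE = re.compile(r"['\"]([A-Za-z0-9+/=_\-]{20,})['\"]")

def shannon_entropy(s: str) -> float:
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def scan_line(line: str) -> list[str]:
    hits = AWS_KEY.findall(line)
    hits += [m for m in CANDIDATE.findall(line) if shannon_entropy(m) > 4.0]
    return hits

# Both primitives fire on this hard-coded credential pair (the regex on
# the key ID, the entropy screen on the secret).
print(scan_line('key = "AKIAIOSFODNN7EXAMPLE"; secret = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"'))
```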
Stream 06 · Manual · Human

Manual secure-code review

A reviewer with hands on the code, reading the repo end-to-end for business-logic flaws, authorization boundaries, crypto implementation, and integration trust. This is where most of the real findings come from. We scope by path and budget in hours; the deliverable is file-by-file annotated review, not a checklist.

Annotated review · business-logic focused
Stream 07 · Threat model · Design

Threat modeling

STRIDE, attack trees, and PASTA applied to new applications, major architecture shifts, and high-risk features before code is written. Scoping session with engineering lead, model drafted by the reviewer, mitigation mapping, walk-through with security and engineering leadership. Deliverable is portable: usable by the next engineer, readable by the next auditor.

STRIDE · Attack trees · PASTA
Stream 08 · SDLC · Integration

SDLC + CI/CD integration

Security integrated into the development lifecycle. PR-time SAST, SCA, and secrets checks. Quality gates on critical and high findings. Signed commits. SBOM generation. Pipeline hardening for GitHub Actions, GitLab CI, Jenkins, CircleCI, Buildkite, Azure DevOps, and Bitbucket Pipelines. DevSecOps maturity assessment against BSIMM or SAMM as a separate deliverable when requested.

DevSecOps · BSIMM · SAMM
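A minimal sketch of the quality gate itself, in Python against a hypothetical findings.json emitted by the scan step: report everything, fail the job only on critical and high once rules are tuned.

```python
import json
import sys

# Hedged sketch of a PR-time quality gate. Assumes a hypothetical
# normalized findings file: [{"id": ..., "severity": ...}, ...].
BLOCKING = {"critical", "high"}

def gate(path: str) -> int:
    with open(path) as f:
        findings = json.load(f)
    blockers = [f for f in findings if f["severity"].lower() in BLOCKING]
    for f in blockers:
        print(f"BLOCKING {f['severity'].upper()}: {f['id']}")
    print(f"{len(findings)} findings, {len(blockers)} blocking")
    return 1 if blockers else 0   # non-zero exit fails the CI job

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "findings.json"))
```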
03 / Languages

Ten languages.
Written by reviewers
who write them daily.

Every reviewer on our roster has a primary and secondary language — the one they write day-to-day and one adjacent. Engagements are staffed accordingly: a Go service gets a Go reviewer, a TypeScript monorepo gets a TypeScript reviewer, an embedded-style C++ service gets the reviewer who writes C++. We do not hand a Python reviewer a Rust codebase and hope for the best.

01 · Systems
Go

Server-side services, CLI tooling, cloud-native workloads. Deep coverage on concurrency safety, context-handling, and standard-library misuse.

02 · Systems
Rust

Performance-sensitive server code, CLI, WASM. Borrow-checker sanity plus unsafe audit; supply-chain review across crates.io.

03 · General
Python

Django, Flask, FastAPI, data pipelines, ML training. Pickle deserialization, template injection, and framework-specific middleware review.
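For the pickle case specifically, a hedged illustration of why deserializing untrusted input is a finding every time:

```python
import pickle
import json

# Unpickling executes attacker-chosen code via __reduce__.
class Exploit:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))   # runs on pickle.loads

payload = pickle.dumps(Exploit())
# pickle.loads(payload)  # <- would execute the command; never on untrusted data

# The fix is structural: a data-only format for untrusted input.
safe = json.loads('{"user": "alice"}')        # parses data, runs nothing
```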

04 · Enterprise
Java

Spring, Spring Boot, Jakarta EE. JVM deserialization, reflection hazards, and XML-external-entity review. Maven and Gradle dependency audit.

05 · Enterprise
C# / .NET

ASP.NET Core, Blazor, Entity Framework. View-state and data-protection-API review; NuGet supply-chain audit.

06 · Web
TS / JS

Node.js, Deno, Bun, Next.js, React, Vue, Svelte, Express, NestJS. Prototype pollution, SSR hazards, and npm dependency risk.

07 · Mobile
Kotlin

Android apps and server-side Kotlin. Jetpack Compose, Ktor, keystore handling, and Android intent-filter review.

08 · Mobile
Swift

iOS applications and server-side Swift. Keychain handling, URLSession pinning, and crypto-kit implementation review.

09 · Native
C / C++

Native services, game servers, high-performance libraries. Memory-safety analysis, integer-overflow audit, and ABI-boundary review.

10 · Scripting
Ruby

Rails applications, Sinatra microservices. Active-record injection, mass-assignment audit, and Sidekiq / job-queue review.

04 / How an engagement runs

Six stages.
Always the same.

From scoping call to re-test, every engagement runs the same six stages. The names do not change; the depth scales with scope. A small review runs these in two weeks. A monorepo takes eight. The rigor is identical.

01

Scoping & kickoff

Thirty-minute scoping call plus a code walk-through with the tech lead. We confirm the repo, the services in-scope, the services explicitly out-of-scope, the deployment environments to DAST-test, the CI platform to integrate with, and the languages involved. Every NDA is executed here; every secure-transfer channel is opened here.

Deliverable: Signed SOW + NDA · Duration: 2–3 business days
02

Automated scan pass

SAST baseline, SCA inventory, IaC scan, and secrets-history deep pass run in parallel. Raw output is de-duplicated, assigned a first-pass severity, and held for manual triage. DAST begins against a provided staging environment once we have a running target and credentials. This is where AI-assisted first-pass triage runs; this is not where findings are written.

Deliverable: Raw scan register · Duration: 3–5 business days
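Mechanically, de-duplication looks something like this hedged sketch (the normalized finding shape is hypothetical): findings that agree on weakness class, file, and approximate location collapse to one entry, keeping the highest severity.

```python
import hashlib

SEV_ORDER = {"critical": 4, "high": 3, "medium": 2, "low": 1, "info": 0}

def fingerprint(f: dict) -> str:
    # Bucket line numbers so a one-line drift between tools still matches.
    key = f"{f['cwe']}:{f['file']}:{f['line'] // 5}"
    return hashlib.sha256(key.encode()).hexdigest()[:16]

def dedupe(findings: list[dict]) -> list[dict]:
    best: dict[str, dict] = {}
    for f in findings:
        fp = fingerprint(f)
        # Keep one finding per fingerprint, preferring higher severity.
        if fp not in best or SEV_ORDER[f["severity"]] > SEV_ORDER[best[fp]["severity"]]:
            best[fp] = f
    return list(best.values())
```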
03

Manual review

A reviewer with hands on the code reads the repo end-to-end — or the scoped paths end-to-end for a monorepo — looking for the classes of finding scanners cannot see. Authorization boundaries, crypto mis-use, business-logic flaws, integration trust, and data-flow pollution. Every finding is logged with file path, line range, reproduction narrative, and CVSS score.

Deliverable: Annotated review · Duration: 5–20 business days
04

Threat model

If the engagement includes threat modeling, the model is drafted here using STRIDE or attack trees, validated against the manual-review findings, and walked through with engineering leadership. The deliverable is a portable artifact: diagrams, decision log, mitigation map, and residual-risk register.

Deliverable: Threat model · Duration: 3–5 business days
05

Report + remediation branch

Executive summary drafted; file-by-file technical findings drafted; remediation branch opened with git patches for every finding whose fix is mechanical. CVSS scores finalized. A sixty-minute read-out with engineering and security leadership walks the report end-to-end and answers every question raised during the review.

Deliverable: Report + branch + read-out · Duration: 3–5 business days
06

Re-test & attestation

Within ninety days of report delivery, at your request, we re-test every item flagged and update each finding with a new status — fixed, partial, accepted, or still open — and issue a short attestation letter suitable for auditors, customers, and boards. The re-test is included in the engagement fee.

Deliverable: Updated report + letter · Duration: 3–7 business days
05 / Deliverables

Five artifacts.
Every engagement.

What you get, every time. Each deliverable is signed by the reviewer who authored it, reviewed by our practice lead, and dated on firm letterhead. Nothing is subcontracted; nothing is ghost-written.

Artifact 01

Executive summary

Four to six pages. Business-level risk written for a non-technical audience. The three to five findings that would actually matter in a breach. Strategic recommendations tied to a twelve-month program.

Board-ready · signed
Artifact 02

File-by-file findings

Every finding catalogued by file path and line range. CVSS v3.1 score, reproduction narrative, business-context severity, and stack-specific remediation guidance. Evidence captured — scanner output, proof-of-concept code, screenshots.

Technical · CVSS-scored
Artifact 03

CVSS-scored vuln list

A single flat register of every issue, sortable by severity, file, class, and remediation difficulty. Usable by project managers who do not want to read the full technical report. Mapped to OWASP Top 10, CWE, and MITRE ATT&CK.

Register · sortable · mapped
Artifact 04

Remediation branch

Git patches for every finding whose fix is mechanical — dependency bumps, configuration changes, pattern-safe refactors. Opened as a branch on your repo (or ours, by preference), ready for your team's PR review. Complex fixes get a written runbook, not a speculative patch.

Git branch · PR-ready
Artifact 05

Re-test attestation

Included free within ninety days. We re-test every flagged item, update the finding status, and issue a one-page attestation letter suitable for SOC 2 auditors, customer security questionnaires, PCI assessors, and board packets.

Attestation · auditor-ready
Who signs the report

Practice led by
a director with the three-cert stack.

Every engagement is reviewed and signed off by our Cyber Command lead — a director with prior military cyber-operations experience, including adversarial-simulation and red-team operational roles, who holds CompTIA SecurityX (formerly CASP+), ISACA CISM, and CompTIA CySA+. Three ANSI-accredited credentials covering practitioner, manager, and analyst work; all three are DoD 8140 baseline certifications.

That matters because the discipline of code review spans exactly those three lanes. A reviewer needs the hands-on technical knowledge to read the code; the program lens to recommend durable fixes and not just one-line patches; and the analyst discipline to tell a real finding from a false positive when the scanner output is ambiguous. We've structured the practice around that combination, and the report you get reflects it.

  • SecurityX (CASP+): Senior-level advanced security practitioner · hands-on technical depth.
  • ISACA CISM: Management and governance · program-level remediation guidance.
  • CompTIA CySA+: Analyst and SOC discipline · real-finding versus false-positive triage.
  • Prior military cyber: Adversarial-simulation and red-team operational experience.
  • DoD 8140 baseline: All three certs accepted for senior technical, governance, and SOC-analyst roles.
  • Sign-off authority: Every report passes the practice lead before it leaves the firm.
Stream | Enterprise | Midmarket | Open-source / startup
SAST | Checkmarx · Veracode (compliance reporting) | SonarQube · CodeQL (CI-native) | Semgrep · Bandit · Brakeman (per-language OSS)
DAST | Invicti · Rapid7 InsightAppSec (surface-scale) | Burp Suite Professional (manual + automation) | OWASP ZAP · Nuclei (CI-integrated)
SCA | Mend (WhiteSource) · Black Duck (license + CVE) | Snyk · GitHub Advanced Security (Git-native) | Dependabot · OSV-Scanner · Trivy (per-language OSS)
IaC | Prisma Cloud · Wiz Code (policy + drift) | Snyk IaC · Checkov (CI baseline) | tfsec · KICS · Terrascan (free · fast)
Secrets | GitGuardian Enterprise (org-wide monitoring) | GitGuardian Pro · GitHub secret scanning (repo-wide) | git-secrets · TruffleHog · gitleaks (pre-commit + history)
Threat modeling | IriusRisk · ThreatModeler (platform-based) | OWASP Threat Dragon · pytm (code-as-model) | STRIDE worksheets · Attack trees (whiteboard-first)
CI integration | Jenkins · Azure DevOps · GitLab Ultimate (self-hosted) | GitHub Actions · GitLab CI · CircleCI (SaaS pipelines) | Drone · Woodpecker · Buildkite (self-hosted OSS)
06 / Code-Review FAQ

Questions we
hear every week.

If your question is not answered here, the senior reviewer who would run your engagement takes the call directly — not a sales rep, not a gatekeeper. Dispatch routes you in under five minutes.

What's the difference between SAST and DAST?

SAST is static application security testing — the scanner reads your source code without running it, tracing data flow through functions, flagging dangerous sinks, insecure APIs, and pattern-based vulnerabilities like SQL concatenation or cross-site scripting. Tooling: Semgrep, SonarQube, CodeQL, and Checkmarx for enterprise environments that need compliance reporting.

DAST is dynamic application security testing — the scanner runs against a live, deployed instance of your application and probes it the way an attacker would, using Burp Suite Professional, OWASP ZAP, or Invicti. SAST catches issues earlier and cheaper but produces false positives and cannot see runtime state. DAST catches issues that only appear at runtime — authentication flows, SSRF against cloud metadata, race conditions, misconfigured reverse proxies — but runs late in the cycle and can miss code paths it never reaches.

Both matter. Run SAST in the pull request, DAST in staging before release, and back both with human review for the classes of finding neither scanner will catch.

Do you use AI tools for code review?

Yes, as amplifiers — never as the final word. We use AI-assisted review inside every engagement for pattern recognition, consistency checking across large codebases, and first-pass triage on SAST output to separate real findings from pattern-matched noise. AI is good at the first thousand things. It is bad at deciding which of the first thousand things matter, and worse at reasoning about the business logic that defines what matters in context.

Every finding in the final report is confirmed by a human reviewer with hands on the code, and every piece of guidance is written specifically to your stack. If your codebase is sensitive — defense, healthcare, regulated financial work — we can run the engagement without uploading source to external AI services. The model runs locally on reviewer hardware, or on a VM you provide, or we read the code without AI assistance at all. Confirm that preference at scoping and we configure accordingly.

Will you sign an NDA before looking at our code?

Yes, always, before source ever moves. Our default posture: mutual NDA executed at scoping; source pulled through a secure, audit-logged channel — SSH key deployed for the engagement, signed-URL bundle expiring in seventy-two hours, or direct clone on a client-supplied VDI; source retained only for the duration of the engagement plus a retention window you specify; destruction attested in writing on close-out.

For highly sensitive codebases — defense primes, regulated fintech, healthcare with PHI access in the test path — we run the review on client-supplied VDI, on-site at your facility, or by flying a reviewer to work from your office. Nothing about any of this is unusual; it is the default for any engagement we take on. If you have a custom DPA or a specific data-handling requirement (CUI, ITAR, HIPAA-covered, PCI-scoped), send it during scoping — we match it.

Can you review our monorepo?

Yes. Monorepos change the tooling, not the discipline. We scope by path — which services, libraries, and packages are in, which are out — and we set SAST rules per-module so you do not get twenty thousand findings from vendored dependencies, generated code, or mock fixtures. Semgrep is particularly good at monorepo scoping via the Registry plus custom path-based configs, and CodeQL supports per-database scoping across very large trees.

We've worked in Bazel, Nx, Turborepo, Yarn workspaces, Go modules, Maven multi-module, Gradle multi-project, and Pants repos. The review report is delivered module-by-module rather than as one monolithic file, and the remediation branch gets opened per-service so your teams can merge independently without waiting on each other. For very large monorepos — say, 500K lines and up — we staff with two to four reviewers in parallel and still deliver a single, consolidated report.

Do you integrate with our CI?

Yes. SAST, SCA, IaC scanning, and secrets scanning all run best as pull-request-time checks — findings land on the PR where the author can fix them in context, not three weeks later in a remediation spreadsheet. We integrate with GitHub Actions, GitLab CI, Bitbucket Pipelines, Jenkins, CircleCI, Buildkite, and Azure DevOps. For each tool we set a baseline that does not block merges on day one (you will drown in noise), tune rules over two to four sprints, then move to blocking on critical and high severities.

Secrets scanning always blocks from day one. If the scanner flags a credential in a diff, the PR cannot merge until it is resolved — we rotate the credential and strip it from history, then unblock. Quality gates, signed commits, and SBOM generation get wired in during the same engagement if they are not already. DAST is harder to run in-pipeline because of the running-target requirement, but we integrate OWASP ZAP baseline scans into nightly CI runs against a dedicated staging environment whenever the stack supports it.

How do you handle findings we disagree with?

Every finding in the report carries a CVSS score, reproduction steps, and a written rationale. If your team disagrees — either on severity or on whether the finding is real — we do a thirty-minute triage call with the engineer who raised the objection and the reviewer who wrote the finding. If the reviewer can demonstrate the exploit path and the impact live, the finding stays as written. If the engineer can demonstrate an existing mitigating control the reviewer missed, the finding is downgraded or accepted with the mitigation documented.

If nobody can tell for sure, we build a proof-of-concept together and the PoC decides. We've had plenty of findings downgraded or struck on review; we've never had a reviewer refuse to reconsider an objection. The goal is a report you can stand behind — one that reads accurately six months from now when an auditor picks it up — not a score, not a finding count, and not a defense of the reviewer's ego.

What's the turnaround time for a code review?

Depends on scope. A single service or microservice — say, ten to thirty thousand lines of code — is five to ten business days from kickoff to draft report. A mid-size application, fifty to a hundred and fifty thousand lines, is two to four weeks. A full monorepo, five hundred thousand lines and up across a dozen services, is six to ten weeks and runs with two to four reviewers in parallel.

Every engagement begins with a thirty-minute scoping call plus a code walk-through with the tech lead, and ends with a sixty-minute read-out to engineering and security leadership. The included re-test runs inside three to seven business days once you tell us the remediation branch has merged. If you need a compressed timeline for a launch, an incoming customer security questionnaire, or an audit date that cannot move, tell us at scoping — we can usually commit to half the standard timeline at a premium rate, as long as your team is available to answer reviewer questions inside the compressed window.

What languages do you cover?

The mainstream server-side and client-side stack: Go, Rust, Python, Java, C#/.NET, TypeScript and JavaScript (Node.js, Deno, Bun, React, Vue, Svelte), Kotlin (Android plus server-side), Swift (iOS plus server-side), C and C++, and Ruby (including Rails). Each is covered with both SAST tooling and reviewers who write the language day-to-day — a reviewer tracing a Go channel deadlock should write Go, not translate from Java.

For rarer languages — Elixir, Erlang, Haskell, OCaml, Clojure, Scala, Zig, Nim — we still take the engagement but with reduced SAST coverage (more manual, fewer pre-built rules) and clearer disclosure of that in scoping. Embedded C, firmware analysis, hardware security, and side-channel work are out of scope for this service line; we refer those engagements out to specialist firms and are happy to introduce you. If your stack is polyglot, that is normal — we staff across languages for the same engagement.

What about my third-party dependencies?

Third-party dependencies are SCA territory — software composition analysis — and they get a dedicated stream inside every engagement. We inventory every direct and transitive dependency, check each against the NVD, GitHub Advisory Database, and vendor-specific sources (OSS-Fuzz advisories, language-specific registries), flag license risks (GPL reach-through, AGPL in a commercial product, unclear or unattributed licenses), and deliver a remediation roadmap prioritized by exploitability, not just CVE presence.

Tooling for the scanning layer: Snyk, Dependabot, Mend (formerly WhiteSource), and GitHub Advanced Security. Deliverable for the audit artifact: an SBOM in CycloneDX or SPDX. The underexamined part of SCA is the supply chain itself — typosquatted packages, compromised maintainers, malicious updates to otherwise-trusted packages — which we flag separately with different tooling and manual review. For high-assurance engagements we verify package signatures, lock transitive versions, and recommend an internal package mirror when the dependency tree is large enough to warrant it.
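One of those supply-chain screens, sketched in stdlib Python with an illustrative popular-package sample: flag dependency names that sit suspiciously close to well-known packages.

```python
import difflib

# Tiny illustrative sample; a real screen uses the registry's top packages.
POPULAR = ["requests", "urllib3", "numpy", "cryptography", "django"]

def typosquat_candidates(dep: str, cutoff: float = 0.85) -> list[str]:
    # Near-miss names are the classic typosquat pattern.
    near = difflib.get_close_matches(dep, POPULAR, n=3, cutoff=cutoff)
    return [p for p in near if p != dep]   # an exact match is not a squat

for dep in ["reqeusts", "requests", "numpy2", "crpytography"]:
    hits = typosquat_candidates(dep)
    if hits:
        print(f"REVIEW {dep!r}: close to {hits}")
```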

Can you certify our code for PCI, SOC 2, or SOX?

We don't issue certifications — no code-review firm does, and any firm that claims otherwise is misrepresenting how those audits work. What we do is deliver the evidence and the attestation letters your auditor expects.

PCI-DSS Requirement 6.2 requires development staff to be trained in secure coding; Requirement 6.3 requires secure software-development practices; Requirement 11.3 requires annual testing including code review for applications in the CDE. Our engagement report documents the cadence, the training, and the remediation process — usable directly by your QSA. SOC 2 CC7.1 and CC8.1 cover change management and threat identification; our engagement report, integrated into your evidence package, satisfies both. SOX ITGC on application development requires segregation of duties, change documentation, and control over privileged access to production; our report speaks to the portion of that control set related to code quality and security review.

In every case, we coordinate directly with your auditor — Big Four, Schellman, A-LIGN, or your regional firm — at the start of the engagement to confirm the format, the scope, and the evidence they need. If a custom template is required, send it at scoping and we will match it before delivery.

07 / Next Step

Tell us the repo.
We'll tell you what's in it.

Send the language, the approximate line count, the deadline, and whether this is a one-off review or the start of an SDLC integration engagement. A senior reviewer — the one who would run the engagement — is on a scoping call inside two business days. Every engagement includes executive summary, file-by-file findings, CVSS-scored vuln list, remediation branch, and free re-test.