Governance, Risk, and Compliance

Threat Modeling

Threat modeling is a structured, proactive process for identifying and evaluating security risks in a system before attackers find them. It involves mapping how a system is built and how data flows through it, then systematically identifying where an attacker could cause harm and what controls would reduce that risk. According to OWASP, threat modeling "works to identify, communicate, and understand threats and mitigations within the context of protecting something of value."

What is threat modeling?

Threat modeling is how security-conscious engineering teams ask the question: "what could go wrong, and how bad would it be?" before building something rather than after shipping it. It's a design-phase activity that makes attack surfaces explicit, so the team can make deliberate choices about which risks to mitigate, accept, transfer, or avoid.

The core output of a threat model is not a document. It's a shared understanding among engineers and product teams of where the security risks are in a given system, how severe they are, and what's being done about them. That shared understanding is what turns security from something the security team worries about into something the engineering team designs around.

NIST Special Publication 800-154 (draft), Guide to Data-Centric System Threat Modeling, provides a framework that focuses on how sensitive data flows through a system, where it's stored, and how it might be exposed. OWASP's threat modeling guidance takes a broader systems view, starting from architecture diagrams and working outward to attack scenarios.

Threat modeling is not a one-time audit. OWASP recommends applying it continuously throughout the software development lifecycle, revisiting the model whenever the system changes, new features are added, or new attack techniques emerge.

Threat modeling methodologies

Several structured methodologies exist. Each approaches the problem from a slightly different angle.

STRIDE

STRIDE is the most widely adopted starting point for teams new to threat modeling. Developed at Microsoft, it categorizes potential threats into six types:

  • Spoofing (pretending to be someone or something you're not)
  • Tampering (modifying data or code without authorization)
  • Repudiation (denying you performed an action)
  • Information disclosure (exposing data to unauthorized parties)
  • Denial of service (disrupting availability)
  • Elevation of privilege (gaining permissions beyond what's authorized)

STRIDE works well in agile environments because it's systematic without being overly prescriptive. Teams can apply it to individual features, API endpoints, or entire service architectures. The categories map naturally to security controls, so the gap between identifying a threat and deciding how to address it is short.
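A STRIDE pass over a feature is essentially a checklist: one prompting question per category, asked of each component. The sketch below illustrates that idea in Python; the prompt wording and the example endpoint are illustrative, not part of the STRIDE definition itself.

```python
from enum import Enum

class Stride(Enum):
    SPOOFING = "Spoofing"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFO_DISCLOSURE = "Information disclosure"
    DENIAL_OF_SERVICE = "Denial of service"
    ELEVATION_OF_PRIVILEGE = "Elevation of privilege"

# One prompting question per category, used as a checklist in a design review.
# The wording here is illustrative, not canonical.
PROMPTS = {
    Stride.SPOOFING: "Can a caller pretend to be another user or service?",
    Stride.TAMPERING: "Can data or code be modified without authorization?",
    Stride.REPUDIATION: "Could a user plausibly deny performing an action?",
    Stride.INFO_DISCLOSURE: "Could data leak to an unauthorized party?",
    Stride.DENIAL_OF_SERVICE: "Can this component be made unavailable?",
    Stride.ELEVATION_OF_PRIVILEGE: "Can a caller gain permissions beyond its role?",
}

def stride_checklist(component: str) -> list[str]:
    """Emit one review question per STRIDE category for a component."""
    return [f"{component} / {cat.value}: {question}"
            for cat, question in PROMPTS.items()]

# Apply the checklist to a single (hypothetical) API endpoint.
for line in stride_checklist("POST /api/payments"):
    print(line)
```

Running the checklist per endpoint or per data flow keeps the exercise bounded: six questions per element, each of which either surfaces a threat or is explicitly answered and moved past.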

PASTA

PASTA (Process for Attack Simulation and Threat Analysis) is a seven-stage risk-centric methodology designed to align threat modeling with business impact. It's more comprehensive than STRIDE and more appropriate for organizations that need to communicate security risk in terms of financial or operational consequence. PASTA's seven stages are: defining business objectives, defining the technical scope, decomposing the application, analyzing threats, analyzing vulnerabilities and weaknesses, modeling and simulating attacks, and finally analyzing risk and impact to produce a risk-prioritized output.

MITRE ATT&CK

For teams that want to ground their threat models in documented attacker behavior, MITRE ATT&CK provides a knowledge base of real-world tactics, techniques, and procedures (TTPs). Using ATT&CK as a threat modeling input lets you ask not just "could an attacker tamper with this?" but "how would an attacker realistically tamper with this, based on observed behavior?"

The basic threat modeling process

Regardless of methodology, threat modeling follows a consistent structure:

  1. Define what you're protecting. Scope the system: what components are in scope, what data flows through them, and where the trust boundaries are.
  2. Draw the data flow diagram. Map how data moves through the system. Identify where data crosses trust boundaries (e.g., from user input to a service, from one service to another, from a service to a database).
  3. Identify threats. For each component and data flow, enumerate what could go wrong. Use STRIDE categories or another framework as a prompt.
  4. Rate and prioritize. Not all threats are equal. Assess likelihood and impact. Focus effort on high-severity, realistic threats.
  5. Define mitigations. For each prioritized threat, decide what control reduces the risk: input validation, authentication enforcement, encryption, rate limiting, access controls.
  6. Track and revisit. Record mitigations as engineering tasks. Update the threat model when the system changes.
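Steps 3 through 6 boil down to a record per threat: which component, what could go wrong, a likelihood and impact rating, and the mitigation being tracked. A minimal sketch of that record and the prioritization in step 4, with illustrative example threats and a simple likelihood-times-impact score (teams may prefer DREAD, CVSS, or a qualitative scale instead):

```python
from dataclasses import dataclass

@dataclass
class Threat:
    component: str        # element of the data flow diagram
    description: str      # what could go wrong (step 3)
    likelihood: int       # 1 (rare) .. 5 (almost certain)
    impact: int           # 1 (negligible) .. 5 (severe)
    mitigation: str = ""  # control that reduces the risk (step 5)
    ticket: str = ""      # issue-tracker reference (step 6)

    @property
    def risk(self) -> int:
        # Simple likelihood x impact score for prioritization (step 4).
        return self.likelihood * self.impact

# Illustrative findings from a session, not real data.
threats = [
    Threat("login endpoint", "credential stuffing", likelihood=4, impact=4,
           mitigation="rate limiting + MFA"),
    Threat("object storage", "public read of customer exports", likelihood=2, impact=5,
           mitigation="block public access, bucket policy audit"),
    Threat("admin panel", "missing audit log (repudiation)", likelihood=3, impact=2,
           mitigation="append-only audit trail"),
]

# Step 4: rate and prioritize, highest risk first.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"[risk {t.risk:>2}] {t.component}: {t.description} -> {t.mitigation}")
```

The exact scoring scheme matters less than the discipline: every threat gets a severity, a mitigation, and an owner, and the list is revisited when the system changes.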

Why threat modeling matters for startups

  1. Fixing security issues before you build is orders of magnitude cheaper than fixing them after. A finding in a design review costs an engineering conversation and a revised approach. The same finding after your first SOC 2 audit or after a customer reports a vulnerability costs remediation time, potential customer trust, and possibly a breach disclosure. Threat modeling moves security left in the development lifecycle, where it's cheapest.
  2. It produces compliance evidence as a byproduct. SOC 2 (CC6 and CC8 controls), ISO 27001 (Annex A 8.25, Secure development life cycle, and the broader risk assessment process), and other frameworks require documented risk identification and treatment in your development process. A threat modeling practice, documented and repeatable, is direct evidence of security-by-design. Auditors want to see that you identify risks deliberately, not reactively.
  3. Architecture mapping becomes a security artifact. When threat models are tied to architecture diagrams, you produce two compliance deliverables simultaneously: a diagram that auditors need and a threat analysis that demonstrates you thought about the security of that architecture. Fencer auto-generates architecture diagrams that can seed the threat modeling process.
  4. Small teams can do this without a dedicated security person. Threat modeling does not require a security engineer. It requires engineers who are willing to ask "what could go wrong?" systematically. STRIDE gives any developer a repeatable framework. A one-hour architecture review with STRIDE applied to a new feature generates more useful security signal than a penetration test scheduled six months after the feature shipped.

Threat modeling and vulnerability management

Threat models generate findings, and findings need to go somewhere. The output of a threat modeling session should feed directly into your vulnerability management process: tickets in your issue tracker, prioritized by severity, with owners and due dates. The goal is not a document that lives in Confluence; it's engineering work that gets done.

How Fencer helps with threat modeling

Fencer's architecture mapping capability automatically generates system architecture diagrams from your connected cloud infrastructure and repositories, giving engineering teams a starting point for threat modeling sessions rather than a blank whiteboard. Fencer's vulnerability management layer then provides a unified place to track the findings that threat modeling surfaces, alongside findings from automated scanning. When a threat model identifies a risk that a scanner later confirms as an actual vulnerability, that continuity from design-time risk to runtime finding is where the investment in threat modeling pays off.

Frequently asked questions

How is threat modeling different from a penetration test?

A penetration test is a point-in-time adversarial exercise: a tester tries to exploit your running system. Threat modeling is a design-time exercise: your team analyzes how your system could be attacked before or as it's being built. They're complementary. A threat model is most valuable early, when changes are cheap. A pen test is most valuable after the system is built, to validate that your controls work. Startups that do threat modeling generally find that pen tests surface fewer critical findings because the worst design flaws were addressed earlier.


How long does a threat modeling session take?

For a single feature or service, a focused session using STRIDE takes one to two hours with two to four engineers. For a full system architecture review, expect a half-day. The cadence matters more than session length: a one-hour threat model for every significant new feature is far more effective than an annual three-day exercise. The goal is to make threat modeling a lightweight, habitual part of design reviews, not a heavyweight audit event.


Does threat modeling require specialized security expertise?

No. The core process, diagramming the system and asking what could go wrong using a framework like STRIDE, can be led by any engineer who understands the system being built. Security expertise helps with recognizing subtle attack paths and knowing which controls are most effective, but it's not a prerequisite for getting started. Many startups run effective threat modeling sessions without a security team by training engineers in STRIDE and making it part of their design review process. The OWASP Threat Modeling Cheat Sheet is a practical starting point.

