Threat modeling is a structured, proactive process for identifying and evaluating security risks in a system before attackers find them. It involves mapping how a system is built and how data flows through it, then systematically identifying where an attacker could cause harm and what controls would reduce that risk. According to OWASP, threat modeling "works to identify, communicate, and understand threats and mitigations within the context of protecting something of value."
Threat modeling is how security-conscious engineering teams ask the question "what could go wrong, and how bad would it be?" before building something rather than after shipping it. It's a design-phase activity that makes attack surfaces explicit, so the team can make deliberate choices about which risks to mitigate, accept, or eliminate.
The core output of a threat model is not a document. It's a shared understanding among engineers and product teams of where the security risks are in a given system, how severe they are, and what's being done about them. That shared understanding is what turns security from something the security team worries about into something the engineering team designs around.
NIST Special Publication 800-154 provides a data-centric framework for threat modeling that focuses on how sensitive data flows through a system, where it's stored, and how it might be exposed. OWASP's threat modeling guidance takes a broader systems view, starting from architecture diagrams and working outward to attack scenarios.
Threat modeling is not a one-time audit. OWASP recommends applying it continuously throughout the software development lifecycle, revisiting the model whenever the system changes, new features are added, or new attack techniques emerge.
Several structured methodologies exist. Each approaches the problem from a slightly different angle.
STRIDE is the most widely adopted starting point for teams new to threat modeling. Developed at Microsoft, it categorizes potential threats into six types: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege.
STRIDE works well in agile environments because it's systematic without being overly prescriptive. Teams can apply it to individual features, API endpoints, or entire service architectures. The categories map naturally to security controls, so the gap between identifying a threat and deciding how to address it is short.
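Applying STRIDE to a feature can be as mechanical as walking every component of the design past each of the six categories. The sketch below illustrates that pass; the component names and the prompt wording are hypothetical examples, not part of any standard.

```python
# Illustrative only: a minimal STRIDE checklist runner.
# Component names and prompt wording are hypothetical.

STRIDE = {
    "Spoofing": "Can an attacker pretend to be another user or service?",
    "Tampering": "Can data be modified in transit or at rest?",
    "Repudiation": "Can an actor deny an action for lack of an audit trail?",
    "Information disclosure": "Can data leak to an unauthorized party?",
    "Denial of service": "Can this component be made unavailable?",
    "Elevation of privilege": "Can a low-privilege actor gain higher access?",
}

def stride_checklist(components):
    """Yield one (component, category, prompt) question per pair,
    so a session covers every component against every category."""
    for component in components:
        for category, prompt in STRIDE.items():
            yield component, category, prompt

# Example session scope: two components of a hypothetical system.
for component, category, prompt in stride_checklist(["/api/login", "billing-db"]):
    print(f"[{component}] {category}: {prompt}")
```

The point of the exhaustive pairing is that it keeps the session systematic: no component silently skips a category, which is exactly the property that makes STRIDE workable without a security specialist in the room.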
PASTA (Process for Attack Simulation and Threat Analysis) is a seven-stage risk-centric methodology designed to align threat modeling with business impact. It's more comprehensive than STRIDE and more appropriate for organizations that need to communicate security risk in terms of financial or operational consequence. PASTA works through defining scope, enumerating threats, identifying vulnerabilities, simulating attacks, analyzing weaknesses, correlating findings, and producing a risk-prioritized report.
For teams that want to ground their threat models in documented attacker behavior, MITRE ATT&CK provides a knowledge base of real-world tactics, techniques, and procedures (TTPs). Using ATT&CK as a threat modeling input lets you ask not just "could an attacker tamper with this?" but "how would an attacker realistically tamper with this, based on observed behavior?"
Regardless of methodology, threat modeling follows a consistent structure: diagram the system and how data flows through it, identify what can go wrong, decide what to do about each threat, and validate that the mitigations actually work.
Threat models generate findings, and findings need to go somewhere. The output of a threat modeling session should feed directly into your vulnerability management process: tickets in your issue tracker, prioritized by severity, with owners and due dates. The goal is not a document that lives in Confluence; it's engineering work that gets done.
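As a sketch of that handoff, a finding can be captured in a tracker-agnostic shape with the fields the paragraph calls out: a severity, an owner, and a due date. The field names and the severity-to-deadline mapping below are assumptions for illustration, not any particular tracker's API.

```python
# Illustrative sketch: turning a threat-model finding into a
# tracker-ready ticket. The SLA windows are assumed values.
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical remediation windows per severity, in days.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

@dataclass
class Finding:
    title: str
    severity: str   # one of the SLA_DAYS keys
    owner: str
    component: str

def to_ticket(finding: Finding, opened: date) -> dict:
    """Build a generic ticket dict with a severity-derived due date."""
    due = opened + timedelta(days=SLA_DAYS[finding.severity])
    return {
        "summary": f"[threat-model][{finding.component}] {finding.title}",
        "assignee": finding.owner,
        "priority": finding.severity,
        "due": due.isoformat(),
    }

ticket = to_ticket(
    Finding("Session tokens lack rotation", "high", "alice", "auth-service"),
    opened=date(2024, 1, 1),
)
```

Tagging the summary with the component and a `threat-model` marker is one way to preserve the design-time origin of a finding once it lands in the general backlog.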
Fencer's architecture mapping capability automatically generates system architecture diagrams from your connected cloud infrastructure and repositories, giving engineering teams a starting point for threat modeling sessions rather than a blank whiteboard. Fencer's vulnerability management layer then provides a unified place to track the findings that threat modeling surfaces, alongside findings from automated scanning. When a threat model identifies a risk that a scanner later confirms as an actual vulnerability, that continuity from design-time risk to runtime finding is where the investment in threat modeling pays off.
A penetration test is a point-in-time adversarial exercise: a tester tries to exploit your running system. Threat modeling is a design-time exercise: your team analyzes how your system could be attacked before or as it's being built. They're complementary. A threat model is most valuable early, when changes are cheap. A pen test is most valuable after the system is built, to validate that your controls work. Startups that do threat modeling generally find that pen tests surface fewer critical findings because the worst design flaws were addressed earlier.
For a single feature or service, a focused session using STRIDE takes one to two hours with two to four engineers. For a full system architecture review, expect a half-day. The cadence matters more than session length: a one-hour threat model for every significant new feature is far more effective than an annual three-day exercise. The goal is to make threat modeling a lightweight, habitual part of design reviews, not a heavyweight audit event.
A dedicated security team is not required. The core process, diagramming the system and asking what could go wrong using a framework like STRIDE, can be led by any engineer who understands the system being built. Security expertise helps with recognizing subtle attack paths and knowing which controls are most effective, but it's not a prerequisite for getting started. Many startups run effective threat modeling sessions without a security team by training engineers in STRIDE and making it part of their design review process. The OWASP Threat Modeling Cheat Sheet is a practical starting point.