Threat Modeling (Beginner → Elite)
Threat modeling is the habit of thinking like an attacker before writing code.
A lesson from real incidents: threat modeling only works when it changes decisions (defaults, architecture, acceptance criteria), not when it becomes a document exercise.
Instead of asking “Is this code vulnerable?”, threat modeling asks:
- What are we building?
- What do we need to protect? (assets)
- How does data move? (data flows + trust boundaries)
- How could it be abused? (threats + attacker paths)
- What do we change in the design to reduce risk? (mitigations)
Where threat modeling fits in SDLC
- Planning: identify high-risk features and sensitive assets.
- Design: draw data flows, mark trust boundaries, identify threats.
- Development: implement mitigations (authZ rules, rate limits, encryption, logging).
- Testing: validate assumptions (negative tests, abuse cases, security test cases).
- Production: monitor abuse patterns, improve controls, revisit the model when features change.
The 4-step mental model (simple and repeatable)
- Assets — what matters most (money, identities, tokens, PII, admin powers).
- Data flows — where data travels and where it crosses trust boundaries.
- Threats — what can go wrong (attacker mindset).
- Mitigations — what design changes reduce risk the most.
Step 1: Identify assets (what attackers want)
An asset is anything valuable to an attacker or critical to the business.
Common assets
- Identity: accounts, roles, privileges
- Secrets: API keys, tokens, session cookies, signing keys
- Money: balances, payments, refunds, credits, transfers
- PII: customer data, documents, KYC details
- Admin powers: feature flags, configuration, bulk actions
- Availability: login, payments, checkout, core APIs
Step 2: Draw the data flow (and mark trust boundaries)
Threat modeling becomes powerful when you visualize how data moves.
Example: typical banking/fintech flow
- Mobile/Web Client → API Gateway/WAF
- API Gateway → Auth service (SSO/IdP)
- API → Core services (accounts, payments, profile)
- Core services → Database
- Core services → Third-party providers (SMS/OTP, KYC, payment gateway)
Trust boundary = where trust level changes:
- Browser/mobile (untrusted) → backend (trusted)
- Internet → internal network
- Your system → third-party system
- User role boundary (user → admin)
Step 3: Identify threats (STRIDE)
The most common framework is STRIDE. The key is applying it with real examples.
| STRIDE | Threat | Real-world example |
|---|---|---|
| S | Spoofing | Attacker pretends to be another user (stolen tokens, weak auth) |
| T | Tampering | Attacker modifies transfer amount or beneficiary ID |
| R | Repudiation | User denies making a transaction (insufficient logs/audit) |
| I | Information Disclosure | PII leak via insecure API (BOLA/IDOR) |
| D | Denial of Service | Bot floods login/OTP endpoint causing outages or lockouts |
| E | Elevation of Privilege | Normal user becomes admin via broken authorization |
Elite technique: Attack trees (think in attacker steps)
STRIDE is a checklist of categories. Attack trees map the paths an attacker can actually follow.
Attack tree example: “steal money via transfer abuse”
- Goal: unauthorized transfer
- Path A: bypass authorization
- Find an API that accepts `accountId`/`beneficiaryId`
- Change the object ID → test if the server enforces ownership (BOLA)
- Exploit the missing authZ check → transfer from the victim's account
- Path B: tamper with parameters
- Manipulate amount/limits/fees
- Race conditions to bypass balance checks
- Replay requests if idempotency is missing
- Path C: compromise session
- Steal session token (XSS, insecure storage, logs)
- Reuse token to initiate transfer
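Path A above hinges on a single missing control: a server-side ownership check on every client-supplied object ID. A minimal sketch of that check, with hypothetical names (`Account`, `ACCOUNTS`, `get_owned_account`) standing in for a real data layer:

```python
# Minimal sketch of the server-side ownership check that blocks Path A (BOLA).
# Account, ACCOUNTS, and get_owned_account are illustrative names, not a real API.
from dataclasses import dataclass


@dataclass
class Account:
    id: str
    owner_id: str


# Stand-in for a database lookup.
ACCOUNTS = {
    "acc-1": Account("acc-1", "user-alice"),
    "acc-2": Account("acc-2", "user-bob"),
}


class Forbidden(Exception):
    pass


def get_owned_account(user_id: str, account_id: str) -> Account:
    """Never trust the client-supplied account_id: resolve it, then verify ownership."""
    account = ACCOUNTS.get(account_id)
    if account is None or account.owner_id != user_id:
        # Same error for "not found" and "not yours" avoids leaking which IDs exist.
        raise Forbidden("account not accessible")
    return account
```

With this in place, the attacker's "change the object ID" step at most yields a generic error, and the rest of Path A collapses.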
Real-world example #1: Fund transfer feature (banking-grade)
Assets
- Account balance + transaction integrity
- Beneficiary list + payout destination
- Audit trail (who did what, when)
Data flow (simplified)
- Client → `POST /transfers` (amount, fromAccountId, toBeneficiaryId)
- API → AuthZ + risk engine
- API → Core banking ledger
- API → Notifications (SMS/email)
Threats (attacker mindset)
- BOLA/IDOR: change `fromAccountId` to another user's account
- Parameter tampering: modify amount/currency/fees
- Replay: repeat the same transfer request
- Race condition: submit multiple transfers simultaneously to bypass balance check
- Repudiation: deny a transfer due to weak audit logging
Mitigations (design-level)
- Authorization: enforce ownership on server for every object (account/beneficiary)
- Idempotency keys: prevent duplicates and safe retries
- Atomic ledger operations: DB transactions / locking to prevent race conditions
- Limits + step-up: thresholds, velocity limits, MFA for risky transfers
- Audit logs: immutable logs with request IDs + actor + device/IP
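Two of these mitigations (idempotency keys and atomic balance checks) can be sketched together. This is an in-memory illustration with assumed names; a real system would replace the dict and lock with database transactions and row-level locking:

```python
# Sketch of idempotency keys + atomic balance check for the transfer endpoint.
# In-memory stand-ins: a real implementation would use DB transactions and
# persistent idempotency storage. All names here are illustrative.
import threading

balances = {"acc-1": 100}
processed: dict[str, str] = {}  # idempotency_key -> transfer_id already returned
_lock = threading.Lock()        # stand-in for row-level locking / SELECT FOR UPDATE


def transfer(idempotency_key: str, from_acc: str, amount: int) -> str:
    with _lock:  # balance check and debit happen atomically: no race window
        if idempotency_key in processed:
            # Replay or client retry: return the original result, debit nothing.
            return processed[idempotency_key]
        if balances[from_acc] < amount:
            raise ValueError("insufficient funds")
        balances[from_acc] -= amount
        transfer_id = f"tx-{len(processed) + 1}"
        processed[idempotency_key] = transfer_id
        return transfer_id
```

Replaying the same request (the replay and race threats above) returns the original transfer ID without debiting the account a second time.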
Real-world example #2: Password reset workflow abuse
Why it’s high risk
Password reset is an attacker’s favorite because it can convert small weaknesses into full account takeover.
Threats
- User enumeration: “email exists” vs “email not found” messages
- Reset token leakage: tokens in URLs, logs, referrers, analytics
- Weak token design: predictable tokens or long expiry
- Replay: token reusable multiple times
Mitigations
- Generic responses (“If account exists, we emailed you”)
- Short expiry + one-time tokens
- Store tokens hashed server-side
- Rotate sessions after reset
Real-world example #3: Third-party webhook integration
Webhooks are common in payments, shipping, KYC, and messaging. They are also commonly abused.
Threats
- Spoofing: attacker sends fake webhook events to your endpoint
- Replay: attacker replays a “payment succeeded” event
- Tampering: modifies payload values (amount, status)
- DoS: floods the webhook receiver
Mitigations
- Validate signatures (HMAC or asymmetric), strict timestamp checks
- Idempotency + event IDs to prevent replay
- Allowlist IPs where possible (with care)
- Queue + rate limit to absorb spikes
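Signature plus timestamp validation can be sketched as follows. The scheme (HMAC-SHA256 over `timestamp.payload`) mirrors common provider designs but is an assumption here, not any specific vendor's format:

```python
# Sketch of webhook authenticity + anti-replay checks: HMAC signature bound
# to a timestamp, verified in constant time. The header layout is an assumed
# scheme, not a specific provider's format.
import hashlib
import hmac
import time

MAX_SKEW = 5 * 60  # reject events more than 5 minutes old to limit replay


def verify_webhook(secret: bytes, payload: bytes, timestamp: str, signature: str) -> bool:
    if abs(time.time() - int(timestamp)) > MAX_SKEW:
        return False                                   # stale or future-dated event
    signed = timestamp.encode() + b"." + payload       # bind signature to timestamp
    expected = hmac.new(secret, signed, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)    # constant-time comparison
```

Binding the timestamp into the signed message means an attacker cannot reuse an old valid signature with a fresh timestamp; pairing this with stored event IDs closes the remaining replay window.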
Common failure modes (what goes wrong in real companies)
- Threat modeling too late: done after code, so mitigations become “patches” instead of design fixes.
- Only OWASP Top 10: missing workflow abuse and business logic threats.
- No trust boundaries: treating client input as semi-trusted (“mobile is trusted”).
- No follow-up: model exists as a document but not linked to tickets, tests, or acceptance criteria.
Threat modeling deliverables (what you should produce)
- Data Flow Diagram (DFD): components, data stores, trust boundaries.
- Threat list: mapped to STRIDE (or equivalent).
- Mitigations: design decisions, not just “do input validation.”
- Security requirements: concrete rules (authZ rules, logging, rate limits).
- Abuse cases: negative test ideas for QA/security testing.
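The last deliverable, abuse cases, translates directly into negative tests. A sketch of one such test, with `FakeApi` as a self-contained stand-in for whatever test client the project actually uses:

```python
# Sketch of an abuse case turned into a negative test. FakeApi is a stub
# standing in for a real test client; the endpoint behavior and IDs are
# hypothetical.
class FakeApi:
    """Minimal stub: rejects transfers from accounts the caller doesn't own."""

    def __init__(self, user_id: str, owned: set[str]):
        self.user_id, self.owned = user_id, owned

    def post_transfer(self, from_account_id: str, amount: int) -> int:
        if from_account_id not in self.owned:
            return 403  # server-side ownership check fires
        return 200


def test_transfer_rejects_foreign_account():
    api = FakeApi("user-alice", owned={"acc-alice"})
    # Abuse case: Alice submits a transfer from Bob's account (BOLA attempt).
    assert api.post_transfer("acc-bob", amount=10) == 403
    # Sanity check: the legitimate path still works.
    assert api.post_transfer("acc-alice", amount=10) == 200
```

Each threat in the model should map to at least one test like this, so the mitigations are continuously verified rather than documented once.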
Interview Questions & Answers (Easy → Expert)
Easy (foundation)
- What is threat modeling?
A: A design-time process to identify assets, data flows, trust boundaries, attacker paths, and mitigations before implementation.
- Why do threat modeling before coding?
A: Design flaws are expensive to fix later. Threat modeling reduces rework and prevents architectural security gaps.
- Is threat modeling the same as security testing?
A: No. Threat modeling focuses on design/architecture risk; testing focuses on implementation flaws.
- Who should participate?
A: Security, developers, architects, and product owners — because threats map to design and business workflows.
Medium (applied knowledge)
- What is an asset? Give examples.
A: Anything valuable to attackers: credentials, tokens, PII, money movement, admin functions, availability of critical APIs.
- What is a trust boundary?
A: Where trust changes (client→server, internet→internal, your system→third-party). Trust boundaries are common attack points.
- Explain STRIDE with examples.
A: Spoofing (stolen token login), Tampering (change transfer amount), Repudiation (deny action), Info disclosure (IDOR), DoS (flood OTP), Elevation (role escalation).
- How do you prioritize threats?
A: By business impact, likelihood, ease of exploitation, and exposure. Focus first on money, identity, and high-privilege paths.
Hard (experienced-level depth)
- Threat model a fund transfer feature quickly.
A: Assets: balances/beneficiaries. Threats: BOLA, tampering, replay, race conditions. Mitigations: strict authZ, idempotency keys, atomic ledger ops, limits/step-up, audit logs.
- How do you threat model third-party integrations?
A: Identify shared data, validate authenticity (signatures), prevent replay (event IDs), design failure handling, add monitoring and rate limits.
- What's the biggest mistake teams make?
A: Treating threat modeling as a one-time document rather than linking it to tickets, tests, and ongoing feature changes.
- How does threat modeling differ for web vs API vs mobile?
A: Web: sessions/CSRF/XSS; API: authZ/object-level access/replay; mobile: client trust, reverse engineering, token storage.
Expert (principal/architect level)
- How do you threat model business logic?
A: By mapping workflows and abuse cases: valid actions used in invalid sequences (refund abuse, coupon stacking, workflow bypass, privilege chaining).
- How do you measure success of threat modeling?
A: Fewer high-severity findings late in the SDLC, reduced re-architecture, faster security reviews, fewer production incidents, and clearer security requirements.
- What does "assume breach" change in threat modeling?
A: It shifts focus to containment: strong identity, least privilege, segmentation, tamper-evident logging, monitoring, and safe failure modes.
- How do you handle ambiguous ownership in multi-tenant systems?
A: Explicit tenant isolation rules, consistent authorization middleware, ownership checks on every object, and tests for cross-tenant access (BOLA prevention).
Checklist
- Assets identified (identity, money, tokens, PII, admin)
- DFD created (components + data stores)
- Trust boundaries marked
- Threats identified (STRIDE + abuse flows)
- Mitigations documented as design decisions
- Outputs converted to tickets + acceptance criteria
- Abuse cases become negative tests
- Monitoring signals defined (rate spikes, authZ failures, replay attempts)
Remediation playbook (how to operationalize it)
- Pick the highest-risk feature (auth, money movement, admin, integrations).
- Draw a simple DFD with trust boundaries.
- Apply STRIDE and write 10–20 realistic threats.
- Turn mitigations into security requirements (authZ rules, idempotency, logging).
- Create abuse-case test cases and add them to security tests.
- Add monitoring/alerting for the threats you care about most.
- Update the model whenever the feature changes.