🛡️ Application Security CheatSheet

SSTI Deep Dive (Server-Side Template Injection)

SSTI happens when an application treats attacker-controlled text as a template instead of as plain data. The server then renders that template, and template features (variables, expressions, filters, helpers) can be abused.

Production reality: these issues usually slip through because everything looks “fine” in happy-path testing — until one weird request hits production.

Key idea: the dangerous boundary is “user input → template compiler/interpreter”. If users can influence the template structure (not just the values), you may have SSTI.

Why it exists (root cause)

Template engines are designed to be powerful: they allow conditions, loops, helpers, includes, partials, and sometimes expressions. SSTI appears when we accidentally give that power to untrusted input.

Mental model: data vs template

Most web apps should follow: Template = trusted and Data = untrusted. SSTI happens when the boundary flips: Template becomes untrusted.

| What you control | Safe? | Why |
| --- | --- | --- |
| User controls a field value shown in a template | Usually safe | Escaping/encoding can keep it as data |
| User controls the template text that gets rendered | High risk | User may change structure and trigger template features |
| User controls which template file is selected | Medium–High | Often leads to unintended templates or unsafe contexts |

Interview line: “SSTI is about template control, not just reflection. If a real attacker can influence template syntax evaluation, they can often escalate beyond simple content injection.”

Common places SSTI shows up

Typical surfaces: template previews, email/notification “custom template” features, report/PDF generators, and per-tenant templates stored in the DB.

Node.js examples (Express)

Vulnerable pattern (rendering attacker-controlled template string)

// ❌ Vulnerable: user controls the TEMPLATE (structure), not just data
import express from "express";
import ejs from "ejs";

const app = express();
app.use(express.json());

app.post("/preview", (req, res) => {
  const tpl = String(req.body.template || "");
  const data = { user: { name: "Alice" } };

  // Rendering from a raw string is the red flag.
  // (Many engines have similar APIs: renderString / compile / render.)
  const html = ejs.render(tpl, data); // ❌ dangerous when tpl is untrusted
  res.type("text/html").send(html);
});

Fixed pattern (trusted templates + untrusted data)

// ✅ Secure: template is trusted (file), user input is treated as data
app.post("/preview", (req, res) => {
  const message = String(req.body.message || "");
  const safeData = {
    userName: message, // untrusted data
  };

  // Render a trusted template file (not attacker-supplied syntax)
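  // (Assumes the Express view engine is configured for EJS and preview-email.ejs exists in the views directory.)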
  res.render("preview-email.ejs", safeData); // ✅
});

Fixed pattern #2 (restricted templating: allow-list placeholders only)

// ✅ Secure approach for “custom templates”: allow-list placeholders, no expressions
const allowed = new Set(["{{firstName}}", "{{lastName}}", "{{orderId}}"]);

function renderPlaceholders(text, values) {
  // Replace only known placeholders; reject unknown template syntax.
  for (const [k, v] of Object.entries(values)) {
    const ph = "{{" + k + "}}";
    if (allowed.has(ph)) text = text.split(ph).join(String(v));
  }
  // Any remaining {{...}} is treated as invalid (prevent template language features).
  if (/{{[^}]+}}/.test(text)) throw new Error("Unsupported placeholder");
  return text;
}

app.post("/email-template", (req, res) => {
  const tpl = String(req.body.template || "");
  const out = renderPlaceholders(tpl, { firstName: "Alice", orderId: "12345" });
  res.type("text/plain").send(out);
});
Takeaway: If the business needs “custom templates”, build a limited DSL (placeholders) rather than exposing a full template engine.

Exploitation progression (attacker mindset)

This section explains the progression at a high level (no copy/paste exploit steps).

Phase 1: Identify a rendering surface (previews, email/custom-template features, report or PDF generators).

Phase 2: Determine what is controlled: a value shown in a template, the template text itself, or which template is selected.

Phase 3: Infer engine capability and sandboxing: logic-less vs. expression-capable engines, available helpers, and what the rendering context exposes.

Phase 4: Escalate via exposed objects and helpers: rich context objects or powerful custom helpers expand impact beyond simple output manipulation.

Defensive insight: Most real SSTI incidents come from rendering untrusted template strings or allowing powerful helpers in “template customization” features.

Tricky edge cases & bypass logic (conceptual)

Safe validation & testing guidance (defensive verification)

Your goal is to verify whether user input influences template evaluation (structure), without attempting destructive behavior.

High-confidence finding usually requires showing: (1) untrusted template string is rendered, and (2) template evaluation changes output beyond plain reflection.
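
A non-destructive way to lock in that verification is a regression test against the restricted renderer shown earlier. This is a minimal sketch assuming renderPlaceholders is exported from a module (the ./render-placeholders path is illustrative) and a Jest-style test runner:

// render-placeholders.test.js: verifies substitution stays data-only, with no template evaluation
const { renderPlaceholders } = require("./render-placeholders"); // hypothetical module path

test("known placeholders are substituted as plain data", () => {
  const out = renderPlaceholders("Hi {{firstName}}, order {{orderId}}", {
    firstName: "Alice",
    orderId: "12345",
  });
  expect(out).toBe("Hi Alice, order 12345");
});

test("unknown template syntax is rejected, not evaluated", () => {
  expect(() => renderPlaceholders("{{constructor}}", {})).toThrow();
});

test("engine expressions pass through as inert text", () => {
  const out = renderPlaceholders("<%= 7 * 7 %>", {});
  expect(out).toBe("<%= 7 * 7 %>"); // non-evaluation proof: the output never becomes "49"
});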

Fixes that hold in production

1) Don’t render untrusted template strings: render trusted template files and treat user input strictly as data.

2) If you must allow customization, restrict expressiveness: an allow-listed placeholder DSL with no expressions, loops, or custom helpers.

3) Harden template context: pass only the primitives the template needs; never request/response objects, config, env, or service handles (see the context sketch below).

4) Add guardrails: timeouts, length/complexity limits, rate limits on preview endpoints, and audit logging.
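
A minimal sketch of what “harden template context” can look like in the Express examples above (buildTemplateContext, order, and invoice.ejs are illustrative names, not an existing API):

// ✅ Pass a small, explicit context of primitives; never req, res, env, config, or live service objects
function buildTemplateContext(order) {
  return {
    firstName: String(order.firstName ?? ""),
    orderId: String(order.orderId ?? ""),
    total: Number(order.total ?? 0).toFixed(2),
  };
}

// ❌ avoid:  res.render("invoice.ejs", { req, config, db });
// ✅ prefer: res.render("invoice.ejs", buildTemplateContext(order));

Keeping the context to plain strings and numbers removes most of the interesting objects an injected expression could reach, even if a template bug slips through.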

Confidence levels (low / medium / high)

Interview-ready summaries (60-second + 2-minute)

60-second answer

SSTI is when untrusted input is rendered as a server-side template, letting an attacker influence template evaluation. The root cause is treating user input as presentation logic (rendering raw strings) or exposing powerful helpers/objects to templates. I validate by confirming a render/compile boundary and how the output changes due to template evaluation. Fixes are to render trusted templates only, restrict customization to allow-listed placeholders, and minimize template context and helper power.

2-minute answer

I think of SSTI as a boundary violation: template syntax must be trusted, and only data is untrusted. When apps introduce template previews or “custom templates”, they sometimes render attacker-controlled strings using engines that support expressions and helpers. If the rendering context exposes rich objects or custom helpers, the impact can expand from output manipulation to sensitive data exposure or other high-impact behavior. I aim for high-confidence proof by tracing request input to a render/compile call and demonstrating evaluation beyond plain reflection. The durable remediation is to remove untrusted string rendering, build a restricted placeholder system for customization, and harden the context with least-privileged data and safe helpers.

Checklist (quick review)

Remediation playbook

  1. Contain: disable template preview/custom template editing if untrusted rendering is suspected.
  2. Inventory: find all template engines and render/compile call sites.
  3. Fix boundary: remove raw string rendering; move to trusted template files.
  4. Restrict customization: replace with allow-listed placeholders; validate tokens and length.
  5. Harden context: minimize exposed objects/helpers; remove dangerous helpers.
  6. Guardrails: add timeouts, rate limits, and audit logs; add tests preventing regressions (see the lint sketch below).
  7. Verify: security tests for template evaluation boundaries and safe rendering behavior.
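
One way to enforce step 6 in CI is a lint rule that fails the build when raw-string render APIs appear. A sketch using ESLint’s no-restricted-properties rule (the flagged engine APIs are examples; extend the list for whichever engines you actually use):

// .eslintrc.cjs (sketch)
module.exports = {
  rules: {
    "no-restricted-properties": [
      "error",
      { object: "ejs", property: "render", message: "Render trusted template files via res.render(); do not render raw strings." },
      { object: "nunjucks", property: "renderString", message: "renderString on untrusted input is an SSTI risk." },
    ],
  },
};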

Interview Questions & Answers (Easy → Hard)

Easy

  1. What is SSTI?
    A: Plainly, it’s when user input is treated as a server template and gets evaluated. Deeply, it’s an untrusted template crossing into a template compiler/interpreter boundary.
  2. How is SSTI different from XSS?
    A: Plainly, XSS runs in the browser; SSTI runs on the server. Deeply, SSTI is server-side template evaluation; XSS is client-side script execution. They can chain if SSTI output becomes unsafe HTML/JS.
  3. What’s the most common root cause?
    A: Plainly, rendering user-supplied template strings. Deeply, using “renderString/compile” on untrusted input or exposing powerful helpers/objects to templates.
  4. What’s the safest general fix?
    A: Plainly, don’t render untrusted templates. Deeply, render only trusted template files and treat user input strictly as data.
  5. Is escaping enough?
    A: Plainly, not if the template itself is attacker-controlled. Deeply, escaping helps when input is data, but SSTI is about controlling template structure and evaluation.
  6. What’s a safe way to allow “custom templates”?
    A: Plainly, allow placeholders only. Deeply, build a restricted placeholder system with allow-lists and reject any template language constructs.

Medium

  1. Scenario: An email preview endpoint accepts a “template” field.
    A: Plainly, that’s a big red flag. Deeply, if the server renders that field through a template engine, the attacker may influence evaluation; fix by moving to trusted templates and allow-list placeholders for customization.
  2. Scenario: Templates are stored per tenant in the DB. Is that safe?
    A: Plainly, it depends who can edit them. Deeply, it becomes risky if tenants/admins aren’t fully trusted; this also creates second-order SSTI. Treat stored templates as untrusted and restrict expressiveness.
  3. Follow-up: What evidence makes you confident it’s SSTI?
    A: Plainly, proof that input is evaluated, not just echoed. Deeply, code-level trace to render/compile call and repeatable output change attributable to template evaluation.
  4. Scenario: “We use Handlebars, it’s logic-less, so no SSTI.” True?
    A: Plainly, not always. Deeply, helper functions and exposed context can reintroduce risk; custom helpers can become the dangerous capability.
  5. Follow-up: What should never be passed into a template context?
    A: Plainly, secrets and powerful objects. Deeply, avoid passing request, response, config, env, DB handles, or rich objects that expose sensitive methods or properties.
  6. Scenario: A PDF worker renders templates asynchronously.
    A: Plainly, that’s second-order. Deeply, stored input later rendered can bypass front-end validation; audit the worker boundary and enforce safe rendering and allow-lists at the worker too.
  7. Follow-up: What non-security risks does SSTI create?
    A: Plainly, outages. Deeply, template evaluation can be expensive and lead to DoS if users can create heavy templates; add timeouts and complexity limits (see the limits sketch below).
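
A sketch of the complexity limits mentioned in the last answer, applied to customization input before it reaches the placeholder renderer (the specific limits are illustrative):

const MAX_TEMPLATE_LENGTH = 5000;
const MAX_PLACEHOLDERS = 50;

function validateCustomTemplate(text) {
  // Cheap pre-checks keep a single request from triggering expensive rendering work
  if (text.length > MAX_TEMPLATE_LENGTH) throw new Error("Template too long");
  const placeholders = text.match(/{{[^}]+}}/g) || [];
  if (placeholders.length > MAX_PLACEHOLDERS) throw new Error("Too many placeholders");
  return text;
}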

Hard

  1. Scenario: The app renders a template string, but the engine claims to sandbox execution.
    A: Plainly, don’t rely on “sandbox” marketing. Deeply, sandboxing is often configuration-dependent; the correct fix is still to avoid untrusted template rendering and restrict features and context.
  2. Scenario: Only “admins” can edit templates. Is SSTI acceptable?
    A: Plainly, still risky. Deeply, admin accounts get phished; tenant admins may not be fully trusted; and templates can become an escalation path. Apply least privilege, allow-lists, and audit logs.
  3. Follow-up: How do you design a secure customization feature?
    A: Plainly, keep it simple. Deeply, define an allow-listed placeholder DSL, validate tokens, restrict outputs to safe contexts, and store versioned templates with approvals and audit trails.
  4. Scenario: You can’t change the template engine quickly. What compensating controls help?
    A: Plainly, reduce exposure. Deeply, disable string rendering, remove dangerous helpers, reduce context, add strict validation and length limits, and isolate rendering in a low-privileged worker with timeouts.
  5. Follow-up: How do you prevent regressions?
    A: Plainly, enforce rules in CI. Deeply, lint/grep for renderString/compile on request fields, code review checklist for template features, and automated tests ensuring only allow-listed placeholders are supported.
  6. Scenario: SSTI output ends up inside a script tag on the client. What happens?
    A: Plainly, you may also get XSS. Deeply, SSTI can generate unsafe HTML/JS which then becomes a client-side injection; treat output context and encoding carefully and avoid injecting server-rendered strings into JS contexts (see the serialization sketch below).
  7. Follow-up: Where does SSTI sit in OWASP Top 10?
    A: Plainly, it’s an Injection class. Deeply, it maps to A03 Injection, often enabled by insecure design (A04) and misconfiguration (A05) depending on engine and context.
  8. Scenario: A feature chooses templates by name from a request parameter.
    A: Plainly, that can be dangerous too. Deeply, template selection should be allow-listed; otherwise it often leads to unintended templates that expose sensitive data or unsafe contexts.
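
For the script-tag scenario in question 6 above, the usual safe pattern is to serialize data as JSON and escape "<" so untrusted values cannot close the script element. A minimal sketch (toScriptSafeJson is an illustrative helper, used from a trusted EJS template):

// Serialize server data for embedding inside a <script> block in a trusted template
function toScriptSafeJson(value) {
  // Escaping "<" prevents payloads like "</script>" from terminating the script element
  return JSON.stringify(value).replace(/</g, "\\u003c");
}

// In a trusted EJS template (unescaped output is intentional; the escaping happened above):
//   <script>const bootstrap = <%- toScriptSafeJson(safeData) %>;</script>
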
Safety note: this guidance is for understanding and defense. Validate only systems you are authorized to test, and avoid destructive payloads or behavior.