SSTI Deep Dive (Server-Side Template Injection)
SSTI happens when an application treats attacker-controlled text as a template instead of as plain data. The server then renders that template, and template features (variables, expressions, filters, helpers) can be abused.
Production reality: these issues usually slip through because everything looks “fine” in happy-path testing — until one weird request hits production.
Why it exists (root cause)
Template engines are designed to be powerful: they allow conditions, loops, helpers, includes, partials, and sometimes expressions. SSTI appears when we accidentally give that power to untrusted input.
- Mixing concerns: treating user input as presentation logic rather than data.
- Unsafe rendering APIs: rendering from raw strings (not from trusted template files).
- Dangerous helpers: engines or custom helpers that can access objects, environment, files, or execute code.
- Implicit trust in “preview” features: email templates, PDF templates, CMS snippets, “custom branding”, admin notes.
Mental model: data vs template
Most web apps should follow: Template = trusted and Data = untrusted. SSTI happens when the boundary flips: Template becomes untrusted.
| What you control | Safe? | Why |
|---|---|---|
| User controls a field value shown in a template | Usually safe | Escaping/encoding can keep it as data |
| User controls the template text that gets rendered | High risk | User may change structure and trigger template features |
| User controls which template file is selected | Medium–High | Often leads to unintended templates or unsafe contexts |
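To make the table concrete, here is a minimal sketch (assuming EJS is imported as in the Node.js examples below, and a hypothetical userInput variable): the first call keeps input as data, the second lets input define template structure.
// ✅ Input is data: the template string is trusted, and <%= %> escapes the value
const safe = ejs.render("<p>Hello <%= name %></p>", { name: userInput });
// ❌ Input is the template: structure and template features are now attacker-controlled
const risky = ejs.render(userInput, { name: "Alice" });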
Common places SSTI shows up
- Template previews: email preview, invoice preview, PDF preview, “test message” UIs.
- Branding/customization: tenant-specific headers/footers, receipt templates, notification templates.
- CMS-like features: “custom content blocks”, “dynamic snippets”, “custom landing pages”.
- Admin tooling: internal dashboards where “admins can write templates”.
- Multi-step pipelines: user input stored, later rendered by a background worker (second-order SSTI).
Node.js examples (Express)
Vulnerable pattern (rendering attacker-controlled template string)
// ❌ Vulnerable: user controls the TEMPLATE (structure), not just data
import express from "express";
import ejs from "ejs";
const app = express();
app.use(express.json());
app.post("/preview", (req, res) => {
const tpl = String(req.body.template || "");
const data = { user: { name: "Alice" } };
// Rendering from a raw string is the red flag.
// (Many engines have similar APIs: renderString / compile / render.)
const html = ejs.render(tpl, data); // ❌ dangerous when tpl is untrusted
res.type("text/html").send(html);
});
Fixed pattern (trusted templates + untrusted data)
// ✅ Secure: template is trusted (file), user input is treated as data
app.post("/preview", (req, res) => {
const message = String(req.body.message || "");
const safeData = {
userName: message, // untrusted data
};
// Render a trusted template file (not attacker-supplied syntax)
res.render("preview-email.ejs", safeData); // ✅
});
Fixed pattern #2 (restricted templating: allow-list placeholders only)
// ✅ Secure approach for “custom templates”: allow-list placeholders, no expressions
const allowed = new Set(["{{firstName}}", "{{lastName}}", "{{orderId}}"]);
function renderPlaceholders(text, values) {
// Replace only known placeholders; reject unknown template syntax.
for (const [k, v] of Object.entries(values)) {
const ph = "{{" + k + "}}";
if (allowed.has(ph)) text = text.split(ph).join(String(v));
}
// Any remaining {{...}} is treated as invalid (prevent template language features).
if (/{{[^}]+}}/.test(text)) throw new Error("Unsupported placeholder");
return text;
}
app.post("/email-template", (req, res) => {
const tpl = String(req.body.template || "");
const out = renderPlaceholders(tpl, { firstName: "Alice", orderId: "12345" });
res.type("text/plain").send(out);
});
Exploitation progression (attacker mindset)
This section explains the progression at a high level (no copy/paste exploit steps).
Phase 1: Identify a rendering surface
- Look for “preview” or “customization” features where server output changes based on supplied content.
- Check whether user input appears in responses after server-side rendering (emails/PDFs/HTML views).
Phase 2: Determine what is controlled
- Is the attacker controlling data values in a template, or controlling the template itself?
- Is rendering happening immediately or later (second-order) via jobs/workers?
Phase 3: Infer engine capability and sandboxing
- Engines vary: some are logic-less; some allow expressions/helpers; some compile to code.
- Attackers attempt to move from “template logic” → “access to objects/env” → “sensitive impact”.
Phase 4: Escalate via exposed objects and helpers
- Custom helpers and global objects are common escalation points.
- Impact often becomes: sensitive data exposure, SSRF-like fetches, or server-side execution depending on engine and sandbox.
Tricky edge cases & bypass logic (conceptual)
- Second-order SSTI: template text stored in the DB and rendered later by a worker (see the sketch after this list).
- Template selection issues: user chooses which template to render (can be unsafe if not allow-listed).
- Custom helpers: helper functions may expose filesystem, network, or process info.
- Object exposure: passing “rich” objects (request, config, ORM models) into templates increases risk.
- Sandbox assumptions: “sandboxed” engines can be misconfigured; safe defaults vary by engine.
- Mixed contexts: server template output later embedded in client-side HTML/JS can chain into XSS.
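A minimal sketch of the second-order case, using hypothetical db and mailer objects: the template text was accepted and stored earlier, and the raw-string render happens later in a background job, far from any request-time validation.
// ❌ Second-order SSTI sketch: a stored, tenant-edited template body is rendered later by a worker
async function sendInvoiceEmail(job) {
  const tpl = await db.templates.findByTenant(job.tenantId); // untrusted, user-edited text
  const html = ejs.render(tpl.body, { invoice: job.invoice }); // ❌ same raw-string render, now in a worker
  await mailer.send(job.to, html); // front-end validation never saw this path
}
// ✅ Apply the same fix at the worker: trusted template files, or renderPlaceholders() from above.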
Safe validation & testing guidance (defensive verification)
Your goal is to verify whether user input influences template evaluation (structure), without attempting destructive behavior.
- Confirm rendering: show that the server is evaluating a template engine (not just echoing text).
- Confirm boundary: identify whether a raw-string render API is used and whether input reaches it.
- Check scope: what variables/helpers are available in the rendering context.
- Prefer code-level evidence: if you have source, the strongest proof is a trace from request field → render/compile call.
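For code-level evidence and regression protection, a small test sketch (assuming Jest and the renderPlaceholders helper from the restricted-templating example above) can pin down the boundary: unknown template syntax must be rejected, and allow-listed placeholders must be substituted as plain data.
// Unknown tokens are rejected, so template-language constructs can’t sneak in
test("rejects unknown template syntax", () => {
  expect(() =>
    renderPlaceholders("Hi {{firstName}}, ref {{config}}", { firstName: "Alice" })
  ).toThrow("Unsupported placeholder");
});
// Allow-listed placeholders are substituted as data only
test("substitutes only allow-listed values", () => {
  const out = renderPlaceholders("Order {{orderId}}", { orderId: "12345" });
  expect(out).toBe("Order 12345");
});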
Fixes that hold in production
1) Don’t render untrusted template strings
- Use trusted template files; treat user input as data only.
- Remove “renderString/compile” usage for attacker-controlled fields.
2) If you must allow customization, restrict expressiveness
- Use allow-listed placeholders; no expressions, no loops, no helper calls.
- Reject unknown tokens and enforce length limits.
3) Harden template context
- Pass minimal primitive data into templates (strings, numbers), not full request/config objects.
- Review and restrict helpers/filters; avoid helpers that touch filesystem/network/process.
4) Add guardrails
- Timeouts and resource limits for rendering (avoid template DoS).
- Audit logging for template changes (who changed what and when).
- Security reviews for any new “templating” feature.
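A sketch of points 3 and 4 together, assuming a hypothetical ./render-worker.js that calls ejs.renderFile on a trusted template with the posted workerData and sends the HTML back; the field names and the 2-second timeout are illustrative.
import { Worker } from "node:worker_threads";

// 3) Harden the context: pass only the primitives the template actually needs
function toTemplateContext(user, order) {
  return {
    userName: String(user.name),
    orderId: String(order.id),
    total: Number(order.total),
  };
}

// 4) Guardrail: isolate rendering in a worker thread and terminate it on timeout
function renderWithTimeout(context, ms = 2000) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(new URL("./render-worker.js", import.meta.url), {
      workerData: context,
    });
    const timer = setTimeout(() => {
      worker.terminate(); // stop runaway or deliberately expensive renders
      reject(new Error("Render timed out"));
    }, ms);
    worker.once("message", (html) => { clearTimeout(timer); resolve(html); });
    worker.once("error", (err) => { clearTimeout(timer); reject(err); });
  });
}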
Confidence levels (low / medium / high)
- Low: rendering is suspected, but no evidence input reaches template evaluation.
- Medium: input reaches a render function, but scope/impact is unclear.
- High: confirmed untrusted template is evaluated with clear, repeatable evidence and meaningful impact scope.
Interview-ready summaries (60-second + 2-minute)
60-second answer
SSTI is when untrusted input is rendered as a server-side template, letting an attacker influence template evaluation. The root cause is treating user input as presentation logic (rendering raw strings) or exposing powerful helpers/objects to templates. I validate by confirming a render/compile boundary and how the output changes due to template evaluation. Fixes are to render trusted templates only, restrict customization to allow-listed placeholders, and minimize template context and helper power.
2-minute answer
I think of SSTI as a boundary violation: template syntax must be trusted, and only data is untrusted. When apps introduce template previews or “custom templates”, they sometimes render attacker-controlled strings using engines that support expressions and helpers. If the rendering context exposes rich objects or custom helpers, the impact can expand from output manipulation to sensitive data exposure or other high-impact behavior. I aim for high-confidence proof by tracing request input to a render/compile call and demonstrating evaluation beyond plain reflection. The durable remediation is to remove untrusted string rendering, build a restricted placeholder system for customization, and harden the context with least-privileged data and safe helpers.
Checklist (quick review)
- Search for render/compile APIs: renderString, compile, ejs.render on raw strings, etc. (a CI guard sketch follows this checklist).
- Ensure templates are trusted files, not user-provided strings.
- Use allow-listed placeholders if customization is required.
- Pass minimal primitive data into templates; avoid passing request/config objects.
- Review helpers/filters for filesystem/network/process access.
- Add timeouts/resource limits and audit logging for template changes.
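A naive CI guard along these lines can back the first checklist item; the pattern list and the src directory are illustrative and should be tuned to your engines and project layout.
// Fails the build when render-from-string call sites appear in application code
import { readFileSync, readdirSync, statSync } from "node:fs";
import { join } from "node:path";

const risky = [/\bejs\.render\s*\(/, /\brenderString\s*\(/, /\bHandlebars\.compile\s*\(/];

function* walk(dir) {
  for (const name of readdirSync(dir)) {
    const p = join(dir, name);
    if (statSync(p).isDirectory()) yield* walk(p);
    else if (p.endsWith(".js") || p.endsWith(".ts")) yield p;
  }
}

let failed = false;
for (const file of walk("src")) {
  const src = readFileSync(file, "utf8");
  for (const re of risky) {
    if (re.test(src)) {
      console.error(`Risky render call in ${file}: ${re}`);
      failed = true;
    }
  }
}
process.exit(failed ? 1 : 0);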
Remediation playbook
- Contain: disable template preview/custom template editing if untrusted rendering is suspected.
- Inventory: find all template engines and render/compile call sites.
- Fix boundary: remove raw string rendering; move to trusted template files.
- Restrict customization: replace with allow-listed placeholders; validate tokens and length.
- Harden context: minimize exposed objects/helpers; remove dangerous helpers.
- Guardrails: add timeouts, rate limits, and audit logs; add tests preventing regressions.
- Verify: security tests for template evaluation boundaries and safe rendering behavior.
Interview Questions & Answers (Easy → Hard)
Easy
- What is SSTI?
A: Plainly, it’s when user input is treated as a server template and gets evaluated. Deeply, it’s an untrusted template crossing into a template compiler/interpreter boundary.
- How is SSTI different from XSS?
A: Plainly, XSS runs in the browser; SSTI runs on the server. Deeply, SSTI is server-side template evaluation; XSS is client-side script execution. They can chain if SSTI output becomes unsafe HTML/JS.
- What’s the most common root cause?
A: Plainly, rendering user-supplied template strings. Deeply, using “renderString/compile” on untrusted input or exposing powerful helpers/objects to templates.
- What’s the safest general fix?
A: Plainly, don’t render untrusted templates. Deeply, render only trusted template files and treat user input strictly as data.
- Is escaping enough?
A: Plainly, not if the template itself is attacker-controlled. Deeply, escaping helps when input is data, but SSTI is about controlling template structure and evaluation.
- What’s a safe way to allow “custom templates”?
A: Plainly, allow placeholders only. Deeply, build a restricted placeholder system with allow-lists and reject any template language constructs.
Medium
- Scenario: An email preview endpoint accepts a “template” field.
A: Plainly, that’s a big red flag. Deeply, if the server renders that field through a template engine, the attacker may influence evaluation; fix by moving to trusted templates and allow-list placeholders for customization.
- Scenario: Templates are stored per tenant in the DB. Is that safe?
A: Plainly, it depends who can edit them. Deeply, it becomes risky if tenants/admins aren’t fully trusted; this also creates second-order SSTI. Treat stored templates as untrusted and restrict expressiveness.
- Follow-up: What evidence makes you confident it’s SSTI?
A: Plainly, proof that input is evaluated, not just echoed. Deeply, code-level trace to render/compile call and repeatable output change attributable to template evaluation.
- Scenario: “We use Handlebars, it’s logic-less, so no SSTI.” True?
A: Plainly, not always. Deeply, helper functions and exposed context can reintroduce risk; custom helpers can become the dangerous capability.
- Follow-up: What should never be passed into a template context?
A: Plainly, secrets and powerful objects. Deeply, avoid passing request, response, config, env, DB handles, or rich objects that expose sensitive methods or properties.
- Scenario: A PDF worker renders templates asynchronously.
A: Plainly, that’s second-order. Deeply, stored input later rendered can bypass front-end validation; audit the worker boundary and enforce safe rendering and allow-lists at the worker too.
- Follow-up: What non-security risks does SSTI create?
A: Plainly, outages. Deeply, template evaluation can be expensive and lead to DoS if users can create heavy templates; add timeouts and complexity limits.
Hard
- Scenario: The app renders a template string, but the engine claims to sandbox execution.
A: Plainly, don’t rely on “sandbox” marketing. Deeply, sandboxing is often configuration-dependent; the correct fix is still to avoid untrusted template rendering and restrict features and context.
- Scenario: Only “admins” can edit templates. Is SSTI acceptable?
A: Plainly, still risky. Deeply, admin accounts get phished; tenant admins may not be fully trusted; and templates can become an escalation path. Apply least privilege, allow-lists, and audit logs.
- Follow-up: How do you design a secure customization feature?
A: Plainly, keep it simple. Deeply, define an allow-listed placeholder DSL, validate tokens, restrict outputs to safe contexts, and store versioned templates with approvals and audit trails.
- Scenario: You can’t change the template engine quickly. What compensating controls help?
A: Plainly, reduce exposure. Deeply, disable string rendering, remove dangerous helpers, reduce context, add strict validation and length limits, and isolate rendering in a low-privileged worker with timeouts.
- Follow-up: How do you prevent regressions?
A: Plainly, enforce rules in CI. Deeply, lint/grep for renderString/compile on request fields, code review checklist for template features, and automated tests ensuring only allow-listed placeholders are supported.
- Scenario: SSTI output ends up inside a script tag on the client. What happens?
A: Plainly, you may also get XSS. Deeply, SSTI can generate unsafe HTML/JS which then becomes a client-side injection; treat output context and encoding carefully and avoid injecting server-rendered strings into JS contexts.
- Follow-up: Where does SSTI sit in OWASP Top 10?
A: Plainly, it’s an Injection class. Deeply, it maps to A03 Injection, often enabled by insecure design (A04) and misconfiguration (A05) depending on engine and context.
- Scenario: A feature chooses templates by name from a request parameter.
A: Plainly, that can be dangerous too. Deeply, template selection should be allow-listed; otherwise it often leads to unintended templates that expose sensitive data or unsafe contexts.