Part III: Security, Testing, and Best Practices


Chapter 9: Security Considerations in Plugin Architecture

The previous chapter ended with a preview of the security model: Beekeeper Studio validates every message from an iframe plugin against a permission set derived from the plugin's manifest. That pattern — declare capabilities upfront, enforce them at the boundary — is the thread that runs through every security decision in a plugin architecture.

The moment you run third-party code inside your application, you have transferred a degree of control to a party you may not fully trust. That is not a reason to avoid plugin systems, but it is a reason to be deliberate about where trust is granted, how wide each plugin's blast radius is, and what the system looks like when a plugin misbehaves. The difference between a plugin system that survives a hostile or buggy plugin and one that collapses under it is rarely a single defensive technique — it is the depth to which security thinking pervades every layer of the architecture.

This chapter examines that thinking across five areas: the threat model, sandboxing and isolation, permission system design, Content Security Policy integration, and code validation before a plugin ever runs.


9.1 Security Threat Model

Before building defences, the threats need to be named clearly.

- DOM injection is the most immediately visible risk: a plugin that can manipulate the host's document directly can replace login forms, capture credentials, redirect navigation, or exfiltrate session cookies.
- Data exfiltration is subtler: a plugin with legitimate access to user data can forward it to an external endpoint, and if all network calls are not routed through a controlled channel, that exfiltration is invisible.
- Resource abuse is the least dramatic but the most common in practice: a plugin with an unbounded event loop or an unchecked polling interval can degrade the host application for every user.
- Privilege escalation happens when a plugin reaches SDK APIs it did not declare in its manifest, either through a bug in the host's permission checking or through indirect access via another plugin's exposed service.
- Dependency hijacking is the supply-chain variant: a malicious update to a library that many plugins share can compromise those plugins wholesale, even if the plugin author's own code is clean.

The attack surface table below maps each vector to its mitigation:

| Attack Vector              | Mitigation                                         |
| -------------------------- | -------------------------------------------------- |
| Arbitrary DOM manipulation | Use Shadow DOM, block document access in sandbox   |
| Untrusted network requests | Proxy through host API client with validation      |
| Excessive API calls        | Rate limit in SDK services                         |
| Cross-plugin data leaks    | Namespace all storage keys, enforce isolation      |
| Malicious updates          | Require digital signatures, verify checksums       |

Trust Models

The mitigations in that table assume different things about how much you trust the plugins running in your system. Not all plugin architectures make the same trust assumptions, and the right trust model depends on where the plugins come from and what they can do.

Vendure treats plugins as vetted, first-party code. A Vendure plugin has unrestricted access to the file system, environment variables, the database, and the network. There is no sandbox, no permission checking at runtime. The security mechanism is compatibility validation through semantic versioning: a plugin that declares compatibility: '^3.0.0' will refuse to load against a v4 host, preventing a class of accidental breakage. This works for a deployment model where all plugins are written by the development team or carefully reviewed before being added to the codebase. It is completely inappropriate for an open marketplace where plugins come from unknown authors.
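The compatibility gate can be sketched as a caret-range comparison. This is a hypothetical minimal version for illustration only; a real host would use a full semver library (such as the `semver` package), which also handles pre-release tags and 0.x caret semantics. `satisfiesCaret` and `assertCompatible` are illustrative names:

```typescript
// Minimal caret-range check: '^3.0.0' means same major, minor.patch >= minimum.
function satisfiesCaret(range: string, hostVersion: string): boolean {
  if (!range.startsWith('^')) return range === hostVersion;
  const req = range.slice(1).split('.').map(Number);
  const host = hostVersion.split('.').map(Number);
  if (host[0] !== req[0]) return false;        // ^3.0.0 never matches a v4 host
  if (host[1] !== req[1]) return host[1] > req[1];
  return host[2] >= req[2];
}

// Fail fast at load time with an error that names the mismatch.
function assertCompatible(pluginId: string, compatibility: string, hostVersion: string): void {
  if (!satisfiesCaret(compatibility, hostVersion)) {
    throw new Error(
      `Plugin ${pluginId} requires host ${compatibility}, but host is ${hostVersion}`
    );
  }
}
```

The point of the sketch is the failure mode: an incompatible plugin refuses to load with a clear message, rather than failing unpredictably at its first API call.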

Backstage occupies the middle position with logical isolation: plugins run in the same process but operate within HTTP route scopes, database table prefixes, and explicit service declarations. The isolation is not enforced by the operating system — a plugin with a bug can still affect other plugins — but inter-service calls require authentication tokens that carry principal information:

// Service-to-service authentication
const { token } = await auth.getPluginRequestToken({
  onBehalfOf: credentials,
  targetPluginId: 'catalog',
});
 
const response = await fetch('http://catalog/api/entities', {
  headers: { Authorization: `Bearer ${token}` },
});

The token includes a typed principal — a BackstageUserPrincipal, a BackstageServicePrincipal, or a BackstageNonePrincipal for unauthenticated callers — so the receiving service knows exactly who is making the request and can make a permission decision accordingly. This gives Backstage audit trails and granular permission checking without the overhead of process isolation.

NocoBase adds a third layer with database-driven access control. Package name whitelisting (@nocobase/plugin-* and @nocobase/preset-* by default, configurable via PLUGIN_PACKAGE_PREFIX) prevents arbitrary packages from being installed. At runtime, a middleware-based ACL system checks every resource action against a permission table:

// Permission snippet registration
this.app.acl.registerSnippet({
  name: 'pm.acl.roles',
  actions: ['roles:*', 'roles.users:*', 'availableActions:list']
});
 
// Middleware-based permission checking
app.resourcer.use(async (ctx, next) => {
  const { actionName, resourceName } = ctx.action.params;
  const allowed = await ctx.app.acl.can({
    role: ctx.state.currentRole,
    resource: resourceName,
    action: actionName,
  });
 
  if (!allowed) {
    ctx.throw(403, 'Forbidden');
  }
 
  await next();
});

This model supports multi-tenant deployments where different tenant administrators configure different permission sets for the same installed plugins, and where those permissions can change at runtime without restarting the server.

The trust model is the first decision in any plugin security architecture. It shapes every subsequent choice, because a system built for vetted first-party plugins needs very different defences from one designed for an open marketplace.


9.2 Sandboxing and Isolation

Whatever trust model you choose, the goal of sandboxing is to contain the damage a plugin can do if it misbehaves. Sandboxing comes in four practical forms, arranged by isolation strength:

| System    | Isolation Level | Trade-offs                                  |
| --------- | --------------- | ------------------------------------------- |
| Vendure   | None            | Max performance, full integration, max risk |
| Backstage | Logical         | Balance of safety and capability            |
| VS Code   | Process         | High isolation, communication overhead      |
| Browser   | IFrame/Worker   | Maximum security, limited capabilities      |

IFrame Sandboxing

Beekeeper Studio runs each plugin inside an iframe with a restrictive sandbox attribute. The attribute is a whitelist — only the capabilities explicitly listed are permitted:

<iframe
  src="plugin://my-plugin/index.html"
  sandbox="allow-scripts allow-same-origin"
  allow="clipboard-read; clipboard-write"
/>

allow-scripts is necessary for the plugin to run JavaScript at all. allow-same-origin allows the plugin to read from localStorage and make same-origin fetch requests. One caveat from the HTML specification: combining allow-scripts with allow-same-origin is only safe because the plugin is served from its own plugin:// origin; a sandboxed document that is same-origin with the host and can run script could reach up and remove its own sandbox attribute. Notably absent: allow-forms, allow-top-navigation, allow-popups. The plugin cannot submit forms to external servers, cannot redirect the parent page, and cannot open new windows — all common data exfiltration vectors.

Communication with the host goes through postMessage, which means the host controls the channel and can validate every message as shown in chapter 8. The plugin cannot call host APIs directly; it can only send structured messages that the host may or may not honour.
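That host-side gate can be sketched as a lookup from message type to required permission. This is a hypothetical shape, not Beekeeper Studio's actual API; the message types and permission names are illustrative:

```typescript
// Hypothetical plugin-to-host message shape.
interface PluginMessage {
  type: string;       // e.g. 'storage:set', 'query:run'
  payload?: unknown;
}

// Map each message type to the permission it requires (illustrative values).
const requiredPermission: Record<string, string> = {
  'storage:set': 'storage',
  'query:run': 'database:read',
};

// The host checks every incoming postMessage against the plugin's granted set.
function isMessageAllowed(msg: PluginMessage, granted: Set<string>): boolean {
  const needed = requiredPermission[msg.type];
  if (needed === undefined) return false;   // unknown message types are rejected outright
  return granted.has(needed);
}
```

Rejecting unknown message types by default is the important design choice: a new capability must be explicitly mapped before any plugin can invoke it.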

Web Worker Isolation

For plugins that perform computation rather than rendering UI, Web Workers provide strong isolation with less overhead than an iframe:

// Host — creates a worker for the plugin
const worker = new Worker(new URL('./plugin-worker.js', import.meta.url), {
  type: 'module',
  credentials: 'same-origin',
});
 
// Host — controlled message channel
worker.postMessage({ type: 'execute', payload: taskData });
 
worker.addEventListener('message', (event) => {
  if (event.data.type === 'result') {
    handlePluginResult(event.data.payload);
  }
});
 
worker.addEventListener('error', (event) => {
  console.error('Plugin worker error:', event.message);
  // Terminate the worker if it misbehaves
  worker.terminate();
});

Workers have no access to the DOM at all — they run in a completely separate execution context with no window, no document, no ability to manipulate the page. They can make network requests (subject to CORS), use IndexedDB, and communicate with the host only through postMessage. For analytics plugins, report generators, and any plugin that processes data in the background, this is the appropriate isolation level.

DOM Access Restrictions

Even plugins that run in the main thread can be prevented from accessing the host's DOM directly by rendering their UI into a Shadow DOM root:

function mountPlugin(pluginId: string, container: HTMLElement): ShadowRoot {
  const host = document.createElement('div');
  host.dataset.pluginId = pluginId;
  container.appendChild(host);
 
  // Shadow root isolates the plugin's DOM from the host
  const shadow = host.attachShadow({ mode: 'closed' });
  return shadow;
}

With mode: 'closed', the shadow root is inaccessible via element.shadowRoot from outside the host code that created it. The plugin renders into the shadow root, its styles are scoped to that root, and it cannot traverse up to the host document. Chapter 6 covered CSS isolation within Shadow DOM in detail; the security benefit is the same boundary in reverse.

Network Request Filtering

Routing all plugin network calls through sdk.services.apiClient is the network equivalent of the shadow root boundary. The plugin cannot make arbitrary fetch requests; it must use the SDK's HTTP client, which the host controls:

// SDK implementation — wraps fetch with logging and permission checking
const apiClient: ApiClient = {
  async get<T>(url: string, params?: Record<string, unknown>): Promise<T> {
    if (!isAllowedUrl(url, pluginManifest)) {
      throw new Error(`Plugin ${pluginManifest.id} is not permitted to access: ${url}`);
    }
    logRequest(pluginManifest.id, 'GET', url);
    const response = await fetch(buildUrl(url, params));
    return response.json() as Promise<T>;
  },
};

The isAllowedUrl check can enforce that plugins only reach the host's own API, or that they can reach additional URLs they declared in their manifest's network capability. This is the mechanism that prevents data exfiltration — a plugin cannot send user data to attacker.example.com if the SDK client rejects any URL not on the approved list.
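A minimal sketch of that check, under the assumptions that the host API lives at https://api.yourhost.com (the origin used in the CSP example later in this chapter) and that the manifest declares extra origins under a hypothetical `network.allowedOrigins` field:

```typescript
// Illustrative manifest shape: only the fields this check needs.
interface ManifestNetwork {
  id: string;
  network?: { allowedOrigins?: string[] };
}

const HOST_API_ORIGIN = 'https://api.yourhost.com'; // assumed host API origin

function isAllowedUrl(url: string, manifest: ManifestNetwork): boolean {
  let origin: string;
  try {
    // Relative URLs resolve against the host API; absolute URLs keep their own origin.
    origin = new URL(url, HOST_API_ORIGIN).origin;
  } catch {
    return false; // unparsable URLs are rejected
  }
  if (origin === HOST_API_ORIGIN) return true;
  return (manifest.network?.allowedOrigins ?? []).includes(origin);
}
```

Comparing origins rather than URL prefixes avoids bypasses like https://api.yourhost.com.attacker.example.com, which would slip past a naive startsWith check.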


9.3 Permission System Design

Sandboxing limits what a plugin can physically do. The permission system limits what it is allowed to do within those physical constraints. The two work together: sandboxing removes capabilities entirely for untrusted environments, while permissions provide fine-grained control within trusted ones.

Granular Permission Declarations

Plugins declare their required permissions in the manifest:

{
  "id": "com.example.analytics",
  "name": "Analytics Plugin",
  "permissions": ["events", "api:read", "storage:global"]
}

The host validates this list at installation time and enforces it at runtime. An SDK call from a plugin that did not declare the required permission fails immediately with an error that names the missing permission. This is the fail-fast principle from chapter 7's error handling applied to security: better a loud failure at the point of the SDK call than a silent bypass.
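A sketch of that fail-fast guard, assuming each SDK entry point checks the plugin's declared set before doing any work; `PermissionError`, `requirePermission`, and `emitEvent` are hypothetical names for illustration:

```typescript
// Error that names the missing permission, so the failure is diagnosable.
class PermissionError extends Error {
  constructor(pluginId: string, permission: string) {
    super(`Plugin ${pluginId} is missing required permission: ${permission}`);
    this.name = 'PermissionError';
  }
}

function requirePermission(
  pluginId: string,
  declared: ReadonlySet<string>,
  permission: string
): void {
  if (!declared.has(permission)) {
    throw new PermissionError(pluginId, permission);
  }
}

// Every SDK method guards itself before touching any real resource.
function emitEvent(pluginId: string, declared: ReadonlySet<string>, event: string): void {
  requirePermission(pluginId, declared, 'events');
  // ... dispatch the event ...
}
```

The guard lives in the host's SDK implementation, not in the plugin, so a plugin cannot skip it.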

Backstage applies this model to resource-level actions. The authorize call checks a specific permission against the requester's credentials:

router.delete('/entities/:id', async (req, res) => {
  const credentials = await httpAuth.credentials(req);
 
  const decision = await permissions.authorize(
    [{ permission: catalogEntityDeletePermission }],
    { credentials }
  );
 
  if (decision[0].result === 'DENY') {
    return res.status(403).json({ error: 'Permission denied' });
  }
 
  await catalog.deleteEntity(req.params.id);
});

The catalogEntityDeletePermission is a typed permission object, not a string. This means the TypeScript compiler catches typos and incorrect permission references at compile time — a meaningful improvement over string-based permission checking.

Runtime Permission Requests

Some permissions cannot be determined at installation time. A plugin that processes user-uploaded documents might need access to a sensitive API only when the user explicitly triggers a document import. For these cases, the host can prompt the user at the moment the permission is needed:

async function requestPermission(
  permission: string,
  reason: string
): Promise<boolean> {
  const granted = await sdk.ui.showModal(PermissionRequestDialog, {
    permission,
    reason,
    pluginName: sdk.plugin.manifest.name,
  });
 
  if (granted) {
    sessionPermissions.add(permission);
  }
 
  return granted;
}

The reason parameter is not optional from a UX perspective. Users consent to permissions they understand; they resist permissions that seem excessive or unexplained. The Android permission model, which requires developers to explain why each permission is needed before the system prompt appears, has shown that well-explained permission requests are granted more often and revoked less often than unexplained ones.

Namespace Isolation

Storage and database access should be automatically scoped to the plugin's identifier, so that two plugins using the same key string cannot interfere with each other:

// NocoBase: Plugin-scoped database collections
db.collection({
  name: 'myData',
  from: '@nocobase/plugin-myPlugin', // Ownership tracking
});
 
// Backstage: Plugin-scoped HTTP routes
router.get('/api/catalog/entities'); // Scoped to catalog plugin
router.get('/api/scaffolder/actions'); // Scoped to scaffolder plugin

The SDK's storage.set(key, value) implementation should prepend the plugin's ID before writing: 'com.example.analytics:preferences' rather than 'preferences'. The plugin never sees the prefix — it just uses the key it declared — but the underlying storage layer is isolated. A plugin that tries to read another plugin's storage gets back undefined; it never sees an error that would tell it the key exists.
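The prefixing wrapper can be sketched in a few lines. A `Map` stands in for the real storage backend here; the class and its shape are illustrative, not any specific host's API:

```typescript
// Plugin-scoped storage: keys are silently prefixed with the plugin ID.
class NamespacedStorage {
  constructor(
    private pluginId: string,
    private backing: Map<string, unknown> // stands in for the real storage layer
  ) {}

  private scoped(key: string): string {
    return `${this.pluginId}:${key}`;
  }

  set(key: string, value: unknown): void {
    this.backing.set(this.scoped(key), value);
  }

  get(key: string): unknown {
    // A genuine miss and a cross-plugin key look identical: both return undefined.
    return this.backing.get(this.scoped(key));
  }
}
```

Two plugins that both call `set('preferences', ...)` write to different underlying keys, and neither can observe whether the other's key exists.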

Permission Escalation Prevention

Once loaded, a plugin cannot acquire permissions it did not declare in its manifest. This must be enforced by the host, not by the plugin: the plugin's code runs after the manifest has been processed, and any attempt by the plugin to call an undeclared SDK API is rejected. The only way to acquire new permissions is to update the manifest and reinstall, which gives administrators the opportunity to review the change. This mirrors how mobile operating systems handle app updates that request new permissions: the update is presented to the user for re-approval, not silently applied.


9.4 Content Security Policy Integration

Content Security Policy is an HTTP response header (or meta tag) that instructs the browser to refuse to execute scripts, load resources, or make requests that do not meet declared policy rules. For a plugin host, CSP is the last line of defence against injection attacks that somehow bypass all the other controls.

CSP Configuration for Plugin Hosts

A baseline policy for a plugin host application:

Content-Security-Policy:
  default-src 'self';
  script-src 'self' 'nonce-{RANDOM}';
  style-src 'self' 'unsafe-inline';
  img-src 'self' data: https:;
  connect-src 'self' https://api.yourhost.com;
  object-src 'none';
  frame-src 'none';

script-src 'self' with a nonce means only scripts from the same origin with a matching nonce are executed. object-src 'none' blocks Flash and other binary plugin formats entirely. frame-src 'none' blocks iframes unless explicitly overridden for plugins that use iframe sandboxing.

If plugins load resources from a CDN, those domains need to be explicitly listed: connect-src 'self' https://api.yourhost.com https://cdn.yourhost.com. The manifest's declared network permissions can feed this list dynamically — the host builds the CSP header at request time from the union of all installed plugins' declared origins.
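Building that directive from the installed plugins can be sketched as a union over declared origins. The `network.allowedOrigins` manifest field and the host API origin are assumptions carried over from earlier examples, not a standard manifest format:

```typescript
// Illustrative manifest shape: only the fields this builder needs.
interface InstalledPlugin {
  id: string;
  network?: { allowedOrigins?: string[] };
}

const HOST_API = 'https://api.yourhost.com'; // assumed host API origin

// Build the connect-src directive from the union of all declared origins.
function buildConnectSrc(plugins: InstalledPlugin[]): string {
  const origins = new Set<string>([HOST_API]);
  for (const plugin of plugins) {
    for (const origin of plugin.network?.allowedOrigins ?? []) {
      origins.add(origin);
    }
  }
  return `connect-src 'self' ${[...origins].sort().join(' ')}`;
}
```

Because the set is rebuilt from manifests, installing or removing a plugin automatically tightens or widens the policy with no hand-edited header.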

Nonce-Based Script Loading

For dynamically loaded plugin scripts, a per-request nonce prevents any injected scripts from executing:

// Server side: generate a fresh nonce per response
import { randomBytes } from 'crypto';
 
function generateNonce(): string {
  return randomBytes(16).toString('base64');
}
 
// Client side: attach the nonce to approved script elements only
function loadPluginScript(src: string, nonce: string): HTMLScriptElement {
  const script = document.createElement('script');
  script.src = src;
  script.nonce = nonce;
  script.crossOrigin = 'anonymous';
  document.head.appendChild(script);
  return script;
}
 
// The server includes the same nonce in the CSP header for this response:
// Content-Security-Policy: script-src 'nonce-<value>'

The nonce is regenerated on every page load. A script injected by an attacker cannot know the current nonce value, so even if it is present in the DOM, the browser refuses to execute it.

Trusted Types

Trusted Types is a browser API that prevents unsafe DOM manipulation by requiring all assignments to dangerous sinks (innerHTML, outerHTML, document.write, eval) to come from explicitly approved sources:

if (window.trustedTypes) {
  const policy = window.trustedTypes.createPolicy('plugin-renderer', {
    createHTML: (input: string) => {
      // Only allow HTML that has been sanitised through DOMPurify
      return DOMPurify.sanitize(input);
    },
  });
 
  // All plugin HTML rendering must go through this policy
  element.innerHTML = policy.createHTML(pluginRenderedContent);
}

With Trusted Types enforced, a plugin that tries to set element.innerHTML = '<script>...</script>' directly will throw a TypeError. The only path to setting innerHTML runs through the createHTML method, which applies sanitisation. This is a strong defence against the class of XSS vulnerabilities where a plugin renders unsanitised user content.


9.5 Code Validation and Signing

The previous sections all assume the plugin is already running. Code validation happens before that — it is the set of checks that determine whether a plugin is permitted to load at all.

Static Analysis Before Installation

A static analysis pass over the plugin bundle before installation can catch obvious problems. The checks that matter most are:

- use of eval or the Function() constructor, which can execute arbitrary code at runtime, bypassing any sandbox;
- direct assignment to innerHTML without sanitisation;
- fetch calls to hardcoded external URLs that were not declared in the manifest;
- patterns that indicate obfuscated code, which is not itself malicious but is a signal worth flagging for human review.

async function analysePluginBundle(bundlePath: string): Promise<AnalysisResult> {
  const code = await fs.readFile(bundlePath, 'utf-8');
  const warnings: string[] = [];
 
  const dangerousPatterns = [
    { pattern: /\beval\s*\(/, message: 'Uses eval()' },
    { pattern: /new\s+Function\s*\(/, message: 'Uses Function constructor' },
    { pattern: /\.innerHTML\s*=/, message: 'Direct innerHTML assignment' },
    { pattern: /document\.write\s*\(/, message: 'Uses document.write' },
  ];
 
  for (const { pattern, message } of dangerousPatterns) {
    if (pattern.test(code)) {
      warnings.push(message);
    }
  }
 
  return { bundlePath, warnings, approved: warnings.length === 0 };
}

This is not a complete security audit — a determined attacker can obfuscate their way around string pattern matching. It is a filter for the most common and most careless security mistakes, appropriate for catching accidental problems in plugins from trusted sources and as a first pass for marketplace submissions.

Digital Signature Verification

For marketplace or enterprise environments, plugins should be signed by a trusted certificate and the signature verified before loading. The signing step happens in the plugin author's CI pipeline; verification happens in the host before the module is evaluated:

async function verifyPluginSignature(
  bundlePath: string,
  signaturePath: string,
  trustedPublicKey: CryptoKey
): Promise<boolean> {
  const [bundle, signature] = await Promise.all([
    fs.readFile(bundlePath),
    fs.readFile(signaturePath),
  ]);
 
  return crypto.subtle.verify(
    { name: 'RSASSA-PKCS1-v1_5', hash: 'SHA-256' },
    trustedPublicKey,
    signature,
    bundle
  );
}

The public key belongs to the organisation operating the marketplace or enterprise registry. Plugin authors submit their bundles for signing; the registry signs bundles it has reviewed. The host trusts only bundles signed with that key. A plugin modified after signing — whether by a compromised CDN, a man-in-the-middle, or a malicious package registry — will fail verification.

For remote plugin scripts, Subresource Integrity provides a simpler version of this guarantee:

<script src="plugin.js" integrity="sha384-..." crossorigin="anonymous"></script>

The browser computes the hash of the downloaded script and compares it to the declared value. If they do not match, the script is not executed. This does not authenticate the author — it only confirms the script has not been tampered with since the hash was recorded.
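Producing the integrity value is a build-time step: hash the exact bytes that will be served and base64-encode the digest. A minimal sketch using Node's crypto module (the function name is illustrative):

```typescript
import { createHash } from 'crypto';

// Compute an SRI integrity value: "sha384-" + base64(sha384(bytes)).
function sriHash(scriptBytes: string | Uint8Array): string {
  const digest = createHash('sha384').update(scriptBytes).digest('base64');
  return `sha384-${digest}`;
}

// The build pipeline writes the result into the script tag:
// <script src="plugin.js" integrity="sha384-..." crossorigin="anonymous"></script>
```

The hash must be computed over the final served bytes; re-minifying or re-encoding the script after hashing will make the browser reject it.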

Compatibility Validation

Vendure's @VendurePlugin({ compatibility: '^3.0.0' }) pattern is the lightest form of code validation: it checks that the plugin was built for the current host version before allowing it to run. A plugin compiled against a v2 host API calling methods that no longer exist in v3 will fail in unpredictable ways; failing fast at load time with a clear error is a significant improvement:

@VendurePlugin({
  compatibility: '^3.0.0',
})
export class PaymentPlugin {
  // Plugin aborts bootstrap immediately if version mismatch
}

Fail-Safe Defaults

The last principle is perhaps the most important: plugin failures should be contained, not catastrophic. A plugin that throws during setup() should be marked as failed in the registry and excluded from the running application; it should not prevent other plugins from loading or crash the host application. Vendure, NocoBase, and Backstage all implement variants of this — incompatible plugins abort with clear error messages, failed plugin loads do not propagate exceptions to the host, and the system degrades gracefully to a working state with the misbehaving plugin absent.
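The containment loop common to these systems can be sketched as follows. The `Plugin` interface and registry shape are illustrative, not any of the three frameworks' actual types:

```typescript
// Illustrative plugin shape: setup() may throw or reject.
interface Plugin {
  id: string;
  setup(): Promise<void> | void;
}

type PluginStatus = 'active' | 'failed';

// Load every plugin; a throwing setup() marks that plugin failed
// and the loop continues, so one bad plugin cannot block the rest.
async function loadAll(plugins: Plugin[]): Promise<Map<string, PluginStatus>> {
  const registry = new Map<string, PluginStatus>();
  for (const plugin of plugins) {
    try {
      await plugin.setup();
      registry.set(plugin.id, 'active');
    } catch (error) {
      // Contain the failure: record it, do not rethrow to the host.
      console.error(`Plugin ${plugin.id} failed to load:`, error);
      registry.set(plugin.id, 'failed');
    }
  }
  return registry;
}
```

The registry of statuses is what makes graceful degradation visible: the host UI can show administrators which plugins are absent and why, instead of failing silently.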

The event-driven permission updates in NocoBase are worth noting as a security pattern in their own right:

// React to permission changes without restart
this.app.on('acl:writeRoleToACL', (role) => {
  this.updatePermissions(role);
});
 
// Assign default role when new users are created
db.on('users.afterCreateWithAssociations', (user) => {
  this.assignDefaultRole(user);
});

Security state that can be updated at runtime — without a server restart — means that when a compromised plugin is discovered, the permissions can be revoked immediately. The window between discovery and remediation is narrower than in systems that require redeployment to change permission configuration.


Conclusion

Security in a plugin architecture is not a feature added at the end — it is a series of decisions that run through every layer of the design. The trust model determines the baseline: how much do you assume about who wrote this plugin and how it was delivered? Sandboxing determines the blast radius: if this plugin misbehaves, how much can it affect? The permission system determines the surface area: what capabilities does each plugin actually need? CSP and Trusted Types close the gap at the browser level. Code validation and signing push the security boundary to before the plugin loads at all.

Each of these layers addresses a different threat. Sandboxing does not replace permission checking; permission checking does not replace code validation. A robust plugin system implements all of them at the depth appropriate to its trust model — lighter for a system of vetted internal plugins, deeper for a marketplace open to the public.

In the next chapter, we turn to testing — how to verify that a plugin behaves correctly, that security boundaries hold, and that changes to the host do not silently break the plugins that depend on it.
