Radar Action Items

Radar action items highlight security and operations gaps that should be reviewed by an administrator. Each section below describes the problem, why it is a risk, a concrete example of what can go wrong, and the step-by-step fix in Webrix.

SSO Not Configured

Problem. Your organization is using the default email/password authentication provider instead of an external identity provider such as Okta, Google Workspace, Azure AD, Auth0, Keycloak, JumpCloud, or ADFS.

Why it is a risk.

  • Identity events (new hire, role change, termination) live in your IdP. Without SSO, every change has to be replayed manually inside Webrix.
  • Email/password accounts are easier to phish, harder to enforce MFA on, and rarely covered by your conditional access or device posture policies.
  • There is no central place to disable a user, so a compromised or terminated account may keep access to MCP servers, tools, and audit data.

Example. A contractor leaves the company. IT removes them from Okta on Friday but the Webrix-only password account stays active over the weekend. The contractor signs in from a personal device and exfiltrates data through a connected MCP server before the account is manually disabled.

How to solve it.

  1. Open Admin > Settings > Authentication.
  2. Choose your identity provider (Okta, Google, Azure AD, Auth0, Keycloak, JumpCloud, ADFS, or a custom OIDC provider).
  3. Paste the client ID, client secret, issuer or tenant ID, and any other required values from your IdP.
  4. Save the configuration and click Test Sign-In with at least one admin account.
  5. Once a successful round trip is confirmed, ask other admins to sign in with SSO before considering the rollout complete.
  6. Continue to the SSO Not Enforced item to remove the local fallback.
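
Before pasting values in step 3, you can sanity-check the issuer URL against the standard OIDC discovery endpoint. A minimal sketch in Python, assuming your IdP supports OIDC Discovery (the issuer below is an example, not a real tenant):

```python
import json
import urllib.request

def discovery_url(issuer: str) -> str:
    # Per OIDC Discovery, provider metadata is served at
    # {issuer}/.well-known/openid-configuration
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

def fetch_metadata(issuer: str) -> dict:
    # Returns authorization_endpoint, token_endpoint, jwks_uri, etc.
    with urllib.request.urlopen(discovery_url(issuer)) as resp:
        return json.loads(resp.read())

# Example issuer (replace with your tenant's value):
print(discovery_url("https://example.okta.com"))
# https://example.okta.com/.well-known/openid-configuration
```

If the discovery document loads and its issuer field matches what you pasted into Webrix, the issuer value is correct.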

SSO Not Enforced

Problem. SSO is configured but not enforced. Users are still allowed to authenticate with the default provider in addition to SSO.

Why it is a risk.

  • Local accounts can bypass MFA, conditional access, IP allowlists, and device posture rules that are configured in your IdP.
  • Attackers who phish a Webrix password do not need to defeat your IdP — the local login is enough.
  • Joiner/leaver automation in your IdP cannot disable accounts that exist outside it.

Example. A user enables MFA in Okta but their Webrix password is reused from a breached service. An attacker logs in with the password, never has to face MFA, and accesses sensitive integrations.

How to solve it.

  1. Open Admin > Settings > Authentication.
  2. Confirm at least one admin user can successfully sign in through SSO.
  3. Toggle Enforce SSO on.
  4. Communicate the change to users and document the recovery contact (an owner who still has SSO access).
  5. Optionally, audit the user list and remove any local accounts that should no longer exist.

Approved Clients Disabled

Problem. Any MCP client that can reach the gateway URL is allowed to connect. There is no allowlist of approved clients.

Why it is a risk.

  • Unknown clients can be used to enumerate tools, attempt prompt injection, or copy data into unmanaged contexts.
  • It is hard to audit who is calling the gateway because client identity is not constrained.
  • Internal users can experiment with random clients that bypass your usability and compliance review.

Example. A developer installs a brand-new community MCP client to try it out. It works against your gateway because nothing prevents it. The client logs the user's prompts to its own backend, leaking customer data.

How to solve it.

  1. Open Admin > Settings > Security (or Monitor > Security depending on your version).
  2. Enable Approved Clients Only.
  3. Review the existing client list under Admin > MCP Clients.
  4. Add the clients your organization wants to support (Cursor, Claude, GitHub Copilot, Codex, etc.).
  5. Inform users that unapproved clients will stop working and tell them how to request a new client.

Token Expiration Missing

Problem. Session tokens never expire. Once a token is issued it can be used until it is manually revoked.

Why it is a risk.

  • Tokens may be left in shell history, browser profiles, log lines, screen shares, or developer machines that leave the organization.
  • Without expiration, you have to detect a leak before you can stop it. Expiration is a passive control that limits the window of damage.
  • Long-lived tokens are usually flagged by SOC 2, ISO 27001, and customer security questionnaires.

Example. A laptop is donated to charity with the user's session token still cached in a config file. Six months later the laptop is resold and the new owner discovers the token. Because tokens never expire, the attacker has immediate access to the gateway.

How to solve it.

  1. Open Admin > Settings > Security.
  2. Set Token Expiration to a value that matches your policy. Common choices are 8 hours, 24 hours, 7 days, or 30 days.
  3. Communicate the change before saving — users may need to re-authenticate more frequently.
  4. Save and verify with one user that the new lifetime is enforced.
  5. Pair this with Log Sync Missing so token-related events also reach your SIEM.
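
To verify the new lifetime in step 4, you can read the expiry claim out of an issued token. This sketch assumes session tokens are JWTs, which may not match your deployment; it decodes the payload without verifying the signature:

```python
import base64
import json
import time

def jwt_exp(token: str) -> int:
    """Return the exp claim from a JWT payload (no signature check)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))["exp"]

# Build a throwaway token just to demonstrate the decoding:
def demo_jwt(exp: int) -> str:
    enc = lambda d: base64.urlsafe_b64encode(
        json.dumps(d).encode()).rstrip(b"=").decode()
    return f"{enc({'alg': 'none'})}.{enc({'exp': exp})}.sig"

token = demo_jwt(int(time.time()) + 8 * 3600)  # simulated 8-hour lifetime
print(jwt_exp(token) - int(time.time()) <= 8 * 3600)  # True
```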

MCP Enforcement Policy Missing

Problem. Device scans discover unmanaged MCP servers but no enforcement policy is set, so the default behavior is to allow them.

Why it is a risk.

  • Local MCP servers can expose files, secrets, or APIs that the user does not realize they are sharing.
  • Without a policy, every new server is treated as an interesting finding rather than something the platform actively blocks or warns about.
  • Different teams will reach different conclusions about whether a server is acceptable, leading to inconsistent behavior.

Example. An engineer installs a productivity-focused community MCP server that automatically reads the entire home directory. Webrix detects it but the policy is set to allow, so users continue to use it for weeks before the security team sees the finding in a quarterly review.

How to solve it.

  1. Open Admin > Settings > Security.
  2. Find the Enforcement Policy section and locate MCP Servers.
  3. Choose Warn if you need an observation period, or Block if you want unmanaged servers stopped immediately.
  4. Save the configuration.
  5. Visit Monitor > Shadow AI and triage any backlog of findings using the new policy.

Skill Enforcement Policy Missing

Problem. Unmanaged skills discovered on devices have no enforcement policy.

Why it is a risk.

  • Skills can include scripts, prompts, and instructions that agents will follow without review.
  • A malicious or careless skill can issue tool calls, exfiltrate data, or bypass guardrails that only run inside the gateway.
  • Without policy, you cannot consistently say "all skills must come from our managed marketplace".

Example. A user finds a public skill that promises to "summarize all files in the project". They install it locally. The skill's instructions also tell the agent to upload a copy of the README to a third-party endpoint. With no enforcement policy, the skill keeps running.

How to solve it.

  1. Open Admin > Settings > Security.
  2. Set the Skills enforcement action to Warn or Block.
  3. Promote any approved skills into managed Webrix skills so users can still use them.
  4. Document the request flow for new skills and link it from your internal wiki.

OAuth Policy Missing

Problem. OAuth authorization flows are governed by the default action, which is allow. There is no policy that decides which providers, scopes, or apps are reviewed.

Why it is a risk.

  • Users may authorize third-party apps with excessive scopes (for example, full Gmail access).
  • Without a policy, you cannot tell which OAuth apps were reviewed by security versus added ad hoc by users.
  • A compromised OAuth app can read or write data on behalf of the user even after their session ends.

Example. A user authorizes an unfamiliar app with the mail.read scope from a phishing prompt. The app silently reads incoming mail for months because no policy gates the authorization.

How to solve it.

  1. Open Admin > Settings > Security and find OAuth Policy.
  2. Change the Default Action from Allow to Warn or Block.
  3. Configure an explicit allowlist or blocklist for the providers and scopes that matter to your business.
  4. Tell users that new third-party authorizations may require review and link the request flow.
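
The allowlist in step 3 amounts to a scope check. A sketch of the decision logic, with hypothetical scope names (this is not the actual Webrix policy engine):

```python
# Example allowlist -- replace with the scopes your business approves.
ALLOWED_SCOPES = {"openid", "email", "profile", "calendar.read"}

def oauth_decision(requested: set, default_action: str = "warn") -> str:
    """Allow only when every requested scope is pre-approved;
    otherwise fall back to the configured default action."""
    excessive = requested - ALLOWED_SCOPES
    return "allow" if not excessive else default_action

print(oauth_decision({"openid", "email"}))      # allow
print(oauth_decision({"openid", "mail.read"}))  # warn
print(oauth_decision({"mail.read"}, "block"))   # block
```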

Shadow MCPs Detected

Problem. The Scan Agent found one or more MCP servers running on managed devices that are not registered in Webrix.

Why it is a risk.

  • The server may have access to local files, source code, secrets, or production APIs the security team has not reviewed.
  • Local servers can be misconfigured (no authentication, listening on 0.0.0.0, etc.) and become a foothold on the device.
  • They show that users are working around the managed gateway, which means logs and guardrails do not see those tool calls.

Example. Your scan reports an unmanaged Postgres MCP server running on a developer machine. The connection string in its config points at a production replica, and the server has no authentication. Anyone on the same network could query the production database via the local server.

How to solve it.

  1. Open Monitor > Shadow AI and filter by capability type MCP server.
  2. For each finding, decide whether the server should be:
    • Promoted to a managed Webrix integration,
    • Allowed because it is a safe local-only tool, or
    • Blocked and removed from the device.
  3. Apply policy to similar future findings using the MCP Enforcement Policy item.
  4. If the same server appears on many devices, document it once and apply the decision in bulk.

Shadow Skills Detected

Problem. Skills were discovered on devices that are not part of your managed catalog.

Why it is a risk.

  • Skills are instructions for an agent. Unreviewed instructions can change tool selection, exfiltrate data, or weaken guardrails.
  • Local skills are easy to copy from forums, AI-generated examples, or competitors and rarely have provenance.
  • Different users may run different versions of the "same" skill, making support and incident response harder.

Example. A user installs a community skill called "Help me commit all changes". The skill includes a hidden instruction telling the agent to also push the branch to a personal fork on GitHub. Without review, source code leaks every time the user runs the skill.

How to solve it.

  1. Open Monitor > Shadow AI and filter by Skill.
  2. For useful skills, click Promote to Managed and review their content before publishing to the organization marketplace.
  3. For unsafe skills, contact the owner, then enforce removal via the Skill Enforcement Policy.
  4. Track repeat offenders so you can offer training or a managed alternative.

No Active Guards

Problem. Your organization has no active guards or guardrail providers attached to MCP traffic.

Why it is a risk.

  • Without guards, sensitive data (API keys, customer PII, secrets) can flow through tool calls and into logs unobserved.
  • Prompt injection from a tool's output is not caught, so a malicious page or document can hijack agent behavior.
  • Compliance frameworks generally expect at least one technical control between users and external services.

Example. A user pastes a customer support transcript into a chat. Without a guard, the transcript — including credit card numbers — is sent verbatim to a third-party tool, and the call is logged in plaintext.

How to solve it.

  1. Open Guards and review the built-in guards (secrets detection, prompt injection, PII detection, etc.).
  2. Enable at least one runtime guard and one build-time guard. Start with secrets and prompt injection.
  3. Optionally connect an external guardrail provider (ActiveFence, Prompt Security, custom webhook) for additional coverage.
  4. Attach guards to the right scope (organization, group, or specific integrations) so they actually run.
  5. Confirm guards are firing by checking Monitor > Logs after a few tool calls.

No Runtime Guards

Problem. Build-time guards may exist, but no runtime guards are attached, so live tool calls are not inspected.

Why it is a risk.

  • Build-time checks miss anything that depends on runtime context (user input, tool output, dynamic arguments).
  • Sensitive data can be sent to a tool and returned to the agent without any inspection.
  • Detection becomes purely reactive — you only learn about leaks via logs after the fact.

Example. A user asks the agent to summarize a Notion page. The page contains the Stripe live API key. With no runtime guard, the key is read by the agent and emitted to the chat history.

How to solve it.

  1. Open Guards and create or activate a runtime guard.
  2. Choose a scope (runtime), a direction (input, output, or both), and the entities the guard applies to (integration, toolkit, skill, command).
  3. Set the action: Redact, Warn, or Block depending on severity and your tolerance for false positives.
  4. Save the guard and run a test tool call.
  5. Watch the guard results in Monitor > Logs > Guard Events.

No Build-Time Guards

Problem. No build-time guards are active, so skills, commands, and toolkits can be published without inspection.

Why it is a risk.

  • Hardcoded secrets, malicious instructions, or unsafe code can ship to many users at once via the managed marketplace.
  • Reviewers have to check everything by eye, which does not scale.
  • Anything synced from Git (skills, MCP servers, toolkits) bypasses runtime checks if there is no build-time gate.

Example. A new "Repository Scanner" skill is added to the marketplace. It contains an embedded private key the author forgot to remove. Without a build-time guard, the skill is published and copied to many devices.

How to solve it.

  1. Open Guards and enable the built-in build-time guards: Secret & Credential Detection, Prompt Injection Detection, and Code Injection Detection at minimum.
  2. Attach them to the entity types you want covered (skill, command, toolkit, integration).
  3. Set the action to Block for high-confidence findings so unsafe content cannot be published.
  4. Re-validate any existing entities that were created before the guards were enabled.

Content Redaction Disabled

Problem. MCP request and response content is being persisted in logs without redaction.

Why it is a risk.

  • Audit logs are read by more systems than the original tool call (SIEM, log archive, support tooling).
  • Anyone with access to logs may end up with copies of secrets or customer data.
  • Long retention combined with plaintext content means a single log breach can expose months of sensitive payloads.

Example. A help desk engineer queries the audit log to debug a tool failure. They see the raw payload, which includes a customer's social security number. The engineer should not have access to that data.

How to solve it.

  1. Open Admin > Settings > Logs.
  2. Enable Redact MCP Content.
  3. Decide whether to redact only sensitive fields (preferred) or all payload bodies.
  4. Verify by inspecting a fresh log entry and confirming sensitive fields are masked.
  5. Communicate the change to engineers who rely on log payloads for debugging — they may need to add a debug-only path.
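
The "sensitive fields only" option in step 3 amounts to pattern-based masking. A minimal sketch with one illustrative pattern (real redaction covers many more shapes than a single SSN regex):

```python
import re

# Example pattern only: US SSN shape. Production redaction would also
# cover credit cards, API keys, emails, and structured field names.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    return SSN.sub("[REDACTED]", text)

print(redact("customer SSN is 123-45-6789"))
# customer SSN is [REDACTED]
```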

Log Sync Missing

Problem. Audit logs only live inside Webrix. There is no transport configured to ship them to your SIEM, log warehouse, or webhook.

Why it is a risk.

  • Security teams cannot correlate Webrix events with the rest of your environment.
  • Independent retention is missing, so logs can be lost or modified without an external copy.
  • Many compliance regimes require centralized logging with separate access control.

Example. A customer asks for an attestation that admin actions are forwarded to your SIEM. You cannot answer yes because Webrix logs only exist in the application's own database.

How to solve it.

  1. Open Admin > Settings > Logs > Transports.
  2. Add a transport for your destination: Splunk HEC, Loki, Coralogix, or a generic webhook.
  3. Test the transport with a synthetic event and confirm it lands in the destination.
  4. Configure your SIEM with alerts on high-risk events (admin login, token creation, integration creation, guard breach).
  5. Document the destination, authentication, and on-call owner.
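
The synthetic event in step 3 can be sent with a few lines of Python. The event shape and webhook URL below are illustrative, not the actual Webrix transport schema:

```python
import json
import urllib.request

# Hypothetical event payload -- field names are examples only.
event = {
    "type": "radar.transport_test",
    "severity": "info",
    "message": "synthetic transport check",
}

def send(webhook_url: str, payload: dict) -> int:
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # a healthy destination returns 200

# send("https://siem.example.com/hooks/webrix", event)
```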

Admin Action Logging Disabled

Problem. Admin configuration changes are not being recorded.

Why it is a risk.

  • Without admin logs, you cannot tell who turned off a guard, created a token, or changed a security policy.
  • Incident response is harder because the actions taken to mitigate or cause an incident are invisible.
  • SOC 2 and ISO controls usually require logging of administrative actions.

Example. A guard is suddenly disabled. There is no record of who disabled it or when, so you cannot tell whether it was a legitimate operator or an attacker.

How to solve it.

  1. Open Admin > Settings > Logs.
  2. Enable Log Admin Actions.
  3. Optionally configure a transport so admin events flow to your SIEM as well.
  4. Review the Admin Audit Logs page and confirm new admin actions appear.

Audit Retention Missing

Problem. Audit logs have no retention policy, so they are kept forever or pruned arbitrarily.

Why it is a risk.

  • Storing logs forever can violate data retention or privacy policies.
  • Pruning them too quickly can break investigations that need historical context.
  • Auditors will ask for the retention policy in writing.

Example. Your data protection policy says PII-adjacent logs must be deleted after 365 days. Without a retention policy in Webrix, those logs are still around three years later, putting you out of compliance.

How to solve it.

  1. Open Admin > Settings > Logs.
  2. Set Audit Log Retention to the number of days that matches your governance requirements (commonly 30, 90, 180, or 365 days).
  3. Confirm the value is reflected in Monitor > Logs by checking the oldest available record after the next pruning run.
  4. Update your security documentation with the new retention period.

Device Scanning Missing

Problem. No devices have ever reported scan results, so Webrix has no visibility into local AI tools, MCP servers, or skills.

Why it is a risk.

  • Shadow MCPs and shadow skills cannot be detected if no agent is reporting them.
  • You cannot enforce device-level policies for users who work outside the managed gateway.
  • Onboarding gaps are invisible — new hires may install local tools without anyone seeing them.

Example. Several developers are running local MCP servers with broad filesystem access. Without the Scan Agent, you do not know about them and have no inventory.

How to solve it.

  1. Open Monitor > Shadow AI and click the Deploy Scan Agent instructions.
  2. Distribute the agent to a pilot group (5–10 users).
  3. Confirm devices appear in the device list and capabilities show up.
  4. Roll out to the rest of the organization through MDM, Intune, Jamf, or a similar tool.
  5. Re-evaluate the Shadow AI and policy items once data is flowing.

MCP Server Without Authentication

Problem. An active MCP server has its auth_settings.authType set to none. The gateway can connect to it without authenticating.

Why it is a risk.

  • Anyone with the URL can issue tool calls without identifying themselves.
  • There is no per-user attribution, so you cannot answer "who called this tool".
  • The server can be enumerated by attackers who scrape gateway URLs from logs or screenshots.

Example. An engineer publishes a new MCP server and forgets to enable OAuth. The server is briefly indexed by an internal portal. Anyone in the company who finds the URL can list and call its tools, including a delete_record tool that affects production data.

How to solve it.

  1. Open Integrations, click the affected server, and go to Settings > Authentication.
  2. Choose an authentication method: OAuth, API key, client credentials, server-app, proxy passthrough, or another supported provider.
  3. Save the credentials, including any client ID and secret values, in Vault rather than directly in the configuration.
  4. Test a connection using the Test button or by running a tool call.
  5. If the server should not be active yet, set its status to Draft until authentication is configured.
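
The radar check itself boils down to inspecting auth_settings.authType. A sketch of that check, assuming the field layout implied by the item name (not a published schema):

```python
def is_authenticated(server: dict) -> bool:
    # Treat a missing auth_settings block the same as authType "none".
    auth = server.get("auth_settings") or {}
    return auth.get("authType", "none") != "none"

print(is_authenticated({"auth_settings": {"authType": "oauth"}}))  # True
print(is_authenticated({"auth_settings": {"authType": "none"}}))   # False
print(is_authenticated({}))                                        # False
```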

High-Risk Tool

Problem. A tool is classified as high risk, typically because its HTTP method is DELETE or because it has an explicit risk: "high" flag from a connector.

Why it is a risk.

  • The tool can mutate or delete data, often irreversibly.
  • Agents may call it with arguments inferred from user prompts or external content (prompt injection).
  • A single bad call can cause customer-visible outages or data loss.

Example. A connector exposes delete_user as a tool. An agent reads a phishing email that contains the instruction "delete the test user named admin". Without constraints, the agent may issue the call.

How to solve it.

  1. Open the integration and review the tool definition.
  2. Decide whether the tool is required:
    • No — disable it.
    • Yes — continue to the steps below.
  3. Constrain inputs with mappings (for example, force confirm: "yes" to be present, or force a regex on resource IDs).
  4. Attach a runtime guard that requires explicit user confirmation in the prompt for high-risk operations.
  5. Restrict the tool to specific groups via guard attachments or group policies.
  6. If your version supports it, click Reduce Risk with AI in the radar item sheet to generate suggested mappings.
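
The input constraints in step 3 can be expressed as a small validator. A sketch with hypothetical argument names (confirm, id) and an example regex that only admits test resources:

```python
import re

# Example constraint: only resource IDs with a "test-" prefix may be deleted.
RESOURCE_ID = re.compile(r"^test-[a-z0-9-]{4,}$")

def allow_delete(args: dict) -> bool:
    # Require both an explicit confirmation and a constrained resource ID,
    # mirroring the mapping constraints described above.
    return (args.get("confirm") == "yes"
            and bool(RESOURCE_ID.match(args.get("id", ""))))

print(allow_delete({"confirm": "yes", "id": "test-abc123"}))  # True
print(allow_delete({"confirm": "yes", "id": "prod-db-1"}))    # False
print(allow_delete({"id": "test-abc123"}))                    # False
```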

Broad API Token

Problem. An API token is configured with the all scope, which grants every available admin and connect permission.

Why it is a risk.

  • If the token leaks, the attacker has the same authority as a top-level admin.
  • Audit logs cannot show which subset of capabilities a token actually needed.
  • It encourages reuse of one token across multiple integrations, which makes rotation hard.

Example. A CI pipeline uses an all-scope token. The pipeline accidentally prints the token to a public log. The attacker now has admin access to the gateway because nothing limits the token's scope.

How to solve it.

  1. Identify which capabilities the token actually needs (admin reads, connect reads, etc.).
  2. Open Admin > API Tokens and click Create Token.
  3. Pick the minimum required scopes (for example, connect:read only).
  4. Update the consumer (CI, internal service, script) to use the new token.
  5. Verify the consumer still works.
  6. Revoke the old all-scope token from Admin > API Tokens.

Background Agent Review

Problem. A background agent is active. Background agents run autonomously without a human in the loop.

Why it is a risk.

  • Capabilities tend to drift over time as engineers add more tools or skills.
  • Owners may leave the company without transferring responsibility.
  • A misconfigured agent can issue many tool calls per day, and the impact is amplified by the lack of human review.

Example. A background agent was created six months ago to "summarize daily activity". Since then, someone added a send_email tool to it. The agent now emails summaries to addresses outside the company because nobody reviewed the change.

How to solve it.

  1. Open Manage > Background Agents and select the agent.
  2. Review the Owners, Tools, Skills, and Rules sections.
  3. Remove anything that is no longer required for the agent's stated purpose.
  4. Confirm the owners are still active employees and add a backup owner.
  5. Disable the agent if it is no longer needed.
  6. Set a recurring calendar reminder (quarterly) to repeat this review.

Stale Vault Secret

Problem. A Vault secret has not been updated in at least 90 days.

Why it is a risk.

  • Long-lived secrets are more likely to leak via screenshots, logs, dotfiles, or terminated employees.
  • Some upstream providers (Stripe, AWS, GitHub Apps) require periodic rotation for compliance.
  • Even if the secret has not leaked, rotating proves your rotation process actually works.

Example. A vault secret called STRIPE_LIVE_KEY is one year old. A former contractor still has a copy in their .env.bak file from when they had access. Because the secret has not been rotated, that copy still works.

How to solve it.

  1. Open the upstream provider's dashboard (for example, Stripe, GitHub, AWS) and create a new credential.
  2. Open Admin > Settings > Vault in Webrix.
  3. Edit the affected secret and paste the new value. Click Save.
  4. Confirm the integrations or MCP configurations that reference the secret with {{vault.SECRET_NAME}} still work.
  5. Revoke the old credential at the upstream provider.
  6. Add the secret name to your rotation calendar so it does not slip again.
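
The 90-day threshold behind this item is a simple age check. A sketch you could reuse in your own rotation reminders:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)  # threshold used by this radar item

def is_stale(last_rotated: datetime, now: datetime) -> bool:
    return now - last_rotated >= MAX_AGE

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(is_stale(datetime(2024, 1, 1, tzinfo=timezone.utc), now))  # True
print(is_stale(datetime(2024, 5, 1, tzinfo=timezone.utc), now))  # False
```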

Missing Machine User Credentials

Problem. A machine user does not have both a clientId and clientSecret configured under its client credentials.

Why it is a risk.

  • Service-to-service automations may fall back to using a personal user's credentials, which is harder to audit and breaks when that user leaves.
  • Without machine user credentials, scripts may store ad hoc tokens in environment files instead of using a managed flow.
  • It is unclear who owns the integration because there is no dedicated identity.

Example. A nightly job uses a personal admin token instead of a machine user. The admin leaves the company, their account is disabled, and the nightly job silently fails for several days before anyone notices.

How to solve it.

  1. Open Manage > Machine Users and select the affected machine user.
  2. Click Generate Credentials (or Add Credentials) to create a new client ID and client secret.
  3. Copy the secret immediately — it is shown only once. Store it in your runtime secret manager (Vault, AWS Secrets Manager, etc.).
  4. Update the consumer to use the new client ID and secret.
  5. Confirm the consumer authenticates successfully and emits expected audit events.
  6. Document the owner of the machine user so future rotations have a clear contact.
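
Machine user credentials typically drive a standard OAuth2 client_credentials grant (RFC 6749 section 4.4). A sketch of the request a consumer would make — the token endpoint URL below is hypothetical:

```python
import json
import urllib.parse
import urllib.request

def token_request_body(client_id: str, client_secret: str) -> bytes:
    # Standard client_credentials form body (RFC 6749 section 4.4)
    return urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()

def fetch_token(token_url: str, client_id: str, client_secret: str) -> str:
    req = urllib.request.Request(
        token_url, data=token_request_body(client_id, client_secret))
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["access_token"]

# token = fetch_token("https://gateway.example.com/oauth/token", cid, secret)
print(b"grant_type=client_credentials" in token_request_body("id", "secret"))  # True
```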

Stale Machine User Credentials

Problem. Machine user client credentials have not been rotated for at least 90 days.

Why it is a risk.

  • Machine user credentials are often baked into pipelines, container images, or long-running services. They tend to spread silently.
  • A leaked credential can be hard to spot because there is no end-user to notice login anomalies.
  • Many compliance standards require rotation of non-human credentials at a defined interval.

Example. A Kubernetes secret containing a machine user's clientSecret is mounted into a pod. The pod has been running for 18 months. The credential is rebuilt into ten different images before anyone notices it has never been rotated.

How to solve it.

  1. Open Manage > Machine Users and select the affected user.
  2. Click Rotate Credentials to generate a new client secret.
  3. Roll the new secret into all consumers (CI, deployment, services). Use a brief overlap period if both secrets can be active.
  4. Verify each consumer is using the new secret (look for fresh authentication events).
  5. Revoke the old secret in Webrix.
  6. Update the rotation reminder so the next rotation happens before the next 90-day mark.

Secrets In MCP Configuration

Problem. An MCP server's configuration JSON appears to contain a hardcoded credential — for example, an API key, OAuth client secret, JWT, private key, password, or database connection string.

Why it is a risk.

  • MCP configurations are easier to share, copy, sync to Git, and screenshot than Vault-managed secrets.
  • A leaked configuration can immediately authenticate against the upstream provider.
  • Detection rules in your SIEM may flag the leak only after it has been pushed to a remote repository.

Example. A user enters this configuration:

  {
    "type": "http",
    "url": "https://api.example.com/mcp",
    "headers": {
      "Authorization": "Bearer sk-ant-abc123...realsecret..."
    }
  }

Anyone who can read the integration sees the live API key. If the configuration is exported (via Git sync, JSON export, or screen sharing), the key leaves Webrix in plaintext.

How to solve it.

  1. Open Admin > Settings > Vault in Webrix.
  2. Click Create Secret, name it descriptively (for example, EXAMPLE_API_KEY), and paste the current value.
  3. Open the affected integration's MCP configuration.
  4. Replace the inline secret with a Vault reference: "Authorization": "Bearer {{vault.EXAMPLE_API_KEY}}".
  5. Save the configuration. Webrix resolves the reference securely at runtime.
  6. Rotate the credential at the upstream provider, because the previous value was already in plaintext and may have been copied.
  7. Update the Vault secret with the rotated value and confirm the integration still works.
  8. Repeat for every secret reported by the radar item.
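
Detecting inline secrets like the one above is usually regex-based. A minimal sketch — the patterns are illustrative; real build-time guards use much larger rule sets:

```python
import json
import re

# Illustrative patterns only, not a production rule set.
PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{10,}"),               # API-key style prefixes
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private keys
    re.compile(r"eyJ[A-Za-z0-9_-]{10,}\."),             # JWT-looking tokens
]

def find_secrets(config: dict) -> list:
    text = json.dumps(config)
    return [m.group(0) for p in PATTERNS for m in p.finditer(text)]

config = {"headers": {"Authorization": "Bearer sk-ant-abc123-not-a-real-key"}}
print(len(find_secrets(config)))  # 1
```

Note that a Vault reference such as {{vault.EXAMPLE_API_KEY}} matches none of these patterns, which is exactly why the fix above removes the finding.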

Event Item

Problem. An event-generated radar item is open. These items are produced by runtime signals, integrations, or future detectors; each one carries its own context and is not part of the static catalog above.

Why it is a risk.

  • The signal indicates something concrete happened (for example, a guard breach, an unusual sign-in, a failed rotation).
  • The item is open until somebody acknowledges and addresses it.

How to solve it.

  1. Read the item's title, description, and detail carefully.
  2. Click the Fix action if one is provided — it points to the relevant page in Webrix.
  3. Add notes describing what you did and assign follow-up to the right owner.
  4. Resolve the item only after the underlying signal has been addressed, so it does not auto-reopen on the next evaluation.