vCISO Aegis AI™
Reflex AI™ — Operations

Reflex AI™

The telemetry continuity system inside vCISO Aegis AI™. Detects telemetry loss, freezes last-known-good evidence, alerts you, and never fabricates coverage.

Product: Reflex AI™ (a component of vCISO Aegis AI™)  ·  Owner: ElasticD3M, LLC  ·  Version: 1.0  ·  Effective: April 10, 2026

Legal anchor: Terms of Use §6.2, §6.3, §6.4, §10

Plain-language summary. Reflex AI™ is the telemetry continuity component inside vCISO Aegis AI™. vCISO Aegis AI™ produces compliance evidence only from live telemetry emitted by your environment. It never synthesizes answers from checklists, surveys, or questionnaires. When telemetry stops, Reflex AI™ takes over: it detects the loss, freezes last-known-good evidence, marks affected controls as stale or unknown, alerts you, and manages the tiered remediation described below. This page is the operational playbook that backs Terms §6.2–6.4.
Best efforts — no guarantees. vCISO Aegis AI™ is an AI-native product. AI agents can make mistakes, especially early in their lifecycle as they learn and evolve into mature technologies. We test and update our agents on a weekly cadence at minimum to keep them as current as possible. The Services are provided on a best-efforts basis. We make no guarantees. Service availability is subject to unforeseen events, including telemetry loss, upstream outages, collector failures, and third-party source failures. See the Terms of Use Section 10 for the full disclaimer.

1. Purpose

vCISO Aegis AI™ is an AI-native Agent-as-a-Service (AaaS) product. Every output is derived from live telemetry. That architectural choice is what makes the product defensible in front of a DoD assessor, an auditor, or a court. It also creates one load-bearing failure mode: if telemetry stops flowing, the system must not fabricate coverage.

This page defines how telemetry loss is detected, how the platform responds automatically, how you are informed, and what continuity options are available so your regulatory posture does not go dark during an outage. This plan is executed by automated agents with a human-in-the-loop executive approval gate before any bridge service is extended or modified.

2. Core principle: fail closed, never fabricate

The single rule that governs every decision in this document:

When telemetry for a control is missing, stale, or unverifiable, the Services will mark that control as stale or unknown and freeze the last known good evidence with an explicit as_of timestamp. The Services will never produce a synthesized, estimated, or questionnaire-derived answer in its place.

Everything downstream of this rule — alerts, dashboards, bridge services, SLAs — exists to make "fail closed" a survivable experience for the customer rather than a cliff.

3. Telemetry sources under scope

Each customer environment feeds vCISO Aegis AI™ through a combination of collectors and integrations. The continuity plan applies to all of them.

Each source is tagged with the CMMC and NIST SP 800-171 control scopes it evidences. Loss of a source automatically marks every control in its scope as stale until telemetry resumes or a fallback source covers the gap.

4. Detection layer

4.1 Heartbeat and staleness thresholds

Every telemetry source has three timers maintained by the platform:

Timer            Default    Purpose
heartbeat_max    5 min      Missed heartbeat window before the source is marked degraded.
stale_window     30 min     Time without fresh events before the source is marked stale.
freeze_trigger   2 hours    Time after which last-known-good evidence is frozen and the control is marked unknown.

Thresholds are overridable per source and per customer. Higher tiers (Guardian, Vanguard, Fortress, Sovereign) may purchase tighter thresholds and faster response SLAs; all tiers receive the default detection regardless.
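The three timers and their override behavior can be sketched roughly as follows. This is an illustrative model only: the class name, field names, and override mechanics are assumptions, not the shipped schema; only the default values and status names come from the table above.

```python
from dataclasses import dataclass

@dataclass
class ContinuityTimers:
    """Per-source timers; defaults mirror the table, overrides per customer."""
    heartbeat_max_s: int = 5 * 60        # missed-heartbeat window -> degraded
    stale_window_s: int = 30 * 60        # no fresh events -> stale
    freeze_trigger_s: int = 2 * 60 * 60  # evidence frozen, control -> unknown

def status_for(seconds_since_event: int, t: ContinuityTimers) -> str:
    """Map elapsed time since the last valid event to a source status."""
    if seconds_since_event >= t.freeze_trigger_s:
        return "unknown"
    if seconds_since_event >= t.stale_window_s:
        return "stale"
    if seconds_since_event >= t.heartbeat_max_s:
        return "degraded"
    return "healthy"

default = ContinuityTimers()
tightened = ContinuityTimers(heartbeat_max_s=60)  # a per-customer override
```

A higher tier with a tighter `heartbeat_max` would see a source flip to degraded after 60 seconds of silence instead of 5 minutes, while the stale and freeze thresholds stay at their defaults unless also overridden.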

4.2 Automatic reconciliation before alert

Before any customer-facing alert fires, the platform attempts re-authentication against the source, retries with exponential backoff on transient API failures, fails over to a redundant collector where one is deployed, and compares against a secondary read-only source if one is mapped to the same control scope. If all of these fail within stale_window, the source is declared degraded and the response sequence in §5 begins.
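The reconciliation ladder can be sketched as an ordered chain that stops at the first step that succeeds. The step functions below are hypothetical stand-ins; real steps would call source-specific clients, and the retry counts and delays are illustrative.

```python
import time

def with_backoff(attempt_fn, retries=3, base_delay=0.01):
    """Retry a transient operation with exponential backoff."""
    for attempt in range(retries):
        if attempt_fn():
            return True
        time.sleep(base_delay * (2 ** attempt))
    return False

def reconcile(steps):
    """Run reconciliation steps in order; stop at the first that succeeds.
    Returns the winning step's name, or None -> declare the source degraded."""
    for name, step in steps:
        if step():
            return name
    return None

calls = []
steps = [
    ("reauth", lambda: (calls.append("reauth"), False)[1]),
    ("retry_backoff", lambda: with_backoff(lambda: (calls.append("retry"), False)[1])),
    ("failover_collector", lambda: (calls.append("failover"), True)[1]),
]
winner = reconcile(steps)
```

Here re-authentication fails, three backoff retries fail, and the redundant collector succeeds, so no customer-facing alert would fire.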

4.3 Integrity check

Detection is not just "is data arriving." The platform also watches for volume anomalies (sudden drop-off without a corresponding source change), schema drift (fields disappearing mid-stream), clock skew that would invalidate evidence timestamps, and silent credential downgrade (source authenticating but returning empty results). Any of these will trigger the same response path as an outright outage.
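The four integrity checks can be sketched as a single pass over a telemetry batch. The batch shape, field names, and thresholds below are assumptions for illustration; only the four failure categories come from the text above.

```python
def integrity_findings(batch, baseline_rate, required_fields, max_skew_s=300):
    """Return the integrity problems found in one telemetry batch.

    batch: {"events": [...], "interval_s": int, "source_clock_offset_s": int}
    baseline_rate: expected events per second for this source.
    """
    findings = []
    events = batch["events"]
    rate = len(events) / batch["interval_s"]
    if rate < 0.1 * baseline_rate:               # sudden volume drop-off
        findings.append("volume_anomaly")
    for e in events:                             # fields vanishing mid-stream
        if not required_fields.issubset(e):
            findings.append("schema_drift")
            break
    if abs(batch["source_clock_offset_s"]) > max_skew_s:
        findings.append("clock_skew")            # would invalidate timestamps
    if not events:                               # authenticated but empty
        findings.append("silent_credential_downgrade")
    return findings
```

Any non-empty findings list would route the source into the same degraded path as an outright outage.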

5. Automatic response sequence

When a source crosses the stale_window:

  1. Freeze last-known-good evidence. The most recent valid evidence bundle for every control in the affected scope is written to an immutable store with an as_of timestamp, the triggering reason, and the operator identity (the automated agent).
  2. Mark affected controls as stale. Dashboards and API responses covering those controls return status stale with the as_of timestamp surfaced in every response. Downstream customer reports show the freeze condition, not a green check.
  3. Raise a customer alert. Three channels fire in parallel: dashboard banner on the customer console, email to designated security contacts, and webhook to the customer's SIEM or ITSM endpoint (if configured).
  4. Open an internal incident. An incident record is created in the ElasticD3M ops system with severity auto-assigned by control criticality. Watchman and Sentinel tiers receive best-effort response within business hours. Guardian and above receive the SLA defined in their order form.
  5. Begin tiered remediation (see §6).
  6. Continue reporting the freeze condition in every customer-facing surface until telemetry resumes and a normal evidence bundle is produced for the affected scope. No back-fill. No synthetic coverage.

This entire sequence is executed by automated agents. Human executive approval is required before the sequence escalates past Tier 3 (see §6).
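Steps 1 and 2 of the sequence can be sketched as a single freeze operation that writes the bundle with its as_of timestamp, reason, operator identity, and hash, and flips every control in the affected scope to stale. All structure and names here are illustrative assumptions; the real store is immutable and append-only.

```python
import hashlib
import json
from datetime import datetime, timezone

def freeze_scope(controls, scope, last_good, reason, operator="agent:reflex"):
    """Freeze last-known-good evidence for a scope and mark its controls stale."""
    as_of = datetime.now(timezone.utc).isoformat()
    payload = json.dumps(last_good, sort_keys=True).encode()
    frozen = {
        "as_of": as_of,
        "reason": reason,
        "operator": operator,           # the automated agent's identity
        "evidence": last_good,
        "sha256": hashlib.sha256(payload).hexdigest(),  # tamper-evidence
    }
    for control_id in scope:
        controls[control_id] = {"status": "stale", "as_of": as_of}
    return frozen

controls = {"AC.L2-3.1.1": {"status": "pass"}, "AU.L2-3.3.1": {"status": "pass"}}
bundle = freeze_scope(
    controls, ["AU.L2-3.3.1"],
    last_good={"AU.L2-3.3.1": "log-forwarding verified"},
    reason="edr_telemetry_stale",
)
```

Only the controls in the affected scope change state; everything else keeps reporting normally, and dashboards surface the frozen bundle's as_of timestamp rather than a green check.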

6. Tiered remediation

Tier 1 — minutes

Automatic reconnect

The platform retries the source, refreshes credentials, and fails over to redundant collectors. No human action is required. You see a transient degraded state on the dashboard and, if the issue clears within stale_window, no alert is sent.

Tier 2 — hours

Expedited collector redeploy

If Tier 1 fails, an automated agent attempts to redeploy the collector from the ElasticD3M deployment pipeline using your existing consent and scope. This is only available for collectors that ElasticD3M owns (our lightweight agent, our cloud-API pollers). Third-party EDR outages fall through to Tier 3.

Tier 3 — same day

Fallback source mapping

Every control scope is optionally mapped to one or more fallback telemetry sources at onboarding. Example: if the primary EDR telemetry goes dark, the platform can pivot to SIEM-forwarded EDR logs, or to cloud workload protection data from the cloud provider's control plane. The fallback is always read-only and lower-fidelity; the dashboard clearly marks any control evidenced from a fallback source as fallback_evidence so you and your auditor can see exactly what is covered and by what. Fallback sources are free to configure at onboarding. Customers without fallback mapping skip this tier.
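Fallback resolution can be sketched as a lookup that tries the primary source first and walks the mapped fallbacks in order, labeling any fallback-derived result as fallback_evidence. The mapping shape and source names below are hypothetical.

```python
def resolve_evidence(control_id, mappings, source_up):
    """mappings: control_id -> {"primary": name, "fallbacks": [names]}.
    source_up: source name -> bool (is telemetry currently flowing)."""
    m = mappings[control_id]
    if source_up.get(m["primary"]):
        return {"control": control_id, "source": m["primary"], "label": "evidence"}
    for fb in m["fallbacks"]:                    # read-only, lower-fidelity
        if source_up.get(fb):
            return {"control": control_id, "source": fb, "label": "fallback_evidence"}
    return {"control": control_id, "source": None, "label": "stale"}

mappings = {"SI.L2-3.14.6": {"primary": "edr_api",
                             "fallbacks": ["siem_edr_logs", "cwp_control_plane"]}}
up = {"edr_api": False, "siem_edr_logs": True}
result = resolve_evidence("SI.L2-3.14.6", mappings, up)
```

In this example the primary EDR API is dark, so the control is evidenced from SIEM-forwarded EDR logs and explicitly labeled so an auditor can see the provenance.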

Tier 4 — SOW-gated

Human-assisted bridge service

If Tier 1–3 cannot restore compliance evidence within the SLA for your tier and you want to avoid a regulatory gap, ElasticD3M may offer a bridge service under a separate statement of work. Bridge services may include on-site or remote collector re-deployment with your approval, temporary direct ingest from customer-provided log exports, structured interviews with your security team to capture evidence of compensating controls (documented explicitly as customer attestations, never as system-produced compliance answers), and expedited review of existing audit artifacts you are willing to share.

Bridge services require a signed SOW and an executive approval from ElasticD3M before the work begins. They are not included in any subscription tier by default and do not change the telemetry-only nature of the underlying product. Any evidence produced via bridge services is labeled as bridge_evidence with the human operator identity attached.

Tier 5 — Fortress / Sovereign only

Continuity package

Fortress and Sovereign customers may pre-purchase a Continuity Package that pre-authorizes bridge services, guarantees a response window, and includes a standing set of fallback source mappings maintained by ElasticD3M. This converts Tier 4 from reactive-SOW to a contracted option that can be triggered without negotiation during an incident.

7. What you get during the gap

The promise during telemetry loss is specific and narrow:

  1. Frozen evidence. The last-known-good evidence bundle for every affected control is preserved with an as_of timestamp, the reason for the freeze, and a cryptographic hash so any tampering is detectable. This is produced as a downloadable report on request.
  2. A clear alert trail. Every alert raised, every remediation step attempted, and every status transition is logged and available for export as an incident report you can hand to your auditor.
  3. A clear "unknown" signal. Controls that have gone past the freeze trigger show as unknown in every API and report. No green checks. No synthesized coverage. Auditors see exactly what is and is not known.
  4. A recovery plan. An incident record shows which tier the response is currently in, what the next step is, and what the expected window is.
  5. Optional bridge services per §6 (Tiers 4 and 5) if you want to fill the gap with human-assisted continuity rather than accept a temporary unknown state.

What you explicitly do not get: synthesized or estimated coverage, back-filled evidence, questionnaire-derived answers, or green checks for controls the platform cannot currently verify.

This is the product's defensive posture. It is also the reason customers in regulated industries buy it.

8. Recovery and unfreeze

When telemetry for an affected source resumes, the platform waits for stable_window (default 15 min) of clean data before declaring the source recovered. The stale and unknown statuses for the affected control scope are then cleared on the next evidence cycle. The frozen evidence bundle is retained in the immutable store and linked from the recovered control's history. The incident record is closed with a full timeline, including the freeze window and any bridge services provided during the gap. The next customer report shows the full timeline — recovery does not erase the gap from the record.
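The stable_window gate can be sketched as a check that the trailing run of clean batches spans the full window, with any dirty batch resetting the clock. The batch-log shape is an assumption for illustration; only the 15-minute default comes from the text.

```python
def recovered(batch_log, stable_window_s=15 * 60):
    """batch_log: list of (timestamp_s, clean: bool), oldest first.
    True once an unbroken trailing run of clean batches spans the window."""
    run_start = None
    last_ts = None
    for ts, clean in batch_log:
        if clean:
            run_start = ts if run_start is None else run_start
            last_ts = ts
        else:
            run_start = None   # any dirty batch resets the stable window
            last_ts = None
    return run_start is not None and (last_ts - run_start) >= stable_window_s

log = [(0, True), (300, False), (600, True), (900, True), (1500, True)]
ok = recovered(log)   # True: the clean run from 600 s to 1500 s spans 900 s
```

Only after this gate passes are the stale and unknown statuses cleared on the next evidence cycle; the frozen bundle and incident timeline are retained regardless.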

Backdating is prohibited. If an auditor asks about the gap, you can produce the full incident record and the frozen evidence bundle.

9. Human-in-the-loop controls

Every decision that alters the customer relationship during a telemetry outage requires a human executive approval at ElasticD3M: opening a Tier 4 bridge SOW, extending an existing bridge service, waiving or modifying SLA timers, declaring an incident resolved, or granting any form of courtesy credit.

Automated agents handle detection, alerting, freeze, retry, and Tier 1–3 remediation without human involvement. Anything that changes contracts, pricing, or the compliance posture of record requires a named human with authority to approve it. This is a deliberate design choice and is not negotiable per the ElasticD3M operating principle that an executive always sits in the loop on decisions of record.

10. Testing and drills

Results of drills are recorded in the ElasticD3M ops journal and are available to any customer during their next business review.

11. Cross-references

12. Change control

This document is versioned in the vCISO LP project. Any change to detection timers, response tiers, or bridge service definitions must be reflected in both this page and the corresponding Terms of Use section in the same release. The legal and operational surfaces stay in sync by design.

13. Contact

Questions about this plan or about telemetry continuity for your environment? Contact us at support@ai4ciso.ai.