Reference Document

Governance & Audit

How Semper Signum stays auditable, rollback-capable, and compliance-ready at institutional scale.

TL;DR for compliance reviewers
  1. Every output traces to a source filing. No claim exists without provenance.
  2. Model calls are pinned to exact IDs and hashed prompts. Every stage is reproducible.
  3. Three hard-coded conditions halt publication for human review. The system cannot override them.

Need the full detail? Request our compliance packet: MNPI policy, vendor risk questionnaire, and sample audit exports.

01 — Data Lineage

Every output traces back to a filing

Pipeline: Source Filings (10-K, 10-Q, 8-K) → Extraction (pinned prompt + model) → Evaluators (score + threshold, rollback loop) → Published Report (signed artifact), with an immutable per-stage Audit Log.

Each report begins with source filings pulled directly from EDGAR and operator-submitted materials. Extraction runs against a pinned model and prompt hash. Every downstream stage is gated by an evaluator that scores output against the source. If the score falls below threshold, the stage rolls back and re-runs before publication. The audit log captures every hop, so any figure in a published report resolves back to the filing passage, the model, and the prompt that produced it.
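
A minimal sketch of what a per-stage lineage record could look like. The LineageRecord type, its field names, and the accession number are illustrative, chosen to mirror the audit rows in section 03, not our production schema.

# Illustrative per-stage lineage record -- all names are hypothetical.
import hashlib
from dataclasses import dataclass

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

@dataclass(frozen=True)
class LineageRecord:
    filing_accession: str   # EDGAR accession number of the source filing
    passage_span: tuple     # (start, end) character offsets in that filing
    model_id: str           # exact pinned model identifier, never an alias
    prompt_hash: str        # SHA-256 of the rendered prompt
    output_hash: str        # SHA-256 of the stage output

# Any published figure carries a record like this, so a reviewer can resolve
# the figure back to the filing passage, model, and prompt that produced it.
record = LineageRecord(
    filing_accession="0000019617-26-000042",   # hypothetical accession number
    passage_span=(48210, 48844),
    model_id="anthropic/claude-opus-4.6-20260201",
    prompt_hash=sha256("rendered prompt text"),
    output_hash=sha256("stage output text"),
)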

02 — Model & Prompt Versioning

Every call is pinned, every prompt is hashed

No call in the pipeline runs against a floating model alias. Each stage records the exact model identifier, provider, provider-side revision, sampling parameters, and a SHA-256 hash of the rendered prompt. Re-running any stage six months later against the same inputs, same model ID, and same prompt hash reproduces the reasoning trace within sampling variance. This is the property compliance needs: reproducibility under examination.
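
The pinning discipline reduces to a few lines. A sketch assuming a simple dict record; pin_call and same_call are illustrative names, not our API.

import hashlib

def pin_call(model_id: str, params: dict, rendered_prompt: str) -> dict:
    # Record the exact model ID, sampling parameters, and prompt hash.
    return {
        "model_id": model_id,     # pinned identifier, never a floating alias
        "params": params,         # temperature, top_p, max_tokens, ...
        "prompt_hash": hashlib.sha256(
            rendered_prompt.encode("utf-8")).hexdigest(),
    }

def same_call(pin: dict, model_id: str, rendered_prompt: str) -> bool:
    # A replay counts as the same call only if both the model ID and the
    # prompt hash match the pinned record.
    prompt_hash = hashlib.sha256(rendered_prompt.encode("utf-8")).hexdigest()
    return pin["model_id"] == model_id and pin["prompt_hash"] == prompt_hash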

The provider queue runs in documented priority order, and customers can pin or substitute any entry with their own API keys. The default order is Bailian Qwen 3.5-Plus first, Fucheers Claude Opus 4.6 second, Fucheers Claude Sonnet 4.6 third. Failover fires on rate limit, provider error, or evaluator-flagged drift on the primary. Every failover is logged with the trigger condition, so a compliance reviewer can see when a secondary model produced a given passage and why the primary was skipped. Customers with internal model preferences or regulated-provider requirements reconfigure the queue at deployment time.
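
The failover behavior is a short loop over the queue. A minimal sketch, assuming hypothetical exception classes and a call_model hook; the model ID strings mirror the default queue above but are written in an illustrative format.

class RateLimitError(Exception): pass
class ProviderError(Exception): pass

PROVIDER_QUEUE = [
    "bailian/qwen-3.5-plus",        # primary
    "fucheers/claude-opus-4.6",     # first fallback
    "fucheers/claude-sonnet-4.6",   # second fallback
]

def run_with_failover(prompt, call_model, audit_log):
    # (Evaluator-flagged drift also triggers failover; omitted for brevity.)
    for model_id in PROVIDER_QUEUE:
        try:
            return model_id, call_model(model_id, prompt)
        except (RateLimitError, ProviderError) as exc:
            # Each failover is logged with its trigger condition, so a
            # reviewer can see why this provider was skipped.
            audit_log.append({"model_id": model_id, "action": "FAILOVER",
                              "trigger": type(exc).__name__})
    raise RuntimeError("queue exhausted; escalate to a human reviewer")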

03 — Audit Trail Schema

One row per model call, stored immutably

timestamp        : 2026-04-14T18:22:41.318Z
pipeline_id      : jpm-2026q1-a31f
stage            : valuation.dcf.assumptions
model_id         : anthropic/claude-opus-4.6-20260201
prompt_hash      : 7b4e9c02f1ad3c88e0b1d9a4f6c2e5718a4d0b93
input_hash       : d29a14c67f83b0e5a1c4e2f8d9b0a7c36e8f2b41
output_hash      : 3f8e2a91b4d7c06a8e5f1b9d2c4a7e0f6b3d8c52
eval_score       : 0.41
eval_threshold   : 0.70
action           : ROLLBACK
escalation       : none

timestamp        : 2026-04-14T18:23:09.772Z
pipeline_id      : jpm-2026q1-a31f
stage            : valuation.dcf.assumptions
model_id         : anthropic/claude-sonnet-4.6-20260201
prompt_hash      : 7b4e9c02f1ad3c88e0b1d9a4f6c2e5718a4d0b93
input_hash       : d29a14c67f83b0e5a1c4e2f8d9b0a7c36e8f2b41
output_hash      : a91f4c38b2e6d5078c1f9a3e4b2d6c8f0a1b5d73
eval_score       : 0.88
eval_threshold   : 0.70
action           : PASS
escalation       : none

Every row is appended, never mutated. Compliance can reconstruct any published figure by replaying the stages that fed it, compare evaluator scores across time to detect model-quality regressions, and prove that any ROLLBACK or RETRY reached a passing state before the report shipped. Exports are available as Parquet or JSONL with SHA-256 chained row hashes, so tampering after the fact is detectable at the schema level.
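
A sketch of how chained row hashes make tampering detectable; chain_hash and verify_export are illustrative names, and the all-zeros genesis value is an assumption.

import hashlib, json

def chain_hash(prev_hash: str, row: dict) -> str:
    # Each row is hashed together with the previous row's hash, so editing
    # any row breaks every hash that follows it.
    payload = prev_hash + json.dumps(row, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def verify_export(rows: list, row_hashes: list) -> bool:
    prev = "0" * 64                          # assumed genesis value
    for row, expected in zip(rows, row_hashes):
        if chain_hash(prev, row) != expected:
            return False                     # tampering detected at this row
        prev = expected
    return True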

04 — Human-in-the-Loop Gating

Three conditions that require a reviewer before publication

The system is designed to publish autonomously on clean runs. It is also designed to stop and escalate the moment output crosses a risk threshold. Three gates are hard-coded and cannot be overridden by the model itself; each is restated as a predicate in the sketch after the list.

  1. Fair value deviates more than 40% from consensus. When a valuation stage produces a fair value that falls outside a 40% band around the consensus estimate range, the pipeline halts. A human reviewer either signs off on the divergence with a written rationale or sends the stage back with instructions.
  2. Evaluator score remains below threshold after two retries. Any valuation stage that fails its evaluator twice in a row is escalated. The system does not keep retrying into a stall. Two strikes and a human reviews the failure pattern before any further attempt.
  3. Stage output contradicts a cited SEC filing passage. A dedicated contradiction checker compares each generated claim against the filing passage it cites. Any contradiction flagged above confidence threshold halts publication and routes the stage to a reviewer, with both the claim and the cited passage surfaced side by side.
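
A minimal sketch of the three gates as predicates. Thresholds match the defaults stated above where given; the contradiction-confidence cutoff and the point-estimate treatment of consensus are assumptions for illustration.

def deviates_from_consensus(fair_value: float, consensus: float) -> bool:
    # Gate 1: halt when fair value falls outside a 40% band around consensus
    # (simplified here to a point estimate rather than a range).
    return abs(fair_value - consensus) / consensus > 0.40

def evaluator_exhausted(scores: list, threshold: float = 0.70) -> bool:
    # Gate 2: two consecutive sub-threshold evaluator scores escalate.
    return len(scores) >= 2 and all(s < threshold for s in scores[-2:])

def contradiction_flagged(confidence: float, cutoff: float = 0.90) -> bool:
    # Gate 3: a contradiction flagged above the confidence cutoff halts
    # publication. (The 0.90 cutoff is an assumed default.)
    return confidence > cutoff
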
05 — Rollback and Recovery

Failed stages re-run; failed pipelines escalate

When an evaluator returns a score below threshold, the stage is rolled back. Re-execution uses a different model from the failover queue, a revised prompt, or both, depending on the failure signature. Every attempt is written to the audit log with its evaluator score, so the trail shows each version the system produced and which one advanced.

Retry budget is bounded. After the configured maximum retries (default three), the pipeline stops and escalates to a human reviewer. The report does not publish. It sits in a quarantine state until the reviewer clears it, overrides it with a written note, or marks it for manual completion. This is how the system refuses to ship a report it cannot stand behind, without silently discarding the work done up to that point.
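
A sketch of the bounded retry loop under these rules; run, evaluate, revise, and audit are hypothetical hooks standing in for the real stage machinery.

def execute_stage(run, evaluate, revise, audit,
                  threshold: float = 0.70, max_retries: int = 3):
    config = None                           # revision carried between attempts
    for attempt in range(max_retries + 1):  # one initial try plus the retries
        output = run(config)
        score = evaluate(output)
        audit(attempt, score, "PASS" if score >= threshold else "ROLLBACK")
        if score >= threshold:
            return ("PUBLISHED", output)
        config = revise(config)             # new model, new prompt, or both
    return ("QUARANTINED", None)            # held until a reviewer clears it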

06 — Transparent Limits

What we do not do

  • No MNPI. We do not ingest material non-public information. All research is produced from public sources, client-provided materials, or data the client has licensed and authorized for use. Source provenance is logged per report.
  • No PII or client-portfolio data. Customer data, retail-client information, and portfolio holdings never enter the pipeline.
  • No training on customer data. Every customer prompt is scoped to a single pipeline run and discarded after report generation. Nothing from a customer session feeds model fine-tuning, evaluator tuning, or prompt libraries.
  • No unsupported private-company financial claims. Private-company research is available when based on public filings, client-provided documents, or permissioned data with clear source provenance and client approval. We do not fabricate or scrape private financials from undisclosed sources.
  • No guarantee of 100% accuracy. We guarantee every output is traceable and every failure is catalogued. Accuracy comes from evaluators catching regressions, not from marketing language.
  • No shipping past failed evaluators. A report either publishes clean or sits in quarantine for human review. The system has no override path that bypasses final-stage evaluation.

07 — Security Posture

Controls, retention, and deployment

Deployment: Hybrid. Orchestration runs in your VPC or ours, customer-configurable.
Model providers: Bailian, Anthropic. Customers can pin or substitute via their own API keys.
Data retention: 90 days on logs; 0 days on source material after report generation. Both configurable.
Encryption: TLS 1.3 in transit; AES-256 at rest on audit logs and artifacts.
Access control: SSO via customer IdP (SAML 2.0, OIDC); role-based scoping at pipeline and report level.
SOC 2: On the 2026 roadmap. Interim controls documented and auditable on request.
Deletion SLA: 7 business days for full export; 30 days for full deletion on cancellation.
Incident response: Named point of contact; disclosure within 72 hours of a confirmed material incident.
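
A hypothetical deployment configuration mirroring the defaults above; the key names are illustrative, not our actual config schema.

DEFAULT_DEPLOYMENT = {
    "orchestration_vpc": "customer",          # or "vendor"; hybrid deployment
    "retention_days": {"audit_logs": 90,      # configurable
                       "source_material": 0}, # purged after report generation
    "encryption": {"in_transit": "TLS 1.3", "at_rest": "AES-256"},
    "sso": {"protocols": ["SAML 2.0", "OIDC"], "rbac": True},
}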

Ready for a security review packet?

We share a full controls document, sample audit exports, and a redacted incident-response runbook with qualified institutional reviewers under NDA.