
Documentation Index

Fetch the complete documentation index at: https://docs.ayliea.com/llms.txt

Use this file to discover all available pages before exploring further.
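An llms.txt index is a plain markdown file of links, so discovering pages is a one-regex job. A minimal sketch, assuming a standard markdown-link layout; the sample content and the `list_pages` helper are illustrative, not part of the Ayliea docs:

```python
import re

def list_pages(llms_txt: str) -> list[str]:
    """Extract page URLs from an llms.txt-style markdown index."""
    return re.findall(r"\[.*?\]\((\S+?)\)", llms_txt)

# Hypothetical index content for illustration only.
sample = (
    "# Ayliea Docs\n"
    "- [AISS overview](https://docs.ayliea.com/aiss)\n"
    "- [Glass-Box scoring](https://docs.ayliea.com/aiss/scoring)\n"
)
print(list_pages(sample))
# → ['https://docs.ayliea.com/aiss', 'https://docs.ayliea.com/aiss/scoring']
```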

AI Security Assessment (AISS)

AISS — the Ayliea AI Security Standard — is the open methodology behind every AI Security assessment on the platform. The full specification is published at github.com/Ayliea/aiss under the Creative Commons Attribution 4.0 license, so you can verify any score from the spec, fork the standard for your own internal use, or propose changes through the public RFC process. The current production version is v1.2.3: 10 control domains, 56 sub-controls, and 9 framework crosswalks to NIST CSF 2.0, ISO/IEC 27001:2022, NIST AI RMF, NIST AI 600-1, CIS Controls v8.1, EU AI Act, Colorado AI Act, OWASP LLM Top 10, and MITRE ATLAS.
AISS is the only framework available on the Free tier, so any organization can run their first assessment without payment or sales contact.

Why an open standard

Most AI security scoring is opaque — proprietary algorithms with no way for an auditor to reproduce the math. AISS takes the opposite position: every score is fully derivable from the answers you provide and the published spec. Anyone with the standard and your answers can recompute your score line-by-line. That transparency matters in three places:
  • In the audit room — hand auditors the published JSON spec; they verify framework crosswalks, scoring rubric, and sub-control citations against their own reference frameworks.
  • In the boardroom — board-ready reports cite a standard your board’s outside counsel can read directly.
  • In your security program — fork AISS internally, extend it for your environment, or propose upstream changes through the public RFC process.
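The line-by-line recomputation claim can be sketched in a few lines of Python. The spec schema below (`sub_controls`, `weight`, `options` fields) is a hypothetical stand-in, not the actual Ayliea/aiss JSON layout; the point is only that the score is earned points over maximum points, with no hidden step:

```python
def recompute_score(spec: dict, answers: dict) -> float:
    """Return the percentage score implied by `answers` under `spec`.

    Field names here are illustrative assumptions, not the real schema.
    """
    earned = 0.0
    maximum = 0.0
    for control in spec["sub_controls"]:
        weight = control["weight"]
        # Maximum assumes the best-scoring option for every sub-control.
        maximum += weight * max(control["options"].values())
        answer = answers.get(control["id"])
        if answer is not None:
            earned += weight * control["options"][answer]
    return 100.0 * earned / maximum

# Toy spec with two sub-controls; the real standard has 56.
spec = {
    "sub_controls": [
        {"id": "AC-1.1", "weight": 2, "options": {"no": 0, "partial": 1, "yes": 2}},
        {"id": "AC-1.2", "weight": 1, "options": {"no": 0, "yes": 2}},
    ]
}
answers = {"AC-1.1": "yes", "AC-1.2": "no"}
print(round(recompute_score(spec, answers), 2))  # → 66.67
```

An auditor running this against the published spec and your recorded answers should land on exactly the score the platform reports.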

The 10 control domains

The framework spans 87 questions across 10 AI-specific control domains. Each domain maps to one or more clauses in the source frameworks above.
  • Organizational policies, roles, and accountability structures for AI adoption and oversight. Includes acceptable-use policies, executive AI literacy, AI ethics committees, and the connection between AI use and the broader risk-management program.
  • Discovery, inventory, classification, and lifecycle management for every AI tool, model, agent, and integration in your environment. Covers AI development tool governance (Copilot, Cursor, etc.) and shadow-AI detection.
  • Controls for data flowing to, from, and within AI systems. Classification, encryption, minimization, retention, RAG and vector-store security, and synthetic-content provenance.
  • Authentication, authorization, privileged-access management, and least-privilege access to AI tools and services. Covers human-to-AI, AI-to-system, and agent-to-agent authorization.
  • Vendor assessment, contract review, model provenance, and continuity planning for third-party AI services. Includes assessing model origin, training-data lineage, and sub-processor exposure.
  • Human oversight, accuracy monitoring, bias detection, content disclosure, automated quality controls, and synthetic-content marking (per EU AI Act Article 50 and similar deepfake-disclosure laws).
  • Incident planning, reporting, classification, containment, post-incident review, and regulatory notification for AI-specific events — including prompt-injection attacks, data leakage via AI output, model compromise, and agent guardrail failures.
  • Usage logging, anomaly detection, prompt/response logging, performance monitoring, and log integrity for AI systems. Connects AI activity to your existing SIEM and incident-response workflows.
  • Security awareness programs, role-based training, acceptable-use communication, threat awareness (phishing-via-AI, social engineering of AI agents), and executive AI literacy.
  • Model access controls, adversarial testing, versioning, integrity verification, prompt-injection defense, and agentic AI guardrails (per MITRE ATLAS T0080–T0112 and OWASP LLM06 Excessive Agency).

Glass-Box Score breakdown

For every AISS assessment, the results page includes a per-domain expandable breakdown. Each AC-N panel shows the questions answered, weight per question, points earned vs. maximum, MITRE ATLAS technique mappings, framework crosswalks, and a deep-link to the matching domain markdown in the public Ayliea/aiss repository. Any auditor can hand-calculate the score from the published spec plus your answers — no proprietary algorithm involved. A separate AISS Coverage page at /aiss rolls all 10 domains into a single posture grid so you can see your overall AI security posture in one view, with each domain card linking directly to the corresponding spec page.
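Rolling per-question results into per-domain panels (and from there into the posture grid) is a plain aggregation. A hedged sketch, assuming a flat list of question rows with hypothetical `domain`, `points`, and `max_points` fields; this is not the actual results-page data model:

```python
from collections import defaultdict

def domain_breakdown(rows: list[dict]) -> dict:
    """Roll flat question results into per-domain earned/max totals."""
    totals = defaultdict(lambda: {"earned": 0, "max": 0})
    for row in rows:
        d = totals[row["domain"]]
        d["earned"] += row["points"]
        d["max"] += row["max_points"]
    return dict(totals)

# Three answered questions across two domains, for illustration.
rows = [
    {"domain": "AC-1", "points": 3, "max_points": 4},
    {"domain": "AC-1", "points": 2, "max_points": 2},
    {"domain": "AC-2", "points": 1, "max_points": 4},
]
print(domain_breakdown(rows))
# → {'AC-1': {'earned': 5, 'max': 6}, 'AC-2': {'earned': 1, 'max': 4}}
```

Summing the per-domain `earned` and `max` values reproduces the overall score, which is what makes the hand-check possible.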

Propose a change

If a question or scoring rubric doesn’t match your environment’s reality, every question row in the Glass-Box drilldown has a pencil icon that opens a pre-filled GitHub issue against Ayliea/aiss. The RFC issue template captures the control ID, question text, your selected option, and points earned automatically — practitioners shape the standard through the same public process they use to consume it.
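One plausible mechanism for the pencil icon is GitHub's standard `/issues/new` URL with `title` and `body` query parameters. The helper below is an illustrative sketch of that pattern, not Ayliea's actual implementation, and the field layout in the body is an assumption:

```python
from urllib.parse import urlencode

def rfc_issue_url(control_id: str, question: str, selected: str, points: int) -> str:
    """Build a pre-filled GitHub issue URL for an AISS RFC.

    Uses GitHub's documented /issues/new?title=...&body=... pre-fill
    parameters; the body layout is a hypothetical example.
    """
    body = (
        f"**Control:** {control_id}\n"
        f"**Question:** {question}\n"
        f"**Selected option:** {selected}\n"
        f"**Points earned:** {points}\n"
    )
    params = urlencode({"title": f"RFC: {control_id} scoring feedback", "body": body})
    return f"https://github.com/Ayliea/aiss/issues/new?{params}"

print(rfc_issue_url("AC-4.2", "Is agent-to-agent authorization scoped?", "partial", 1))
```

Opening the printed URL in a browser lands on a new-issue form with the title and body already filled in.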