Risk Classification is available on the Pro plan and above. AI-assisted narrative analysis requires the Business plan.
Risk classification assigns a formal risk tier to each system in your registry. This determines what governance requirements apply — what oversight controls, documentation, and human review processes you need to have in place. Classification runs against two complementary frameworks: the EU AI Act and the NIST AI RMF.

Auto-classification

When you open the classification view for a registered system, Ayliea analyzes the use cases and data flows you provided during registration and suggests a risk tier automatically. The suggestion is based on the categories of use, the type of decisions the system influences, and the population it affects. Auto-classification is a starting point, not a final determination. Review the suggestion and accept it or override it with your own assessment.
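The suggestion logic described above can be pictured as a simple mapping from use-case categories to a tier. This is a hypothetical sketch for illustration only; Ayliea's actual heuristics are not documented here, and the category names below are assumptions.

```python
# Hypothetical sketch of tier suggestion from registration data.
# Category names are illustrative, not Ayliea's real taxonomy.

HIGH_RISK_CATEGORIES = {"employment", "essential_services",
                        "law_enforcement", "education"}
TRANSPARENCY_CATEGORIES = {"chatbot", "content_generation"}

def suggest_tier(use_case_categories: set) -> str:
    """Return a suggested EU AI Act tier for a set of use-case categories."""
    if use_case_categories & HIGH_RISK_CATEGORIES:
        return "high"
    if use_case_categories & TRANSPARENCY_CATEGORIES:
        return "limited"
    return "minimal"
```

However the real heuristics are weighted, the output is only a starting point: the reviewer still accepts or overrides the suggestion.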

EU AI Act tiers

The EU AI Act organizes AI systems into four tiers based on the risk they pose:
  • Unacceptable risk: Systems that pose a clear threat to fundamental rights. These are prohibited under the EU AI Act.
  • High risk: Systems used in critical areas such as employment, essential services, law enforcement, or education. Subject to the most extensive compliance requirements.
  • Limited risk: Systems with specific transparency obligations — for example, chatbots that interact with people must disclose that the user is talking to an AI.
  • Minimal risk: Systems that pose little or no risk. Most AI tools fall here. No mandatory compliance obligations apply.

NIST AI RMF assessment

Alongside the EU AI Act classification, you can record an impact and likelihood assessment aligned to the NIST AI Risk Management Framework. This gives you a two-dimensional view of risk: how severe the potential harm is, and how likely it is to occur. The NIST assessment does not produce a pass or fail result — it produces a risk profile that informs your governance priorities.
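A two-dimensional impact-and-likelihood assessment is commonly combined into a coarse risk profile with a simple matrix. The sketch below is an assumption about one way to do that, not Ayliea's scoring model; the level names and thresholds are illustrative.

```python
from enum import IntEnum

class Level(IntEnum):
    """Illustrative three-point scale for both impact and likelihood."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def risk_profile(impact: Level, likelihood: Level) -> str:
    """Combine impact and likelihood into a coarse profile label.

    Hypothetical thresholds: the product of the two levels (1..9)
    is bucketed into low / moderate / elevated.
    """
    score = impact * likelihood
    if score >= 6:
        return "elevated"
    if score >= 3:
        return "moderate"
    return "low"
```

Note that the result is a profile, not a pass/fail verdict, which matches how the NIST assessment is used here: it prioritizes governance work rather than gating deployment.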

Accepting or overriding the auto-classification

After reviewing the suggested tier, select Accept to confirm it, or select Override to record a different tier. Overrides require a written rationale, which is saved with the classification record. This rationale is available during audits to explain why your team chose a different tier than the system suggested.
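The rule above — an override is only valid with a written rationale — can be sketched as an immutable record plus a validation step. This is a hypothetical data shape for illustration; the field names are assumptions, not Ayliea's schema.

```python
from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ClassificationRecord:
    """One classification decision, kept immutable for audit purposes.

    Hypothetical shape -- not Ayliea's actual record schema.
    """
    system_id: str
    suggested_tier: str
    final_tier: str
    rationale: str | None  # required whenever final_tier differs
    recorded_at: datetime

def record_decision(system_id: str, suggested: str, chosen: str,
                    rationale: str | None = None) -> ClassificationRecord:
    """Validate and store a decision: overrides must carry a rationale."""
    if chosen != suggested and not rationale:
        raise ValueError("an override requires a written rationale")
    return ClassificationRecord(system_id, suggested, chosen, rationale,
                                datetime.now(timezone.utc))
```

The frozen dataclass mirrors the audit requirement: once saved, the decision and its rationale travel together and cannot be silently amended.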

Governance requirements

Once a system is classified, the governance requirements triggered by that classification are listed on the system detail view. For example, a High-risk system under the EU AI Act will show requirements such as:
  • Conformity assessment before deployment
  • Human oversight mechanism documentation
  • Post-market monitoring plan
  • Incident reporting obligations
These requirements are informational — they describe what your organization needs to have in place, and you track completion status directly on the system record.

AI-assisted analysis

On the Business plan, you can generate a narrative risk report for any classified system. The report summarizes the classification rationale, the applicable governance requirements, and recommended next steps in plain language — useful for sharing with non-technical stakeholders or for inclusion in audit documentation.

Framework selection

From the Classification settings, you can configure which frameworks apply to your organization. If your organization is not subject to the EU AI Act, you can disable that framework and rely solely on NIST AI RMF, or vice versa.

Classification history

Every classification event — including the original auto-suggestion, any overrides, and changes over time — is recorded in an append-only history log on the system record. This log cannot be edited or deleted, providing an accurate audit trail of how each system’s risk profile has changed.
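An append-only log like the one described can be sketched as a container that exposes appends and read-only views but no edit or delete operations. This is a minimal illustrative structure, assuming nothing about Ayliea's storage layer.

```python
class AppendOnlyLog:
    """Minimal append-only event log: entries can be added, never
    changed or removed. Illustrative only -- not Ayliea's implementation."""

    def __init__(self) -> None:
        self._entries: list = []

    def append(self, event: dict) -> None:
        # Store a copy so callers cannot mutate an entry after the fact.
        self._entries.append(dict(event))

    def entries(self) -> tuple:
        # Return an immutable view; there is deliberately no way to
        # replace or delete an existing entry.
        return tuple(self._entries)
```

Exposing only `append` and a read-only view is what makes the history trustworthy in an audit: every suggestion, acceptance, and override remains visible in the order it happened.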