
Artificial Intelligence Policy

Document Type: Policy
Policy Number: POL-AI-001
Version: 1.0
Effective Date: 2025-08-01
Next Review Date: 2026-08-01
Review Frequency: Annually
Owner: Jane Doe, Chief Strategy & Risk Officer (CSRO)
Sponsor: John Smith, Chief Executive Officer (CEO)
Approved By: Board of Directors
Status: Draft
Distribution: Published on Corporate Policy Portal - Accessible to all employees


DOCUMENT CONTROL

Element | Details
------- | -------
Policy Title | Artificial Intelligence Policy
Policy Number | POL-AI-001
Version | 1.0
Effective Date | 2025-08-01
Next Review Date | 2026-08-01
Review Frequency | Annually
Owner | Jane Doe, Chief Strategy & Risk Officer (CSRO)
Sponsor | John Smith, Chief Executive Officer (CEO)
Approved By | Board of Directors
Distribution | Corporate Policy Portal (all employees)
Classification | Internal Use Only

1. PURPOSE

This policy establishes the organization's overarching framework for the responsible development, deployment, and operation of Artificial Intelligence (AI) systems in compliance with the EU AI Act and aligned with the organization's values, risk appetite, and strategic objectives.


2. SCOPE

2.1 Applicability

This policy applies to:

  • All AI systems developed, deployed, or operated by the organization
  • All employees, contractors, third parties, and business partners involved in AI activities
  • All business units, functions, and geographic locations
  • Both product AI systems (sold to customers) and internal AI systems

2.2 AI Systems Covered

In Scope:

  • High-risk AI systems (as defined by EU AI Act Annex III)
  • Limited-risk AI systems with transparency obligations
  • Minimal-risk AI systems
  • General-purpose AI models
  • AI systems developed in-house
  • AI systems procured from third parties

Out of Scope:

  • Prohibited AI practices (not permitted under any circumstances)

3. POLICY STATEMENT

The organization is committed to:

  1. Compliance: Ensuring full compliance with the EU AI Act and all applicable AI regulations

  2. Responsible AI: Developing and deploying AI systems that are safe, transparent, fair, accountable, and respect human rights

  3. Risk Management: Implementing comprehensive risk management throughout the AI system lifecycle

  4. Human Oversight: Maintaining appropriate human oversight of AI systems, particularly high-risk systems

  5. Transparency: Being transparent about AI use with customers, employees, and stakeholders

  6. Continuous Improvement: Continuously monitoring, evaluating, and improving AI systems

  7. Ethical Use: Using AI in alignment with organizational values and ethical principles


4. GOVERNANCE STRUCTURE

4.1 AI Governance Committee

Composition:

  • Chief Strategy & Risk Officer (CSRO) - Chair
  • Chief Technology Officer (CTO)
  • Chief Data Officer (CDO)
  • Chief Legal Officer (CLO)
  • Product Directors
  • AI Act Program Manager

Responsibilities:

  • Owns this policy and all supporting standards
  • Approves AI strategy and roadmap
  • Reviews and approves high-risk AI systems
  • Monitors AI Act compliance
  • Escalates critical issues to Executive Committee / Board

Meeting Cadence: Monthly

4.2 AI Act Program Manager

Responsibilities:

  • Implements this policy and supporting standards
  • Coordinates AI Act compliance activities
  • Maintains AI system inventory
  • Reports compliance status to AI Governance Committee
  • Manages AI Act compliance program

4.3 Business Unit Accountability

Each business unit developing or deploying AI systems is accountable for:

  • Compliance with this policy and supporting standards
  • Risk management for their AI systems
  • Resource allocation for AI compliance
  • Escalation of issues and risks

5. POLICY REQUIREMENTS

The organization shall comply with the following requirements:

5.1 AI System Classification

Requirement: All AI systems must be classified according to EU AI Act risk categories

Supporting Standard: STD-AI-001 - AI System Classification Standard

Key Activities:

  • Assess against prohibited practices (Article 5)
  • Assess against Annex III high-risk categories
  • Determine transparency obligations
  • Document classification decision
  • Obtain legal review and approval
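The assessment sequence above can be sketched in code. This is a minimal illustration only: the function and category names are hypothetical, and it deliberately omits the Article 6 derogation analysis and the legal-review step, both of which require human judgment.

```python
# Hypothetical sketch of the 5.1 classification sequence; not a
# substitute for legal review or the full Article 6 assessment.

def classify_ai_system(prohibited_practice: bool,
                       annex_iii_match: bool,
                       transparency_obligation: bool) -> str:
    """Map assessment answers to an EU AI Act risk category."""
    if prohibited_practice:        # Article 5: checked first, no exceptions
        return "prohibited"
    if annex_iii_match:            # Annex III high-risk categories
        return "high-risk"
    if transparency_obligation:    # Article 50 transparency duties
        return "limited-risk"
    return "minimal-risk"
```

A prohibited finding short-circuits all other checks, mirroring the escalation rule in Section 6.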

5.2 AI Risk Management

Requirement: Establish and maintain a comprehensive AI risk management system throughout the AI system lifecycle

Supporting Standard: STD-AI-002 - AI Risk Management Standard

Key Activities:

  • Identify AI-related risks (bias, safety, security, privacy, etc.)
  • Assess risk likelihood and impact
  • Implement risk mitigation measures
  • Monitor risks continuously
  • Maintain AI risk register

Applicable To: All AI systems, mandatory for high-risk systems (Article 9)
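One way to make the "assess risk likelihood and impact" activity concrete is a scored risk-register entry. The sketch below uses a hypothetical 1-5 likelihood x impact scale; the field names and scale are illustrative assumptions, not requirements of STD-AI-002.

```python
# Illustrative risk-register entry; the 1-5 scales and field names
# are assumptions, not mandated by this policy or STD-AI-002.
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    system_id: str
    category: str      # e.g. "bias", "safety", "security", "privacy"
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        """Simple likelihood x impact product, range 1-25."""
        return self.likelihood * self.impact

entry = AIRiskEntry("AI-042", "bias", 4, 3, "Rebalance training data")
```

Entries like this can be sorted by `score` to prioritize mitigation work in the AI risk register.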


5.3 Data Governance

Requirement: Ensure AI training, validation, and testing datasets meet quality, relevance, and representativeness standards

Supporting Standard: STD-AI-003 - AI Data Governance Standard

Key Activities:

  • Define data quality requirements
  • Examine datasets for bias
  • Implement bias mitigation measures
  • Document data lineage
  • Maintain data governance records

Applicable To: All AI systems, mandatory for high-risk systems (Article 10)


5.4 Technical Documentation

Requirement: Create and maintain comprehensive technical documentation per EU AI Act Annex IV

Supporting Standard: STD-AI-004 - AI Technical Documentation Standard

Key Activities:

  • Document system design and architecture
  • Document training methodology and data
  • Document testing and validation
  • Document risk management activities
  • Maintain version control

Applicable To: High-risk AI systems (Article 11, Annex IV)


5.5 Record Keeping and Logging

Requirement: Implement automated logging of AI system operations. Logs must be retained for a period appropriate to the system's intended purpose, at minimum six months (Article 19(1)), or longer where organizational requirements or applicable national law dictate

Supporting Standard: STD-AI-005 - AI Logging and Record Keeping Standard

Key Activities:

  • Log all AI system operations
  • Capture input data, output decisions, confidence levels
  • Implement tamper-proof logging
  • Retain logs for a minimum of six months, or longer per organizational or national requirements
  • Enable log searchability and analysis

Applicable To: High-risk AI systems (Article 12)
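The "tamper-proof logging" activity is often implemented as a hash chain, where each entry's hash covers the previous entry's hash, so any retroactive edit breaks verification. The sketch below shows the idea; the record fields and chaining scheme are illustrative, not prescribed by Article 12 or STD-AI-005.

```python
# Illustrative hash-chained (tamper-evident) log; fields and scheme
# are assumptions, not prescribed by Article 12 or STD-AI-005.
import hashlib
import json

def append_log(chain: list, record: dict) -> list:
    """Append a record whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        if entry["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```

In practice the captured record would include the inputs, output decision, and confidence level listed above, and the chain would be anchored in write-once storage.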


5.6 Transparency and Information Provision

Requirement: Provide clear, comprehensive information to deployers and users about AI system capabilities and limitations

Supporting Standard: STD-AI-006 - AI Transparency Standard

Key Activities:

  • Create instructions for use
  • Document system capabilities and limitations
  • Implement explainability features
  • Provide transparency notices
  • Maintain user documentation

Applicable To: High-risk AI systems (Article 13), systems with transparency obligations (Article 50)


5.7 Human Oversight

Requirement: Design AI systems to enable effective human oversight and intervention

Supporting Standard: STD-AI-007 - AI Human Oversight Standard

Key Activities:

  • Define human oversight measures
  • Implement human-in-the-loop or human-on-the-loop controls
  • Train oversight personnel
  • Document oversight activities
  • Monitor oversight effectiveness

Applicable To: High-risk AI systems (Article 14)


5.8 Accuracy, Robustness, and Cybersecurity

Requirement: Ensure AI systems achieve appropriate levels of accuracy, robustness, and cybersecurity

Supporting Standard: STD-AI-008 - AI Accuracy, Robustness & Security Standard

Key Activities:

  • Define accuracy requirements and metrics
  • Test for robustness and resilience
  • Implement cybersecurity controls
  • Conduct adversarial testing
  • Monitor performance continuously

Applicable To: High-risk AI systems (Article 15)


5.9 Quality Management System

Requirement: Establish and maintain a Quality Management System (QMS) for AI systems

Supporting Standard: STD-AI-009 - AI Quality Management Standard

Key Activities:

  • Define quality policies and procedures
  • Implement quality controls throughout lifecycle
  • Conduct internal audits
  • Manage non-conformities and corrective actions
  • Maintain QMS documentation

Applicable To: High-risk AI systems (Article 17)


5.10 Conformity Assessment

Requirement: Undergo conformity assessment before placing high-risk AI systems on the market

Supporting Standard: STD-AI-010 - AI Conformity Assessment Standard

Key Activities:

  • Select conformity assessment procedure (Annex VI or VII)
  • Conduct internal assessment or engage notified body
  • Address non-conformities
  • Issue EU Declaration of Conformity
  • Affix CE marking

Applicable To: High-risk AI systems (Articles 40-49)


5.11 Registration and Notification

Requirement: Register high-risk AI systems in the EU database before market placement

Supporting Standard: STD-AI-011 - AI Registration Standard

Key Activities:

  • Register in EU database (Article 49)
  • Provide required information
  • Update registration for substantial modifications
  • Maintain registration records

Applicable To: High-risk AI systems (Article 49)


5.12 Post-Market Monitoring

Requirement: Establish and maintain a post-market monitoring system for AI systems in operation

Supporting Standard: STD-AI-012 - AI Post-Market Monitoring Standard

Key Activities:

  • Collect and analyze performance data
  • Monitor for issues and incidents
  • Conduct periodic reviews
  • Implement corrective actions
  • Report monitoring results

Applicable To: High-risk AI systems (Article 72)


5.13 Incident Management

Requirement: Report serious incidents and malfunctions to competent authorities

Supporting Standard: STD-AI-013 - AI Incident Management Standard

Key Activities:

  • Define serious incidents
  • Establish incident reporting process
  • Report to competent authorities within 15 days of becoming aware (shorter deadlines apply to certain serious incidents under Article 73)
  • Investigate root causes
  • Implement corrective actions

Applicable To: High-risk AI systems (Article 73)
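A small date calculation illustrates the reporting deadline. This simplifies Article 73, which also sets shorter deadlines for certain incident types, to the single 15-day ceiling cited above; the function names are illustrative.

```python
# Illustrative deadline check for serious-incident reporting.
# Simplification: Article 73 also sets shorter deadlines for some
# incident types; only the 15-day ceiling is modeled here.
from datetime import date, timedelta

def reporting_deadline(awareness_date: date, max_days: int = 15) -> date:
    """Latest permissible reporting date after becoming aware."""
    return awareness_date + timedelta(days=max_days)

def is_overdue(awareness_date: date, today: date) -> bool:
    """True once the reporting window has closed."""
    return today > reporting_deadline(awareness_date)
```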


5.14 AI Literacy

Requirement: Ensure all staff dealing with AI systems have appropriate AI literacy

Supporting Standard: STD-AI-014 - AI Literacy and Training Standard

Key Activities:

  • Provide AI Act compliance training
  • Train on AI risks and limitations
  • Train on human oversight responsibilities
  • Assess training effectiveness
  • Maintain training records

Applicable To: All staff involved with AI systems (Article 4)


6. PROHIBITED AI PRACTICES

The organization strictly prohibits the following AI practices per EU AI Act Article 5:

  1. Subliminal manipulation - AI systems deploying subliminal techniques to materially distort behavior
  2. Exploitation of vulnerabilities - AI systems exploiting vulnerabilities of specific groups
  3. Social scoring - AI systems that evaluate or classify natural persons based on social behavior or personal characteristics, with the social score leading to detrimental or unfavourable treatment
  4. Real-time remote biometric identification in public spaces - For law enforcement (with limited exceptions)
  5. Biometric categorization using sensitive characteristics - Inferring race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation
  6. Emotion recognition in workplace and education - Except for medical or safety reasons
  7. Scraping of facial images - Untargeted scraping from internet or CCTV
  8. Predictive policing based on profiling - AI systems for making risk assessments of natural persons to predict criminal offence risk based solely on profiling or personality traits (with exception for systems supporting human assessment based on objective facts)

Any proposal to develop or deploy an AI system in these categories must be rejected immediately and escalated to Legal and the AI Governance Committee.


7. COMPLIANCE MONITORING

7.1 AI System Inventory

  • Maintain comprehensive inventory of all AI systems
  • Update inventory monthly
  • Include role classification (Provider/Deployer) and risk classification
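An inventory entry carrying the role and risk classifications mentioned above might look like the following; the fields are illustrative, not a required schema.

```python
# Illustrative AI system inventory entry; field names are assumptions,
# not a schema mandated by this policy.
from dataclasses import dataclass

@dataclass(frozen=True)
class InventoryEntry:
    system_id: str
    name: str
    role: str            # "provider" or "deployer"
    risk_category: str   # "prohibited" / "high-risk" / "limited-risk" / "minimal-risk"
    owner: str
    last_updated: str    # ISO date of the last monthly refresh

def high_risk_systems(inventory: list) -> list:
    """Filter the entries that carry the heaviest compliance obligations."""
    return [e for e in inventory if e.risk_category == "high-risk"]
```

Filters like `high_risk_systems` feed the monthly compliance metrics in 7.2.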

7.2 Compliance Metrics

Track and report monthly:

  • Number of AI systems by risk category
  • Compliance status for each AI system
  • Open gaps and remediation status
  • Incidents and serious incidents
  • Training completion rates

7.3 Audits

  • Internal audits: Annually by Internal Audit
  • External audits: As required by conformity assessment
  • Regulatory inspections: As requested by competent authorities

8. ENFORCEMENT

8.1 Non-Compliance

Non-compliance with this policy may result in:

  • Suspension of AI system development or deployment
  • Disciplinary action up to and including termination
  • Regulatory penalties and fines
  • Reputational damage

8.2 Escalation

  • Minor issues: Escalate to AI Act Program Manager
  • Significant issues: Escalate to AI Governance Committee
  • Critical issues: Escalate to Executive Committee / Board
  • Regulatory issues: Escalate to Legal and notify authorities as required

9. EXCEPTIONS

9.1 Exception Process

Exceptions to this policy require:

  1. Written exception request with business justification
  2. Risk assessment of the exception
  3. Compensating controls (if applicable)
  4. Approval by AI Governance Committee
  5. Legal review and approval
  6. Documentation and tracking

9.2 Exception Limitations

Exceptions cannot be granted for:

  • Prohibited AI practices (Article 5)
  • Mandatory requirements for high-risk systems without compensating controls
  • Requirements that would result in regulatory non-compliance

10. POLICY REVIEW AND MAINTENANCE

10.1 Review Frequency

  • Annual review: By AI Governance Committee
  • Ad-hoc review: When regulations change or significant issues arise

10.2 Version Control

  • All versions maintained in policy repository
  • Changes tracked and communicated to stakeholders
  • Training updated when policy changes

11. RELATED DOCUMENTS

11.1 Supporting Standards

Standard ID | Standard Name | Owner
----------- | ------------- | -----
STD-AI-001 | AI System Classification Standard | AI Act Program Manager
STD-AI-002 | AI Risk Management Standard | AI Risk Manager
STD-AI-003 | AI Data Governance Standard | Chief Data Officer
STD-AI-004 | AI Technical Documentation Standard | CTO
STD-AI-005 | AI Logging and Record Keeping Standard | IT Security
STD-AI-006 | AI Transparency Standard | Product Directors
STD-AI-007 | AI Human Oversight Standard | AI Risk Manager
STD-AI-008 | AI Accuracy, Robustness & Security Standard | CTO
STD-AI-009 | AI Quality Management Standard | Quality Director
STD-AI-010 | AI Conformity Assessment Standard | Legal
STD-AI-011 | AI Registration Standard | Legal
STD-AI-012 | AI Post-Market Monitoring Standard | Product Directors
STD-AI-013 | AI Incident Management Standard | AI Risk Manager
STD-AI-014 | AI Literacy and Training Standard | HR Director

11.2 Related Policies

  • Enterprise Risk Management Policy
  • Data Protection and Privacy Policy
  • Information Security Policy
  • Third-Party Risk Management Policy
  • Quality Management Policy

11.3 External References

  • EU AI Act (Regulation (EU) 2024/1689)
  • ISO/IEC 42001 - AI Management System
  • NIST AI Risk Management Framework
  • OECD AI Principles

12. DEFINITIONS

AI System: Machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

High-Risk AI System: AI system listed in Annex III of the EU AI Act or meeting specific criteria defined in Article 6.

Provider: Natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the system into service under its own name or trademark, whether for payment or free of charge.

Deployer: Natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.

Serious Incident: Incident or malfunctioning of an AI system that directly or indirectly leads to death, serious damage to health, serious and irreversible disruption of critical infrastructure, infringements of fundamental rights, or serious damage to property or the environment.


13. EXCEPTIONS PROCESS

13.1 Exception Request Procedure

Exceptions to this policy may be granted only under the following conditions:

Step 1: Exception Request Submission

  • Submit written exception request to Policy Owner (CSRO)
  • Include business justification and risk assessment
  • Propose compensating controls (if applicable)
  • Specify duration of exception (temporary or permanent)

Step 2: Risk Assessment

  • AI Risk Manager assesses risk of granting exception
  • Legal reviews regulatory compliance implications
  • Security reviews security implications (if applicable)

Step 3: Approval

  • Minor exceptions: Approved by Policy Owner (CSRO)
  • Significant exceptions: Approved by AI Governance Committee
  • Critical exceptions: Approved by Board of Directors

Step 4: Documentation and Tracking

  • Document exception in Exception Register
  • Assign exception owner
  • Set review date
  • Track compensating controls
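A minimal Exception Register entry covering Steps 1 through 4 could look like the sketch below; the field names and ISO-date convention are assumptions, not a format this policy mandates.

```python
# Illustrative Exception Register entry; field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class ExceptionRecord:
    exception_id: str
    owner: str
    temporary: bool
    review_date: str                       # expiry date if temporary, else next annual review
    compensating_controls: list = field(default_factory=list)

def due_for_review(record: ExceptionRecord, today: str) -> bool:
    """ISO dates (YYYY-MM-DD) compare correctly as strings."""
    return today >= record.review_date
```

Per 13.3, `due_for_review` would be checked at expiry for temporary exceptions and annually for permanent ones.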

13.2 Exception Limitations

Exceptions cannot be granted for:

  • Prohibited AI practices (Article 5) - No exceptions permitted
  • Mandatory requirements for high-risk systems without compensating controls
  • Requirements that would result in regulatory non-compliance
  • Requirements that would violate fundamental rights

13.3 Exception Review

All exceptions must be reviewed:

  • Temporary exceptions: At expiry date
  • Permanent exceptions: Annually
  • All exceptions: When circumstances change

14. ENFORCEMENT AND DISCIPLINARY PROCESS

14.1 Non-Compliance Consequences

Non-compliance with this policy may result in:

For Individuals:

  • Verbal warning
  • Written warning
  • Performance improvement plan
  • Suspension
  • Termination of employment
  • Legal action (if applicable)

For Business Units:

  • Suspension of AI system development or deployment
  • Mandatory remediation plan
  • Budget restrictions
  • Escalation to Executive Committee

For the Organization:

  • Regulatory penalties and fines: up to EUR 35 million or 7% of global annual turnover (whichever is higher) for prohibited practices; up to EUR 15 million or 3% for other violations; up to EUR 7.5 million or 1% for supplying incorrect information (Article 99). For SMEs, the lower of the two caps applies.
  • Legal liability
  • Reputational damage
  • Loss of customer trust
  • Market access restrictions

14.2 Violation Reporting

Reporting Channels:

  • Direct manager
  • AI Act Program Manager
  • Legal & Compliance
  • Ethics Hotline (anonymous)
  • Whistleblower channel

Protection:

  • No retaliation for good-faith reporting
  • Confidentiality maintained
  • Whistleblower protection per applicable laws

14.3 Investigation Process

  1. Report received - Logged in Incident Register
  2. Initial assessment - Severity and scope determined
  3. Investigation - Facts gathered, evidence collected
  4. Findings - Root cause identified
  5. Disciplinary action - Appropriate consequences applied
  6. Corrective action - Process improvements implemented
  7. Closure - Incident closed and lessons learned documented

15. KEY PERFORMANCE INDICATORS (KPIs)

The following KPIs will be tracked monthly and reported to the AI Governance Committee:

KPI | Target | Measurement | Reporting Frequency
--- | ------ | ----------- | -------------------
AI System Inventory Completeness | 100% | % of AI systems registered | Monthly
Risk Classification Completion | 100% | % of AI systems classified | Monthly
High-Risk System Compliance | 100% | % of high-risk systems compliant | Monthly
Conformity Assessment Completion | 100% | % of high-risk systems assessed before market | Quarterly
EU Database Registration | 100% | % of high-risk systems registered | Quarterly
Serious Incident Reporting | 100% within 15 days | % of incidents reported on time | Monthly
AI Literacy Training Completion | 100% | % of staff trained | Quarterly
Policy Compliance Rate | ≥ 95% | % of audited controls passing | Quarterly
Open Gaps Closure Rate | ≥ 80% | % of gaps closed by target date | Monthly
Prohibited AI Practices | 0 | # of prohibited practices identified | Monthly
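Most KPIs in the table reduce to a percentage checked against a target. A minimal sketch, with illustrative function names:

```python
# Illustrative KPI helpers; function names are assumptions, not part
# of any reporting tool this policy prescribes.

def kpi_percentage(numerator: int, denominator: int) -> float:
    """Return a KPI as a percentage; an empty denominator reads as 0%."""
    return round(100.0 * numerator / denominator, 1) if denominator else 0.0

def meets_target(value: float, target: float) -> bool:
    """True when the measured value is at or above the target."""
    return value >= target
```

For example, 95 of 100 audited controls passing yields 95.0%, which meets the ≥ 95% Policy Compliance Rate target.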

16. DEFINITIONS AND GLOSSARY

AI System: Machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (EU AI Act Article 3(1)).

High-Risk AI System: AI system listed in Annex III of the EU AI Act or meeting specific criteria defined in Article 6.

Provider: Natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the system into service under its own name or trademark, whether for payment or free of charge (EU AI Act Article 3(3)).

Deployer: Natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity (EU AI Act Article 3(4)).

Distributor: Natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market (EU AI Act Article 3(7)).

Importer: Natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country (EU AI Act Article 3(6)).

Serious Incident: Incident or malfunctioning of an AI system that directly or indirectly leads to: (a) death or serious damage to health; (b) serious and irreversible disruption of critical infrastructure; (c) infringements of fundamental rights protected under Union law; (d) serious damage to property or the environment (EU AI Act Article 3(49)).

Substantial Modification: Change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment and affects compliance with the requirements or results in a modification to the intended purpose (EU AI Act Article 3(23)).

Placing on the Market: First making available of an AI system on the Union market (EU AI Act Article 3(9)).

Putting into Service: Supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose (EU AI Act Article 3(11)).

Conformity Assessment: Process of verifying whether the requirements set out in Title III, Chapter 2 of the EU AI Act relating to a high-risk AI system have been fulfilled (EU AI Act Article 3(20)).

CE Marking: Marking by which a provider indicates that an AI system is in conformity with the requirements set out in Title III, Chapter 2 and other applicable Union legislation harmonising the conditions for the marketing of products providing for its affixing (EU AI Act Article 3(24)).

Bias: Systematic difference in treatment of certain objects, people, or groups in comparison to others.

Explainability: Ability to explain the reasoning behind an AI system's outputs or decisions.

Human Oversight: Measures, including technical measures, to ensure that an AI system is effectively overseen by natural persons during the period in which it is in use.

Risk: Combination of the probability of occurrence of harm and the severity of that harm (ISO 31000).

17. REGULATORY AND LEGAL MAPPING

This policy implements requirements from the following regulations and standards:

Regulation / Standard | Articles / Clauses | Requirement
--------------------- | ------------------ | -----------
EU AI Act (Regulation (EU) 2024/1689) | Article 4 | AI literacy
 | Article 5 | Prohibited AI practices
 | Article 6 | Classification of high-risk AI systems
 | Article 9 | Risk management system
 | Article 10 | Data and data governance
 | Article 11 | Technical documentation
 | Article 12 | Record-keeping
 | Article 13 | Transparency and provision of information
 | Article 14 | Human oversight
 | Article 15 | Accuracy, robustness and cybersecurity
 | Article 17 | Quality management system
 | Articles 43-48 | Conformity assessment
 | Article 49 | Registration
 | Article 50 | Transparency obligations
 | Article 72 | Post-market monitoring
 | Article 73 | Reporting of serious incidents
ISO/IEC 42001:2023 | Clauses 4-10 | AI management system
GDPR (Regulation (EU) 2016/679) | Article 22 | Automated decision-making
 | Article 35 | Data protection impact assessment
ISO 31000:2018 | All | Risk management principles
NIST AI Risk Management Framework | All | AI risk management

18. APPROVAL AND AUTHORIZATION

18.1 Approval Signatures

Role | Name | Title | Signature | Date
---- | ---- | ----- | --------- | ----
Prepared By | Sarah Johnson | AI Act Program Manager | _________________ |
Reviewed By | Jane Doe | Chief Strategy & Risk Officer (CSRO) | _________________ |
Reviewed By | Michael Brown | Chief Legal Officer (CLO) | _________________ |
Reviewed By | David Lee | Chief Technology Officer (CTO) | _________________ |
Approved By | John Smith | Chief Executive Officer (CEO) | _________________ |
Ratified By | Board of Directors | - | _________________ |

18.2 Effective Date

This policy becomes effective on 2025-08-01 following Board of Directors ratification.


19. REVISION HISTORY

Version | Date | Author | Changes | Approval Date
------- | ---- | ------ | ------- | -------------
0.1 | 2025-06-01 | AI Act Program Manager | Initial draft | -
0.2 | 2025-06-15 | AI Act Program Manager | Incorporated stakeholder feedback | -
0.3 | 2025-07-01 | AI Act Program Manager | Legal review incorporated | -
1.0 | 2025-08-01 | AI Act Program Manager | Final version approved | 2025-07-25

20. DISTRIBUTION AND ACCESSIBILITY

20.1 Distribution

This policy is distributed to:

  • All employees (via Corporate Policy Portal)
  • All contractors and consultants working with AI systems
  • All business partners involved in AI activities
  • Board of Directors
  • External auditors (upon request)

20.2 Accessibility

Primary Location: Corporate Policy Portal (https://policies.company.com)
Backup Location: SharePoint - Governance & Risk Management folder
Format: PDF and Markdown
Languages: English (primary), other languages as required by local regulations

20.3 Acknowledgment

All employees must acknowledge receipt and understanding of this policy within 30 days of:

  • Policy effective date (for existing employees)
  • Hire date (for new employees)
  • Policy update (for all employees)

Acknowledgment is tracked in the Learning Management System (LMS).


END OF POLICY


APPENDIX A: POLICY IMPLEMENTATION ROADMAP

Phase 1: Foundation (Months 1-3)

  • Establish AI Governance Committee
  • Appoint AI Act Program Manager
  • Create AI system inventory
  • Classify all AI systems
  • Develop supporting standards (STD-AI-001 through STD-AI-014)

Phase 2: Risk Management (Months 4-6)

  • Implement AI risk management framework
  • Conduct risk assessments for all high-risk systems
  • Establish risk register
  • Implement initial risk controls

Phase 3: Compliance Implementation (Months 7-12)

  • Implement technical documentation
  • Implement logging and record keeping
  • Implement human oversight measures
  • Conduct conformity assessments
  • Register high-risk systems in EU database

Phase 4: Continuous Improvement (Ongoing)

  • Post-market monitoring
  • Incident management
  • Continuous risk monitoring
  • Regular audits and reviews
  • Policy and standard updates

This policy provides the overarching framework. Each supporting standard provides detailed requirements, control objectives, control requirements, and procedures.