
AI Policy

Updated On: September 23, 2025

INTRODUCTION

At Cynoia (hereinafter referred to as "the Organization"), we recognize the importance of implementing robust policies and standards to govern the development, deployment, and maintenance of Artificial Intelligence (AI) modules within our Software as a Service (SaaS) products.

SCOPE

This policy applies to all AI modules developed, integrated, or utilized within our SaaS products. It encompasses the entire lifecycle of AI, including design, development, testing, deployment, monitoring, and maintenance.

PRINCIPLES

  • Transparency: The Organization's AI systems and algorithms shall be transparently documented, allowing stakeholders to understand their functions and potential impacts.

  • Accountability: The Organization shall establish clear lines of responsibility for the development, deployment, and maintenance of AI systems, ensuring accountability for outcomes.

  • Fairness and Bias Mitigation: The organization shall actively work to identify and mitigate biases within AI systems to ensure fairness and equity in decision-making processes.

  • Privacy and Data Protection: AI initiatives shall adhere to strict privacy and data protection standards, ensuring user data is handled securely and ethically.

  • Safety and Reliability: The Organization shall prioritize the safety and reliability of AI systems, conducting thorough testing and validation to minimize the risk of errors or failures.

  • Human Oversight: Human oversight and intervention mechanisms shall be integrated into AI systems to monitor performance, address issues, and ensure alignment with ethical and legal standards.

  • Continuous Improvement: The Organization shall promote continuous improvement in AI technologies and practices, fostering innovation while maintaining alignment with ethical principles and regulatory requirements.


AI DEVELOPMENT PROCESS

  • Risk Assessment: Before developing or integrating AI modules, a thorough risk assessment is conducted to identify potential risks related to safety, security, privacy, ethics, and compliance.

  • Data Governance: We adhere to data governance best practices, including data quality assessment, data protection measures, and data usage policies, to ensure the integrity and reliability of AI training data.

  • Model Development: AI models are developed using transparent and interpretable techniques, and efforts are made to mitigate biases and ensure fairness in model outcomes (an illustrative fairness check follows this list).

  • Testing and Validation: Rigorous testing and validation procedures are employed to assess AI systems' performance, reliability, and safety across diverse use cases and user demographics.
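
For illustration only, the sketch below shows one way the fairness checks described above could be implemented: it computes a demographic-parity gap across user groups in a batch of model decisions. The toy data, column names, and the 0.1 tolerance are hypothetical assumptions and do not represent the Organization's actual tooling or thresholds.

```python
# Hypothetical fairness check: demographic-parity gap across user groups.
# The column names, toy data, and 0.1 tolerance are illustrative only.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Gap between the highest and lowest positive-outcome rates across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Toy batch of model decisions, keyed by a hypothetical user-group column.
predictions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 0, 1, 1, 1, 1],
})

gap = demographic_parity_gap(predictions, "group", "approved")
if gap > 0.1:  # illustrative tolerance; real thresholds are context-specific
    print(f"Potential bias: parity gap {gap:.2f} exceeds tolerance")
```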


DEPLOYMENT AND MONITORING

  • User Education: Users are provided with clear instructions on how to use AI features, understand their limitations, and report any issues or concerns they encounter.

  • Continuous Monitoring: AI systems are continuously monitored for performance degradation, security vulnerabilities, and unintended consequences, with mechanisms in place to trigger corrective actions as needed, as sketched below.
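
As a minimal sketch of the monitoring described above, the following class tracks rolling accuracy over a stream of per-request correctness flags and reports degradation when accuracy falls below a validation-time baseline. The window size, baseline, and tolerance are assumptions for illustration, not prescribed values.

```python
# Minimal degradation-monitoring sketch over a stream of correctness flags.
# Window size, baseline, and tolerance are illustrative assumptions.
from collections import deque

class DegradationMonitor:
    def __init__(self, window: int = 500, baseline: float = 0.95, tolerance: float = 0.05):
        self.recent = deque(maxlen=window)  # most recent correctness flags
        self.baseline = baseline            # accuracy measured during validation
        self.tolerance = tolerance          # allowed drop before flagging

    def record(self, correct: bool) -> None:
        self.recent.append(1 if correct else 0)

    def degraded(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough observations yet
        current = sum(self.recent) / len(self.recent)
        return current < self.baseline - self.tolerance
```

In practice such a check would feed an alerting pipeline so that the corrective actions the policy calls for can be triggered promptly.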


COMPLIANCE AND AUDIT

  • Regulatory Compliance: We stay abreast of relevant regulations and industry standards regarding AI and adapt our practices accordingly to maintain compliance.


CONTINUOUS IMPROVEMENT

  • Feedback Mechanisms: We solicit feedback from users, stakeholders, and external experts to identify areas for improvement and innovation in our AI systems and governance processes.

  • Research and Development: We invest in research and development initiatives to advance the state-of-the-art in AI technologies, focusing on ethical and responsible AI.


PROCESSES FOR HANDLING DEVIATION AND EXCEPTIONS

  • Identification of Deviations: Any deviation from the AI policy shall be promptly identified through regular monitoring, audits, or incident reporting mechanisms.

  • Risk Assessment: The team shall conduct a thorough risk assessment to determine the potential impacts of the deviation on stakeholders, including users, employees, and the broader community.

  • Decision-Making Process: Based on the risk assessment, the team shall determine whether the deviation requires immediate corrective action, temporary mitigation measures, or an exception to the policy.

  • Corrective Action or Mitigation: If the deviation poses significant risks or violates ethical or legal standards, immediate corrective action shall be taken to address the issue. This may involve modifying AI algorithms, updating policies and procedures, or implementing additional safeguards.

  • Exception Handling: In cases where a deviation is deemed necessary or unavoidable, the team shall document the rationale for granting an exception to the policy. This documentation shall include the specific circumstances, risk assessment findings, and proposed mitigation measures (a sketch of such a record follows this list).

  • Approval and Review: Exceptions to the AI policy shall require approval from the CTO responsible for overseeing AI initiatives. Additionally, all deviations and exceptions shall be periodically reviewed to ensure compliance and effectiveness of mitigation measures.
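
To make the required documentation concrete, the record below sketches one possible structure for an exception entry; the fields mirror the items the policy lists (circumstances, risk findings, mitigations, approval), but the names and types are hypothetical rather than a mandated schema.

```python
# Hypothetical structure for a policy-exception record; the field names are
# illustrative, not a schema mandated by this policy.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class PolicyException:
    deviation: str               # what deviated from the AI policy
    circumstances: str           # the specific circumstances involved
    risk_findings: str           # summary of the risk assessment findings
    mitigations: list[str]       # proposed mitigation measures
    approved_by: str             # the CTO overseeing AI initiatives
    granted_on: date = field(default_factory=date.today)
    review_due: Optional[date] = None  # next periodic review, per the policy
```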


EMPLOYEE RESPONSIBILITIES

Employees shall:

  • Before publishing information to any forum accessible without authentication, analyze it for aggregation and sensitive data generation (SDG) risk.

  • If there is any doubt as to whether sensitive data generation (SDG) is a risk, request risk acceptance.

  • Determine whether a system is certified before using it.

  • Assume that an Information system is not a Certified system unless confirmed otherwise.

  • Log in to Information systems only with organizational email addresses, credentials, and accounts.

  • NOT treat AI tools as infallible, especially in the case of security-related questions or in security-sensitive situations.

  • Opt out of data sharing and retention with the tool's vendor to the maximum extent possible.

  • Report any indications that an artificial intelligence system is behaving maliciously.

  • NOT knowingly deploy any system that has a net negative impact on the expected lifespan of a human being.

  • NOT knowingly violate any applicable law or regulation.

  • Determine whether any nonconformity exists and, if so, initiate the Nonconformity procedure.

  • If appropriate, initiate the incident response procedure.

