Responsible and Ethical AI

Our commitment to developing AI systems that are safe, transparent, fair, and beneficial to all users while respecting fundamental human rights and democratic values.

Last Updated: December 24, 2025

Our Commitment to Responsible AI

At Helium, we believe that artificial intelligence has the power to transform how people work, create, and solve problems. With this power comes profound responsibility. We are committed to developing and deploying AI systems that are safe, transparent, fair, and beneficial to all users while respecting fundamental human rights and democratic values.

Our approach to responsible AI is not merely a compliance exercise—it is foundational to everything we build. We recognize that AI systems can have far-reaching impacts on individuals, communities, and society at large. Therefore, we have established comprehensive principles, governance frameworks, and operational practices to ensure our AI technologies serve humanity's best interests.

Our Core Principles

1. Human-Centered Design and Autonomy

We design AI systems that augment human capabilities rather than replace human judgment. Our technology empowers users to make informed decisions while maintaining meaningful human oversight and control. We believe AI should enhance human creativity, productivity, and problem-solving abilities while preserving individual autonomy and dignity.

Our Commitments:

  • Maintain human-in-the-loop oversight for high-impact decisions
  • Provide clear mechanisms for users to understand, question, and override AI recommendations
  • Design interfaces that make AI assistance transparent and controllable
  • Ensure users retain ownership and control over their data and AI-generated outputs
  • Respect user preferences regarding AI assistance levels and automation

2. Fairness and Non-Discrimination

We are committed to building AI systems that treat all individuals fairly and do not perpetuate or amplify societal biases. We actively work to identify, measure, and mitigate bias in our training data, algorithms, and outputs across dimensions including race, gender, age, disability, religion, sexual orientation, and socioeconomic status.

Our Commitments:

  • Conduct regular bias audits and fairness assessments across our AI systems
  • Use diverse and representative datasets for training and testing
  • Implement technical safeguards to detect and mitigate discriminatory outcomes
  • Establish clear processes for users to report potential bias or unfair treatment
  • Continuously monitor deployed systems for disparate impacts on different user groups
  • Engage diverse stakeholders in the design and evaluation of our AI systems
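As an illustration of what a bias audit can start from (not our actual tooling), the sketch below computes a demographic parity gap: the difference in positive-outcome rates between groups. The data, group labels, and the idea of comparing against a set tolerance are hypothetical.

```python
def demographic_parity_difference(outcomes, groups, positive=1):
    """Difference between the highest and lowest positive-outcome rates
    across the groups present.

    outcomes: list of model decisions (e.g. 1 = approved, 0 = denied)
    groups:   list of group labels, same length as outcomes
    """
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in selected if o == positive) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical audit data: decisions for two demographic groups A and B.
outcomes = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_difference(outcomes, groups)
print(f"demographic parity gap: {gap:.2f}")  # flag for review if above a set tolerance
```

A real audit would use many more metrics (equalized odds, calibration across groups) and statistically meaningful sample sizes; this only shows the shape of the check.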

3. Transparency and Explainability

We believe users have the right to understand how AI systems work and how decisions affecting them are made. We strive to make our AI systems as transparent and explainable as possible, providing clear information about capabilities, limitations, and decision-making processes.

Our Commitments:

  • Clearly disclose when users are interacting with AI systems
  • Provide meaningful explanations of AI-generated recommendations and decisions
  • Document our AI systems' capabilities, limitations, and intended uses
  • Make information about our AI models, training data, and methodologies available where appropriate
  • Offer users insight into the factors that influence AI outputs relevant to them
  • Maintain comprehensive documentation of our AI development and deployment processes

4. Privacy and Data Protection

We recognize that AI systems often require access to personal data, and we take our responsibility to protect user privacy extremely seriously. We implement privacy-by-design principles, minimize data collection, and provide users with meaningful control over their information.

Our Commitments:

  • Collect only data necessary for specified, legitimate purposes
  • Implement strong technical and organizational security measures to protect user data
  • Provide clear, accessible privacy notices explaining our data practices
  • Enable users to access, correct, delete, and port their personal data
  • Obtain explicit consent before using personal data for AI training or new purposes
  • Comply with GDPR, CCPA, DPDP Act, and other applicable data protection regulations
  • Conduct Data Protection Impact Assessments (DPIAs) for high-risk AI processing activities
  • Never sell user data to third parties
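As a concrete picture of data minimization and pseudonymization (a sketch, not our actual pipeline), the snippet below keeps only the fields needed for a stated purpose and replaces the direct identifier with a salted hash. The field names, record contents, and salt handling are all hypothetical.

```python
import hashlib

# Hypothetical allow-list: only fields needed for the stated purpose survive.
ALLOWED_FIELDS = {"age_band", "country", "plan_tier"}
SALT = b"rotate-me-regularly"  # in practice, a managed secret rotated on schedule

def minimize_record(record: dict) -> dict:
    """Drop unneeded fields and replace the user identifier with a pseudonym."""
    pseudonym = hashlib.sha256(SALT + record["user_id"].encode()).hexdigest()[:16]
    minimized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    minimized["pseudonym"] = pseudonym
    return minimized

raw = {
    "user_id": "u-123",
    "email": "[email protected]",
    "age_band": "25-34",
    "country": "DE",
    "plan_tier": "pro",
}
clean = minimize_record(raw)
print(clean)  # email and user_id are gone; a pseudonym stands in
```

Note that salted hashing alone is not anonymization under GDPR; the output is still pseudonymous personal data and must be protected accordingly.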

5. Safety, Security, and Reliability

We design AI systems to be safe, secure, and reliable under expected operating conditions. We implement rigorous testing, monitoring, and incident response procedures to identify and address potential harms before they occur.

Our Commitments:

  • Conduct comprehensive safety testing before deploying AI systems
  • Implement robust security measures to prevent unauthorized access and adversarial attacks
  • Monitor deployed systems continuously for unexpected behaviors or failures
  • Maintain incident response procedures to address safety or security issues rapidly
  • Provide clear guidance on appropriate use cases and known limitations
  • Regularly update and improve our AI systems based on real-world performance data
  • Establish clear accountability for AI system performance and outcomes
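The continuous-monitoring commitment above can be pictured as a rolling check on an error signal. The sketch below raises an alert when the recent failure rate drifts past a threshold; the window size, threshold, and outcome stream are hypothetical.

```python
from collections import deque

class FailureRateMonitor:
    """Track a rolling window of request outcomes and flag excessive failure rates."""

    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def record(self, failed: bool) -> bool:
        """Record one outcome; return True if the alert fires on a full window."""
        self.outcomes.append(failed)
        rate = sum(self.outcomes) / len(self.outcomes)
        return len(self.outcomes) == self.outcomes.maxlen and rate > self.threshold

# Hypothetical stream: a healthy run followed by a burst of failures.
monitor = FailureRateMonitor(window=10, threshold=0.2)
alerts = [monitor.record(failed) for failed in [False] * 8 + [True, True, True, False]]
print(any(alerts))
```

Production monitoring would track many such signals (latency, refusal rates, fairness metrics) and feed alerts into the incident response procedures described above.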

6. Accountability and Governance

We maintain clear lines of accountability for our AI systems and their impacts. We have established governance structures, policies, and processes to ensure responsible AI development and deployment throughout our organization.

Our Commitments:

  • Designate clear ownership and accountability for each AI system
  • Maintain comprehensive records of AI system development, testing, and deployment
  • Conduct regular ethics reviews and impact assessments
  • Establish clear escalation procedures for ethical concerns or potential harms
  • Engage external experts and stakeholders in reviewing our AI practices
  • Publish regular transparency reports on our AI systems and their impacts

Compliance with Global AI Standards

Helium's responsible AI practices align with and exceed requirements established by leading international frameworks and regulations.

1. EU AI Act Compliance

We classify our AI systems according to the EU AI Act's risk-based framework and implement appropriate safeguards.

  • Prohibited Practices: We do not develop AI for social scoring or biometric identification
  • High-Risk Systems: We implement conformity assessments and human oversight
  • Transparency: We clearly disclose AI-generated content

2. GDPR and Data Protection

Our AI systems comply with the General Data Protection Regulation and other data protection laws.

  • Lawful basis for all personal data processing
  • Data minimization and purpose limitation
  • Rights to access, rectification, erasure, and data portability
  • Data Protection Impact Assessments for high-risk processing

3. ISO/IEC AI Standards

We align our practices with international AI management standards.

  • ISO/IEC 42001:2023 AI management system framework
  • ISO/IEC 42005:2025 AI system impact assessment
  • ISO/IEC 27001 Information security management
  • ISO/IEC 23894 Risk management for AI systems

4. OECD AI Principles

We embrace the OECD's principles for trustworthy AI.

  • Inclusive growth, sustainable development, and well-being
  • Human-centered values and fairness
  • Transparency and explainability
  • Robustness, security, and safety
  • Accountability

5. UNESCO Recommendation

We support UNESCO's ethical framework emphasizing human rights and dignity.

  • Proportionality and do no harm
  • Safety and security
  • Right to privacy and data protection
  • Multi-stakeholder and adaptive governance
  • Responsibility and accountability

Our AI Development Lifecycle

1. Design and Planning

Define clear objectives and success metrics aligned with user needs. Identify potential risks, harms, and ethical considerations. Conduct stakeholder consultations and impact assessments. Establish appropriate governance and oversight mechanisms. Document intended uses, capabilities, and limitations.

2. Data Collection and Preparation

Ensure data collection complies with privacy regulations and ethical standards. Assess data quality, representativeness, and potential biases. Implement data minimization and anonymization where appropriate. Document data sources, collection methods, and preprocessing steps. Obtain necessary consents and establish lawful bases for processing.

3. Model Development and Training

Select appropriate algorithms and architectures for the use case. Implement fairness constraints and bias mitigation techniques. Conduct iterative testing for accuracy, fairness, and robustness. Document model architecture, hyperparameters, and training procedures. Evaluate performance across diverse user groups and scenarios.

4. Testing and Validation

Conduct comprehensive testing including edge cases and adversarial scenarios. Perform bias audits and fairness assessments. Evaluate explainability and transparency of model outputs. Test security measures and resilience to attacks. Validate performance against established benchmarks and success criteria.

5. Deployment and Monitoring

Implement gradual rollout with monitoring and feedback mechanisms. Provide clear user documentation and guidance. Establish continuous monitoring for performance, fairness, and safety. Maintain incident response procedures for rapid issue resolution. Collect user feedback and conduct regular reviews.

6. Maintenance and Improvement

Regularly update models based on new data and feedback. Conduct periodic audits and reassessments. Address identified issues and emerging risks promptly. Document changes and maintain version control. Communicate updates and improvements to users.

User Rights and Control

We empower users with meaningful rights and control over AI systems.

1. Right to Information

Users have the right to know when they are interacting with AI systems and to receive clear information about the purpose, functionality, data processing, decision-making processes, limitations, and potential risks of the system.

2. Right to Explanation

Users have the right to receive meaningful explanations of AI-generated decisions or recommendations that significantly affect them, including the main factors influencing the output, the logic and reasoning behind the decision, and the confidence level or uncertainty of the output.
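For simple model classes, the "main factors influencing the output" can be read off directly. The sketch below ranks per-feature contributions of a hypothetical linear scoring model; the feature names, weights, and values are illustrative, not a real Helium model.

```python
# Hypothetical linear scoring model: weights and feature values are illustrative.
weights = {"payment_history": 0.6, "account_age": 0.3, "usage_volume": 0.1}
features = {"payment_history": 0.9, "account_age": 0.2, "usage_volume": 0.7}

# Each feature's contribution is its weight times its value.
contributions = {name: weights[name] * features[name] for name in weights}
score = sum(contributions.values())

# Rank factors by absolute contribution, largest first.
ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
for name, value in ranked:
    print(f"{name}: {value:+.2f}")
print(f"score: {score:.2f}")
```

For complex models, explanations require dedicated attribution techniques rather than this direct decomposition, but the user-facing goal is the same: a ranked list of the factors that mattered most.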

3. Right to Human Review

Users have the right to request human review of AI-generated decisions that have significant legal, financial, or personal impacts, including access to a human decision-maker, opportunity to present additional information, and a clear appeals process.

4. Right to Opt-Out

Users have the right to opt out of certain AI-powered features or automated decision-making, including the ability to disable AI assistance, use non-AI alternatives where available, and control data used for AI training or personalization.

5. Right to Data Control

Users have comprehensive rights regarding their personal data used by AI systems, including access to their data and AI-generated profiles, correction of inaccurate information, deletion of personal data (right to be forgotten), data portability in machine-readable formats, and the ability to object to certain types of processing.

Continuous Improvement and Accountability

Regular Audits and Assessments

  • Quarterly internal audits of AI systems and practices
  • Annual third-party assessments by independent experts
  • Continuous monitoring of system performance and impacts
  • Regular review of policies and procedures
  • Benchmarking against industry best practices and emerging standards

Stakeholder Engagement

  • User feedback mechanisms and surveys
  • Consultation with affected communities and advocacy groups
  • Collaboration with academic researchers and ethics experts
  • Participation in industry working groups and standards bodies
  • Engagement with regulators and policymakers

Training and Education

  • Mandatory responsible AI training for all employees
  • Specialized training for AI developers and product teams
  • Regular updates on emerging risks and best practices
  • Ethics workshops and case study discussions
  • Resources for users to understand and engage with AI systems

Reporting Concerns

We encourage users, employees, and stakeholders to report concerns about our AI systems.

What to Report

  • Potential bias or discrimination
  • Privacy or security concerns
  • Safety or reliability issues
  • Misuse or harmful applications
  • Violations of our principles or policies
  • Suggestions for improvement

Our Response Process

  • Acknowledgment: We acknowledge all reports within 48 hours
  • Investigation: We conduct thorough investigation of reported concerns
  • Action: We take appropriate corrective action when issues are identified
  • Communication: We provide updates on investigation status and outcomes
  • Follow-up: We implement systemic improvements to prevent recurrence

All reports are taken seriously, and we prohibit retaliation against anyone who raises concerns in good faith.

Contact Us

For questions, feedback, or concerns about our responsible AI practices:

Helium AI

Neural Arc Inc.

Email: [email protected]

Website: https://he2.ai

This document reflects our current understanding and commitment to responsible AI. We will update it regularly as our practices evolve and as new standards and regulations emerge.