AI Policy Statement
This AI Policy explains how Kwirl designs, deploys, and governs AI-enabled features across our apps. It is intended to support responsible use, user trust, and regulatory readiness in line with privacy, anti-discrimination, consumer protection, and workplace obligations.
Effective date: 01 April 2026
1) Scope
This policy applies to all Kwirl AI systems, including ranking assistance, content suggestions, workflow automation, analytics insights, and any third-party AI models integrated into our services.
2) Core Commitments
- Human oversight: AI supports decisions; it does not replace accountable human judgment.
- Transparency: We identify AI-assisted functionality where materially relevant to outcomes.
- Fairness: We test for bias and continuously monitor for unfair or discriminatory outcomes.
- Safety and security: We apply technical and organizational controls to reduce misuse risk.
- Privacy by design: We minimize personal data and apply purpose-limited processing.
3) Permitted and Prohibited Uses
Permitted: productivity assistance, triage support, summaries, and workflow optimization.
Prohibited: unlawful discrimination, deceptive profiling, unauthorized surveillance, or use outside approved product purposes.
4) Automated Decision-Making
In high-impact contexts such as final hiring outcomes, Kwirl does not rely on fully automated AI decisions. Where AI contributes to recommendations, authorized users remain responsible for review and for the final decision.
5) Data Handling and Privacy Controls
- Data minimization and purpose limitation are applied to AI feature design.
- Access controls, encryption, and secure processing pipelines are used to protect data.
- Retention periods are limited to business, legal, and security requirements.
- Personal data used for AI-related processing is handled in line with our Privacy Policy.
6) Third-Party AI Providers
Where third-party AI services are used, Kwirl performs due diligence, applies contractual controls, and conducts risk-based assessments. Third-party processing must meet applicable security, privacy, and compliance standards.
7) Model Quality, Testing, and Monitoring
- Pre-release testing for quality, reliability, and safety.
- Ongoing monitoring for drift, bias, harmful outputs, and abnormal behavior.
- Documented incident response and rollback procedures for high-risk issues.
8) User Responsibilities
Customers and users are responsible for reviewing AI-assisted outputs before acting on them, complying with applicable employment and anti-discrimination laws, and not using the platform for unlawful or unethical profiling.
9) Rights, Questions, and Complaints
Users may request information, raise concerns, or submit complaints about AI-assisted outcomes through our Contact Us page. We investigate and respond in line with applicable law and our internal governance procedures.
10) Policy Governance and Updates
This policy is reviewed periodically and updated as laws, standards, and product capabilities evolve. Material changes will be reflected on this page with an updated effective date.
