Responsible Use of AI Guide
Introduction
Adopting AI responsibly is vital for businesses of all sizes. This checklist, adapted from the Australian Government’s AI policy (Reference 1), offers tailored guidance based on company size to help organisations integrate AI effectively while mitigating risks and upholding ethical standards. Whether you’re a small business or a growing enterprise, the checklist outlines practical steps for accountability, training, transparency, and ongoing monitoring, ensuring AI adoption aligns with your resources and goals.
Appendices for Support
- Appendix A: Basic AI Policy Handbook Template.
- Appendix B: Checklist for Responsible Persons.
- Appendix C: Public-Facing AI Transparency Statement Template.
These templates serve as starting points for drafting your organisation’s AI policies and related documents.
Disclaimer
This document, along with its references and appendices, provides guidance on establishing AI policies for propella.ai’s clients. Propella.ai assumes no responsibility or liability for any errors or omissions in the content.
AI Policies by Company Size
1-20 Employees
- Accountability and Leadership:
- Designate a single person responsible for AI policy and oversight.
- Ensure this individual is equipped to oversee basic AI use and risk identification.
- Basic AI Training:
- Provide foundational AI training to all employees to foster a general understanding of AI use and potential impacts.
- Encourage self-directed learning for employees involved in AI-related tasks.
- Transparency and Ethics:
- Publish a simple, public-facing AI transparency statement (e.g., on your website) if AI use is significant in customer-facing operations.
- Review this statement annually, focusing on transparency and basic protective measures.
- Ongoing Monitoring and Feedback:
- Implement informal feedback channels to monitor AI impacts on operations and customer interactions.
- Regularly review and adjust AI practices to ensure alignment with ethical and business standards.
21-100 Employees
- Accountability and Leadership:
- Assign one or two designated individuals or a small team to oversee AI policy, implementation, and risk management.
- Establish clear communication channels for employees to report AI-related issues or risks.
- Foundational and Role-Specific Training:
- Provide foundational AI training to all employees, with additional training for those directly working with AI.
- Consider adding specific training for roles handling procurement, development, or management of AI systems.
- Transparency and Public Statement:
- Publish an AI transparency statement that covers the business’s AI usage, compliance, and risk measures.
- Annually update the statement with any new practices or policies around stakeholder protection.
- Feedback and Review:
- Set up informal or semi-formal feedback mechanisms to monitor the effects of AI on employees and customers.
- Conduct lightweight twice-yearly reviews of AI practices to ensure ethical alignment and effective risk management.
101-200 Employees
- Accountability and Leadership:
- Designate a small team or create an AI oversight committee responsible for policy adherence, risk evaluation, and regular reviews.
- Ensure that designated individuals are aware of industry standards and available for AI-related questions or escalations.
- Training Programs:
- Implement foundational AI training for all employees within six months.
- Offer role-specific training for staff involved in AI development, procurement, or deployment to ensure understanding of both technical and ethical considerations.
- Transparency and Public Accountability:
- Publish a detailed AI transparency statement covering usage, compliance, effectiveness, and stakeholder protections.
- Update the statement annually to align with evolving standards.
- Feedback and Continuous Improvement:
- Establish formal feedback channels for employees and customers to report AI-related issues.
- Conduct twice-yearly reviews of AI practices, policies, and feedback to improve ethical alignment and operational effectiveness.
- Risk Assurance:
- Participate in industry-led or government-backed AI assurance programs if available and feasible.
200+ Employees
- Accountability and Leadership:
- Appoint an AI governance team or committee to oversee policy implementation, ensure risk management, and conduct ongoing monitoring.
- Designate a primary point of contact for AI-related escalations, industry collaboration, and adherence to standards.
- Comprehensive Training:
- Implement foundational AI training for all employees and mandatory, role-specific training for those in procurement, development, and management of AI systems.
- Require refresher training every quarter to keep teams updated on evolving AI trends and risks.
- Transparency and Public Accountability:
- Develop a publicly accessible and detailed AI transparency statement covering all aspects of AI use, ethics, compliance, and protective measures.
- Review and update this statement annually, with a focus on new AI initiatives, compliance achievements, and customer protection practices.
- Feedback, Monitoring, and Audits:
- Establish formal feedback channels and conduct regular surveys to assess AI impact on stakeholders.
- Conduct structured annual audits of AI applications to ensure ethical use, risk mitigation, and alignment with the company’s evolving objectives.
- Risk Assurance and Cross-Industry Collaboration:
- Engage actively in AI assurance programs and collaborate with industry groups to remain aligned with best practices.
- Participate in cross-industry capacity-building programs to maintain and develop AI expertise within the organisation.
Reference
- Policy for the responsible use of AI in government: https://www.digital.gov.au/sites/default/files/documents/2024-08/Policy%20for%20the%20responsible%20use%20of%20AI%20in%20government%20v1.1.pdf
Appendix A: Basic AI Policy Handbook Template
Purpose of the AI Policy
This AI policy outlines how our organisation adopts and uses Artificial Intelligence (AI) responsibly and ethically. It aims to enhance productivity, improve decision-making, and ensure transparent, fair, and accountable use of AI technologies in our operations.
Core Principles
- Accountability: Clear roles are assigned to oversee AI use, ensuring decisions and outcomes can be explained and justified.
- Transparency: AI applications and decisions will be clear, understandable, and communicated to stakeholders where relevant.
- Ethical Use: AI will be used responsibly, avoiding harm, ensuring fairness, and mitigating risks like bias or inaccuracy.
- Privacy and Security: Data privacy and security will be upheld at all times, following relevant legal and ethical standards.
- Continuous Improvement: AI practices and policies will be reviewed regularly to align with technological advancements and stakeholder feedback.
Responsibilities
- Leadership and Accountability:
- A designated person or team oversees the implementation, risk management, and monitoring of AI applications.
- Ensures AI adoption aligns with business goals and ethical standards.
- Training:
- Employees are provided with training to understand basic AI principles and their role in responsible AI use.
- Risk Management:
- AI applications are evaluated for risks, including bias, inaccuracies, or unintended outcomes.
- High-risk use cases are identified and addressed promptly.
- Monitoring and Feedback:
- AI systems are monitored for performance, fairness, and alignment with organisational values.
- Stakeholder feedback is integrated into the continuous improvement of AI practices.
Practical Guidelines
- When Using AI Tools:
- Understand the tool’s purpose and limitations.
- Avoid sharing sensitive or confidential information unless explicitly permitted.
- Decision-Making:
- AI should support, not replace, human decision-making.
- Ensure critical decisions are reviewed and validated by qualified personnel.
- Bias and Fairness:
- Regularly review AI outcomes to ensure they are unbiased and fair.
- Address any identified disparities promptly.
- Compliance:
- Adhere to all relevant laws and regulations concerning data, AI use, and industry-specific requirements.
Review and Updates
This policy will be reviewed annually to ensure it remains relevant and effective. Stakeholders are encouraged to provide feedback on AI applications and their impact to foster continuous improvement.
Contact Information
For questions or concerns about this policy or AI use in our organisation, please contact:
[Designated AI Accountability Person/Team Name]
[Email Address]
[Phone Number]
Appendix B: Checklist for Responsible Persons
Basic AI Use and Risk Identification
The designated responsible person for AI accountability should ensure the following areas are regularly assessed to maintain responsible and ethical AI use:
1. Purpose and Scope of AI Use
- Define Objectives: Is the AI system being used for a clear, defined purpose (e.g., productivity improvement, decision support)?
- Understand the Scope: Does the system operate within its intended scope without overextending its functionality?
2. Data Privacy and Security
- Data Use: Is the data used by the AI system appropriately anonymised, secure, and compliant with privacy laws (e.g., Australian Privacy Principles)?
- Confidentiality: Are policies in place to prevent sensitive or personal information from being unintentionally exposed or misused?
- Access Control: Is access to the AI system and its data restricted to authorised personnel only?
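The confidentiality check above can be partly automated. The sketch below shows one minimal, illustrative way to mask obvious personal identifiers (emails, phone numbers) before text is sent to an external AI tool; the function and pattern names are hypothetical, and real deployments should use dedicated PII-detection tooling, since regexes alone will miss names, addresses, and many other identifiers.

```python
import re

# Hypothetical patterns for two common identifier types. This is an
# illustrative sketch, not a complete PII filter.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Contact Jo at jo@example.com or +61 2 9555 0100."))
```

A pre-processing step like this supports the "Confidentiality" item, but it complements rather than replaces the policy controls and access restrictions described above.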
3. Accuracy and Reliability
- Input Validation: Are the inputs used in the AI system accurate, complete, and relevant to the purpose?
- Output Monitoring: Are the system’s outputs regularly checked for errors, inconsistencies, or inaccuracies?
- Testing: Is the system tested periodically to confirm it performs as expected?
4. Bias and Fairness
- Review for Bias: Have the training data and system outputs been reviewed to ensure they do not unfairly disadvantage any group?
- Mitigation: Are there mechanisms in place to address and mitigate any identified biases?
- Fair Decision-Making: Are the AI outputs reviewed for fairness, especially in critical decisions affecting stakeholders?
5. Transparency
- Explainability: Can the AI system’s decisions or outputs be explained in a way that stakeholders can understand?
- Documentation: Are the AI system’s purpose, design, and limitations documented for internal and external reference?
6. Risk Identification
- High-Risk Use Cases: Have any potential high-risk use cases (e.g., decisions involving safety, ethics, or significant financial impact) been identified and evaluated?
- Impact Assessment: Has the potential impact of the AI system on stakeholders, customers, and operations been assessed?
- Error Handling: Are there protocols in place to detect, report, and correct errors in a timely manner?
7. Compliance
- Regulatory Alignment: Does the system comply with all relevant laws, regulations, and industry standards?
- Ethical Standards: Does the use of AI align with the organisation’s ethical principles and values?
8. Stakeholder Feedback and Continuous Improvement
- Feedback Collection: Are employees, customers, or partners encouraged to provide feedback on AI system performance and impact?
- Review Frequency: Are regular reviews conducted to improve the AI system based on stakeholder feedback?
- Updates: Are updates or modifications made to the system to align with new regulations, feedback, or technological advancements?
9. Monitoring and Maintenance
- Ongoing Monitoring: Is the system monitored continuously to identify issues, performance degradation, or unintended consequences?
- Version Control: Are changes to the AI system documented and tracked for accountability?
Appendix C: Public-Facing AI Transparency Statement Template
[Organisation Name] AI Transparency Statement
At [Organisation Name], we are committed to using Artificial Intelligence (AI) responsibly, ethically, and transparently to deliver value to our customers and stakeholders. This statement outlines our approach to AI adoption and use.
How We Use AI
We use AI to:
- Enhance the efficiency and accuracy of our services.
- Support decision-making and improve customer experiences.
- Drive innovation and streamline internal processes.
Our AI systems are designed to assist, not replace, human expertise, ensuring that critical decisions are reviewed and validated by qualified personnel.
Our Commitment to Responsible AI
We follow these principles to ensure the ethical and responsible use of AI:
- Transparency:
- We aim to make our AI processes understandable to our customers and stakeholders.
- We clearly communicate when and how AI is used in our services.
- Accountability:
- We assign clear roles to oversee AI systems and ensure their ethical use.
- All AI outputs are reviewed to ensure they align with our business goals and values.
- Privacy and Security:
- We prioritise the protection of your personal data and ensure compliance with relevant laws, including Australian Privacy Principles.
- Sensitive data is anonymised and securely stored when processed by AI systems.
- Fairness:
- We regularly review our AI systems to minimise bias and promote fairness in outcomes.
- Continuous Improvement:
- We monitor the performance of our AI systems and adapt to technological advancements, stakeholder feedback, and regulatory changes.
Your Rights and Feedback
If you have questions about how AI is used in our organisation or wish to raise concerns, please contact us at:
[Email Address] | [Phone Number]
We welcome your feedback to help us improve our AI practices.
Regular Reviews
This statement will be reviewed and updated annually to reflect advancements in AI technology, feedback from stakeholders, and changes in legal and ethical standards.
[Organisation Name] is committed to ensuring that our AI systems work for the benefit of our customers and stakeholders, enhancing trust, fairness, and innovation.