AI Usage Policy Template

Use this sample policy to customize guidance for employees on how to use AI in the workplace in a way that's respectful, safe, and innovative.

Published: April 26, 2024
Author: Halah Flynn, Senior Content Marketing Manager

What Should an AI Usage Policy Include?

An AI usage policy should include guidelines for anticipated use cases and answers to likely questions about the following:

  • Scope and Purpose: Who the policy applies to and why.
  • AI Governance: Clarity on oversight, roles, and responsibilities.
  • Ethical Guidelines: Principles for fairness, transparency, and accountability.
  • Usage Guidelines: How AI should be used when handling data and sensitive information, and when consent is required.
  • Development and Acquisition: Best practices for developing AI and selecting AI technology tools and vendors.
  • Training and Awareness: Requirements for employee training on ethics, technical capabilities, and risks.
  • Monitoring and Compliance: When and how audits or incident reporting may be conducted.
  • Reporting and Whistleblowing: Rules for employees to report unethical uses of AI or policy violations.

What is an AI Usage Policy? 

An AI usage policy is a set of guidelines for a company and its employees that governs how artificial intelligence (AI) technologies can be used. It typically covers ethical use, risk management, and compliance with legal and regulatory standards.

Implementing an AI usage policy isn’t just a procedural step; it’s a commitment to integrating AI into the workplace in a way that is respectful, safe, and innovative. It's about moving forward thoughtfully and enthusiastically with the incredible potential of AI.

Purpose of an AI Usage Policy

An AI policy should ensure that AI technologies are used responsibly, aligning with the company’s commitment to ethical business practices. It supports compliance with all relevant laws and regulations while fostering innovation and maintaining public trust.

  • Promoting Ethical Practices: AI is a powerful tool, and with great power comes great responsibility. An AI usage policy helps ensure that a company uses AI ethically, respecting values like fairness, transparency, and accountability, so that technological advances stay in harmony with its commitment to doing good.
  • Staying Ahead of the Law: As laws around technology evolve, a robust AI usage policy ensures that a company remains compliant with the latest regulations. This isn't just about following rules; it's about leading the industry in best practices and setting standards that others aspire to. A thoughtfully crafted policy also guides the company in identifying and managing legal risks proactively, ensuring that its innovations are safe and secure for everyone.
  • Building Trust: Using AI responsibly helps build trust with customers, partners, and the public. A clear AI usage policy shows that a company is serious about upholding high standards and earning that trust every day.
  • Empowering Innovation: With a clear AI usage policy, employees are well-informed about how they can use AI tools creatively and within boundaries. This boosts confidence and competence, enabling everyone to contribute more effectively and creatively to the company's projects.
  • Ensuring Accountability: By defining specific roles and responsibilities, an AI usage policy ensures that there is clarity and governance around AI initiatives. This helps maintain high standards and fosters a culture of accountability and excellence.

Sample AI Usage Policy

Disclaimer: This sample policy is for informational purposes only and does not constitute legal advice. It is a generic template that may not suit your specific circumstances. When adopting or revising a policy, consult legal counsel to ensure compliance with all applicable laws and regulations.

Our Commitment 

[Company Name] recognizes the extraordinary potential for innovation offered by artificial intelligence and machine learning technologies (“AI”) and anticipates a growing interest in the use of AI in our business operations. At the same time, [Company Name] also recognizes that the use of AI can create significant risks for the Company and believes it is essential to establish clear, values-based guidelines for the ethical and responsible use of AI. 

[Company Name] has created this Policy to reflect its commitment to using AI in a way that promotes fairness, accountability, and transparency while complying with all applicable laws, regulations, and ethical standards.

As AI is a rapidly evolving technology, [Company Name] will regularly review and update this Policy to reflect technological advancements, legal developments, and industry best practices.

Scope and Purpose

  • Scope: This policy applies to all employees, contractors, vendors, and partners who design, develop, implement, manage, or interact with artificial intelligence (AI) systems at [Company Name]. It also reflects the standards [Company Name] expects from third parties using or developing AI systems in collaboration with and on behalf of the Company.
  • Purpose: The purpose of this policy is to ensure that AI technologies are used in a responsible, ethical, and legally compliant manner. It aims to maximize the benefits of AI while mitigating risks associated with AI deployment in business operations.

AI Governance

  • Oversight: Our AI Governance Committee is composed of members from IT, legal, compliance, and ethics departments. This committee is responsible for overseeing the implementation of AI policies, ensuring compliance with ethical standards, and reviewing AI projects for alignment with organizational values and objectives.
  • Roles and Responsibilities: Clear roles and responsibilities will be defined for all employees and stakeholders involved in AI projects. This includes data scientists for model development, IT staff for system integration, and business units for application of AI solutions.

Ethical Guidelines

[Company Name] is committed to the ethical and legal use of AI in all aspects of its business operations. Employees are expected to observe the following principles when using AI:

  • Respect for Human Rights: Employees must ensure that AI applications respect human rights, including privacy, freedom of expression, and non-discrimination. [Company Name]'s use of AI will incorporate best practices for promoting fairness, accessibility, and inclusivity. To the extent practicable, any use of AI should include an assessment of the likelihood of bias and appropriate controls to mitigate it. Because AI technology is relatively novel in the commercial context, and best practices and controls to mitigate potential misuse of sensitive information are rapidly evolving, any system that incorporates AI should be reviewed regularly (not less than annually) to ensure that the latest best practices and legal protections are effectively implemented and up to date.
  • Fairness: AI systems must be designed and operated to avoid unfair bias, ensuring that decisions are justified and non-discriminatory.
  • Transparency and Accountability: Employees must ensure that the implementation of AI is transparent to users, including prospects, customers, and other stakeholders. Further, employees must be able to explain the rationale behind the use of AI in decision-making processes. Depending on the context, affirmative consent or documentation may be required.

Usage Guidelines

The following guidelines govern employees' use of generative AI platforms, tools, and add-ons, including but not limited to ChatGPT, DALL-E, and Codex.

  • General Use: Personal accounts should not be used for Company-related purposes. Accounts created with Company email addresses/credentials are to be used solely for authorized Company-related activities that comply with this company policy. Usage must comply with the Company's ethical standards, including the Company Code of Conduct and related policies.
  • Acceptable Use: AI technologies should be used to improve efficiency and decision-making within the boundaries of ethical practices, enhancing customer and employee experiences without compromising individual rights or privacy.
  • Consent: Explicit consent must be obtained when personal data is used for AI processing, in accordance with privacy laws and regulations.

Data Privacy and Protection

All data used in AI systems must be handled according to data protection laws, ensuring data integrity, security, and confidentiality. Employees must ensure that they protect data privacy and security when using AI. The use of AI tools and applications must comply with the Company’s data privacy and data security policies. 

  • Users must not use any Internal, Personal, Company Confidential, or Customer Confidential Information with generative AI tools except as expressly permitted by this Policy. Please see the Company Data Classification Policy for more details.
  • Users must ensure that any AI-generated data is not shared with unauthorized individuals or entities.

Internal, Personal, Company Confidential, or Customer Confidential Information may be used in the context of generative AI only if:

  • [Company Name] holds an enterprise or comparable business license for the generative AI tool that has undergone a complete Procurement, Privacy, and Information Security review;
  • The use is compliant with the terms of the license agreement;
  • The license agreement includes intellectual property protections and confidentiality terms or restrictions satisfactory to [Company Name];
  • The use will not cause [Company Name] to violate any customer agreement or applicable law or regulation (as confirmed in writing by [Company Name] Legal Team); and
  • The generative AI model will not be able to “learn” from the dataset in such a way that permits information sharing with external parties, directly or indirectly.

Development and Acquisition

  • Development Practices: AI development must follow best practices that ensure safety, security, and effectiveness, undergoing rigorous testing before deployment.
  • Vendor Selection: AI technology vendors must be evaluated based on their compliance with security standards and ethical practices, ensuring they meet [Company Name]'s requirements.
  • Proprietary Software: The output of generative AI tools may include materials subject to a third party's copyright or patent protections. Because it may be difficult or impossible in practice to determine the existence of these intellectual property rights, users should obtain written approval from Legal before using generative AI tools in the context of a project for which [Company Name] will seek patent or copyright protection.

Training and Awareness

  • Training Programs: Mandatory training sessions for all employees involved in AI projects, focusing on ethical use, data handling, and compliance with this policy.
  • Ongoing Updates: Regular updates and communications about the ethical use of AI, emerging risks, and any changes in relevant laws or regulations.

Monitoring and Compliance

  • Regular Audits: Conduct regular audits to ensure AI systems comply with internal policies and external regulations.
  • Incident Response: Implement an incident response plan to address misuse or malfunction of AI technologies, including mechanisms for mitigation and correction.
  • Legal Compliance: Employees must comply with all applicable laws and regulations governing the use of AI. This includes compliance with data protection and privacy laws, intellectual property laws, and anti-discrimination laws.

Reporting and Whistleblowing

Users must follow all IT and cybersecurity policies when using generative AI tools. Violations should be reported promptly in accordance with the following guidelines.

  • Violations: Any employee who violates this Policy may face disciplinary action up to and including termination of employment.
  • Reporting Mechanisms: Users should report any security incidents or suspected breaches immediately according to the [Company Name] Security Incident Response Policy.
  • Anti-Retaliation: The company will protect whistleblowers from retaliation, supporting an open and ethical workplace environment.

Frequently Asked Questions

How often are AI systems audited for compliance and performance?

AI systems are audited annually or as required by significant updates to the systems or relevant laws and regulations. The AI governance committee may also initiate audits in response to reported concerns or emerging risks.

What steps should I take if I believe an AI system is producing biased results?

Report the issue immediately to your supervisor or through our anonymous reporting system. Provide as much detail as possible so that the issue can be investigated thoroughly. The AI governance committee is responsible for reviewing such reports and taking appropriate action.

Are contractors and temporary employees also required to follow the AI Usage Policy?

Yes, all individuals who access or interact with our AI systems, including contractors and temporary employees, must comply with the AI Usage Policy. Relevant training and access to policy documentation will be provided during onboarding.

How does the company ensure that AI technologies are used for appropriate purposes?

The company maintains a strict approval process for all AI projects, which includes a review of the proposed use case to ensure it aligns with our ethical guidelines and business objectives. Ongoing monitoring and audits are conducted to ensure continued compliance.

Can I use AI to monitor the performance of other employees?

No, AI should not be used to unilaterally monitor individual employees without their knowledge and consent unless legally mandated. Any monitoring that involves AI must be transparent, justified by legitimate business needs, and compliant with privacy laws and our internal policies.

Who can I contact for more information about our AI Usage Policy?

For more information, please contact the AI governance committee or your department's HR business partner. They can provide guidance on policy details, training resources, and how the policy applies to specific scenarios in your role.

Who is responsible for overseeing the implementation of the AI Usage Policy?

The AI governance committee is responsible, with support from the IT and legal departments.

What should I do if I notice unethical use of AI?

Report it immediately through our designated reporting channels outlined in the policy.

Are there specific training programs available for AI system users?

Yes, [Company Name] provides comprehensive training covering ethical use, data security, and compliance aspects of our AI systems.
