Explanation: The best way for an IT governance board to establish standards of behavior for the adoption of artificial intelligence (AI) is to direct the creation and approval of an ethical use policy. An ethical use policy defines the principles, values, and guidelines for the responsible and ethical design, development, and deployment of AI systems and applications within the enterprise. It helps ensure that AI is aligned with the enterprise's mission, vision, goals, and values, and that it respects the rights, dignity, and interests of all stakeholders, including customers, employees, partners, regulators, and society at large. It also helps address the potential risks, challenges, and impacts of AI in areas such as privacy, security, fairness, accountability, transparency, trustworthiness, human dignity, human agency, and social good.

According to ISACA's article Developing an Artificial Intelligence Governance Framework [1], "an ethical use policy is essential for any enterprise that wants to adopt AI in a responsible and sustainable manner. An ethical use policy can help to establish trust and confidence in AI among the stakeholders and customers, and to avoid or mitigate any negative consequences or harms that may arise from AI." Similarly, ISACA's article Governance of Responsible AI: From Ethical Guidelines to Legal Frameworks [2] notes that "an ethical use policy can provide a common framework and language for the governance of AI across different domains, sectors, and regions. An ethical use policy can also facilitate the compliance with existing laws and regulations that may apply to AI." Therefore, directing the creation and approval of an ethical use policy is the best way for an IT governance board to establish standards of behavior for the adoption of AI.