There is a great YouTube video that has been circulating the internet for half a decade reimagining HAL 9000, the heuristically programmed algorithmic computer from “2001: A Space Odyssey,” as Amazon Alexa in the film’s iconic pod bay doors scene. In the clip, “Dave” asks Alexa to “open the pod bay doors,” and Alexa responds with a series of misunderstandings, from looking up “cod” recipes to playing The Doors on Spotify. The mock clip adds a bit of levity to the otherwise terrifying picture of AI depicted in the original scene.

The field of artificial intelligence has evolved rapidly since HAL appeared on screen in 1968. Today, AI is embedded in many of our day-to-day activities through tools such as Alexa. As artificial intelligence continues to grow, more companies are looking to create policies and procedures to govern the use of this technology.

As we embark on this new “odyssey,” what makes a smart, thoughtful, and well-reasoned artificial intelligence policy? And with technical AI terms such as “bootstrap aggregating” and “greedy algorithms” entering the workplace vocabulary, how do companies ensure that policies are useful and understandable to all employees who need to follow them?

Why Do Organizations Need an AI Policy?

As artificial intelligence seeps into our workday, the first step for organizations is to determine which legal and regulatory standards apply. For example, feeding sensitive or proprietary data into AI tools can implicate data privacy and information security obligations, and certain uses of AI can heighten those risks. In the same vein, the use of artificial intelligence can introduce bias or discrimination and raise ethical considerations that any comprehensive AI policy should take into account.

A thoughtful AI policy seeks to mitigate these risks while allowing the business to innovate and adopt the latest technologies to maximize efficiency. A robust AI policy keeps pace with the rapid evolution of the technology while reducing regulatory and legal risk. An AI policy is thus no longer a futuristic concept but a strategic imperative, given the challenges of integrating AI into business operations.

What Type of AI Policy Do You Need?

Typically, we see three distinct types of AI policies: enterprise-level (corporate) policy, third-party or vendor-level policy, and product-level policy.

  • The enterprise-level (corporate) policy is a set of guidelines and standards that governs an organization’s ethical, legal, and responsible use of AI technologies. An enterprise-level policy is essential when an organization relies heavily on AI across its operations and business units.
  • The third-party or vendor-level policy provides a framework for vetting and onboarding AI vendors. A vendor-level policy becomes increasingly important when an organization outsources AI solutions or integrates third-party AI into its workflows.
  • The product-level policy outlines acceptable-use criteria for specific types of AI or specific AI products. A product-level policy is essential when an organization offers distinct AI-powered products or services or uses specific AI tools with unique capabilities and risks.

Components of an Effective AI Policy

Creating an accessible and effective AI policy requires a multifaceted approach. This includes translating complex AI terms into plain language, developing user-friendly guides with visual aids, and offering tailored training and education for all staff levels.

Encouraging interdepartmental feedback and collaboration ensures the policy’s relevance and alignment with technological advances. Utilizing established frameworks such as the U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) further aids in defining and communicating policies, making complex AI concepts approachable, and fostering a policy that resonates across the organization.

A Case Study: Creating an AI Policy Using AI RMF 1.0

NIST recently unveiled the AI RMF, a vital guide for responsibly designing, implementing, using, and evaluating AI systems. At the heart of the framework is its Core, which fosters dialogue, comprehension, and processes for overseeing AI risks and cultivating dependable, ethical AI systems. The Core is structured around four functions: GOVERN, MAP, MEASURE, and MANAGE. Building an AI policy around these functions can help organizations lay a foundation for trustworthy AI systems that are valid, reliable, safe, secure, resilient, accountable, transparent, explainable, interpretable, privacy-enhanced, and fair, with harmful biases managed.

  • Govern: The AI policy must clearly define roles and responsibilities related to AI risk management, establish guiding principles, procedures, and practices, and foster a culture centered on risk awareness. This includes outlining processes for consistent monitoring and reporting on AI risk management performance, ensuring the policy remains relevant and practical.
    • Ensure Strong Governance: Establish clear roles, responsibilities, and principles for AI risk management, fostering a culture that values risk awareness and ethical integrity.
  • Map: This foundational stage defines the AI system’s purpose, scope, and intended use. Organizations must identify all relevant stakeholders and analyze the system’s environment, lifecycle stages, and potential impacts. The AI policy should reflect this mapping, detailing the scope of AI’s application, including geographical and temporal deployment, and those responsible for its operation.
    • Start with Clear Mapping: Outline the scope, purpose, stakeholders, and potential impacts of AI systems, ensuring that the policy reflects the detailed context of AI’s deployment.
  • Measure: The policy must specify how the organization will quantify various aspects of its AI systems, such as inputs, outputs, performance metrics, risks, and trustworthiness. This includes defining the metrics, techniques, tools, and frequency of assessment. The policy should articulate principles and processes for evaluating AI’s potential financial, operational, reputational, and legal impact, ensuring alignment with organizational risk tolerance.
    • Implement Robust Measurement Procedures: Define clear metrics and methodologies to evaluate AI systems across dimensions including risks, performance, and ethical considerations (a brief illustration follows this list).
  • Manage: This phase requires the organization to implement measures to mitigate identified risks. The policy should outline strategies for managing AI-related risks, incorporating control mechanisms, protective measures, and incident response protocols. These strategies must be harmonized with the organization’s overall risk management approach and provide clear guidelines for monitoring and maintaining AI systems.
    • Build Effective Management Strategies: Develop strategies for managing AI risks, ensuring alignment with broader organizational objectives, and integrating control mechanisms and responsive protocols.
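
To make the MEASURE function concrete, the short Python sketch below shows one way a policy-defined metric and tolerance could be encoded as an automated check. It is a minimal illustration under stated assumptions: the demographic parity metric, the 0.10 threshold, and every name in the code are invented for this example and are not prescribed by the AI RMF.

```python
# Hypothetical sketch of a MEASURE-style policy check.
# The metric choice, threshold, and all names are illustrative assumptions,
# not NIST AI RMF requirements.
from dataclasses import dataclass
from typing import Sequence


@dataclass
class FairnessResult:
    metric: str
    value: float
    threshold: float  # policy-defined tolerance

    @property
    def within_tolerance(self) -> bool:
        return self.value <= self.threshold


def demographic_parity_difference(
    predictions: Sequence[int], groups: Sequence[str]
) -> float:
    """Absolute gap in positive-prediction rates between the two most
    dissimilar groups (0 = perfectly even)."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]


# Toy data: binary model decisions and the group each subject belongs to.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

result = FairnessResult(
    metric="demographic_parity_difference",
    value=demographic_parity_difference(preds, groups),
    threshold=0.10,  # assumed tolerance set by the policy
)
print(result.metric, result.value, "OK" if result.within_tolerance else "ESCALATE")
```

In practice, an organization might run checks like this on the cadence its policy defines and route failures into the incident response protocols established under MANAGE.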

Conclusion

Creating a solid AI policy is a systematic process that requires thoughtful integration of ethical principles, transparent practices, risk management, and governance. Implementing the AI RMF can enable a holistic approach to AI policy creation, aligning with ethical integrity and robust security. By balancing the multifaceted characteristics of trustworthy AI, organizations can navigate complex tradeoffs, making transparent and justifiable decisions that reflect the values and contexts relevant to their operations. The AI RMF serves as a vital guide to managing AI risks responsibly and developing AI systems that are socially and organizationally coherent, enhancing the overall effectiveness and impact of AI operations.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

Erin Jane Illman

Erin Illman is a dynamic problem solver with a strong understanding of U.S. and international private-sector privacy laws and regulations and the legal requirements for the transfer of sensitive personal data to/from the United States, the European Union and other jurisdictions. She regularly advises clients on CCPA, GLBA, HIPAA, COPPA, CAN-SPAM, FCRA, security breach notification laws, and other U.S. state and federal privacy and data security requirements, and global data protection laws. In addition to providing proactive privacy and information security compliance and legal advice, Erin manages privacy-related enforcement actions and litigation. Her practice includes representing companies in reactive incident response situations, including insider cybersecurity threats, electronic and physical theft of trade secrets, and investigation, analysis, and notification efforts with respect to security incidents and breaches.

Sinan Pismisoglu

Sinan Pismisoglu advises clients on product development, privacy and security compliance, AI ethics, SaaS contracting, Big Data, data licensing and ownership, supply chain and vendor management, and incident preparedness and response. He solves complex cybersecurity, information security, compliance, and operational issues beginning with early planning and prevention through detection, remediation, and crisis management. Sinan collaborates with engineering teams to create compliance-integrated risk management frameworks, governance, and ethics programs for emerging technologies such as AI/ML, cybersecurity, IoT, and cloud models.