Generative AI and Associations: Five Key Legal Issues to Consider
As generative artificial intelligence (AI), such as ChatGPT, continues to evolve and become more commonplace, many trade and professional associations are turning to AI technology to enhance their operations and decision-making processes and benefit their members. However, as with any emerging technology, the use of AI by associations raises a number of important legal issues that must be carefully considered and worked through.
Data Privacy
One of the primary legal issues associated with the use of AI by associations is data privacy. AI systems rely on vast amounts of data to train and improve their algorithms, and associations must ensure that the data they collect is used in accordance with applicable federal, state, and international privacy laws and regulations. Associations must be transparent with their members about how their data will be collected, used, and protected, and must obtain the necessary member consents to use and share sensitive data. Remember that data (such as confidential membership information) entered into an AI system such as ChatGPT will, in most cases, no longer remain confidential and protected; instead, it will be governed by the AI system’s then-current terms of use or service. As such, associations should not allow their staff, volunteer leaders, or other agents to enter into an AI system any personal data, data constituting a trade secret, data that is confidential or privileged, or data that may not otherwise be disclosed to third parties.
Intellectual Property
Intellectual property is another key legal issue that associations must consider when using AI. AI systems can generate new works of authorship, such as software programs, artistic works, and articles and white papers; associations must ensure that they have the necessary rights and licenses to use and distribute these works, and must be transparent about who/what created such works. Take steps to ensure that AI-generated content is not, for instance, registered with the Copyright Office as the association’s own work unless it has been sufficiently modified to become a product of human creation and an original work of authorship of the association. Associations also must be mindful of any third-party intellectual property rights that may be implicated by their use of AI, such as copyrights or patents owned by AI vendors, developers, or others, and must ensure that they do not infringe any third-party copyright, patent, or trademark rights. Finally, as stated above, do not permit any confidential or otherwise-protected content (such as trade secrets or information subject to a nondisclosure obligation or the attorney-client privilege) to be entered into an AI system, as such content will no longer be protected and confidential.
Discrimination
Another legal issue to consider is discrimination. AI systems can inadvertently perpetuate bias and discrimination, particularly if they are trained on data that reflects historical biases or inequalities. Associations must ensure that their AI systems do not discriminate on the basis of race, ethnicity, national origin, gender, age, disability, or other legally protected characteristics, and must take steps to identify and address any biases that may be present in their algorithms. For instance, large employers increasingly use AI systems to help screen applicant resumes and even analyze recorded job interviews; if an AI system penalizes candidates because it cannot understand a person’s accent or speech impediment, that could lead to illegal employment discrimination. While discrimination will rise to the level of a legal issue only in certain contexts (such as the workplace), the use of AI has the potential to create discriminatory effects in other association settings (such as membership and volunteer leadership) and needs to be carefully addressed.
Tort Liability
Associations also must consider the potential tort liability issues that may arise from their use of AI. If an AI system produces inaccurate, flawed, or biased results that harm members or other end users, the association could be held liable for any resulting damages. Associations must therefore ensure that their AI systems are reliable and accurate, and that all resulting work product (such as industry or professional standards set by an association) is carefully vetted for accuracy, veracity, completeness, and efficacy.
Insurance
Associations need to ensure that they have appropriate insurance coverage in place to protect against potential liability claims in all of these areas of legal risk. Note that traditional nonprofit directors and officers (D&O) liability, commercial general liability, and cyber insurance policies may be, and likely are, insufficient to fully protect associations in all of these areas. Associations also should explore acquiring an errors and omissions liability/media liability insurance policy to fill any coverage gaps.
In conclusion, while the use of AI by associations presents numerous opportunities and benefits, there are a number of legal issues that need to be carefully considered before going too far down the AI path. Among other things, associations must ensure that they are transparent with their members about the use of their data, obtain necessary intellectual property rights and licenses and avoid infringing others’ rights, address any potential biases in their algorithms, protect themselves against potential tort liability claims, and secure appropriate insurance coverage to protect against these risks.
As the work of associations involves both staff and member leaders, adopting and distributing appropriate policies governing AI usage by staff, officers, directors, and committee members is critical, as is policing compliance with such policies. Similar clauses should be built into employee handbooks and contracts with staff, contractors, and members (including agreements with volunteer speakers, authors, and board and committee members).
With careful planning and attention to these issues, associations can use ever-developing AI technology to enhance their operations, programs, and activities, better serve their members, and further advance their missions.
A version of this article originally appeared in the Summer 2023 issue of The Executive, a publication of the California Society of Association Executives.