AI Stewardship for The Fit-for-Purpose Association Board: Part III
AUTHOR’S ATTESTATION: This article was written entirely by Jeff De Cagna AIMP FRSA FASAE, a human author, without using generative AI.
As of this article’s publication date (8/2/24), there are 1977 days remaining in The Turbulent Twenties, and 152 days until this decade’s midpoint on January 1, 2025.
In the first two parts of this series, I have 1) reconfirmed the stewardship obligations of fit-for-purpose association boards to be pursued in concert with staff partners and other contributors, 2) emphasized the need for fit-for-purpose boards to prioritize ongoing preparation for ethical decision-making in all contexts, including AI, 3) made direct connections between the three stewardship imperatives of fit-for-purpose association boards and the advancement of AI stewardship, and 4) identified six critical guiding provocations for fit-for-purpose boards to consider as they craft an approach to ethical, purposeful, and responsible AI adoption with care.
In Part III, I want to ensure that fit-for-purpose directors and officers, staff partners, and all association stakeholders can forthrightly discuss the stakes of the AI conversation for our community and our world. As this series draws to a close, my advice to readers remains the same: use the “future number” continuum, along which 1 is a highly pessimistic orientation toward the future and 6 is a highly optimistic orientation, to calibrate a more dynamic “3-4” orientation while reviewing and reflecting on the content of this column.
Understanding the Stakes: Doom, Catastrophe, and Damage
In this series, I have argued that artificial intelligence is a disruptive sociotechnical force that increases the instability of a BANI (brittle, anxious, nonlinear, and incomprehensible) world and makes nearly every human being on Earth more vulnerable. While the experience of vulnerability in some measure is intrinsic to the human condition, the aggressive, hype-driven rhetoric of many AI advocates unnecessarily (and perhaps sometimes deliberately) makes matters worse. To develop a clearer point of view on the danger present in our current context, fit-for-purpose boards should consider three levels of AI threat.
- Doom—My use of the term “doom” refers to the Silicon Valley construct “p(doom),” a numerical shorthand (quantified on a 0-100 scale) that expresses a person’s level of concern about a human extinction event caused by AI. “P(doom)” stands for “probability of doom” or, as Clint Rainey put it in a Fast Company article published late last year, “P(doom) offers an unequivocal way to show where you stand on [the] pressing existential question” of whether AI eventually will become powerful enough to escape our control and kill us all. For a significant number of AI advocates, including some of the field’s most prominent figures, p(doom) is a non-trivial concern. In my view, while p(doom) is much closer to zero than 100 right now, we cannot totally disregard AI’s long-term threat to humanity.
- Catastrophe—In August 2023, noted AI skeptic Gary Marcus published a Substack essay in which he argues that instead of p(doom), we should focus on the greater possibility of “p(catastrophe) – the chance that an incident…kills (say) one percent or more of the population…” According to Marcus, “[c]atastrophe is a lot easier to imagine. I can think of a LOT of scenarios where that could come to pass (e.g., bad actors using AI to short the stock market could try, successfully, to shut down the power grid and the internet in the US, and the US could wrongly blame Russia, and conflict could ensue and escalate into a full-on physical war).” The recent CrowdStrike outage, which was caused by the failure of cybersecurity software and was not specifically about AI, is nonetheless a good illustration of why vigilance for potential AI-related catastrophe is essential.
- Damage—Extrapolating from the previous points, I believe we must all focus our attention on p[damage], which I define as the chance that the adoption of AI technologies will inflict serious and lasting harm on human beings, something we know is already happening. Writing for Scientific American last year, Alex Hanna and Emily Bender argue that rather than existential threats, we must first concentrate on preventing real-world harms:
“End-of-days hype surrounds many AI firms, but their technology already enables myriad harms, including routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.”
Hanna and Bender’s list only scratches the surface of AI’s current detrimental impact, and it does not include AI’s increasingly substantial and serious environmental effects. My continuing concern, however, is that we are now deeply conditioned to accept every technological advancement as an absolute good, with the accompanying expectation of immediate implementation. This reflexive response may be overwhelming our ability to recognize and accept how unconsidered AI adoption is now inflicting potentially irreparable damage on the human beings whose lives and livelihoods these technologies alter for the worse.
The Focus of AI Stewardship
To guide their reflection on the perspectives and provocations shared in this series, I want to challenge fit-for-purpose association boards to focus their AI stewardship on the introduction of beneficial friction that slows the pace of AI adoption in their organizations to ensure it is ethical, purposeful, and responsible today and for the future. No matter where an association is right now with AI, the board must take this action. When it comes to AI, fit-for-purpose boards must reject the dangerous and self-serving Silicon Valley mantra to “move fast and break things” and fully embrace the wisdom of Dr. Joy Buolamwini, author of Unmasking AI: My Mission to Protect What Is Human in a World of Machines, to “move slow and fix things.” The immediate implementation of the following three human-centered practices is a good place to start:
- Informed consent—In the context of AI, fit-for-purpose association boards must view the informed consent of stakeholders as non-negotiable, requiring more than an opt-in checkbox on a web form or a digitally signed license agreement. Indeed, the emphasis must be on confirming that both words receive proper attention and action. For example, “informed” should begin with providing AI ethics literacy training to every stakeholder and making the successful completion of that training a prerequisite for contribution to the association’s AI-related work. “Consent” demands that all association stakeholders have a voice in shaping the direction of AI initiatives, especially with respect to determining prohibited and unacceptable use cases that might undermine data protections, violate personal privacy, or create other forms of harm.
- Intentional relationships—Fit-for-purpose boards know that their associations must enter into business arrangements with AI technology and service providers for the development of AI models, access to computing power, and other necessary support. The board’s full commitment to ethical, purposeful, and responsible AI adoption makes it vital for the identification and framing of these relationships to be intentional in every respect. Ahead of the ASAE AI Summit earlier this year, I published a LinkedIn article in which I shared six questions for association decision-makers to pose to the technology company representatives who were speaking during the event. These questions, listed below, remain on point as a basis for fit-for-purpose association boards to work with staff partners in the wise selection of providers, in a manner consistent with both their fiduciary responsibilities and the duty of foresight:
1) What is your company’s official policy on the ethical, purposeful, and responsible development/adoption of AI technologies?
2) What specific actions has your company taken to fulfill the requirements of that policy?
3) What specific expertise and support will your company provide to ensure our association remains focused on its pursuit of the ethical, purposeful, and responsible development/adoption of AI technologies?
4) How will your company support our association’s commitment to full disclosure and transparency in documenting and reporting on our AI activities?
5) What do you regard as your company’s biggest substantive weakness with respect to the ethical, purposeful, and responsible development/adoption of AI technologies, and why?
6) What is your company’s most significant fear regarding the detrimental or destructive use of AI technologies in both associations and society overall, and why?
- Integrated transparency—As the questions above suggest, a crucial element of effective AI governance is creating the highest level of transparency in every aspect of how associations pursue their AI work. Integrated transparency cannot be an afterthought: it is inherent to both informed consent and intentional relationships as all contributors, decision-makers, partners, and stakeholders work together to render their AI endeavors fully visible and subject to necessary and careful scrutiny. Wherever and however AI is being deployed in their associations, fit-for-purpose boards must establish a comprehensive and holistic policy framework that requires the integration of internal documentation and external reporting measures to create full transparency for ongoing board monitoring and decision-making.
We Must Get This Right
The stakes of AI adoption for the association community could not be higher. At this moment, the probability of doom feels remote, but cannot be ignored, while the possibility of catastrophe feels unlikely, but cannot be dismissed. It is the reality of damage, however, that must be front and center in the thinking of fit-for-purpose boards as they work to introduce beneficial friction into their associations’ pursuit of AI and steward their associations toward ethical, purposeful, and responsible AI adoption today and for the future. On behalf of today’s stakeholders and tomorrow’s successors, we must get this right.
Next Column
The Duty of Foresight column will return in late September. Until then, please stay well and thank you for reading.
MAJOR ANNOUNCEMENT: It is a true honor to collaborate with the Association Forum of Chicagoland to present The Fit-for-Purpose Association Board Director Learning Series—the association community’s first-ever dedicated learning and development experience specifically for current and aspiring board directors—which begins in September 2024. To learn more about the Series, please visit the Association Forum site. (FYI, there is a registration fee to participate in the Series and I am being compensated for my involvement as the Series designer and instructor.)
Jeff can be reached at [email protected], on LinkedIn at jeffonlinkedin.com, or on Twitter/X @dutyofforesight.
DISCLAIMER: The views expressed in this column belong solely to the author.