AI Stewardship for The Fit-for-Purpose Association Board: Part II
AUTHOR’S ATTESTATION: This article was written entirely by Jeff De Cagna AIMP FRSA FASAE, a human author, without using generative AI.
As of this article’s publication date (7/18/24), there are 1992 days remaining in The Turbulent Twenties, and 167 days until this decade’s midpoint on January 1, 2025.
In Part I of this series, I reaffirmed the indisputable stewardship obligation of fit-for-purpose association boards: they must do everything they can to leave their organizations (and the related systems for which they are responsible) better than they found them for the benefit of stakeholders and successors. This clear directive includes deep and consistent collaboration with staff partners and other contributors to craft and implement a human-centered approach to AI stewardship for their organizations.
To accelerate the process of building new capacity within our community’s organizations, this Part II column will share direct connections between the three stewardship imperatives of fit-for-purpose association boards and critical considerations that must be addressed in the advancement of AI stewardship. Once again, my advice to readers is to use the “future number” continuum, along which 1 is a highly pessimistic orientation toward the future and 6 is a highly optimistic orientation, to calibrate a more dynamic “3-4” orientation while reviewing and reflecting on the content of this column.
The Three Stewardship Imperatives and AI
Throughout the month of June, I published a series of columns in this space on “The Three Stewardship Imperatives of Fit-for-Purpose Association Boards.” Taken together, these three imperatives form the foundation for individual and collective stewardship action by fit-for-purpose association boards, including with respect to the ethical, purposeful, and responsible adoption of artificial intelligence by their organizations.
- Attention as responsibility—The first stewardship imperative for fit-for-purpose boards is to regard their attention as the highest form of responsibility to the association. As I argued in that column, “[w]ithout the sustained focus of irreplaceable human cognitive capabilities to the board’s work, there can be no fiduciary duty, thoughtful judgment, or effective decision-making. In short, there can be no agency.” In the context of AI stewardship, this imperative demands the creation of an AI-specific board development framework that association boards can use to pursue a disciplined, rigorous, and shared intentional learning process of continuous sense-making, meaning-making, and decision-making around AI and its implications for their organizations, stakeholders, and successors.
- Adaptation as renewal—The second stewardship imperative for fit-for-purpose boards is to pursue adaptation as a continuous process of individual, collective, and organizational renewal. In this column, I wrote, “[t]o navigate…unforgiving conditions, fit-for-purpose boards must collaborate with their staff partners and other key contributors to prioritize the comprehensive and consistent adaptation of their organizations.” With respect to AI stewardship, adaptation as renewal must include exercising care and safeguarding the humanity of stakeholders and successors. Fulfilling this imperative demands that fit-for-purpose boards practice critical thinking and actively question AI advocates’ claims and supporting narratives to reckon with orthodox beliefs and closely examine underlying agendas.
- Anticipation as resilience—The third stewardship imperative for fit-for-purpose boards is to develop a consistent and robust capacity for anticipation to build more resilient organizations for the long term. This column presented futurist Jamais Cascio’s BANI (Brittle/Anxious/Non-linear/Incomprehensible) framework as “an invaluable resource for fit-for-purpose association boards to use as they seek to fulfill the anticipation as resilience stewardship imperative.” As a matter of AI stewardship, the anticipation as resilience imperative challenges fit-for-purpose association boards to choose the duty of foresight and build a consistent practice of learning with the future to create new capacity for assessing and navigating the long-term global impact of artificial intelligence technologies (for both better and worse) on unfolding BANI conditions.
Thinking About AI Stewardship
In Part I of this series, I argued that fit-for-purpose association boards must prioritize ethical decision-making to demonstrate their legitimacy to current stakeholders and stand up for their successors’ futures. Moreover, I shared how boards can build their capabilities for ethical thinking on all issues, including artificial intelligence, by exploring four “right vs. right” dilemmas: 1) truth/loyalty, 2) individual/community, 3) short-term/long-term, and 4) justice/mercy. This important learning work will help fit-for-purpose boards develop a strong ethical point of view in service of their overall association stewardship.
To augment their ethical preparation for AI stewardship, I invite fit-for-purpose association boards to reflect on and discuss the following six purposeful guiding provocations as they craft an approach to ethical, purposeful, and responsible AI adoption.
- AI is neither enemy nor friend—AI cannot be our enemy because it is insentient and thus lacks any understanding of the word “enemy” and the antagonism it describes. Indeed, most publicly available generative AI interfaces are designed to please human beings in order to build trust and encourage greater use. But as I shared in Part I of this series, AI is a sociotechnical force, which means the real threats come from how human beings develop and deploy these technologies, and how we interact with them. At the same time, we must remember that the choice to anthropomorphize AI does not make it our friend.
- FOMO is an illegitimate basis on which to adopt artificial intelligence technologies—The fear of missing out (FOMO) is a powerful extrinsic motivator. In the case of AI, many advocates use it relentlessly in combination with the inevitability narrative, i.e., “AI is here to stay, so you should start using it right away,” and other fear-based assertions, such as “AI won’t take your job, but someone using AI will.” By definition, however, FOMO is a purely emotional and reflexive justification for what must be a well-considered and fact-based decision made, in the case of our associations, by fit-for-purpose association boards.
- Relevance cannot be a factor in the decision to adopt artificial intelligence—Within the association community, an “emergent orthodoxy” is forming that suggests our organizations can make themselves “more relevant” in the eyes of stakeholders by using AI technologies. As with FOMO, the enduring preoccupation with establishing our short-term relevance cannot enter into the decision-making process for adopting technologies that have already caused harm and are among the riskiest ever created in human history. The long-term stakes for our successors’ futures are simply too high.
- We need more than policy to create AI governance—While implementing AI-related policies is a necessary and beneficial action for fit-for-purpose boards to take, effective AI governance requires more. Not only do fit-for-purpose boards need to set a higher standard of stewardship, governing, and foresight [SGF] performance with respect to AI, but they also need to collaborate with their CEOs to put in place the external and internal advisory and decision-making structures required to evaluate the ongoing risks and harms of AI adoption in a competent and holistic manner.
- We must not permit AI to disorient us—Writing for The Atlantic in May, Charlie Warzel closed an article about his efforts to make sense of AI with the following words: “Disorientation. That’s the main sense of this era—that something is looming just over the horizon, but you can’t see it. You can only feel the pit in your stomach.” Grappling with AI’s myriad consequences has stretched our ability to cope, but now we must choose not to allow AI to disorient us any further. For fit-for-purpose association boards, the three stewardship imperatives shift the focus from coping to intentional learning and long-term action.
- We do not have to acquiesce to AI—No matter how emphatically advocates make their claims about AI’s inevitability, our acquiescence to their demands is not required. In Part I of this series, I shared the three core elements of stewardship: agency, vulnerability, and wayfinding. These core elements live inside each of us and we can nurture them to focus, protect, and guide our decision-making on how and when to use AI. For fit-for-purpose association boards, the work of AI stewardship is a powerful expression of their commitment to put human beings first in a world of expanding technological dependence.
Next Column
In the final column in this series, I will recommend actions that fit-for-purpose association boards can take to advance their AI stewardship. Until then, please stay well and thank you for reading.
MAJOR ANNOUNCEMENT: It is a true honor to collaborate with the Association Forum of Chicagoland to present The Fit-for-Purpose Association Board Director Learning Series—the association community’s first-ever dedicated learning and development experience specifically for current and aspiring board directors—which begins in September 2024. To learn more about the Series, please visit the Association Forum site. (FYI, there is a registration fee to participate in the Series and I am being compensated for my involvement as the Series designer and instructor.)
Jeff can be reached at [email protected], on LinkedIn at jeffonlinkedin.com, or on Twitter/X @dutyofforesight.
DISCLAIMER: The views expressed in this column belong solely to the author.