• Individual associations—As I have written previously, emotion-based rationalizations such as the fear of missing out (FOMO) and our community's enduring preoccupation with relevance cannot be factors in determining any association's use of GenAI technologies. The decision-making processes underlying the ethical, purposeful, and responsible adoption of every AI technology that associations deploy must be fully legitimate and beyond reproach. While many associations have helpfully put GenAI usage and other policy elements in place over the last two years, much more work is required. Associations need to create robust AI governance practices and structures that enable ongoing collaboration among their boards of directors, CEOs and staff partners, AI technology providers, and other internal and external contributors. There are three central questions on which these multi-stakeholder AI governance systems must focus their attention at all times:
1) What are the actual, unconsidered, and unintended consequences, risks, and harms of GenAI adoption for our association and its stakeholders?
2) What uniquely beneficial and purposeful impact can GenAI deliver to our association and its stakeholders?
3) What responsible actions will we take in every aspect of AI adoption to ensure that we realize our ethical and purpose-based priorities?
When it comes to GenAI, anything less than the highest level of vigilance from our community's staff and voluntary decision-makers is unacceptable.

• The fields that associations serve—Developing durable and effective AI governance within organizations across the association community will help them function as vocal champions for more holistic discussions about GenAI's actual and potential impact on the industries, professions, and fields they represent and serve. Through sustained activity within their industry and professional ecosystems, associations can renew their historical role as convenors of high-stakes conversations, with the clear intention of helping to determine, and build support for, actionable principles, next practices, and other requirements for ethical, purposeful, and responsible GenAI adoption that is centered on human beings instead of technology. One specific and invaluable contribution associations may be able to make toward framing a different kind of GenAI adoption dialogue within their fields is identifying "confidence-building measures," a term borrowed from the language of diplomacy. In this context, the term refers to mutually beneficial agreements or actions that can help dispel misunderstanding, nurture greater solidarity, and create a more trusted context for long-term coordination, cooperation, and collaboration among diverse stakeholders who might otherwise remain committed to advancing their short-term, self-interested AI agendas.

• Our country and our world—We must appreciate that, for now, the actions taken by individual associations (and our community) to strengthen AI governance and unite stakeholders behind the pursuit of ethical, purposeful, and responsible AI adoption will constitute nearly all of the guardrails our organizations will have available when (inevitably) something goes terribly wrong. This is true because, at the moment, the United States has no comprehensive national regulatory protections in place for AI.
While President Biden issued an executive order on AI last year and there has been considerable AI-related activity in state legislatures, action in Congress has stalled, and the outcome of the US presidential election (still unknown at the time this article was written) may well determine whether there is any kind of national AI regulation in the near future. The European Union's AI Act, which was adopted earlier this year and will come into full effect by 2026, will have some impact on the major US-based AI creators, but it does not offer the US population the same level of protection that domestic regulation would provide. Under these circumstances, the message for our community is clear: associations must use their advocacy capabilities to work for the passage of smart and effective AI regulation at the local, state, and federal levels. Contrary to the widely held orthodox belief that regulation impedes innovation, a thoughtful and well-crafted regulatory framework can be a critical catalyst for more beneficial AI innovation by ensuring that it serves the interests of humanity.

We Cannot Wait Any Longer

Since before it arrived, I have referred to this decade as The Turbulent Twenties, a descriptor that has proved to be on point and then some. Within 60 days, the second half of an already-fraught decade will begin, and the many concerning ways in which the future of artificial intelligence might unfold over the next 60 months and beyond will have a direct and formidable impact on the futures of associations, the fields they serve, our country and our world, and the well-being of our stakeholders and successors. During this unsettling period of transition, associations must reclaim their agency to shape those futures by acting decisively to reorient our community's current GenAI conversation toward a focus on ethical, purposeful, and responsible adoption. We must act right now. We cannot wait any longer.

Jeff De Cagna AIMP FRSA FASAE, executive advisor for Foresight First LLC in Reston, Virginia, challenges association boards and CEOs to collaborate on ethical, purposeful, and responsible AI adoption. Jeff has earned credentials in AI and AI ethics from Competent Boards, Diligent Institute, the Institute of Artificial Intelligence Management, the London School of Economics, Stanford University, and the University of Michigan. In 2019, Jeff became the 32nd recipient of the Academy of Leaders Award, ASAE's highest individual honor awarded to consultants or industry partners for outstanding contributions to the association community. Jeff can be reached at [email protected], on LinkedIn at jeffonlinkedin.com, or on Twitter/X @dutyofforesight.

DISCLAIMER: The views expressed in this column belong solely to the author.