I’ve been adding AI and marketing automation topics to my presentations since February 2018, getting a lot of strange looks, blank stares, and odd expressions. That could be a reaction to my presentation style, or it could be something else. I’m going to go with something else: the daunting topic of what AI will do to our work and lives.
According to Matt White, an AI expert, researcher, and educator: “Generative AI will be the most disruptive technological innovation since the advent of the personal computer and the inception of the Internet with the potential to create 10s of millions of new jobs, permanently alter the way we work, fuel the creator economy, and displace or augment 100s of millions of workers in roles from computer programmers to computer graphics artists, photographers, video editors, digital marketers and yes, even journalists.”
The recent upheaval at OpenAI, the American artificial intelligence research organization known for developing ChatGPT, is evidence that this technology is advancing faster than anticipated and that we must be prepared for it. If you delve into the details of OpenAI’s advanced mathematical project Q* (pronounced “Q Star”), you’ll see that this is the case. Your staff teams are already using it more than you know, and the industries and professions that associations represent will be impacted dramatically in the months to come. Associations must be educated and trained to support their members and provide the services needed to adapt to this transformative technology. So, let’s break it down.
In 2021, I started my education and training with the Marketing AI Institute, taking the Piloting AI for Marketers course, which taught me the Use Case approach to testing and adopting AI tools. We at Bostrom had already done a ton of work on our culture to build an open, safe-to-fail, agile environment, knowing that digital transformation would come at a swift pace. I am so grateful for the work environment and the openness our entire team has shown in embracing it. So now, with all the amazing, fascinating, and frightening AI tools and content coming at us, we took a step back, took a deep breath, and landed on these five pragmatic steps to stay calm, sane, and prepared on this new journey of transformation.
I must acknowledge the Marketing AI Institute, Ethan Mollick, many YouTube videos, and current research for the approach below.
Step 1: Educate and Train Your Entire Staff
Everyone in your association will need education and training on the power and risks of using AI tools like ChatGPT and the thousands of others emerging daily. Every platform we work on today – Microsoft 365, Google, Canva, etc. – has already incorporated AI and will ship even more powerful capabilities in future releases. It is imperative that you educate and train everyone – now. The hardest part is overcoming fear and moving people through the standard stages of adopting new technology: denial, anger, anxiety, exploration, and acceptance.
Step 2: Create a Cross-Functional AI Council
Bostrom established a Digital Transformation Task Force in 2019, which has since been reinvented as our company’s AI Council. This group of department heads and association management team members builds Use Case scenarios and tests AI tools to better understand the power of AI in our work. It also creates and oversees the policies and procedures described in the next step.
Step 3: Develop Guiding Principles, Policies, and Procedures
You don’t need to start from scratch here. We found existing resources and adapted them to our needs, building “Guiding Principles for AI Use at Bostrom,” which defined the ethical, legal, and foundational principles on which we based our policies and procedures. These guiding principles addressed Data Collection & Use, Bias Mitigation, Enhanced Human Supervision, Risk Management, Compliance, Continuous Improvement, and Client Responsibility.
Our “Policies” document then defined our actual assessment, adoption, and use of AI tools for both Bostrom and our clients. It focused on Transparency and Accountability; Fairness and Non-Discrimination; Safety, Reliability, and Sustainability; Privacy, Data Protection, and Intellectual Property Protection; Human Accountability and Governance; Staff Knowledge and Capabilities; and Continuous Monitoring & Improvement.
Finally, our “Processes and Procedures Guide” defined the practices to put in place to meet the expectations established by our policies. This guide focused on Staff Education, Reporting/Tracking, Monitoring, Partner Relationships, and Tools Assessments.
Step 4: Conduct Regular Impact Assessments
AI systems should be reviewed periodically to ensure that their benefits outweigh the inherent risks and the implicit and explicit costs, all of which should be outlined in your Use Case scenarios. The Continuous Monitoring & Improvement policy noted above should be operationalized both for internal association management work and, where possible, for association members’ use of these tools.
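For teams that like to make this concrete, here is a minimal, hypothetical sketch (in Python) of how a periodic Use Case review could be scored. The field names, scores, and thresholds are illustrative assumptions for discussion, not Bostrom’s actual template or policy.

```python
from dataclasses import dataclass

@dataclass
class UseCaseAssessment:
    """One AI Use Case reviewed during a periodic impact assessment (illustrative only)."""
    name: str
    benefit_score: int   # e.g., value delivered or hours saved, scored 1-10
    risk_score: int      # privacy, bias, and accuracy concerns, scored 1-10
    monthly_cost: float  # explicit costs: licenses, training, oversight time

    def recommendation(self, budget: float = 500.0) -> str:
        """Flag use cases whose risks or costs outweigh their benefits."""
        if self.risk_score >= self.benefit_score:
            return "pause and escalate to the AI Council"
        if self.monthly_cost > budget:
            return "review cost against budget"
        return "continue and monitor"

# Example quarterly review with two hypothetical use cases
assessments = [
    UseCaseAssessment("Draft member newsletters", benefit_score=8, risk_score=3, monthly_cost=120.0),
    UseCaseAssessment("Summarize board minutes", benefit_score=5, risk_score=7, monthly_cost=0.0),
]
for a in assessments:
    print(f"{a.name}: {a.recommendation()}")
```

However you track it, the point is the same: each Use Case gets a named owner, a benefit-versus-risk judgment, and a scheduled re-review rather than a one-time approval.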
Step 5: Establish and Integrate an AI Roadmap
One key goal for the AI Council you create is to ready your association and its workforce for what’s to come. Think about the cultural and organizational changes you must make and how those align with your core values and mission.
The times ahead will be bumpy, but stay focused on your association’s purpose and remain open to the possibilities and to the changes needed to leverage AI with human supervision and influence. Links to some of my favorite resources are below, and we will post much more on this topic, as pragmatically as possible.
Bostrom’s AI Use Case Template
Access Google Sheets Version | Access Microsoft Excel Version