Jerry Zandstra (JZ): I want to begin by thanking both of you for your time and your minds. Both of you have lived through many technological advances in learning over the past two decades, and you’ve each led multiple learning and development teams in large organizations. My first question is about the scope and scale of artificial intelligence (AI). Is this merely the next iteration of tools available to L&D professionals, or is it something bigger?
Chris Casement (CC): I would call this a paradigm shift. Many of the tools released over the last few decades have improved our profession’s ability to build engaging and interactive learning experiences. I remember using Storyline for the first time and thinking: How can this make learning more engaging and interactive? My first engagements with augmented and then virtual reality were memorable for sure, and that was close to 10 years ago. I still remember what I did in my first virtual escape room! Now that’s retention! And I think Generative AI has the potential to truly disrupt our industry and how people learn. Think of it like simulations 25 years ago: it can only scale and get easier to use.
Johnny Hamilton (JH): I would agree. The past innovations have been amazing, but they have been mostly used in incremental ways. In other words, they have moved the industry forward in small steps through new tools. AI, however, is not just an incremental change. It is currently altering not just what we can do but how we think about our roles as learning leaders. Chris is right. This is a paradigm shift. And we as learning professionals need to shift as well.
CC: Definitely. It’s a Crossing the Chasm moment for learning professionals; early adopters are going to be leading the pack in the next 2-5 years.
JZ: I think many of us use the term “AI” as all-inclusive, as if all content generated by a computer is the same. Should we be more precise in our terminology?
JH: For sure. There are important distinctions to be made between Artificial Intelligence (AI) and Machine Learning (ML). ML is actually a subset of AI and focuses on the development of computer programs that can access data and learn for themselves. Programmers build algorithms to analyze data and learn from it to make predictions or decisions.
CC: People at times are a bit afraid of AI, but there are very practical applications that many of us use every day. When you use a search engine like Google, you are engaging with predictive algorithms that optimize your search. Platforms like Netflix and Amazon use Machine Learning to feed you options tailored to your choices. Siri, Alexa, and even self-driving cars are all good examples of how voice-activated agents interact with ML to get you the right information—when and where you need it.
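The recommendation engines Chris mentions boil down to a simple idea: score the items a user has not seen by their similarity to items the user already liked. A minimal sketch in Python; the titles and ratings below are invented purely for illustration, not taken from any real platform:

```python
# Minimal illustration of ML-style recommendation: score unseen items by
# their similarity to items the user already chose. All names and ratings
# here are invented for illustration.
from math import sqrt

# Users mapped to items (1 = watched/liked, 0 = not)
ratings = {
    "alice": {"drama_a": 1, "drama_b": 1, "comedy_a": 0},
    "bob":   {"drama_a": 1, "drama_b": 1, "comedy_a": 1},
    "carol": {"drama_a": 0, "drama_b": 0, "comedy_a": 1},
}

def cosine(item_x, item_y):
    """Cosine similarity between two items' user-rating vectors."""
    xs = [ratings[u][item_x] for u in ratings]
    ys = [ratings[u][item_y] for u in ratings]
    dot = sum(x * y for x, y in zip(xs, ys))
    norm = sqrt(sum(x * x for x in xs)) * sqrt(sum(y * y for y in ys))
    return dot / norm if norm else 0.0

def recommend(user):
    """Rank items the user has not seen by similarity to items they liked."""
    liked = [i for i, r in ratings[user].items() if r]
    unseen = [i for i, r in ratings[user].items() if not r]
    scores = {i: sum(cosine(i, j) for j in liked) for i in unseen}
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # items ranked by similarity to alice's choices
```

Production systems use far richer signals and models, but the core loop is the same: learn patterns from data, then predict what fits this user.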
JZ: That’s helpful for Machine Learning. How is that different from Artificial Intelligence?
CC: AI is the bigger category and contains ML but brings in capabilities like reasoning, problem-solving, and understanding human language. To put it more simply, AI mimics human intelligence.
JH: What I find most compelling for learning professionals is Generative AI (GenAI), which uses Large Language Models (LLMs). GenAI allows users to add a perspective or frame around a concept.
JZ: What do you mean by that?
JH: I can provide a GenAI tool like ChatGPT with a specific role for the content it will be creating, the context in which it will be used, and the format in which I want to receive it. So I can be VERY specific about the literacy levels of specific learners, their experience level, the word count, and even the style in which I want it delivered to them. I can ask it to create paragraphs of text, images, video animations, slide decks, and more. And for all of these, the more context I give it, the better the results will be. This is how content and learning experiences are personalized for you. When you want a definition, you can now have both the standard version and one that applies specifically to you and your situation—both of which improve comprehension and learning outcomes.
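The role/context/format pattern Johnny describes can be written down as a simple template. Here is a hedged sketch in Python; the field names and the example wording are illustrative assumptions, not a fixed recipe, and any GenAI chat tool would accept a prompt assembled this way:

```python
# Sketch of the role / context / format prompting pattern described above.
# The labels and example wording are illustrative, not an official schema.

def build_prompt(role, context, output_format, task):
    """Assemble a structured prompt from the framing elements."""
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Format: {output_format}\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    role="You are an instructional designer for new retail employees.",
    context="Learners read at a 9th-grade level and have no prior training.",
    output_format="Three short paragraphs, under 150 words total.",
    task="Explain how to process a customer return.",
)
print(prompt)
```

The point is not the code itself but the discipline it encodes: every prompt states who the content is for, where it will be used, and what shape the answer should take.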
JZ: What frameworks exist for considering AI in learning?
CC: That is a really good question. Johnny can speak to the model he uses for knowing when to engage with new technology. In terms of creating great instruction and engaging learning experiences with AI, start with an industry-standard model, one that’s used regardless of the authoring tool: ADDIE (Analyze, Design, Develop, Implement, Evaluate). AI can help with each step, and I would argue that each step in the ADDIE model remains necessary—but we need to do it differently, and business leaders want us to do it faster! ADDIE is great for traditional modalities, but when you’re using new tools like GenAI and need a more iterative approach in your design, SAM (Successive Approximation Model) is a good fit. It creates space where you can test and get feedback on the design, which is especially useful when your design team and your learners are using it for the first time. Both of these approaches focus on developing learning solutions. But Johnny and I like to focus on more than just learning; we ask how learning can drive the business forward. We use several frameworks from My Baseline Builder (MBB) to do that. One MBB framework is Learn-Practice-Apply, which is an adaptation of the 70/20/10 approach. The emphasis here is a shift away from just the development and delivery of learning content to critical practice and applying what has been learned within the flow of your work.
JH: The idea of the Learn-Practice-Apply Framework is to get people practicing and applying the skills they’ve learned within the rhythm and flow of work. I’ve used this approach effectively in my work, both to lead teams and as a consultant. But I want to come back to something Chris mentioned. How do you know when to use AI? More specifically, how do organizations know when to begin to train people on how to use it and to make that investment? We use the MBB Framework Here, Near, and Far to explore this question. This is an approach we’ve used for almost all new technology implementations. By “Here,” I mean something that my team can already do with a known outcome and no risk or additional investment. “Near” is something that we need to put some effort into to build our capabilities, but it won’t take too long or too much of a budget. “Far” requires great investments of time, money, and effort and carries much risk, as well as potential reward. This framework is helpful for knowing what type of learning innovation you are using and where you are in the adoption of new technologies. The Here, Near, and Far Framework helps a lot with planning and skill development on our learning teams.
JZ: What are some of the best AI tools available to learning experience designers?
JH: I get asked this question a lot. I can’t imagine what my answer will be in one or two years given how many resources are pointed at developing AI tools at the moment. The previews of Google’s Gemini look amazing. But for now, ChatGPT, Bard, and Bing are the most significant tools for people in the learning space. There are many other AI-enabled tools that have excellent features, but the three I’ve mentioned are able to create content, transform content from one format to another, and do analysis on existing content.
CC: I would add Vyond, Kaltura, and Powtoon to the list for image and video creation. The first samples I saw from Vyond were not terribly impressive, but the software has improved rapidly. Tools like these can be really useful in creating drafts. It can almost be like brainstorming, where you quickly create draft animations or images, choose the direction that works best to meet your needs, and then move to create the material you need. This is how I see generative AI working in parallel with the human touch.
JZ: What are some of the dangers of AI for learning?
CC: I just mentioned one. Image creation via AI means that the software is looking at all the images available to it and pulling from them to create something based on the instructions of the user. I remember when Midjourney was doing this, and it became pretty obvious really quickly that intellectual property rights were at immediate risk. New images are being created, but they are based on the work of a lot of other people. That is why I suggested thinking about AI as a way to create drafts or brainstorm ideas. Your work, at the end of the day, should be yours. There are several current examples of this challenge. Universal Music Group is suing Anthropic, the maker of an AI assistant called Claude. When asked to write a song about the death of Buddy Holly, Claude created a nearly exact replica of Don McLean’s American Pie. Instructed to write a song about a boy moving from Philadelphia to Bel-Air, the software wrote the theme song of the television show The Fresh Prince of Bel-Air.
JH: Another danger is “garbage in, garbage out.” AI remains a tool and, like every tool, it is only as good as the person using it. It was an early mistake to think that AI could do all the work. The real skill is knowing how to use it. In the big scheme of things, AI doesn’t actually create anything. The person using it is the creator. AI remains the tool. As those who have some experience using AI already, we know and have experienced this. Content creators and end users need to develop skills for how to make the best use of the tools. It is akin to when PowerPoint or Camtasia first came out. Both tools were impressive, but the real usefulness of both resides completely in the hands of the people using them.
JZ: Does AI fundamentally change learning science?
JH: That’s not an easy question to answer. It certainly changes how learning is created. We can move through the steps of creating a learning experience more quickly because AI can handle some steps faster than we can. It also provides far more detailed and exacting information to instructional designers. AI can make iterations move much faster. But there are limits. I think a good benchmark is to expect AI in its current form to get you to about 60% of the solution you need very quickly. By having a conversation with AI, which has access to more knowledge and information than any human on the planet, you can get to a solution faster.
CC: I think AI alters our design and development processes, and as a learning leader, I think about that every day. But I’m not sure it alters us as learners—we still need more data and experiences to truly evaluate that. Keeping this practical: making more content available faster, and creating it more easily, does not necessarily mean learners are able to process it any faster. Social learning and the limits of cognition still apply. On the flip side, AI can create highly personalized learning experiences. Think of looking up a word in a dictionary. You might get ten definitions, which can be somewhat helpful. AI can take that much further by adapting the definition to your language, your reading level, and your experience level. It can also put you into your specific context and provide you with an example that fits your setting (think AR, VR, chatbots). Doing all this can improve how and where learners “experience training” by bringing learners closer to the content that they need to use in a specific context to get something done.
JZ: What are some examples of using AI in learning?
CC: I’ll start with good examples I’ve seen for learners. AI can provide simulated conversations for learners in which they can try new skills and have AI respond accordingly. This is an excellent use of AI in sales training. I’ve experienced AI that automatically adjusts learning to the literacy level of the learner from middle school to graduate school. Assessments can be built by AI so that as the learner answers one question, another is created to test the true depth of their knowledge and skill level, and then decide what path they proceed down to learn more.
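The adaptive-assessment idea Chris describes can be sketched with very little code: after each answer, the next question steps up in difficulty if the learner was correct and down if not. A minimal sketch; the question bank and four-level difficulty scale are invented for illustration (real systems would generate questions and use richer scoring):

```python
# Hedged sketch of adaptive assessment: step difficulty up after a correct
# answer, down after a miss. The question bank and levels are invented.

QUESTION_BANK = {
    1: "Define the term in your own words.",
    2: "Pick the correct definition from four options.",
    3: "Apply the concept to a short scenario.",
    4: "Analyze a complex case using the concept.",
}

def next_question(current_level, was_correct):
    """Return the next difficulty level and its question, clamped to the bank."""
    if was_correct:
        level = min(current_level + 1, max(QUESTION_BANK))
    else:
        level = max(current_level - 1, min(QUESTION_BANK))
    return level, QUESTION_BANK[level]

level, question = next_question(2, was_correct=True)
print(level, question)  # a correct answer at level 2 advances to level 3
```

GenAI changes the economics of this pattern: instead of drawing from a fixed bank, each next question can be generated on the fly at the target difficulty.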
JH: Since Chris talked about learners, I’ll address what I’ve seen and experienced among designers and developers. Some of the best uses are things like taking large amounts of source content and transforming the data into learning objectives, assessments, and other structured content. AI is a great place to begin when writing a script for an animation or a video. Again, the directives to AI need to be clear and concise, and it will only get you to a certain percentage of completion, but it is a wonderful way to ideate.
JZ: Are there any considerations for global organizations in using AI?
JH: The first consideration is legal. Different regions of the world have very different laws about the protection of property rights. At this moment, the European Union seems to be leading the way. Others are struggling to catch up. As with most innovation, the law is following the tool and right now, the gap between the two is significant. Many global companies are cautious about using AI until the law is able to catch up because they do not want to expose themselves to litigation. At the same time, those companies worry about being left behind in the AI revolution, knowing that we are very early in this new game.
CC: I see real opportunities to be more inclusive in our language—what a lot of people call localization. AI can localize content much more cheaply and quickly than developers could. Imagine a single learning experience in which the language is translated and localized, the characters in the animation look like the learners, and the background and foreground in the animation look like an office setting in a specific culture. Designers and developers are still heavily involved, but AI can do a lot of the heavy lifting in creation.
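The localization workflow Chris outlines is essentially a fan-out: one source asset becomes a set of per-locale variants. A minimal sketch; the locales, the asset fields, and the stubbed translation step are illustrative assumptions (a real pipeline would call a GenAI or translation service where the placeholder tag is applied):

```python
# Sketch of localization as a fan-out: one source asset, many locale variants.
# The asset fields and locales are invented; the tagging below stands in for
# the AI translation/adaptation step so the structure is runnable.

SOURCE_ASSET = {
    "script": "Welcome to your first day on the team.",
    "setting": "open-plan office",
    "characters": ["manager", "new hire"],
}

LOCALES = ["en-US", "es-MX", "ja-JP"]

def localize(asset, locale):
    """Produce a per-locale copy of the asset; a real pipeline would replace
    the placeholder tag with an AI translation/adaptation call."""
    variant = dict(asset)
    variant["locale"] = locale
    variant["script"] = f"[{locale}] {asset['script']}"  # placeholder for AI output
    return variant

variants = [localize(SOURCE_ASSET, loc) for loc in LOCALES]
print(len(variants))  # one variant per locale
```

The design point is that designers review each variant rather than rebuilding each one from scratch: the human effort shifts from production to quality control.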
JZ: Johnny and Chris, I am grateful for your willingness to share your thoughts and experiences. I can only imagine the conversation we will be having in a few short years. I suspect some of what we think and anticipate now will be very different from reality. I’m excited about the things to come.
JH: Jerry—it was great to be here.
CC: Thanks for having us, I look forward to our next conversation.