
Wow, there has been a lot of interest in the last few months in learning technologies and AI as we look toward learning analytics. In this session, we will explore some real-life highs and lows of testing various products to create a winning formula for learning analytics in your L&D function.
Are you looking for ways to engage your audience and show the impact of your training? Look no further! Mallori Steele and Tino Simon will share five simple yet effective tips for engaging your learners and determining the impact of your training.
We talk a lot about measurement and hear businessy terms like “ROI” and “KPIs” – cool-sounding terms that make us sound legitimate in the business world. But there is one problem: most of it is smoke and mirrors. Most organizations don’t measure anything, and those that do often measure the wrong things. What’s more, they rarely include L&D in the conversation about what and how to measure anything at all. So, what do we do?
Learning analytics are continually on the rise, but organizations often struggle with what should be measured. In L&D, learning activity is commonly measured to gauge the effectiveness or impact of training. But are the metrics collected from analytics sufficient to determine training effectiveness or areas for improvement? Is measuring satisfaction enough to tell us whether training worked? In fact, both activity and satisfaction measures have limited value in demonstrating the effectiveness of a learning experience.
In this session, we will discuss key terms and research on training evaluation and learning analytics. You will leave the session with a research-based proposed approach to training evaluation and a basic understanding of the drawbacks of popular measures such as the Net Promoter Score (NPS), the Kirkpatrick model, and learner satisfaction surveys.