Episode 01: How Organizations Can Measure Progress in Data & AI Literacy

Listen on Spotify, Apple Podcasts, YouTube, or wherever you get your podcasts!

How do you know if your data and AI literacy program is actually working?

In this inaugural episode of The Data Literacy Show, hosts Ben Jones (CEO of Data Literacy) and Alli Torban (Senior Data Literacy Advocate) tackle one of the most pressing questions for organizations starting their data and AI literacy journeys: How do you measure success?

You’ll discover:

  • The difference between objective and subjective assessments—and when to use each.
  • How maturity models and progress tracking help you map out and achieve your goals.
  • The three pillars of measurement—quantity, quality, and perception—and practical ways to monitor each.

Ben and Alli also share real-world insights from their experience training thousands of people and helping organizations launch their data literacy programs.

📊 Ready to measure your team’s data and AI literacy? Tune in for key takeaways and details about the re-launch of our Data Literacy Score team-based assessment.

Subscribe now to never miss an episode, and let’s make data accessible for everyone!

What’s our Data Literacy Score?

The Data Literacy Score™ is an assessment that has helped organizations evaluate their data maturity level from the perspective of those who know the organization best: the employees themselves. Each team member takes our comprehensive survey, which consists of 50 Likert-style questions across seven categories. We aggregate the responses to calculate an overall organizational score out of 500, with options to segment by role or team. The survey also includes open-ended feedback and multi-select questions about top strengths and barriers. The final output is a thorough report and a set of actionable recommendations for improving data literacy.
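If you want to prototype a similar survey roll-up in-house, here is a minimal sketch of one way the aggregation could work. The field names, the 0-10 item scale, and the sum-of-item-averages rule are illustrative assumptions for the example, not the actual formula behind the Data Literacy Score™.

```python
from statistics import mean

# Hypothetical respondent data: one dict per employee, mapping question ids
# to Likert ratings on an assumed 0-10 scale. All names are illustrative.
responses = [
    {"role": "analyst", "team": "finance",
     "ratings": {f"q{i}": 7 for i in range(1, 51)}},
    {"role": "manager", "team": "marketing",
     "ratings": {f"q{i}": 5 for i in range(1, 51)}},
]

def org_score(rows):
    """Average each of the 50 items across respondents, then sum the averages.
    With 50 items on a 0-10 scale, the maximum possible score is 500."""
    question_ids = rows[0]["ratings"].keys()
    return sum(mean(r["ratings"][q] for r in rows) for q in question_ids)

def segment(rows, key):
    """Group respondents by a field such as 'role' or 'team' and score each group."""
    groups = {}
    for r in rows:
        groups.setdefault(r[key], []).append(r)
    return {name: round(org_score(members), 1) for name, members in groups.items()}

print(round(org_score(responses), 1))  # overall score out of 500 (here: 300.0)
print(segment(responses, "team"))      # e.g. {'finance': 350.0, 'marketing': 250.0}
```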

Read more about The Data Literacy Score™, and schedule a time with our COO Becky Jones to see how we can help!


TRANSCRIPT:

Alli Torban 0:02
Welcome to The Data Literacy Show, the podcast that helps organizations build, measure, and level up their data and AI literacy.

Ben Jones 0:10
I’m Ben Jones, co-founder and CEO of Data Literacy, where we are on a mission to help you learn the language of data and AI through tailored trainings, assessments, and more.

Alli Torban 0:22
And I’m Alli Torban, the Senior Data Literacy Advocate here at Data Literacy.

Ben Jones 0:26
Wait, more than that, though, right? You’re also the host of this show, plus another one called Data Viz Today. So I’m actually super excited right now, because I’ve always wanted to start a podcast, and we work with Alli, who is, of course, one of the best in the business, so she’s going to help me learn the ropes. And so this is episode one. I’m really excited, a little nervous.

Alli Torban 0:49
Our inaugural episode, here we go podcasting. So I have been bringing data viz to the podcasting waves, and now we’re bringing data literacy to the podcasting waves. We’re very excited. This first episode is about something we are very passionate about, which is measurement: how can organizations measure progress in their data and AI literacy programs? It’s really tough, and there’s no question that every org needs to increase data literacy for their employees. But how do you know if your data literacy program is working? What should you be measuring? What are some of the early signs of success, and what are some of the signs you should be looking for to give you an indication that you need to pivot, right?

Ben Jones 1:31
So, you know, it can kind of feel like a lot, maybe, to navigate. Over the past five years, we’ve been conducting a specific kind of assessment, and we’ve been doing a lot of trainings as well. And some of the organizations we’ve been partnering with have been launching their own internal data literacy goals. So along the way, we’ve noticed some clear patterns among teams that succeed, and how they measure success and progress.

Alli Torban 1:58
We’re going to share all that with you today. So let’s get into it. Okay, so, Ben, let’s start with a hypothetical. Let’s say I am about to launch a data and AI literacy program at my organization. I probably need to start by getting some sort of baseline of what my team knows and doesn’t know, right?

Ben Jones 2:18
Right. You know, it’s one of the earliest things you want to do, maybe not the first thing. I think it helps to also, you know, establish a burning platform or a case for change, and to already start to align influential supporters on the inside. But really early on, before you go very far into designing any kind of training program or any kind of initiatives, you want to measure your baseline, your current state. There are a few ways to do that, and we like to think of it sort of like lenses. You’re looking at your organization, think of it as being on a petri dish, and you have a few different lenses to look at that dish to get a better sense of what’s in there. One of the lenses would be an objective assessment, right, where you’re giving people right-and-wrong questions to answer, to figure out where their strong suits and their weaknesses are in terms of, you know, what they know. So, skills based, knowledge based, we’ve got quizzes, we’ve got competency frameworks and things like that. We want to be careful with those, because people can get pretty nervous about tests. You know, test anxiety is real. So you want to, pretty early on, assuage them that you’re not trying to rank them, this isn’t about their performance, maybe even make it anonymous, so that you simply assess where you are as an organization. What’s really great about that, too, is that you can now use it to figure out where some specific training needs are the most poignant. Maybe there are some topics that your own team members seem to struggle with, in terms of getting a lot of the answers wrong to those kinds of questions. So that’s what’s useful about objective assessments, I think. And then, of course, the other side of that would be a more subjective assessment, where you simply, sometimes, just have conversations with people, ask them what’s working well, open-ended kinds of questions. You know, listen carefully, but then eventually you probably want to send out some kind of more formalized survey, where you try to gauge people’s thoughts and feelings about how well data and AI is being used in the organization today. So that could be a survey, it could be some kind of a form or an interview, and you’re looking at the subjective perspectives of the organization. Some might think that, because it’s subjective, it’s not valuable. I don’t agree at all. I think it’s really useful as an executive to keep your finger on the pulse, you know, of what people think and how they feel about things. Also, as you ask them these kinds of questions, you’re going to learn a lot about the blockers and the barriers. And so this can be super useful to get early on, and then you can begin to design programs that address some of those exact barriers that they identify in this early current-state measurement process that you go through.

Alli Torban 5:06
Yeah, I can see how getting the objective assessment and the subjective assessment together would give you a good view of the baseline knowledge of everyone on your team. Like, with the objective assessment, you might use a particular tool on your team, and maybe there’s one superstar who’s helping everybody solve all the challenges with the tool. But you don’t really realize that until you have an objective assessment, and each person is taking the test, and you realize, oh, maybe I need some more formal training with this particular tool, because I didn’t realize this one person is like a 10 out of 10 and everyone else is being helped along.

Ben Jones 5:44
Yeah, that’s a great point. That would be an example of, like, an individual act of heroism, you know, on the part of one data superstar, like you mentioned. But that might be masking an overall team deficiency. And you definitely want to, as you increase in maturity, make some of these skills and the knowledge that goes along with them more systemic. Not everyone has to be great at everything, we’ll be clear about that. But at the same time, you don’t want to depend on a small number of people to do all the work for everyone else. You really want to make sure you lift their abilities as well. So you’re right, the assessment might uncover some weaknesses that are hidden under the surface, where you may have been noticing things are going well. But again, it could be on precarious ground, and all it takes is that person to leave the organization, or even shift roles, and all of a sudden the performance just falls off, right? So you want to look for those kinds of threats, I guess, or weaknesses that are on the team but maybe not obvious, right?

Alli Torban 6:50
And when there’s this low level of maturity with a particular tool and one person is pulling the team along, it might be easy to assume that nobody on the team wants to be trained. But when you have the subjective assessment and you’re getting people’s opinions, maybe it surfaces that, hey, I want to be trained, but I don’t feel like I’m getting any opportunities to do that. So a subjective assessment would bring that to light.

Ben Jones 7:21
It’s a really common sentiment we hear all the time when we assess organizations and work with them. When we ask for this subjective lens, we find that, you know, lack of availability of good data training is right up there as one of the top barriers that people identify. And of course, this is with organizations that are coming to us, selecting themselves, really, out of all of the organizations that exist, to say, hey, we want to improve. They’re probably doing so early because they have yet to roll out training, so there’s also that to consider. But certainly for the organizations that we’ve been working with around the world, and this could be nonprofits, for-profits, government agencies, we hear this all the time. You know, there’s just a lack of great data training, and they’re asking for it. And sometimes it can be helpful to allow them to say that, reflect it back to them, and then meet that need with something that actually provides what they’re asking for, right?

Alli Torban 8:19
If anybody knows data at all, you know that subjective assessments can be really tough to analyze. Can you just give an idea of how we do that with our Data Literacy Score, just so people understand what an easier way to analyze subjective assessments would be?

Ben Jones 8:41
Sure, yeah. So we actually have 50 scoring statements that we put out there. The first one is like an overarching statement, basically gauging how they feel about things overall. When I say things, I mean how effectively the organization is leveraging data today. And then the next 49 out of 50 are seven questions in each of seven key categories. Those seven categories are: purpose, where you try to understand if data is moving the needle or if it’s for side projects, we’ve all seen that. Number two would be ethics: are we using data in helpful ways rather than harmful ways? There’s a category dedicated to just the data itself: the data quality, the freshness, timeliness, etc. There’s a technology category, where we start talking about the tools and the platforms and how they’re working together. Then we get into the people side of it, the knowledge and skills of the team members. And then we talk about a category that’s sometimes forgotten, I think, but it’s very important, which is process: are our repeatable processes actually leveraging data, or are they tuning it out? You can have the most powerful software and great data in the world, but if you don’t update your processes to leverage it, then you might actually be setting everyone up for a lot of frustration. So the process angle is very important. And then lastly, we talk about culture. You know, are we setting up the kind of environment here that’s rewarding, enabling data wins, setting up little communities where people can share and kind of compare notes? So those are our seven categories, and there are seven scoring questions in each of those: between zero and 10, how well do these statements apply to me and my team? And then we average that out, and we show them where some of the high and the low points are. So that’s our scoring portion. We have an AI component we’ve just recently launched, which also adds some additional questions to some of those same seven categories, to try to get a similar look into the AI literacy of the team. But that’s kind of the biggest thing, kind of like the backbone of the subjective portion of our assessment.
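As a rough illustration of that scoring portion, here is a small sketch that averages 0-10 ratings within each of the seven categories Ben lists and surfaces the highest and lowest scoring ones. The data structure and the numbers are made up for the example, not the actual survey data.

```python
from statistics import mean

# The seven category names come from the episode; the rating structure and
# numbers below are purely illustrative (seven 0-10 statements per category).
CATEGORIES = ["purpose", "ethics", "data", "technology", "people", "process", "culture"]

ratings = [  # one dict per respondent
    {"purpose": [8, 7, 9, 8, 7, 8, 9], "ethics": [9] * 7, "data": [5] * 7,
     "technology": [4] * 7, "people": [6] * 7, "process": [5] * 7, "culture": [6] * 7},
    {"purpose": [7] * 7, "ethics": [8] * 7, "data": [4] * 7,
     "technology": [3] * 7, "people": [6] * 7, "process": [4] * 7, "culture": [7] * 7},
]

# Average every statement in a category across all respondents.
category_avgs = {
    cat: round(mean(score for person in ratings for score in person[cat]), 1)
    for cat in CATEGORIES
}

# Surface the high and low points, as described in the episode.
print(category_avgs)
print("strongest:", max(category_avgs, key=category_avgs.get))
print("weakest:", min(category_avgs, key=category_avgs.get))
```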

Alli Torban 10:49
Yeah, that gives a good idea of some of the things you should be including in a subjective assessment. And there is a question for just free-form typing, right? And yes, that’s important, because you need to give people a chance to say whatever is on their mind, and then read the responses and see if there’s any pattern. So only a human can do that.

Ben Jones 11:11
Yeah. So after the scoring statements, we actually let them choose three barriers from a list of 17 common barriers, and three strengths from a list of 17 common strengths. And then we show that back to the leadership and say, you know, here are the barriers that are commonly being chosen by your team members. And then, to your point, the last part of the subjective assessment is to say, tell us your ideas for making things better. So now it’s like the virtual idea box. You know, we come back with some summaries and themes from the suggestions and recommendations of the team members themselves. And of course, they aren’t all data experts, they don’t all have the silver bullet to know exactly what to do, but they do have a lot of key insights about where the pain points are. And so that can be super useful when you get into the beginning stages of a data literacy program, to say, okay, you know, what’s frustrating people, and what can we do to alleviate that?

Alli Torban 12:07
Yeah, and we’ll share, in a minute, some of the top pain points. We’ll share some of our data on what we found people’s top pain points are, so you know what to be on the lookout for. But before we get to that: so once I get my baseline, and I’m doing a mix of objective and subjective assessments, how often do you think I would need to be reassessing my team?

Ben Jones 12:29
I think it’s pretty much yearly. If you do it more often than that, you’re risking survey fatigue. It is not a small survey, you know, it takes people about 12 to 15 minutes to complete, so that’s a little bit meaty. So I wouldn’t do it more than yearly, I really wouldn’t. There are some situations where you may want to push it to 18 months or two years, although two years, in today’s world, will feel like a lifetime. There will be turnover in that time, right? Which can be good, because then you get different people’s subjective opinions, but at the same time, it’s a lifetime. I mean, so many things are changing nowadays, even just within a year. I probably wouldn’t wait much longer than a year to come back, redo it, and see if you’ve made some progress and moved in the right direction.

Alli Torban 13:18
Yeah. And when we’re reassessing, and we’re able to look at a baseline and a reassessment, how should we be tracking our progress with that?

Ben Jones 13:30
Well, yeah. So what we do is we say, here’s your score, right? You want to do that somehow, whether you use our assessment or not. Of course, many of the listeners are not going to do that, and that’s great, you can make your own. I’m just saying it is nice to have one top-level metric that sort of bubbles everything up to a single maturity level. You know, here is kind of where we are. And what’s nice is, since we’ve been running this for five years, we also get to tell people how they compare, you know, with other organizations out there. So that’s more of a benchmark, not just a baseline: okay, how well are we doing, and how well are we doing relative to other organizations? But I do like to communicate a single maturity stage or level, and for us that’s broken up into quintiles, 20-percentile groups, you know, from zero to 20, 21 to 40, and on up to 100. And so we say, kind of, here’s where you are. We put those different stages together, and then what we’re trying to say to them is, if you want to get to the next maturity stage, here are some of the things that you might want to address in year one, based on our recommendations, based on what we’re hearing from the team members themselves, and where we see some major issues popping up. So the high level is great, but for an individual employee, sometimes that’s too high, so you need to be able to break it down into metrics that may be more closely related to them, what they do, and their roles inside the organization. Sometimes you can also track the progress of your initiative itself. You know, not just the outcome, did we increase in maturity level overall, but also, what about the steps we’re trying to take to get there? Are we achieving those? Are we successfully moving forward as we planned for ourselves? That’s helpful. Once the design stage has been put in place and you start to implement your data literacy program, you want to track how well you’re progressing against what you set out for yourself to accomplish.
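A minimal sketch of that quintile idea, assuming a 0-100 benchmark percentile as the input and using the cut points Ben mentions; the stage labels are placeholders, not the actual names used in the report.

```python
# Bucket a 0-100 benchmark percentile into one of five maturity stages,
# assuming quintile cut points (0-20, 21-40, ..., 81-100).
STAGES = ["Stage 1", "Stage 2", "Stage 3", "Stage 4", "Stage 5"]

def maturity_stage(percentile: float) -> str:
    """Return the maturity stage for a benchmark percentile between 0 and 100."""
    if not 0 <= percentile <= 100:
        raise ValueError("percentile must be between 0 and 100")
    for upper, stage in zip((20, 40, 60, 80, 100), STAGES):
        if percentile <= upper:
            return stage

print(maturity_stage(18))   # Stage 1
print(maturity_stage(37))   # Stage 2
print(maturity_stage(100))  # Stage 5
```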

Alli Torban 15:31
Yeah, it really is dependent on what you set out to accomplish at the beginning. And a lot of times, if you just run in with, oh, okay, I’m improving the data literacy of my team, without figuring out exactly what tasks I want them to get better at, and how I’m going to measure if these things work, it’s tough to measure it after the fact. You kind of have to define that first, and a lot of people skip past that.

Ben Jones 15:59
Yep, they definitely do. So, you know, it’s kind of core to data literacy in the first place, to track and set up metrics and then try to follow them. You know, they’re not always perfect, right? And you sometimes need to be careful, too, because what might happen a lot of times, it’s human nature: if you set a goal for yourself with a metric, sometimes you tend to game the metric. Or there’s a concept called surrogation, where the metric itself becomes more important than the behavior you’re trying to move or change. So you want to avoid that, so that you’re not trying to move a metric as much as you are trying to do the things that the metric is intending to capture, right? It’s just a little bit of a warning for something that commonly happens when you set these big KPIs out there. You want to avoid that mistake, where you fall into that pitfall and all you care about now is getting the number to look like what you want it to look like, and then that leads to all sorts of dysfunctional kinds of behaviors on the team. So, right, you want to watch out for that.

Alli Torban 17:03
Yeah, that doesn’t sound healthy or useful at all, right? No, just trying to push that one number, that’s not actually helpful. Okay, so let’s say I have looked at objective and subjective assessments, and I’ve thought, okay, I’m at this particular maturity level, and I have these goals that I want to achieve with my team. Maybe that includes some training, so we’re going to do some data or AI training so that we can try to move our team up this data maturity model. What do you think are some things I should be keeping tabs on to see how effective the training is, and things I should look out for to see if I need to pivot the way that I’m training the team?

Ben Jones 17:43
Oh, I think you just throw training out there and walk away. That’s it, yeah, you just add it to the library, to the catalog, and you’re done, right? It’s there, go take it, go forth and take it. No, that doesn’t work. You can’t just throw training out there. We’ve seen that, where it just gets added to a catalog and then it’s crickets. So you really have to, on the one side, kind of prepare the organization. It’s almost a PR or a marketing pitch, really, if you think about it, to get them excited and interested. And there are some ways that you can really socialize it and get people happy and excited about what’s there for them to improve their own skills. But then you have to find out how well it’s going. So we see three measurement pillars. One would be the quantity: are people actually attending and completing the training that you’ve put out there for them? But it isn’t just about them going through it and checking the box. You also want to check the quality, by seeing if, you know, they improved in some way. Maybe it’s a pre- versus post-course knowledge check. Maybe you’re seeing some meaningful, measurable skill improvements or process changes that you can reflect back. Are they practically applying the training, such that you would say, hey, that was a high-quality training? And then, part of the quality but a little different, is the perception of the training. You know, you can really get to the bottom of that with simply satisfaction scores for every course: what did you think of the course, its applicability to your role? How about the trainers, how did they do? Did they engage and interact and help you, you know, get the most out of it? So the perception can be useful. Those three pillars are usually useful to say, okay, how well is this training going?
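To make the three pillars concrete, here is a hedged sketch of how one number per pillar might be computed from training records; the record fields and values are assumptions for illustration, not a prescribed schema.

```python
from statistics import mean

# Hypothetical per-learner training records; every field name is illustrative.
records = [
    {"enrolled": True, "completed": True,  "pre": 55, "post": 80, "satisfaction": 9},
    {"enrolled": True, "completed": True,  "pre": 60, "post": 75, "satisfaction": 8},
    {"enrolled": True, "completed": False, "pre": 45, "post": None, "satisfaction": None},
]

# Quantity: are people actually attending and completing the training?
completion_rate = sum(r["completed"] for r in records) / sum(r["enrolled"] for r in records)

finished = [r for r in records if r["completed"]]

# Quality: did learners improve from the pre- to the post-course knowledge check?
avg_gain = mean(r["post"] - r["pre"] for r in finished)

# Perception: how satisfied were learners with the course and the trainers?
avg_satisfaction = mean(r["satisfaction"] for r in finished)

print(f"completion: {completion_rate:.0%}, "
      f"avg knowledge gain: {avg_gain:+.1f} pts, "
      f"satisfaction: {avg_satisfaction:.1f}/10")
```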

Alli Torban 19:27
Yeah, it seems like quantity, quality, and perception are kind of like objective and subjective assessments, where they give you different lenses on how the training is going. Because I do know a lot of people focus on participation rates, how many people took the trainings, how many people finished the trainings, and sometimes having a low completion rate can give you a sense that maybe the quality wasn’t so good, or, you know, maybe there was some other thing, like it wasn’t marketed well enough within your organization. Or it could have been a quality thing. Is there any kind of completion rate that we’re seeing in the industry that is considered, oh, that’s pretty good?

Ben Jones 20:09
Yeah, we’ve seen it all over the map. But, you know, let’s say you compare it to something like compliance training, like regulatory or workplace harassment training, we’ve all kind of gone through those mandatory trainings. Even in those situations, an organization, even a medium-sized organization, will rarely get to 100%. They’ll be pretty high in the 90s, you know, when every team leader and all the champions associated with it are following up, kind of haranguing and harassing people to get through their official training. So even in those situations, you’re not going to really get 100.0 unless it’s a small team. And I actually don’t like mandatory training. I think that’s just shoving it down people’s throats. You know, I kind of think of it more like, how can we generate pull, so that the employees are pulling the training rather than us trying to push it on them? And when you have a situation of pull, what you’re allowing people to do is opt into it. You may be incentivizing them or encouraging them, but it’s still up to them. There could be some extenuating circumstances where someone just simply doesn’t have the time, for one reason or another, personal or professional, you know, to join the training. So you do need to leave some breathing room there, I think. And also, again, if you haven’t made it so interesting and exciting to them that they want to take it, then that’s also a place to go back and take a look. So in those kinds of opt-in or pull-based scenarios, you’re probably somewhere around 50 to 70 or 80% adoption. And then completion rates tend to be a bit higher than that, especially when you have cohorts, so they start with a group and they end with a group. You’re going to see higher completion rates with that kind of training than if you just say, well, this is purely on demand, you can just go to some library, start it, stop it. You’re going to see a lot of fallout there, you know, and drop-off, really. Let’s say people went through lesson one, maybe lesson two, but they didn’t make it all the way to the end. So I like cohorts, they’re helpful in more ways than one. It isn’t just about completion rates, though that is a big part of it. They also get to have conversations with other people, you know, their co-workers, about how they’re seeing it and applying it in their own roles. But that’s where you’re going to get a little bit of improvement when it’s opt-in.

Alli Torban 22:26
Yeah, we’ve seen the completion rates being really high, of course, when it’s live trainings. But even with on-demand trainings, we have seen clients be successful when they are meeting internally and kind of making their own cohorts, saying, okay, starting on this date, we’re all going to start this training program, and every Friday from one to 2pm we’re going to come together and talk about what we learned in the training. So you kind of did your ad hoc, DIY cohort.

Ben Jones 22:55
I bet you’re thinking about the client that I’m thinking about. You and I work with them, we train them. Yeah, I like that approach. They did, you know, sort of, hey, everyone in the cohort, go through the on-demand content, then let’s get together, like you were explaining, and talk about it. And then you and I get involved with them at those checkpoints to have a conversation around the content and its applicability. I love that model, you know, because they have some self-study, but then they immediately talk about how they’re applying it. And it keeps them together as a group, it keeps them interacting and conversing, and we see really high completion rates with that program. Sometimes you don’t have the opportunity to have a trainer sitting with them every step along the way. So you can think about a hybrid approach, maybe, where they’re using on-demand, but they have a kickoff and a capstone. And those could be webinars, right? They could still be live, instructor-led, but you’re bookending the course, or group of courses, with interactions with real instructors that let them share their own thoughts about what they learned. People love that. So how can you program in those kinds of interactions around your training, or throughout it, that give people the chance to, you know, stick with it?

Alli Torban 24:13
Yeah, anything you can put on your calendar, I feel like, gets people moving forward, and people understand deadlines. For me especially, I love to have something on my calendar and be like, okay, I’m driving towards this date, I need to have it done by this date. Things like, oh, take this training, it’s gonna be around forever, I’m really just never gonna get to it. So just putting something on people’s calendars, I feel like, is gonna pull up the completion rate way better.

Ben Jones 24:35
It’s like that whole important-but-not-urgent thing. They believe it’s important to learn these new skills, which is why they often let us know that that’s a problem, but they have so many other things that can supersede it in urgency. So you’re right, if you put it on their calendar, and maybe it’s just as simple as they come in in the morning and they realize they have a three o’clock touch base with their group, or with a champion or a trainer, they say, okay, great, right now I’m going to make sure I get caught up, because I want to make sure I show up to that session and, you know, bring my own ideas and thoughts and have something to share. So it can be a good motivator, I think. You’re right, it kind of keeps it moving along.

Alli Torban 25:12
Yeah, nobody wants to be behind, right?

Ben Jones 25:15
I don’t like the feeling.

Alli Torban 25:18
Okay, so let’s say I am gathering some quantity, quality, and perception metrics to see how my training is doing. Do you have any recommendations for how I should be reporting that up the chain? Because I’m probably in my own status meetings, and I’m having to report to my boss, or my boss’s boss, on how my data and AI literacy trainings are going. How do you think I should be approaching that?

Ben Jones 25:40
Well, I think it’s ideal if at least the maturity stage, the highest-level metric, is all the way up at the executive scorecard level. So the C-suite is seeing, where are we in terms of data and AI maturity level? Now, they may not get an update on that for a year, right? But it’s still part of the goals and objectives, even of the executives, because you want them to open up the pathways for learning, and you want them reinforcing and encouraging their own team members. And what I’ve seen sometimes is, you know, you can kind of compare quarterly or monthly what kinds of percentages you’re seeing from each department. So maybe the executive leadership team is looking at that, or, if not, then it’s all the way up at the department head level. And it’s nice to create a bit of peer pressure there, right, where they’re seeing what their participation rates look like vis-à-vis one of their peers at the ELT level, and they can kind of get a little competitive and have some fun with it. So I think they should be looking at that pretty high up. Of course, the team that’s responsible for executing this training, that’s probably going to include some kind of data literacy champion, oftentimes that person is on the Chief Data Officer’s team, but it’s also going to have someone in learning and development in HR, and it may even have someone within the business unit itself as well. So this core group of people, you know, that’s sort of shepherding this whole program through, they want to be looking at this weekly. So you know, what kinds of scores are we seeing? Are we seeing some courses that maybe aren’t delivering on the value we were expecting? You want them seeing this every week, or at least every month, because that’s their opportunity to jump in and say, okay, we need to make some kind of a course correction here to improve something that doesn’t seem to be delivering on what we had hoped it would deliver on. So yeah, it kind of depends on your level, how often you’re looking at the numbers. But again, all the way at the top, the ELT: what’s our maturity score, let’s incentivize everyone to move it forward. And then once you get down into those more program-level metrics, that’s probably going to be viewed more commonly at lower levels of the organization.

Alli Torban 27:52
Yeah, and I could see, if you’re tracking perception, like really measuring perception, people giving some open-ended survey responses can give you a little bit of a hint that maybe something is wrong with the trainings we’re doing, something is off and it’s not doing what it’s supposed to, so maybe we should try to pivot. Because when you’re reporting it back weekly, you don’t want to be like, everything’s fine, everything’s fine, everything’s fine, and then, oh, nope, everything’s really bad, right? You want to be checking in frequently.

Ben Jones 28:22
Yeah, if the story changes all of a sudden after a year, from hunky-dory to, you know, not working, then that’s going to raise some questions. So, you know, it’s like they say, bad news should travel fast. If there’s a problem with the program, identify it quickly, make sure it gets as high up the chain as it needs to go, and then make those improvements, right? That’s your opportunity to course correct and to steer, so that when you get to the end, you do achieve the objective and the overall vision that you’re trying to implement, you know? So I think that that’s the way to do it.

Alli Torban 28:55
And I think sometimes people feel like the perception part, or getting employee opinions, isn’t so measurable, or maybe not so useful. But I have found even something as simple as, hey, rate your confidence in data storytelling from one to 10 before this course, and then rate it after the course, and see how it’s gone up, that’s a great thing to be able to report up the chain to your C-suite. Maybe because before they were complaining, hey, when your team presents their data to me, I have no clue what they’re talking about. Okay, so let’s start here, we need our data storytelling to increase. We start with a low score, we get to a high score, and then we report it up, and we’re seeing an improvement on it.

Ben Jones 29:44
Exactly. And then you say, hey, does this track with what you’re actually seeing? We’re seeing much more confident data storytellers, how does that compare with the presentations that you’ve been seeing the last month or last quarter? Are you seeing improvement anecdotally? So you kind of compare the data lens with more anecdotal perspectives and stories that people have about what they’re seeing. Because if you’re showing these great numbers but they say nothing’s changing, then that’s your opportunity to drill into that, too, and to say, okay, why are we seeing positive metrics but people don’t feel like things are changing? I still believe it’s important to compare, you know, what the numbers are saying with what people are feeling, anecdotally, right?

Alli Torban
Yeah, yeah. And you can see when there’s something out of alignment. Let’s get to the extra fun part, the data part. We have always loved digging through data. All right, so with our Data Literacy Score, we’ve had thousands of people take this, and maybe you can share some of the general trends we’re seeing, like, what are the common, biggest struggles that people in companies are having?

Ben Jones
Sure, okay, yeah, let me kind of give a lens there. So, you know, again, I mentioned the subjective portion. In those first 50 scoring questions that we put out there, we definitely see some statements scoring low. The one that’s all the way at the bottom, it ranks 50th out of 50, is: my team has access to one or more thriving data communities with members that connect around data and its value. So that’s very low, which I think is interesting, because it takes some effort, but it’s not rocket science, to start getting communities of practice up and running. And there are even external communities you can socialize and let people know about, so you can get started without doing a whole lot to create your own internal communities, although I think you eventually want to go there. But that’s all the way at the bottom. Number 49 is about training: my organization provides valuable training opportunities to help me and my teammates develop the knowledge and skills necessary, you know, to work with data. So that one’s ranking number 49 out of 50. The people we’re working with, again, organizations self-selecting to start their programs, they’re starting at a point where people say this is just not there as commonly as maybe they want it to be. Then the next three, so this would be, I guess, 46th, 47th, and 48th in rank, are all technology related. So those are pain points: there’s a lot of friction with our systems, how well they work together, whether we’re updating them or not. You know, we have legacy systems, we have scenarios where vendors are locking you in, so, on purpose, they’re difficult. No budget to upgrade, we’re stuck in some old kinds of legacy systems. So yeah, that’s a common place where people do complain. Those are the lowest-scoring statements on that 50-statement portion. What about, out of curiosity, some of the things that seem to be home runs for a lot of the people taking the Data Literacy Score? Well, this is interesting, and again, remember, this is all people’s points of view. It isn’t saying it’s necessarily truthful, but it is what they’re saying. And we’re big believers in making it anonymous, and making sure that’s very, very clear. However, even though we go out of our way to do that, four of the five top-ranking statements are in the ethics category. So people are saying, hey, you know, I do feel like my organization is taking data privacy seriously. I do feel like, if I see a problem, I can raise my hand, and I don’t have to be afraid of retribution. Or, hey, if we make errors, we fix them, we try to get it right. Those are questions in the ethics category. And, I mean, I think that’s a good thing. You know, if those were really low, I would be pretty worried, because that is such a cornerstone. You don’t really want a lot of talented people if you’re setting them loose in an environment that’s unethical. You almost wouldn’t want them, you know, to have the knowledge and skills quite yet, until you make that real clear. And then the fifth one in the top five, the one that’s not in the ethics category, is an interesting question that we put in there. It’s in the purpose category, and it actually doesn’t have anything to do with data. We just simply try to gauge, up front: are you familiar with your team’s purpose and who it is that you’re serving as a team?
So the reason why we put that statement in there is that, you know, later on, we ask them how well data is helping them achieve their overall purpose. And if they don’t even know what their purpose is yet, which can be the case in some more dysfunctional organizations, then I don’t know how data is going to help them achieve it. So that one is pretty high up. You know, leaders have done a good job, I would say, making sure people know what their purpose is, and people like to say that that is true about them and about their team. So those are the high-ranking five.

Alli Torban
Yeah, when you analyze the results of the Data Literacy Score and are reporting it out to companies, has there ever been one where it just blew their mind, like they had no clue what was coming in terms of what the bottom questions were?

Ben Jones
Yeah, it has happened. And here’s what might happen a lot of times. It could be something like this, where they had just gone through some huge project or initiative, and they were shocked that they still scored so low in a specific area or category. And so, you know, what comes out of that a lot of times is a realization that they need to do a better job disseminating information about what has been improved, helping people see it, just socializing it, you know. And so it almost becomes a chance for them to sit back and say, what do we need to do to really help people see that this has improved? Evidently, they don’t feel that it has yet. So it could simply be a messaging kind of challenge for the leadership. It also might be that it didn’t improve as much as they thought it did, you know. But yeah, we’ve had some times where, it’s sort of like anything with data, a lot of times they’ll say, okay, yeah, we could have guessed that, we could have guessed that, but then there are a few things in there that are curveballs they didn’t see coming. So those are the gems. Those are where you really focus in and say, okay, why was this a surprise, and what can we do to figure out where to head next, in order to address a pain point that we weren’t even really aware of? And sometimes, because we ask these categorical questions at the beginning, and one of them is often, are you in leadership or not, we show them here’s what your team thinks and how they’re scoring it, versus how the leadership is thinking and how they’re scoring it. And what we try to watch out for is a situation where the leaders score the whole team way better than the rest of the team does. That’s an interesting scenario, because there’s a disconnect there, you know, and maybe they have rose-colored glasses on about how things are going, and maybe bad news isn’t traveling to their level. So actually, we kind of like it if it’s the other way around, where the team members have a little bit more positive perspective than the leaders, because that means the leaders are hearing the bad news, that means the leaders are hopefully in touch with some of the problems, and they’re just struggling to get them fixed, so maybe they’re getting a little weary about it all. So those are some of the other things we look out for and try to reflect back to the teams that we present to.

Alli Torban
Yeah, yeah, that’s so interesting. That’s why I love subjective assessments, because you never really know what you’re gonna get, and you can be surprised.

Ben Jones
It’s a fascinating lens. You know, we say to them up front, hey, this is not a perfect lens, but it is a very valuable one, and you really don’t want to ignore it. It’s not the only one, but it’s an important one. There’s another piece to it, which is where we have them select, you know, three of the 17 barriers. This one’s really interesting: the one that gets selected the most often, in fact, 40% of people historically have selected this among 17 choices, and that’s a lot to me, is difficulty finding and accessing data. That is by far the number one most selected barrier. And I think that’s so interesting, people feel lost, they don’t even really know how to get to the data. And, you know, it’s so important to uncover that, right? I mean, you could do all this training and send people out there, but if they still don’t even know how to get the data in the first place, what good did all that training do for them? You know, right?
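For anyone tallying a similar multi-select question themselves, here is a tiny sketch of how a "share of respondents who selected each barrier" figure can be computed; the barrier labels other than the one Ben quotes are made up for the example.

```python
from collections import Counter

# Hypothetical multi-select responses: each respondent picks three barriers from
# a fixed list of 17. Only the first label below is quoted in the episode.
selections = [
    ["Difficulty finding and accessing data", "Lack of training", "Legacy systems"],
    ["Difficulty finding and accessing data", "No data culture", "Lack of training"],
    ["Legacy systems", "No executive support", "No data culture"],
]

counts = Counter(barrier for picks in selections for barrier in picks)
respondents = len(selections)

# Share of respondents who chose each barrier, most common first.
for barrier, n in counts.most_common():
    print(f"{barrier}: {n / respondents:.0%} of respondents")
```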

Alli Torban
Yeah, yeah, I have definitely been in a position where, you know, I’m a trained data analyst, I have the background and everything, and we just don’t have the data. Like, where is the data? Oh, it’s in a PDF, and it’s in this PDF, but also this report and this thing over here, and I don’t even know how to get started.

Ben Jones
I mean, it’s a labyrinth for even a medium-sized organization, let alone a large one, it’s an absolute labyrinth. And so, you know, you need to kind of give them a map. And actually, that can be part of the training program. Maybe it isn’t about some official, external, third-party-delivered training. It could just be you getting together with the teams and saying, let’s put together a little module to help people understand where to go to get the data.

Alli Torban
Yeah, we’ve done that before. In some of our trainings, we just bring people together and it’s like, hey, where do you get your data? And someone’s like, oh, Jan always has it for me. And it’s like, oh, Jan has the data? Yeah, go to Jan.

Ben Jones
She’s that data curator, you know, the person bringing it together. And it could be people-based, and that’s not so terrible. I mean, you do want to move to a place where you use some technological solutions, maybe like a data catalog or something, so it’s not so dependent on Jan. But at the end of the day, the data curator, hey, that’s a very important role. Who is it that’s socializing, presenting, sharing, talking about the data sets that are available with people on the team? I think that’s a good thing to look into, and to try to assign as a task, if not a full-time role, definitely someone doing that for the rest of the people on the team.

Alli Torban
Thanks, Ben, that’s so interesting, to see the data behind the data.

Ben Jones
Yep, we’ve learned a lot. It’s really helped us. It’s even taught us which courses we should be, you know, creating, really. Yeah.

Alli Torban
We covered a lot today. In this episode, we covered how organizations can measure data and AI literacy progress. You can look at different assessments, objective and subjective, and then ways you can measure progress: you can look at quantity, quality, and also perception. And you can start your own data and AI literacy program with just simple skills assessments, so you can start benchmarking your team as soon as today.

Ben Jones
Yeah, you can get going on this right now. Of course, we’re around if you want to use ours, but this is not an advertisement for that. So, yeah, definitely go about sitting down with your team and saying, how are we going to measure progress? Hopefully there are some things in here that have given you some food for thought. We’ve just relaunched ours to include some AI literacy scoring as well as that objective component. It does take some time to build one of your own, so you want to get started designing that diagnostic tool for your team, you know, pretty quickly.

Alli Torban
Yeah, yeah, get that baseline, start reassessing year to year, and you’ll be on your way. So we hope this episode about measurement was helpful for you. If you know someone working on increasing data and AI literacy in their organization, send them this episode, sharing is caring. And remember to subscribe to the show so you never miss an episode. Everything we talked about today is going to be in the show notes, in the episode description, and if you go to dataliteracy.com you’ll see the podcast there, too. Awesome.

Ben Jones
Yep, and hey, thank you again, everyone, for joining us on our very first episode of The Data Literacy Show. And you know, remember, our perspective here is that data is for everyone. It’s a dialogue, we want to welcome everyone to it, and that’s what Alli and I are gonna try to do here on the show. All right, thanks. Bye now. Thanks, everyone. Bye-bye.