In this episode, Holly Owens and Shaunak Roy discuss the transformative role of AI in education, exploring its evolution, implications for learning, and the importance of maintaining human connections. They delve into the ethical considerations surrounding AI, including data security and the need for a collaborative approach in higher education. The conversation emphasizes the potential of AI to enhance engagement and creativity while addressing the challenges it presents.
____________
This amazing episode was sponsored by iSpring Solutions
Check out all their products and use the code HOLLY-OWENS-SUITE for 10% off your purchase.
Connect with the hosts: Holly Owens & Nadia Johnson
EdUp EdTech - We make EdTech Your Business!
Thanks for tuning in!
Thanks for joining us on today’s episode of EdUp EdTech! If you enjoyed today’s episode, please visit our website and leave us a rate and review to help us reach even more fantastic audience members like you. Don’t forget to check out our website, visit us on LinkedIn, or hang out with us on Facebook or Instagram to stay up-to-date on the latest EdTech happenings.
Holly Owens (00:01.608)
Hello everyone and welcome to another fantastic episode of EdUp Learning and Development. We're back with Shaunak Roy, the CEO and founder of Yellowdig, and we are talking about AI this week. So welcome back, Shaunak.
Shaunak Roy (00:17.44)
Yep, Holly, always great to be with you. Looking forward to this.
Holly Owens (00:20.308)
Me too. This is a hot topic in the education space. Just thinking about starting out and having this conversation, I want to know from you: what is your personal take on how AI is impacting education? Give us the lowdown on what you think, as a CEO and founder of an ed tech company, about AI. That's a loaded question right off the bat. I'm sorry. You know, I always put you on the spot.
Shaunak Roy (00:45.038)
I'm kind of thinking where to even start. Before I jump into education, I think it's probably useful to think about what AI is. Maybe we can define it. Of course, it stands for artificial intelligence, and, as you know, it hasn't only been around since 2022, which is when...
Holly Owens (00:50.58)
Ha!
Holly Owens (01:02.27)
Sounds like a great plan. Yeah.
Shaunak Roy (01:11.138)
ChatGPT was launched and we created this new category of AI, which is large language models, or LLMs. We use that in our product, but AI has been around for quite some time, and our team has been working on various features using AI way back since 2020, I would say. So within AI we have these LLMs. Then we have what we call machine learning, which is a big term, but what it essentially means is:
what can we learn from the data being generated in the platform, it could be any type of platform, and how can we use that data to get smarter about recommending things to our users or giving them better experiences? So we have been delving into some of that since 2020, and then 2022 came, LLMs kind of changed the world, and everybody started talking about AI. And now we are in these very interesting times where
I have not seen this level of progress on AI since I started working in tech, like 20 years back. So there are lots of developments happening on the AI side. Pretty much every week we hear about a tool being launched or some sort of improvement in the current models, the LLMs. ChatGPT started in 2022, but now we have over a few dozen models that we are tracking, and each of them is improving on a
Holly Owens (02:35.636)
Sure.
Shaunak Roy (02:38.072)
daily or weekly basis. So I think for us, AI is essentially a way to really learn and understand what's happening in this space, and to find things we can implement in our platform that do two things. One is that it's going to be useful for our users, instructors, and students. That's number one. And number two is that it's intentional, which is that it's actually going to help with learning, meaning
it's going to drive more human connections, more interactions, more peer-to-peer connections, as opposed to less of that, because you can imagine AI being used so that people talk to each other less, and things like that. So we don't want to do that. We want to do more of what we like to do. So that's a big world, and I'm happy to talk more about how exactly we work through the puzzle that's in front of all of us. But yeah, we love it. Our team loves it. We love to discuss things,
come up with new ideas, and try different things and see what sticks.
Holly Owens (03:40.148)
Yeah, I really think it's something that, you know, I still remember back in April, whenever ChatGPT came out, when my friend sent me a link and said, hey, look at this. And I started using it and I was like, this is incredible. But then, like I talk about a lot, there was the situation with New York City public schools kind of banning AI, and people were like, we're not going to use this. This is bad. We're not even going to give it a consideration
for use in our platform or in our areas. But now lots of ed tech companies, and lots of companies in general, market based off the premise that they have AI integrated into their tools. So how do you feel about that evolution, going from "no, we're not going to use it" to now it's something used as a marketing ploy to maybe get you to use a product?
You know, as AI is becoming more prominent in our market, and I know at Yellowdig we have an AI pledge, which I can put in the chat, how do we kind of deal with that situation? How do we look at that?
Shaunak Roy (04:50.61)
Well, I mean, one thing you just mentioned, the AI pledge, is quite helpful, because one of the things that we struggle with is what to build using AI. And having a pledge or something that we want to abide by really helps us narrow our focus among the things that are out there. So, you know, happy to chat about the AI pledge. In terms of whether we should allow AI or not, I think, to me,
as you said, the debate went on for a little while, but now we are at a point where it's pretty much everywhere, and it's kind of impossible to not have it available. Yeah, you can force your students not to use it in a classroom. No-laptop policies or no-cell-phone policies can be okay, because maybe in the classroom you should just have conversations with real humans. But outside of the classroom, when the students are on their own, I mean, they are going to use what they are going to use.
I don't see a world where people can essentially stop students from using AI. You know, like ChatGPT, for example, it's 20 bucks a month. It's not that expensive to get an account there and actually have all sorts of very useful tools. I know some companies are making their AI products almost free or close to free, which is going to make it even easier for students to find and get help from. So...
Holly Owens (06:00.019)
Right.
Shaunak Roy (06:15.404)
I think the real question in my mind is what we can do in terms of building tools for education that are going to be helpful for learning, when you can pretty much get access to information anywhere, from any device. Like when Google came around, probably 20 years back, I mean, at that time we could get access to the internet very easily, but now...
Holly Owens (06:30.973)
Yeah.
Shaunak Roy (06:42.548)
with AI, ChatGPT, and LLMs, I mean, you can pretty much have any information anytime, anywhere. So,
Holly Owens (06:49.211)
Yeah, we thought Google was something. And I still remember being younger and having my grandparents tell me to go downstairs to the Britannica encyclopedias and look stuff up, and how outdated that has become now. You know, AI is competing with Google in terms of how accurate the information is and how quickly you can get it. It's insane to think back to the day
when you were going to a book, and now we have these models. Anyways, I just wanted to share that experience because I know there are some people in the room that have seen that whole process as it's evolved.
Shaunak Roy (07:27.042)
Yeah, and just to add to that, now we are venturing into this new world where these models are getting smaller and smaller, so that they can actually fit into your cell phone. Apple, for example, there's some news that they're working on a model which fits in your cell phone, which means you can actually have access to a massive amount of intelligence anytime, without even connecting to the internet. Even if you're not online, you can ask questions and get answers at lightning speed.
Holly Owens (07:36.122)
I know, crazy.
Holly Owens (07:49.961)
Yeah.
Shaunak Roy (07:55.51)
I think it's a whole different world. I mean, in terms of ignoring AI, I think it's not even an option. But the one thing I want to point out here, which I think we probably are going to get to at some point, is that there are two scenarios when it comes to learning. One is that, of course, we can have the best chatbot possible, ask questions, get answers quickly, create content, create interesting videos with a...
Holly Owens (08:17.94)
There are the dogs again.
Shaunak Roy (08:24.154)
I think they agree with me. So with the click of a button, you can pretty much create anything, which is good. But the downside of that worldview is that you pretty much don't even need a human to interact with to learn anything. Of course, you could learn from Google, from YouTube, but now you have ChatGPT and others. So the question becomes: is that sufficient for you to learn?
Holly Owens (08:26.014)
They agree. They definitely agree.
Holly Owens (08:46.632)
Right
Right.
Shaunak Roy (08:51.296)
And the answer we are pushing for is no. Learning is also a humanistic process. You've got to be interacting with humans, the instructor, students, you know, other interesting environments, for you to truly learn. So how do we design that in a world where we are also creating the other side, which is anytime, anywhere information? I think that is a very interesting question to explore.
Holly Owens (09:10.932)
Yeah, for sure. And like these situations now where an AI can go to a meeting for you, and answer like you, and look like you, and feel like you, you know, that's scary. But it's also, just thinking as a person who loves process and saving time, think about how much time that would save you in meetings and stuff. You wouldn't have to go to every meeting. You could pick and prioritize what meetings you go to. But on the other side of that...
Like, what if AI just... okay, let me tell a story. We're using riverside.fm for this recording, and they have an AI voice feature. So I put in what I wanted it to say for an intro to the podcast. It got stuck on the words "instructional design" and "learning and development." It sounded exactly like me, I can share the recording with everybody, but it got stuck
on saying those words in a way that flowed and had the correct intonation. So when we're thinking about how these things are evolving and how they could help us save time, and maybe be in meetings for us, what also do we have to be cognizant of if we're doing those things? Because they're not perfect. It's not a perfect model.
Shaunak Roy (10:28.364)
Yeah. And, you know, the whole development of AI agents, or agentic development, is happening, where you can pretty much have an agent go and work for you. So you can have a Holly for taking care of the home, and a Holly for taking care of work, and a Holly for taking care of your calendars and answering...
Holly Owens (10:46.687)
It's kind of like the Jetsons. The Jetsons had Rosie and everything. Yeah.
Shaunak Roy (10:51.936)
It's crazy, and each of these agents can actually learn from your behavior in the different worlds that you're in, because all of us have different lives, for example. We may have certain preferences when we are at home, be at this temperature and this and that, and the agent can actually learn those things and be exactly your assistant. So I think the question comes down to this: AI is getting very smart, and it's only getting smarter. Like, AGI might be only five years away; there's some talk around it.
We never know, but it clearly is getting smarter. So that's kind of a given right now. The second thing that is also now a given is that it has agency, which is that these agents are going to go and do things for us as we program them better and better. But the one thing AI cannot do is relate to humans, because it's not a human. It is a smart intelligence, but it cannot be Holly or Shaunak or someone like that.
We are beyond information. We have some other things which are very hard to even nail down in terms of information that AI can learn from: our behaviors, our past experiences, our feelings. Some of these things cannot be decoded into AI. So the point is that I think AI can do a lot, but it cannot really replace human-to-human connections,
because that is something which is very human-centric, unless we create a human, which AI is not. So the point I always like to make is that I think we really have to think about what kinds of skills we want to build which are human-centered, like critical thinking, analysis, problem solving, communication,
being able to look at different points of view, and solving problems; broadly, that category. Humans are going to do that, because we are going to build solutions for other humans. And we want to make sure that our learning environments have that human-centeredness, so that we don't forget about it. We can talk to ChatGPT all day long, but we still have to really think about human beings and human problems, and be able to create those environments for us.
Shaunak Roy (13:07.456)
I see that to be a real opportunity when it comes to AI in terms of kind of building those environments.
Holly Owens (13:12.904)
Yeah, for sure. And I totally agree with you on the fact that you can't take the humanness out of it. I always relate to the analogy people often talk about: when self-checkouts came in, the human still had to be there. Like, they're still there. You know, if you mess up, they still have to come over and fix what you did, or check your ID if you're buying alcohol or whatever you're purchasing. So that's still an important aspect of it. And like, this sense of
feeling belonging and supported, that's so emotional. I don't feel like at this point AI is ever going to be able to mimic that human emotion. Because if you think about how complex the brain is, and the psychology of all the different things our brain does, and the folds and everything, I mean, maybe it'll prove me wrong when I'm like 100 years old, but right now, how is AI going to be able to mimic that? It's not.
There need to be vast improvements over time in order for that to happen. And there have been lots of movies around pop culture about AI, like the movie Her, where he falls in love with an AI, and there have been news stories and things like that, about how people can develop those sorts of connections or talk to it in that way.
But what I want to know from you is, from a social learning perspective with our platform Yellowdig, how do you see this really transforming what we're doing, especially in the higher education space, where there are people who are fully accepting of it and people who are completely resistant to it? So how do we navigate that, and how do you see it enhancing engagement and outcomes?
Shaunak Roy (15:03.566)
Yeah, no, it's a great question. And one of the ways I think about it is: how do we use AI to maximize controllable upsides and minimize uncontrollable downsides? What I mean by that is, for example, a controllable upside is a reduction of time spent on activities where time could easily be saved, like instructor time.
One of the features we launched recently, Holly, you know about it, is the content recap feature, which addresses one of the things we have heard about for quite a few years now: especially in communities and courses where there are lots of students and lots of activity, it's very hard for an instructor to go in and get a quick sense of what's really going on. They have to spend a lot of time scrolling down a feed to truly understand the themes that are playing out.
Holly Owens (15:47.454)
True.
Shaunak Roy (15:53.998)
So the content recap feature, essentially with one click, summarizes everything into two paragraphs, which tells you the context, what's trending, and also some of the important posts that somebody should pay attention to. The idea there is to really save time, probably 10 to 15 minutes per week for instructors, with just that one feature.
And the downside, which is uncontrollable with AI, is that sometimes AI can hallucinate, for example, right? We have talked about how it's getting better, but there are lots of issues with AI. So we would never let AI make a decision for a human. In this case, the summary goes to the instructor, and the instructor can take the summary, maybe add a few things they would like to add, or maybe some other conversation that's happening in the community, and send an email out to their students.
So we don't let the AI email students directly. We actually let the instructors process it and add their own human touch before they reach out. So that's one example. The other example is we just launched another feature that basically highlights any problematic content. It's an AI insights feature where if there is any bad behavior, which is fairly, you know,
Shaunak Roy (17:16.248)
low in our platform given the way it's designed, but if somebody said something crazy, maybe they're having a bad day or something going on, that would be highlighted to the instructor. But again, we don't automatically take the content down. We essentially highlight it so the instructor can take a look at it, because everything is contextual. So that's another feature we have launched, which essentially is going to save some time and increase safety.
You know, one bad post or comment can create a downstream impact, which we want to avoid with those kinds of features. So that's how we are approaching it, in a very pragmatic sense: things that instructors will probably find helpful, and things that we think are not going to deter human connections and interactions.
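The two features described here share one pattern: AI drafts, a human decides. As a rough illustration of that pattern, not Yellowdig's actual implementation, the workflow might look something like this in Python; the summarizer and flagger below are placeholder heuristics standing in for real model calls:

```python
from dataclasses import dataclass, field

@dataclass
class Digest:
    """Draft recap of a community feed; nothing is sent until approved."""
    summary: str
    flagged_posts: list = field(default_factory=list)
    approved: bool = False

def build_digest(posts):
    # Placeholder heuristics: a real system would call an LLM to summarize
    # the feed and a classifier to flag potentially problematic content.
    flagged = [p for p in posts if "!!" in p]          # stand-in flagger
    summary = (f"{len(posts)} posts this week; "
               f"{len(flagged)} flagged for instructor review.")
    return Digest(summary=summary, flagged_posts=flagged)

def instructor_review(digest, edits=None, approve=False):
    # The human edits the draft and must explicitly approve it.
    if edits:
        digest.summary = edits
    digest.approved = approve
    return digest

def send_to_students(digest):
    # Hard gate: the AI draft can never reach students on its own.
    if not digest.approved:
        raise RuntimeError("Draft not approved by a human; refusing to send.")
    return f"EMAIL TO STUDENTS: {digest.summary}"
```

The design choice mirrored here is the hard gate in `send_to_students`: the summary and the flags are only ever inputs to an instructor's decision, never actions taken autonomously.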
Holly Owens (18:04.7)
Yeah, 100%. I feel like the difference between what you're doing and what I'm seeing from other companies, who are just kind of marketing it as a way to get business, is that you're using AI in the sense of a collaborator, to figure out how we can make things better in terms of the human connection. You're giving the instructors the opportunity to see what's happening in their communities, like a little preview, so they don't have to spend too much time scrolling. But on the other side of that, you're also saying this
particular post might be inappropriate. Or maybe it's not, because it might be sensitive topics that are discussed in the class. So you're giving the instructor the opportunity to say, okay, this can be posted or this shouldn't be posted, and maybe also the opportunity to have a conversation. Like you're saying, the human aspect is so important, and as an instructor I appreciate that, because I think all too often we decide that these decisions are yes or no, or black and white. And they're really not. There's a lot of gray area
in these conversations. And one of the things I was going to say is that, as an instructor, it's so important that the students feel like they're in a safe space. And Yellowdig communities provide that safe space for them to further their knowledge or get that ancillary information about the content we're discussing. So when it comes to AI and generation of content from the student perspective,
one of the things instructors are worried about is what they're calling cheating, or AI-generated content. So how are we looking at that, from our experiences and from Yellowdig's perspective? Do we need to monitor what's AI-generated, or do we just need to let that be? How does that all look and feel? I mean, I have an opinion about it, but I want to hear yours.
Shaunak Roy (19:57.454)
I would love to hear your opinion on this, Holly. So, you know, I have always thought about it, and this is something from my own experience: if there is something I have to respond to, an example being, write an essay on any topic, right? Then I would have an incentive to actually go and do my own research, maybe go to GenAI and try to create something and see what ideas I can get from there,
Holly Owens (20:01.02)
I'll go after you, I'll go after you.
Shaunak Roy (20:26.554)
and essentially write something up. So for any sort of assignments driven by the instructors, or prompted discussions, I think we are at a point now where it'll be very difficult for the students not to use GenAI if they find it helpful, right? They might use it in a way to just improve grammar, or just make sure that their sentences make sense, and things like that. We don't know whether that's called cheating, but it's something where
use of AI is going to be more and more natural. So the question becomes: what sort of discussion assignments can be designed that don't really require any sort of AI use? Which is what we believe in. As you know, Holly, we focus on authentic conversations. The big difference between discussion boards and Yellowdig is that in Yellowdig, we see the students as co-curators and co-creators of knowledge. So they are actually
Holly Owens (21:14.194)
Yeah.
Shaunak Roy (21:24.962)
bringing in their authentic voice. It could be their experience at work, or something they found in a project, or a tech talk they saw last night; they want to talk about it because it's on their mind. We create a safe space for them to bring that knowledge into the course discussions, where it's relevant. So these are all closely relevant topics. We see very low use of GenAI in those environments, essentially because a lot of the conversation that happens is basically their own voice. They want to talk about something they're excited about.
It's not just that they're writing an assignment or something where they just want to use GenAI to get through it as quickly as possible. I think having the design of our engagement strategy be more authentic and more student-driven will naturally reduce GenAI usage. And the other thing I would say is that if there is GenAI usage, it's always interesting to create some policies around it, so that it's clear to the students what's good use and what's not good use. So I think
that's kind of where I'm at. Curious to hear your thoughts.
Holly Owens (22:27.376)
From the instructor perspective, or the designer perspective, the way that you design it, and whether you intentionally design things that are authentic or allow opportunities to do different things, I feel like that deters from using the AI. And in fact, in my classes, I encourage people to use ChatGPT because of the particular assignment project they're working on. It really lends itself to generating ideas for them,
where they're spending more time focused on the requirements of the project than actually sitting down for hours thinking about a particular topic or an outline for that topic. And they can get ideas from ChatGPT. So I think if right now you're in the more traditional mode of higher education, where it's lecture-based, very discussion-board, checklist-based, you're definitely going to get some AI-generated things, because people are
not going to connect with that content, and they're not going to see it as significant in their lives. Whereas if you are honoring and acknowledging what's happening in our culture and society with AI, they're going to see it as more relevant to what they're doing in their lives and what they're experiencing out in the real world. And I don't want to say we can just shut it completely off, because if we're thinking about future
job opportunities, we're probably doing a disservice to our learners by not using AI, because it's going to be a part of their daily lives. Like, I use AI every day, not for Yellowdig stuff, well, sometimes. But I mean, we have to be able to collaborate with it and decide where the happy medium is in this situation, because going to the extremes is not doing anybody any good.
And if you're keeping the learners at the center and asking, well, what do they need? then you really need to incorporate some AI use and be accepting of that. And it's really difficult for some. There are a lot of different detection tools out there now, Shaunak. And AI itself is still evolving, so how are these detection tools able to detect it? You know, it's changing every day. So I want to know,
Holly Owens (24:51.124)
I know you can't fully answer this question, but how do you feel about some of these companies that say, we have an AI detection tool and we can detect it, and it's kind of plagiarizing if you're using AI? That, to me, instills fear amongst the students, and fear is something that doesn't make them feel safe. That's my perspective. I could keep going, but I'll stop there.
Shaunak Roy (25:11.308)
Yeah, 100%. You know, this has been a topic, because it kind of emerges from this idea, back in the days when we used to have plagiarism checks: you can actually take a text, compare it against other sources available on the internet, and see if you can find an exact match.
But with AI now, it is getting to a point where, I mean, we have crossed the Turing test, so AI sometimes writes better than humans. So it is actually a very difficult thing to do, because I've heard so many cases where somebody would say this was AI-written, and a student was penalized for it, but it turned out to be their own writing, because humans can write well too.
My thought on that is that it's kind of a slippery slope. Even if you have a tool that you think detects AI, maybe there's a confidence level, you can say, I'm 78% confident this is AI text, but it's a confidence level, and you may be right or wrong about it. And institutions that adopt that strategy are essentially walking a very tight road where things can go wrong anytime, because somebody can be penalized for the wrong reason. So...
Holly Owens (26:13.236)
Mm-hmm.
Shaunak Roy (26:28.594)
My thing is that the technology for catching AI text is going to be very, very hard going forward. And to me, I almost feel like we have been on this journey for a while now, because before we had ChatGPT, we had Grammarly as a tool. And I know Grammarly was used everywhere, which essentially made everybody's grammar better, so their writing sounds better, because there are fewer grammatical mistakes.
And it means that their ideas are becoming clearer, because you're not focusing on grammatical mistakes; you're really trying to understand the underlying idea behind the writing. And I think with AI, rather than trying to focus on whether this language was generated with any support from AI, a really good question would be: what are the unique ideas in this writing that really push forward the field, or the project that we are working on?
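To put rough numbers on why a "confidence level" is shaky ground for penalizing students: even a seemingly accurate detector can implicate many innocent writers once you account for base rates. The three figures below are purely illustrative assumptions, not measurements of any real tool:

```python
# Bayes' rule sketch: what fraction of flagged submissions are human-written?
# All three numbers are illustrative assumptions, not measured values.
tpr = 0.78    # assumed: detector flags 78% of AI-written text
fpr = 0.05    # assumed: detector wrongly flags 5% of human-written text
p_ai = 0.20   # assumed: 20% of submissions are AI-written

p_flag = p_ai * tpr + (1 - p_ai) * fpr            # overall flag rate
p_human_given_flag = (1 - p_ai) * fpr / p_flag    # Bayes' rule

print(f"Share of flagged work that was written by a human: "
      f"{p_human_given_flag:.0%}")
```

Even under these charitable assumptions, roughly one in five flagged submissions would belong to an innocent student, which is exactly the "penalized for the wrong reason" risk raised here.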
Holly Owens (26:59.432)
Right.
Shaunak Roy (27:21.312)
I feel that is a natural progression in our learning journey. Learning is going to go from this testing-based mastery, like, do you really know this, with the various tools around it, to: how do you use that language to communicate better and create new things, right? It could be ideas, projects, or products that you can launch.
I think we are moving there. We are moving there as an economy. And I think it's important for institutions to think about learning from that lens as well.
Holly Owens (27:51.592)
Yeah, I totally agree with you. Now I'm putting my instructional design hat on. And again, it translates into something that saves me time. It translates into something where I can spend more of my time being creative with the deliverable or the modality than spending my time actually figuring out how to set things up. Because I can't tell you, pre-ChatGPT, how much time it took me to curate some LinkedIn posts, to curate
different webinars and descriptions and objectives. It took hours. I'm absolutely not saying that I go to ChatGPT, type it in, and then use it wholesale. You have to read it over, because sometimes it's not exactly what you're saying or what you want to say, so you definitely have to revise it. But in terms of saving time and putting things out there in an organized, logical fashion, it really does help with that.
And, you know, I write like I talk, which from a scholarly perspective is not okay. It's more like stream of consciousness. So I need ChatGPT to organize those thoughts in a way that makes sense and sounds research-based, like it has that backup. In my brain it sounds one way, but when I put pen to paper, it sounds terrible.
Shaunak Roy (28:55.757)
Ha ha ha ha.
Holly Owens (29:17.968)
It really does sound like maybe a comic book strip or something, I don't know, characters talking. So I really do believe that. And I know this is true for other people, it's not just a personal experience of mine, that they need that to support their journeys, whether in their doctoral programs or writing a book, that kind of stuff. It can outline that for you, and then you can start generating more ideas and more creativity. I really feel like it does set us up for that.
Shaunak Roy (29:44.034)
Yeah, 100%. I think the positive side of that is we are going to reduce a whole bunch of busywork for all of us, if we do it right, and really do things that we care about and can actually make an impact on. So I'm very positive. I think there are some road bumps along the way.
And maybe the other thing to point out here, as you are saying, Holly, is that the answer is not really what ChatGPT or this AI tool does; it's how we use it. If we can focus more on what use cases we can build that are really going to be helpful for learning, especially in this new world, that is where I think a lot of the conversation is going to go, as opposed to this LLM versus that LLM. That is going to evolve over time, and we don't fully know which path we are on, but
Holly Owens (30:29.993)
Yeah.
Shaunak Roy (30:34.784)
Where I see a big gap now is essentially in agreeing on and understanding what skills we want to teach our students in this, you know, new world order that we're living in.
Holly Owens (30:46.152)
Yeah, for sure. So that leads me to a question about, like, data and security and ethical considerations when it comes to AI. As a CEO and founder, you're probably going to have to answer a lot of questions around legality, like the data that we're using and, you know, whatever the tool is spitting out, making sure it's not violating different things. So how do you, ugh.
How do you consider that when you're integrating AI? And you've touched on this a little bit, but I want to know from you and from Yellow Dig's perspective, how are you considering that, especially regarding privacy and data? Like what is AI's role in that? And how are we protecting people?
Shaunak Roy (31:28.344)
Yeah, no, it's a very important question. And our policy in this area is making sure that the data that is being used for any of our AI features and models does not leave our systems. Meaning, we have our own systems which are secure, and the data that we have, our students' data or the content data that we have in the system, does not leave the parameters of the system for any sort of analysis.
Like, for example, if you're putting a whole bunch of student information into ChatGPT, depending on what kind of agreement ChatGPT has with that institution, the data might stay with ChatGPT, it might be deleted after 30 days; there are different types of policies in place. And I'd highly encourage everybody to look into that, because these things are not set in stone, and sometimes what's popularly said, if you really read the documents, turns out to be quite different.
Holly Owens (32:14.482)
Yeah, look at your policies.
Shaunak Roy (32:24.761)
I think that is probably the biggest thing to really make sure doesn't happen. And as a company, we have made sure that all the models we have built are within our system, so that the data doesn't leave our premises. So that's probably the biggest thing that we have done. The other thing, I mean, from a cautionary point of view, I think the biggest risk we have is, you know, I mean, it kind of goes back, like back in the days people used to say, hey, think about what pictures you're uploading to social media, because that's going to be there forever.
Holly Owens (32:54.429)
Mm-hmm.
Shaunak Roy (32:54.732)
You can't delete it. Even if you delete it, it's going to show up somewhere else. With AI, we are actually at the next level of that risk, which is not only is the data not going to be gone, but somebody is going to learn on that data. Because these systems are built in a way where they're always ingesting new data about somebody and trying to create a much more intelligent model. So anything that goes into that universe is going to be there for a long, long time. It's not going to be possible to just go and delete a data set,
Holly Owens (33:24.713)
Right.
Shaunak Roy (33:25.09)
because these models are so complicated. So, as you're saying, I mean, this is a very important area for us, and I think for our clients and anybody who's in AI, it's something to pay a lot of attention to.
Holly Owens (33:39.73)
Yeah, for sure. And like we said, if we think about this in the grand scheme of things, from the perspective that everything is not 100% secure all the time, but we do our best to make sure things are secure. And if we make that the primary focus, then things are going to be OK. And I truly do believe that. If you're just kind of, like,
you know, letting things roll in the wind and not really thinking about the security and FERPA and all those different things, you're gonna have issues. You're gonna have issues. And the policies around AI and developing those, especially in higher education, require many voices. It's not just the voice of the leadership, it's not just the voice of the faculty, it's also the voice of the students too, because I think they're the ones that are gonna be impacted the most in their young lives by how this is going to evolve
Shaunak Roy (34:15.096)
Yeah.
Holly Owens (34:35.696)
and impact them. You know, there's lots of different movies I could point to, like Will Smith's I, Robot and, you know, some other things where AI, like, turned on you. So we're probably already scared that's going to happen, like the robots are going to take over. But really, like we're saying here, the theme is it can't replace the human connection. It can give us freedom to do more creative things in a creative space, save us some time. And we can be protected.
Like there is a way to create that privacy and security around all the data.
Shaunak Roy (35:10.798)
Absolutely.
Holly Owens (35:11.922)
Yep. So I want to know from you before we wrap things up. I see, like, we had Olga in the chat. We already addressed this, about the tendencies around detecting AI, so I think we covered that. But as we're looking to the future, and maybe from a higher education lens, since that's where our audience is from, what do you see happening with AI?
Shaunak Roy (35:37.09)
I think, you know, I mean, to me, this is a really good opportunity for us to kind of, you know, think about what the future of higher ed looks like, because, you know, I mean, as you know, technology has been coming in and out of higher education for a long time. We went from completely in-person to online learning. From online learning, we are now moving towards digitally empowered learning, where learning can be powered by technology and technology tools.
I think AI fundamentally helps us to think very carefully about what it means to be a student in this world, right? Why would I go to school, and what exactly am I going to learn to be successful in the world that I'm going to get into? So I'm fundamentally very positive about it. What it takes is for us to come together and have lots of conversations, try different things, see what works and what doesn't. We want to share with one another as much as possible.
And then really see the ones that are working and scale those up to more and more institutions. So for us as a company, one thing we always do is most of our ideas come from our instructors and our students, because that really tells us there is a real need around that problem. And then we bring that problem to our team, and we really ideate on that problem with the different tools that are available, including AI.
So that's the approach we take, and anything that we launch on the AI front is something we want to abide by the pledge that we have made, which is it always has to increase engagement and increase learning outcomes. We're not going to do anything that's going to reverse that trend.
Holly Owens (37:16.382)
Sure, exactly. I love that perspective. Honestly, just coming into Yellowdig recently and seeing our take on AI and how we're approaching the situation from a collaborative standpoint, I think we're doing a great job of doing exactly what you're saying. So as we wrap up the episode, is there anything else you want to share about AI or otherwise, or about Yellowdig? We do a lot of this from the lens of Yellowdig, but really, if you think about it, this could be...
Insert said EdTech platform here, insert said LMS or whatever here, when you're thinking about these things. Is there anything else you want to share, anything that we forgot, when it comes to AI or other topics?
Shaunak Roy (37:57.71)
Well, the only thing I'll say is that the way I learn about AI is by using it. So if you're out there and you're thinking about how this works for you, just get a few licenses. These are, like, $20, $30 a month; it's not that expensive. And really playing with it is how we truly understand the power of the technology. So that's the only thing I'll share out there. And yeah.
Holly Owens (38:21.854)
Yeah, yep, that's awesome. And I'll say this: like, I was scared to use it at first, but now I love it. I think it's fantastic, and it saves me so much time. And initially I wasn't, like, on board with it. I'm like, oh my gosh, I know education is gonna have a huge problem with this. Corporate's gonna be okay with it and kind of incorporate it into what they do, but education...
is gonna have a huge issue with this. But I'm glad to see that companies like Yellowdig and other places are embracing it and making it more of a collaborative relationship than, like, no, we're not gonna use it. So, Shaunak, thank you for coming back on and talking about this very big topic. I appreciate you and all that you're doing. Can't wait for the next one.
Shaunak Roy (39:07.948)
Yeah, thank you so much, Holly. Thanks for organizing this, and it's always great to have a chat with you.
Holly Owens (39:14.3)
Absolutely.
Founder and Co-Host
Holly Owens is an Instructional Designer with Amazon Pharmacy with 16+ years of education experience. She's held roles as an educator, instructional technologist, and podcast host. Holly has taught education and instructional design courses at various institutions, including the University of Maryland, Baltimore County, Coppin State University, and Northern Virginia Community College. For the last five years, she has been teaching instructional design courses at Touro University's Graduate School of Technology.
Holly holds a B.A. in American Studies from the University of Maryland, Baltimore County, along with two master's degrees—one in Instructional Technology and another in Distance Learning—from the University of Maryland, Global Campus. Currently, she's pursuing her doctorate in Organizational Leadership with Franklin University. Her passion lies in online learning, ed-tech, and shaping future generations of learners.
With over 23,000 LinkedIn followers, Holly was recognized as one of the Top 35 eLearning Experts to Follow by iSpring Solutions. Her podcast, EdUp EdTech, is a popular resource for staying up to date with the latest EdTech tools, featuring interviews with 90+ CEOs, founders, and EdTech innovators, making learning more accessible and meaningful.
Based on the East Coast of the United States, Holly resides in Myrtle Beach, SC, with her Mom, Julie, younger sister, Madelyn, and her furbaby, Berkley.
Founder & CEO
Shaunak is the founder and CEO of Yellowdig. Yellowdig is a community-driven active learning platform adopted by over 200 colleges and universities, K12 schools, and corporate training clients.
Yellowdig’s mission is to transform every classroom into an active, social, and experiential learning community.
Shaunak graduated with a degree in mechanical engineering from IIT Bombay and completed his graduate studies at the Massachusetts Institute of Technology.
Prior to founding Yellowdig, Shaunak spent a decade advising global companies on technology, strategy, and growth.