
Exploring Higher Education's AI Frontier: Balancing Innovation and Privacy

Air Date: July 25, 2024

 

In this episode of Hash It Out, Trevor Foskett and Paul Drake, IT risk manager at the University of Notre Dame, dive into the transformative intersection of generative AI and higher education, examining both the opportunities and challenges this technology brings. Gain insights into how AI is reshaping academic landscapes, strategies for balancing innovation with ethical considerations, and practical applications that can enhance learning and research. Whether you're an educator, student, or tech enthusiast, this episode offers valuable perspectives on the future of AI in education.


Transcript
[FOSKETT] Alright. Hello, everyone. Welcome to the latest in Virtru's Hash It Out series, where we bring in some industry experts and just talk about different topics in the cybersecurity industry. So today, our topic is exploring higher education's AI frontier: balancing innovation and privacy. If you're a Hash It Out regular viewer, you may recognize me, Trevor Foskett, VP of solutions engineering here at Virtru. And today, I'm joined by my friend Paul Drake, IT risk manager at the University of Notre Dame. Paul, wanna introduce yourself for the folks out there?

[DRAKE] Sure. I'm Paul. We are excited at the new world of generative AI and all the opportunities it presents, but it adds a lot of challenges. Our risk program manages new and existing applications, and we look at risks from a departmental lens as well. And we're being asked, you know, how can we take advantage of the promise of generative AI to do more and better things, you know, in the realm of research, administratively, in teaching and learning, and how can we do that securely? So that's how I spend a lot of my time these days.

[FOSKETT] Yeah. No small challenge. I think you and, you know, most organizations out there are being asked to juggle this idea of, look, there are incredible opportunities here with this technology, but also new things to think about when it comes to security, privacy, a lot of the things that you spend your days on as a risk manager in the IT space. So before we get into some of the more, you know, concerning parts, we'd love to just ask: as far as generative AI tools, are there any places where either your team or even Notre Dame as a whole have found opportunities to leverage these technologies today, ones where you've decided the reward is worth the risk in certain areas, or things that your team relies on today, any workflows that you've been able to accelerate or improve upon with these new tools?

[DRAKE] Yeah. You know, we write a lot of risk reports. We have to translate kind of technical concepts into layman's terms, and take layman's terms and make sure we're explaining to the engineers what those concerns are and making sure we have technical controls for them. And so I use generative AI all the time to help translate, to, you know, say, what are some ways to explain hashing sensitive data in API calls? That is not a sentence I try to write to executives very often, and generative AI is great at explaining Kerberos or hashing to executives. And, you know, we deal with a lot of different stakeholders, so procurement people and end users who wanna use new software, and we try to speak in their language, because ultimately everybody at an institution is managing risk. We're just there as kind of a subject matter expert, as a coach, to make sure they're aware of the risks, and there's a lot of new risks.

[FOSKETT] Yeah. Absolutely. Especially when a lot of these tools are available for free or as a low cost option to the average end user, who can just go to, you know, chat.openai.com and sign up. So have you guys thought about, you know, before we get into kind of the things embedded into the technologies that you use every day, have you guys done something like a, sort of not corporate, but a university policy on acceptable use of these tools, or general user guidelines for how people should be engaging with these tools today?

[DRAKE] Yeah. Almost every university has put out statements with some guidance. We really see AI fundamentally as an enabler. We just need to do it in an informed and thoughtful way. So we have a generative AI working group.

[DRAKE] We recently appointed someone to lead our generative AI efforts, and Notre Dame has already held multiple forums, and so other folks at the institution are leading across higher ed and in the research space to ensure we have good models to make sure AI is used ethically and, you know, as a force for good in the world, which is the Notre Dame mission. And, also, that it's used effectively. And so we've got a new group coming up that will help identify those areas where there's the biggest kind of bang for the buck in terms of AI, and how do we execute on those really well and start to see some great payback for the investment.

[FOSKETT] Yeah. It's funny you mentioned that. I think a lot of us out there are doing the same thing. We have a similar group at Virtru right now specific to our kinda go to market team where we're all discussing, you know, if we're gonna make an investment here, what is the place where we're gonna get the best return on it? What's the place where we can do it in a safe way? You know, how can we make this work for the team and try to settle on what that is? Because there are so many options now. I mean, every SaaS platform I use has a button now. You know, try the AI thing. And, you know, it's sort of tempting to see what it does, but it makes you wonder, you know, who else is seeing that button? Is it available to end users? Am I only seeing it as an administrator? And how do we wade through the ones that we wanna use?

[FOSKETT] So how have those discussions been going? Have you guys settled on any specific options that you want to engage with? I know you guys are a Google Workspace organization. Have you looked at, you know, Gemini and thought about engaging with that, or things, you know, directly from OpenAI? How are you guys kinda looking at that today?

[DRAKE] Yeah. We're surveying the market in terms of options. As a Google Workspace customer, Gemini already has a lot of our data. So when we think about, you know, AI that can be informed by artifacts that already exist, RAG (retrieval-augmented generation), if we're gonna use the term, there's a really powerful proposition there. Not everything we want is consumed by the AI, so we have to kinda balance that. We're looking at lots of different options, though, in terms of building a secure platform so people can interact with it in a way where the terms are acceptable to legal and the pricing is acceptable to procurement and the data controls are acceptable to us. Right now, I think we have three or four different pilots going on.
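For readers following along, retrieval-augmented generation, the RAG Paul mentions here, means the model's answer is grounded in documents retrieved at query time (for example, files already in Workspace) rather than only in what the model memorized during training. Below is a minimal, generic sketch of that pattern; the keyword-overlap retriever and the generate() stub are simplified stand-ins, not Gemini's or any other vendor's actual API.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The retriever and generate() below are simplified placeholders,
# not any specific vendor's implementation.

def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Rank documents by naive keyword overlap with the query; return the top k."""
    query_terms = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )[:k]


def generate(prompt: str) -> str:
    """Stub for whichever approved LLM an institution uses; swap in the real client call."""
    raise NotImplementedError("plug in your LLM client here")


def answer_with_rag(query: str, documents: list[str]) -> str:
    """Ground the model's answer in retrieved context rather than training data alone."""
    context = "\n\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)
```

The key point for the privacy discussion that follows is that whatever the retriever can see, the model can surface.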

[DRAKE] And then on the teaching and learning side, there's, you know, a ton of great opportunities both in terms of helping students to learn and educators to educate, but also learning about AI. You know, in ten years, that's, I think, a skill set that will just be, you know, expected, as much as typing. Like, you have to learn to interact with these tools. And so the students who are gonna hit the workforce in one to four years want that as part of their education. And so we're educating about AI, and part of that is how to use it effectively and to be aware of where it can steer you wrong and where it can really add value.

[FOSKETT] Yeah. Oh, that's really interesting. You know, I've kind of been thinking about this conversation thus far in the context of mostly commercial business organizations and sort of forgetting about the fact that universities have this whole other component that, you know, a standard business doesn't have, in the form of the students, and how these Gen AI tools have had such a massive impact on education. You mentioned a couple of things that were really interesting. One I think was spot on is that, you know, learning how to interact with these tools is certainly a skill, for people of my generation especially. You know, you'll see it on people's resumes sometimes, and people roll their eyes, but, like, "I'm good at Googling" is a very real skill: I know how to go out there and find information online and how to, you know, curate search results. And I think you're exactly right that the next iteration of that is, I know how to work with these LLMs and introduce prompts that are gonna get me the information I need.

[FOSKETT] It's certainly a skill, because I've thrown some prompts in there that have returned garbage. But then you go and work on it for a little while and you get something a little bit more valuable back. So there's certainly a skill there. But I wanted to go off topic just for a second here and talk about kind of the flip side. You know, we're talking about IT and how your staff and the tools that you have access to interact with AI. Tell me a little bit, if you have visibility into it, about how the university is thinking about controlling the use of AI by students, because, you know, the kinda degenerate in me immediately thinks, if I'm a student and I need to write an essay, ChatGPT is gonna be my best friend. Right? So do you guys have any policies or technologies in place to suss that out, or is it something you're leaning into and just changing sort of assignments or the way that we assign work? What does that look like, if you're aware?

[DRAKE] Well, I try to keep tabs on it, but I am far from an expert on pedagogy and AI. It depends on what you're studying and what the learning objectives are. So there's some broad-level guidance from the university around use in class, about how we should, you know, encourage students to use this technology, but it's not appropriate in all cases. And so, you know, you need to make sure the subject matter is appropriate. You know, Notre Dame has a great law clinic that helps people without a lot of resources navigate the legal system.

[DRAKE] And, you know, that's the kind of data you probably don't wanna put into an AI that's gonna get trained on it, if it's not public information. And so the guidance is that each educator in each class should be really explicit about what's appropriate. And, you know, there are classes where they're like, use AI to write this and then fix it. And then there are other classes that are like, do not use AI at all; this is about coming up with original, you know, creative writing, and we don't want you using AI for that. And then there are probably other classes that are like, no, use AI to brainstorm and then write your own. And, you know, from a grading standpoint, we've been using AI for years, before it kind of hit the stage. Most higher ed groups are using a couple solutions that will help anonymize and analyze to make sure there's no implicit bias in your evaluations of, you know, two hundred students. Is it graded fairly and consistently, particularly if you split up the grading? Right? How do you make sure that's consistent? AI is a great opportunity for that.

[DRAKE] We're implementing a new hiring system, and there are some opportunities there for AI to assist in making really good evaluations of people. And, if we're thoughtful about how we do it, we can actually use AI as a way to help us make sure we're doing things the way we wanna do them, you know, ethically and effectively.

[FOSKETT] Very, very cool. A lot of stuff for you guys to be thinking about. You know, it's something my colleague Nikita, who you know well, has said a couple of times: universities are basically like cities, and so you guys have to think about these issues from so many different angles. Always fascinating to hear.

[FOSKETT] I'm gonna bring it back to the topic for today, which was balancing innovation and privacy. And so I wanted to, you know, touch on the conversation that you and I had a couple months ago about some of the thoughts you had, thinking specifically about your Google Workspace implementation: how can we take advantage of Gemini? But given all the different types of data that you guys have in Workspace, because, again, universities deal with health, student records, criminal justice sometimes, all sorts of sensitive information, how do you balance that need? And it sounded like at the time that was sort of holding you back from maybe flipping the switch on Gemini. So is that still the case? Can you talk a little bit about the background of your concerns there, and then maybe we'll talk about some potential ways to address that?

[DRAKE] Yeah. So Google Workspace is a phenomenal collaboration platform, and it's really good at sharing information. And it holds, you know, the several terabytes of data that the university has produced. Right? And Gemini already addresses the intellectual property concerns and the data handling. You know, Google has a pretty robust security program. And so, as with emerging technologies, there's always gonna be new flaws.

[DRAKE] You know, betting on one of the big players addressing those flaws quickly is a pretty good bet. The kind of challenge is that Gemini doesn't have a way to look at data in a, like, contextual way, and we don't have a way of saying, let's carve out this data. Like, I would like, you know, everything that is shared with me about some topic except, you know, this folder or this file or this shared drive. And so, you know, we worry about the LLM kicking out data that I didn't know came from a topic that was sensitive or a data enclave that was sensitive. You know, and we all have to balance how much time we're investing in any of our work product. And so I might not, you know, research it carefully, especially as I build some confidence in the tool. Day one, people are pretty careful about it. It's six months from now, you know? We bang out emails, and then sometimes I'll be, like, looking at my old emails and be like, man, there's a lot of typos in these, just because I take it for granted that my spelling is gonna get corrected.

[DRAKE] And so we want users to be able to say, okay, all this data, or all my data that I can see and interact with, except certain areas. And that's kinda hard to do out of the box with Gemini, and I'm sure more options will come. The other thing is, because it's a collaboration platform, anyone can share with anyone else. And maybe we had a temporary worker. Maybe we have somebody who rearranged folders and expanded the sharing without realizing it. That's a really common risk: like, I've got, you know, applications for a position on my team, and I put them in a folder. And then somebody else has edit access to that folder, and they put it in a more broadly shared folder. And those are normal file sharing challenges. But with AI, it becomes easier to find that information. Right? And so there's the idea of unintentional or even malicious sharing, like, you know, evil person at evilcorp.com can share information with me and not notify me. And then that information is in Gemini's database, and it can be served up. And that poses some risks.

[FOSKETT] Yeah. You know, it's interesting. We've kinda had a lot of discussions here at Virtru about this. You know, is this a crisis and a huge privacy nightmare? Is it kind of a nothing burger? And I think where we've settled is it's not a crisis, but, you know, some people would argue, is Gemini even really different from search? I mean, in order to search my emails, Google has indexed my emails. Right? That's how they do the amazing things that they do with respect to search. And so is Gemini that different? And I think for me, the reason that it feels a little bit different is because, one, there are things you can do with LLMs that you can't do with search. That's why they're so great: you can kinda have a conversation where you can poke a little bit more. With search, you're kinda fishing in the dark; with LLMs, you can kinda follow the breadcrumbs to eventually get to where you need to go. But, also, I think people tend to be more relaxed with them, because you get this feeling that you're talking to someone. Right? It almost feels like there's someone on the other end of it. You forget that it's just this, you know, probability engine. And so I think there are ways that people would, you know, reveal too much with their prompts that they probably wouldn't dump into a Google search box. But there's also the scenario you mentioned. I think we've seen this with OpenAI, and if I'm remembering this incorrectly, I owe them an apology, but in some, you know, early versions that came out, people were able to sort of get it to dump out information that it certainly shouldn't have. There were some personal records in there, you know, things that would really be bad for you if someone were to get Gemini to dump out the contents of some private Google Doc.

[FOSKETT] So that idea of, well, we wanna use this, but restrict access to certain sensitive data, is interesting. And what you had suggested when we spoke a couple months ago was maybe leaning on either Google's built-in tools, like Google client-side encryption, or Virtru's encryption tools for Gmail and Drive, to encrypt, and thus deny access to, those sensitive items and drives so that they don't get incorporated into the knowledge base that Gemini is using. And I thought that was really clever. You know, I always love hearing new use cases from customers where, you know, we haven't planted the seed with you. When you come up with your own, that's the best thing for us to hear. Like, oh, there are other ways to use our technology that we hadn't thought of. And I just thought that was so clever: absent any controls within Gemini itself, how can you put controls on your data before you put it into that Google ecosystem? So, yeah, give me your thoughts on that. I mean, I know we've spoken, but the folks who are, you know, listening in weren't privy to that conversation. So give me just a few thoughts there.

[DRAKE] Well, so there are a couple strategies with leveraging Virtru's technology, and we've been a Virtru customer for a long time. And it very quickly addressed some of the immediate pain points that we had, which were really around user experience. You want a good user experience, because if you roll out a control that is too hard to use, nobody will use it, and then your control doesn't do anything. So building on that foundation of a really good user experience, we wanted to kinda take certain data off the table, because jailbreaking LLMs, Gemini or otherwise, and we're not picking on Gemini or OpenAI, they're all subject to it. It's an emerging technology, and they're just now developing frameworks to evaluate the security of these models, and those frameworks are still developing themselves. So, as ways to kind of get around the safeguards that are there keep coming up, you know, the safer thing to do is just take the data off the table that you don't want it to be able to consume. And so, you know, if we're thinking about student transcripts, the AI doesn't need those for the registrar to write an email to someone explaining that they got into the University of Notre Dame, a really happy letter, right, if we're rewriting our admissions letter. And so, you know, following least privilege, if the AI doesn't need access to it, but Google is a good place to store the information, then let's just take that off the table while the security mechanisms of this technology evolve and grow and improve, and they get better at testing it. And then, you know, when we have more confidence in the underlying technology and better ways to evaluate and test it, then we can maybe put that more sensitive data in there. My Social Security number, though, doesn't need to be retrievable. And there are already DLP rules in Google Workspace, and also Office 365, which we have as well. So there are already ways to identify sensitive data. You can even label them.

[DRAKE] But, you know, what we were talking about, a feature I'm super excited about, is the ability to apply Virtru encryption when data hits that rule. And kind of the way that Gemini was explained to me from Google is that if it has access to data and then you revoke that access, if you, you know, encrypt it with Virtru, then Gemini from that point on will not produce any of it. It doesn't have memory. Right? It just has retrieval. And so, you know, we can apply DLP rules with some thoughtfulness, and then encrypt that data with Virtru, and that just takes a whole list of concerns off the table. It addresses a lot of, you know, legitimate institutional concerns around, could someone use this to more effectively hunt for Social Security numbers? If you're staring at the, like, Google Drive search bar, you can't just type in "show me all the Social Security numbers," because, you know, it's looking for search strings, and you can type in SSN or whatever. But what if it's just, like, a spreadsheet or a database of real or simulated data? I keep a file of fake Social Security numbers, as everybody in my line of work does, to evaluate these things. But as we identify that data, we can encrypt it, and then we don't have to worry about it anymore. So the DLP rules with Virtru mean that we really have control over our kind of institutional risk tolerance of what sorts of data are going in. And when we encrypt something, it's still easy to access, and that matters: we don't want people changing files so that they don't hit the DLP rules, to avoid the encryption, because they don't like the experience. Again, for me, you know, as a security professional, I've seen pretty creative end runs around what we're trying to do on the security side. And that's why I'm super passionate about, like, please make the secure thing easy. And then if we do make it easy, you know, people might just be like, oh, that file's encrypted. Okay. And then they interact with it as normal, but we have the posture we want as an institution.
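To make that workflow concrete, here is a minimal sketch of the pattern Paul describes: a DLP-style rule flags content that matches a sensitive-data detector, and flagged content gets encrypted so it drops out of what an AI assistant can retrieve. The SSN regex is a deliberately simplified detector, and encrypt_for_restricted_audience() is a hypothetical stand-in for whatever encryption tooling (Virtru, Google client-side encryption, or similar) an organization actually uses, not a real API.

```python
import re

# Simplified DLP-style detector: flag anything that looks like a U.S. SSN.
# Real DLP engines combine many detectors with context and confidence scoring.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def violates_dlp(text: str) -> bool:
    """Return True if the content matches the sensitive-data rule."""
    return bool(SSN_PATTERN.search(text))


def encrypt_for_restricted_audience(text: str) -> bytes:
    """Hypothetical stand-in for the real encryption step (e.g. Virtru or Google
    client-side encryption). Once encrypted, the assistant can no longer retrieve it."""
    raise NotImplementedError("plug in the organization's encryption tooling")


def ingest_for_assistant(text: str, retrievable_corpus: list[str]) -> None:
    """Only content that passes the DLP check stays retrievable by the assistant."""
    if violates_dlp(text):
        # Least privilege: the assistant doesn't need this, so take it off the table.
        encrypt_for_restricted_audience(text)
    else:
        retrievable_corpus.append(text)
```

In this framing, "revoking access" is simply encrypting the content; since the assistant works by retrieval rather than memory, anything encrypted after the fact stops appearing in its answers from that point forward.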

[FOSKETT] Really well said, Paul, and really interesting to hear how you guys are thinking about this. I think we're about at time here, but I'll just kinda finish on one thing that you mentioned: you know, this is not a Gemini-specific concern. I think for a lot of us who spend our days in Google, that's a very real-world example of the types of things that we should be thinking about with this new technology. But you see this play out elsewhere: you know, Reddit's API is now rate limited and needs payment. Same with Twitter. Why? Because, guess what, OpenAI got half their training material there. And so there's just a new value that's been assigned to our data, and we say this at Virtru all the time, that, you know, data is your organization's most valuable asset. There's been more value assigned to that now because these LLMs can add additional value with access to that data. And so how do we manage and control that very valuable asset? Just things to think about. One way is a data-centric approach the way that you just articulated there. Others could be simply, you know, closing down programmatic access the way that some of these social media sites have done. So new horizons, new frontiers that are still being explored right now, but very interesting to hear the way that you guys are thinking about this at Notre Dame.

[DRAKE] In the context of a, like, full security program, you know, if you look up, like, the zero trust security model, there's a lot of evolution around, you know, application security, and a lot of offerings there, and identity, where everyone is rapidly evolving their identity security. But data is its own pillar for a reason, and it should have its own security road map and opportunities. And I think Virtru's a big part of that for us as we're planning for future offerings for Notre Dame.

[FOSKETT] Awesome. Well, we are happy to partner with you, Paul. Great to have you guys as a customer, and we look forward to continuing the partnership with you guys into the future, whatever shape that may take with all these changing technologies. So, again, folks, that's it for today's session on, you know, challenges introduced by AI and productivity tools. I have been Trevor Foskett, joined by Paul Drake here from Notre Dame. Thanks for joining us again, Paul, and, hopefully, you'll tune in next time, folks. Thanks.

[DRAKE] Thanks.
