In the latest Beutel Goodman Speaker Series, our Private Client Group welcomed Terence Persaud, Chief Technology Officer of global systems integrator Jolera Inc.
In conversation with host Darren Bahadur, Vice President, Private Client Group, Terence discussed how cyber threats are evolving and how we can safeguard against malicious social engineering tactics such as phishing and pretexting, as well as ransomware and deepfakes.
This recording took place on November 26, 2024. The transcript following the replay is edited for clarity.
Note: Do not copy, distribute, sell or modify this transcript of a recorded discussion without the prior written consent of Beutel, Goodman & Company Ltd. All information in this transcript represents the views of Beutel, Goodman & Company Ltd. as at the date indicated. The information in this transcript and recording is not intended, and should not be relied upon, to provide legal, financial, accounting, tax, investment or other advice.
Darren Bahadur: Welcome to the latest installment of the BG Speaker Series. My name is Darren Bahadur, Vice President in the Beutel Goodman Private Client Group. I’m joined by our special guest, Terence Persaud, Chief Technology Officer at Jolera. To put it simply, Terence is plugged into all things cyber and, I think, well positioned to help us unpack the topic of cybersecurity in the age of artificial intelligence. Terence, can you tell the audience a little bit about Jolera and share a little bit about yourself?
Terence Persaud: Sure. Thanks Darren, and thank you everyone for having me. So Jolera is a global systems services integrator. We’ve been around for about 25 years and we operate everywhere in the world: in Canada, the U.S., Asia, Europe, South America and Africa. We are an end-to-end provider, so we provide everything in technology, from development to managed IT services to help desk. And our flagship offering is cybersecurity. I’ve been with the company for 18 years and I’ve worn many hats here. I started off 18 years ago as our principal architect, moved into managing professional services, and eventually moved over to the sales team and the solutions architecture group. Then for the last six years or so, I handled product development, working very closely with our Chief Information Security Officer, designing products and understanding the security landscape. And more recently, in the last few years, I’ve moved into the role of Chief Technology Officer.
Darren Bahadur: So, Terence, I mean, from the outside looking in, it seems like cyber threats are growing at an exponential rate. The news flow of breaches is seemingly daily. And listen, from a personal perspective, I don’t go a day without getting a spam phone call. Now, layering on artificial intelligence, I’m sure our audience is wondering: how does this change the landscape for cybersecurity, and what should they be doing to protect themselves? But before I hand the reins over to you, I’ve got a short disclaimer from our legal team.
The information in this webinar is of general use and current to November 26, 2024. It is not intended and should not be relied upon to provide specific legal, financial, accounting, tax, investment or other advice.
Okay, formalities out of the way. Terence, I know you’ve got some content to share with our audience, so I’m going to step aside and let you take over.
Terence Persaud: Sure. Let me get it up on the screen here and let me know if you can see the front slide there.
Darren Bahadur: Looks good, looks good.
Terence Persaud: Okay, so let’s get started. So thank you everyone for joining this webinar, Cybersecurity in the Age of AI. In today’s session, we’re going to dive into the intersection of AI and cybersecurity. We’re in an era where artificial intelligence is reshaping both cyber threats and our ability to defend against those threats. Every single interaction that we have, from your morning email check to your evening Netflix browse, is now a potential entry point for AI-powered attacks. Imagine a world where a phone call might not be from who you think it is, where a video can be entirely fabricated. We’ve all seen these deepfakes on the Internet. Imagine an email that knows exactly what you bought online yesterday and uses that information against you. This is no longer science fiction. This is our reality in 2024. So think of it as your digital life: your banking, your social media accounts, your work emails, your smart home devices. Each one of these is now a potential target for sophisticated AI-powered attacks. But with every advancement in AI comes an equal advancement in AI-powered protection.
So today’s webinar isn’t really about spreading fear. It’s about helping you understand this new reality and learning how to navigate it safely. So I’m just going to quickly cover the agenda of what we’re going to go through today. It’s meant to be practical and actionable. We’re going to start with the evolution of threats, understanding how they have evolved from 2020 to 2024. I think it’s important that we understand how we’ve gone from basic ransomware to AI-powered deception. We’re going to talk about the current threat landscape: how AI is being weaponized against individuals, and why personal, individual accounts have become a prime target for attackers. We’re going to talk about the rise of social engineering and AI-based psychological manipulation, and then we’re going to get into some real-life AI attack vectors, or how AI can target you. We’re going to talk about emerging AI-based attack methods, such as deepfake video and voice, which are becoming much more prevalent. We’re going to talk about AI-powered phishing campaigns and how criminals use AI to personalize attacks. And then we’re going to go through some practical safety tips, you know, essential cybersecurity hygiene habits that anyone can implement, and how to spot AI-generated scams.
So AI is pretty incredible. You know, it’s helping us write emails faster, it’s helping us summarize large documents, it’s great for research. I use AI to research products, product development and emerging technologies. But there’s a catch to all this, right? The same tools that are making our lives better are also making things easier for bad actors. Think about this example: why would a scammer spend hours writing fake emails to targets when they can have AI churn out thousands of personalized fake emails in minutes? Cyber criminals are loving this AI revolution just as much as we are, and they love the convenience that AI is enabling for all of us. Maybe they’re liking it just a little bit more. But it’s not all doom and gloom. In fact, AI is one of our strongest weapons in cybersecurity. AI is protecting us every day. It’s powering the systems that catch those spam emails. If you’re using Gmail or Microsoft 365, there’s built-in AI in those systems flagging suspicious activity that’s out of the norm for what we do on a regular basis.
And it’s helping to keep us safe. So I hope that by the time we wrap up the session, you’ll walk away with some basic, straightforward ways to protect your digital identity. So let’s get into the evolution of threats. I think it’s important to understand the timeline: all of this really became prevalent during Covid, with the advent of widespread ransomware, and it runs to today, where AI is making it really difficult to discern, even making us question, what is real. So let’s start back in 2020. 2020 hits, it’s Covid, and suddenly ransomware is everywhere. Quick terminology check: ransomware is malicious software that an attacker drops on your device; it’s called a payload. Once they activate that payload, it encrypts all of your files using military-grade encryption, making them completely unreadable until you pay that ransom. So think of it like a digital padlock that only the hackers have the key to. What’s interesting about ransomware is that when they drop a payload on your device, you might have endpoint detection, you might have an antivirus installed, but the payload isn’t inherently malicious when it’s dropped. It becomes malicious once the hacker activates it, and by then it has already encrypted your files.
So these attacks usually start with a clever phishing email. Maybe at that time in 2020, it was a fake Covid test or a vaccine appointment. And one click, boom, everything is locked. They weren’t asking for much back then, I think maybe $500 to $2,000, and they knew people would pay that to get their family photos and their documents back. Then 2021 comes around and those criminals got a little more sophisticated and a little greedier. They introduced something called double extortion. Double extortion is when they first steal (exfiltrate) your sensitive data and then encrypt it, very similar to a ransomware attack. But then there’s a twist. Even if you pay them to unlock or decrypt your files, they threaten to publish everything online unless you pay more, right? It’s like being mugged twice from the same wallet. But that’s not all. All of those home devices that we bought during the lockdown, maybe it’s your Alexa, maybe it’s your Nest, maybe it’s your fancy fridge, they’re all connected to your network. And if they’re not secured properly, they [hackers] can use those as a potential back door into your home.
And then we hit 2022 and the game completely changes again. Instead of breaking down your front door, criminals realized it’s just way easier to steal your keys, which are your login credentials: your username, your password. Think about how many accounts you have today. You have your email, you have your Netflix, you have banking, you have your Facebook. Now imagine someone getting access to all of it. That’s what made financial theft so dangerous. And here’s the kicker: they didn’t really need hacking tools in 2022, right? Out on the dark web, criminals came up with something called phishing kits. It’s basically, you know, call it cyber crime for dummies. So, if I were to give you an example, you get an email saying that your Amazon account is locked, and the email looks perfect: the logo, the formatting, everything is perfectly aligned. You click the link, you enter your password on what looks exactly like Amazon’s website, and boom, you’ve just handed your keys to the criminal, because that fake website is logging everything you type into it. Then 2023 arrives, and AI once again changes everything. The scams get personal, really personal. Instead of those emails that say “dear valued customer”, you’re getting emails that mention the Nike shoes you bought yesterday or your Facebook post from yesterday. The AI is analyzing your digital footprint and creating the perfect trap, right? It’s like, “hi Sarah, I noticed yesterday that you ordered Nike shoes and there’s a delivery issue. Can you verify your information in the next 24 hours?” And you want those Nike shoes. And everything in the email is true. But that’s what makes this type of scam so dangerous, right? The lesson here is that speed is the enemy of security. These attacks work because they make you act quickly without thinking it through. If an email is asking you to rush to do something, it’s probably trying to rush you into making a mistake.
And now we get to 2024, and this is where things get really crazy. It gets really fascinating, but also really frightening. Deepfakes have now entered the chat. Most of you may have heard of deepfakes. You may have seen a music video with a deepfake artist singing someone else’s song, or a deepfake voice that sounds like somebody else. I mean, picture this scenario. You get a video call from your CEO. It looks like them, it sounds like them. It even has their mannerisms down perfectly. And they need you to wire $50,000 to a vendor. Everything seems legitimate. Except it’s not your CEO. It’s an AI-generated deepfake. Or maybe you get a voicemail from your mom; this is a more individual example. You get a [voicemail] from your mom saying, I’m in trouble and I need help immediately. It’s her voice, it’s her way of speaking, even the little laugh that she does. It’s AI mimicking her voice perfectly. A great way to explain how they get your voice […] some of you may have experienced a random number calling your phone. You pick it up, you say, hello, hello, is there anyone there? And there’s no one on the other line. But what they’re doing is recording your voice on that call, and then they’re using AI to generate a deepfake version of your voice. So that’s why, especially in our business and in working with large businesses, but also at home, we’re trying to push what’s called a zero-trust mindset.
And in today’s world, it’s really essential. Zero trust means exactly what it sounds like: trust absolutely nothing at face value and verify everything, even if it looks like it’s from someone you know, even if it sounds like them, even if you’d bet money that it’s them. Always try to verify whenever possible. And here’s the truth: AI has made it almost impossible to spot these deepfakes at first glance. I mean, I’ve seen tons of deepfake videos where it looks like that person, it sounds like that person, but it’s not that person. And that’s what makes these technologies so convincing. But good old verification still beats them every single time. So remember, in 2024, the most powerful security tool isn’t a fancy software platform. It’s your pause button, your verification button. Always verify.
So now we’re going to talk about the current cyber threat landscape, and some statistics on why the landscape has shifted from businesses to individuals. And it’s kind of not what you’d expect, right? While the media loves to cover massive corporate breaches, you see them all the time, you know, Ticketmaster hacked, or state-sponsored attacks, the reality is much closer to home. Cyber criminals have realized something really crucial: why try to break into sophisticated corporate defenses, like a firewall with all of these security features built in, when they can just target individuals? Think of it like thieves casing a neighbourhood. They’re not going after the mansion with security cameras and guard dogs. They’re looking for the house that left the windows or the front door open. And that’s why personal accounts attract cybercriminals: they’re the open windows into our digital lives.
So if you look at a couple of stats here, today’s cyber criminals realize they don’t need to hack your Internet-connected home devices or laptops. All they need is your email account. And here’s the brutal truth: 40% of cyber attacks now target personal accounts. Why? Because your email account is like a skeleton key to your entire life. And that’s what makes it sort of terrifying. The attackers don’t just want your email, they want your digital DNA. Every time you create an account, make a purchase or interact online, you’re leaving a breadcrumb. Your Amazon shopping history, your Netflix preferences, your Uber routes, your Doordash orders: it all paints a very detailed picture of who you are. And guess what links all of these together: your email. When you sign up for Amazon, it asks, do you want to sign in with your Gmail account? Once they have your email account, they can initiate password resets on all your accounts. They can access your cloud storage with all those family photos. They can even see your calendar and know when you’re on vacation. They can check your food delivery patterns to know when you’re not home. Right? This isn’t just theft. It’s digital hijacking.
So now we get into social engineering, and that 71% figure you see for social engineering attacks, that’s not just a stat. It’s a testament to the fact that hackers understand the dark psychology that can be used against us. Think of social engineering like a magic trick. While you’re watching the right hand, the left hand is doing the real work. These attackers are master psychologists. They know that fear, urgency, and curiosity are hardwired into our brains, and they’re weaponizing those emotions against us.
So I’ll give you a real-life scenario that just happened last month. An attacker used AI to analyze thousands of LinkedIn posts for a very specific tech company. They learned the corporate lingo, they learned the internal names of projects, code-named projects that were being developed internally. The AI even learned a little bit about the inside jokes that were happening within the company. Then they crafted an email that looked like it came from IT support, referencing those specific projects and using the exact same language patterns as internal communication. And the scary part is that traditional security tools can’t catch these, because you’re not really breaking a rule. It’s just an email. The attacker is learning how people interact psychologically and then using that against you. And so it looks legitimate. The request makes sense. Everything feels right. Except it’s an illusion. And here’s the twist: the more digital footprints we leave, whether online or, you know, through AI, the more personal these attacks get and the more ammunition we give these attackers. Every public post, photo and shared update becomes part of their psychological arsenal. And they’re not just stealing data, they’re stealing and weaponizing our own digital behaviours.
And here we arrive at the intersection of security and AI. And it really is an AI arms race in cybersecurity. It’s literally, you know, AI versus AI. And so there’s two sides to this. There’s AI as a threat, and there’s AI as a protection, a tool to protect you.
Let’s start with the threat side. So criminals are creating deepfakes that are so convincing that, as I mentioned previously, you can get a video call from your CEO, except it’s not really them. Everything is perfect. And we’ve seen cases where deepfakes have convinced financial controllers to transfer millions of dollars in company funds. I’m going to get into that case in a minute. But unlike the “dear sir” or “dear madam” scam email that I spoke about, AI-powered attacks are like a digital stalker. The AI studies everything that you’re doing and, in some cases, probably understands you better than you understand yourself. And that’s how sophisticated these scams are becoming.
That’s the dark side of AI. On the side of AI as a protection, we’re using AI, and learning to use AI, to protect ourselves and to help identify when AI is a threat. Right? So we can think of AI as our digital bodyguard. Let’s start with smart shields. These are all new technologies that are coming to market. Having a smart shield is like having a super-vigilant security guard that’s watching your accounts 24/7. As an example, maybe you logged into your Facebook account from Toronto at 3 p.m., and an hour later someone tries to log into that same account from New Jersey. That’s something called impossible travel: it is impossible for me to travel from Toronto to New Jersey in one hour. That’s what a smart shield is. It’s looking for these different correlation rules, things that would be impossible or anomalous, not something you’re doing in your everyday life. Then you have your phishing busters. If you use G Suite, if you use Microsoft 365, if you use most popular spam filters, they’re using AI to try and figure out, is this a spam email? Before it reaches your inbox, they’ve already identified and quarantined it.
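For the technically curious, the impossible-travel rule can be sketched in a few lines. This is only an illustration of the idea: the 900 km/h speed threshold, the coordinates and the function names are my own assumptions, not any vendor’s actual implementation, and real smart shields correlate far more signals than this.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(login_a, login_b, max_speed_kmh=900.0):
    """Flag two logins as anomalous if getting from the first location to the
    second would require travelling faster than a commercial jet.
    Each login is a (latitude, longitude, unix_time_seconds) tuple."""
    lat1, lon1, t1 = login_a
    lat2, lon2, t2 = login_b
    distance = haversine_km(lat1, lon1, lat2, lon2)
    hours = abs(t2 - t1) / 3600.0
    if hours == 0:  # two logins at the same instant from different places
        return distance > 0
    return distance / hours > max_speed_kmh
```

A login from Toronto followed an hour later by one from London, England would be flagged, since covering roughly 5,700 km in an hour is not humanly possible, while two Toronto logins an hour apart would not.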
And then we get onto your personal security coach. An example I can give you is Gmail. Gmail will sometimes give you a security tune-up screen, and it’ll say you need to do these things to make sure that your account is secure. It might say that we’ve found your email being used on a dark web site. It might say that you need to enable MFA on your accounts, or different things like that. It helps identify things to keep you safe. And then finally we have deepfake detectors. These are sometimes called truth detectors, and they’re becoming more prevalent as deepfakes become more common. It’s basically AI being used to detect AI; that’s essentially what it is, and we are seeing more and more of that happening. Thankfully, at Jolera we’ve not seen any customers fall victim to a deepfake. But it is coming, and as fast as AI is evolving for the threats, it’s evolving to protect us.
So now I’m going to go through three real-life examples of how criminals attack you, or try to. We’re going to start with the remote access scam. This is a real-world example of someone in Texas who was successfully scammed using this method. So imagine sitting at home and a pop-up appears on your screen. Everybody’s had this: your computer is infected with a virus, click here to fix it. This kind of fake alert is called a remote access scam. In this one case, a 68-year-old woman in Texas clicked a fake antivirus pop-up, and that gave hackers complete access to her computer. Once inside, the hackers stole $85,000 from her online banking account. They even manipulated her into thinking she was speaking with the bank, which made the scam feel more real, right? This scam works because it creates a sense of panic. The pop-up looked official, it looked urgent. It makes you act without thinking. And once you grant access, the hackers have complete control over your computer. So the key takeaway here, as will be a theme throughout this webinar, is be skeptical of everything. Be skeptical of pop-ups, especially those asking for immediate action. If you see something like this, close it and run a scan with trusted antivirus software. Do not click the link.
The second one is a phishing campaign, a phishing email. This one in particular was a real phishing campaign that happened this year, and phishing still remains one of the most common ways hackers target people. In Canada, customers of various banks were targeted with emails that looked like they came from their bank. The email asked them to click a link to verify their account and update their login credentials. So you would click this link, and it would take you to what appeared to be the bank’s website, which would say you have to update your credentials for whatever reason. The problem is it didn’t lead to the bank’s website. Instead, it took them to a fake site designed to steal their login credentials. The emails looked real, the bank site looked real, complete with bank logos, professional language, even a personalized greeting. These phishing campaigns are smarter because hackers are using AI to analyze data and create emails that feel legitimate. They mimic the tone, the design, the sense of urgency, making it really hard to differentiate, you know, fake from real. And so the best defense here, obviously, again, is to verify everything. If you get an email asking for personal information, don’t click. Instead, go to the official website or call to confirm.
And then this last use case is an exceptional one. Some of you may have heard about this; it was all over the news. This is one of the most alarming developments in cybercrime. This is a deepfake. Hackers are using deepfakes to create fake videos and audio. In this particular case, cyber criminals used deepfake audio to impersonate a CFO in an online boardroom meeting. The CFO was remote and had no idea that this was happening. And through deepfake voice manipulation, they convinced an employee to transfer $25.6 million into a hacker’s account. Now, I just want everyone to think about how convincing that must have been: seeing and hearing what looks like the CFO giving you instructions. That’s the power of deepfake technology. And it’s becoming more and more accessible to cyber criminals. Some of you may have heard of OpenAI’s new Sora platform, which essentially can create movies and animations out of basic natural language.
There’s a reason they haven’t released it yet, and it is for this specific reason: the fear that it’ll be used for harm. Deepfakes exploit your trust. If you see a video or hear audio from someone, you’re less likely to question it. So hackers use this trust to manipulate you.
The best defense against deepfakes is verification. If you receive an unusual request, even if it seems to come from someone you trust, pause and confirm it through a different channel. A quick phone call can save you from being a victim of a deepfake scam.
So we’ve gone through the history of cyber threats, and we’ve gone through a few examples. But I want to give you some good news, and that’s that you don’t have to be a cybersecurity expert to protect yourself. Simple habits, like verifying requests and being cautious with links, go a long way. Before we get into this next slide, you know, I work with our CISO [Chief Information Security Officer] quite regularly, and we’ve seen hundreds of companies become victims of a cyber attack. And I can tell you that nine times out of ten, those hackers did not get through a sophisticated firewall or sophisticated managed detection and response or sophisticated AI-layered protection. What they did was get in through open windows within the environment, you know, basic things that were left undone.
So next we’re going to go through basic steps that you can take to protect yourself. And these address the most common ways that I see cyber criminals get into an environment.
1. The first one is: pause before you click. I actually had a scenario when I was talking to Darren and the team while I was writing this webinar. I was expecting a package from FedEx, and as I was creating this deck, I got an email saying that the delivery was going to be delayed unless I took action. And I was about to click on it, because a package was expected; it was supposed to get here the next day. But I clicked on the actual email sender, and it said “FedEx” but with some weird domain in the email address. Always hover over the link, always verify the sender. This is one of the most common ways, if not the most common way, that attackers get into your environment and into your accounts.
2. The second one is enable MFA [multi-factor authentication]. I know MFA adds friction: any time you log into an account, you need a second form of authentication, so you go to your authenticator app and do these additional steps. But most MFA is incredibly difficult for an attacker to get through, because the code changes every 30 to 60 seconds. Enable MFA on your accounts.
3. Third is use unique passwords. I mean, password managers are pretty widely available, and they’re inexpensive. Passwords remain another way hackers get in, because, you know, if you use a password for your Netflix, you might use the same one for your corporate account, and the same one for your banking account. We’re humans; we tend to reuse the same things for convenience, right? Password managers make it easy. You have one master password, it’s encrypted, and the manager will create very complex passwords for your other accounts and manage all of that for you. Also, keep your software updated. You might be in the middle of a workday and a Windows pop-up says, please install these updates. Incredibly annoying, I know, because you have to shut everything down. But the moment you get that notification, hackers also know there’s a security vulnerability, and they’re going to try to get into as many systems as they can before you patch yours.
4. Back up and encrypt your data to protect your files from ransomware. Backups are arguably the most important thing you can do, along with encrypting your information, so that if a hacker were to get it, there’s nothing for them to expose. Encrypt your data, then back it up somewhere you trust, perhaps in the cloud. And then leverage AI-powered tools. Most online platforms now have AI-powered tools that look for behaviour that’s very much out of the ordinary, something that, you know, I wouldn’t do in my day-to-day, something that over the last six months would seem very odd. Credit card companies do a great job of looking for anomalous behaviour: this purchase seems completely uncharacteristic of Terence, let me just verify that this is Terence. And they’ll send you an alert. Most importantly, trust your instincts. Always. If something feels off, don’t click it, don’t do it. Always verify.
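The sender check in tip 1, looking past the display name to the real domain, can even be scripted. Here is a minimal sketch; the “fedex.com” comparison and the lookalike domain are made up for illustration, and since a From header can itself be forged, this supplements caution rather than replacing it.

```python
from email.utils import parseaddr

def sender_domain(from_header):
    """Pull the real domain out of a From: header, ignoring the display name."""
    _display_name, addr = parseaddr(from_header)
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

def matches_expected(from_header, expected_domain):
    """True only if the sender's domain is the expected one or a subdomain of it."""
    domain = sender_domain(from_header)
    return domain == expected_domain or domain.endswith("." + expected_domain)
```

A display name of “FedEx” over an address like notice@fedex-delivery-alerts.xyz fails the check, which is exactly the kind of mismatch I spotted in my own inbox.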
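On tip 2: the rotating codes your authenticator app shows are typically generated with the TOTP algorithm from RFC 6238, with most apps rotating every 30 seconds. Here is a bare-bones sketch using only the Python standard library, just to show why a stolen code is useless moments later; real authenticator apps add provisioning, clock-drift tolerance and more.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over an 8-byte counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, at_time=None, step=30):
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    t = time.time() if at_time is None else at_time
    return hotp(secret, int(t) // step)
```

With the RFC 6238 test secret (the ASCII bytes “12345678901234567890”), the code at 59 seconds past the epoch is 287082, and one time step later it is a completely different value. That short lifetime is what makes MFA so hard for an attacker to beat.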
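And tip 3 leans on the password manager to generate strong, unique passwords. Under the hood, that generation step can be as simple as this sketch using Python’s cryptographically secure secrets module; the 20-character default length is just my example.

```python
import secrets
import string

def generate_password(length=20):
    """Build a random password from letters, digits and punctuation using a
    cryptographically secure random source (not the predictable random module)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Each call yields an independent password, which is the point: no human habit, no pet names or birthdays, and no reuse across accounts.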
Now, these are AI tools. I wanted to identify a few different tools that you can use. Again, the best way to keep yourself protected is to verify, trust your instincts, hover over links, pause. But there are tools that can help protect you too. First, there are identity protection tools. These tools monitor your identity, maybe your email, your username, your passwords, and they’ll monitor places like the dark web to see if your information has been leaked there or is up for sale.
Then we have AI spam filters. I have to imagine most folks are already using these; most spam filters today, such as Gmail’s and Outlook’s, have AI built into them. You have your behaviour monitoring tools, which I mentioned before, looking for things like unusual logins or impossible travel, where you logged in here in Toronto and then, all of a sudden, 30 minutes later you’re logging in from New Jersey or New York. They monitor your behaviour and try to figure out when something is unusual for your typical activity. Then we have personal fraud detection. This is, as I mentioned, your bank monitoring for weird transactions and unusual patterns and flagging potential fraud. These systems use machine learning, and we call this anomalous behaviour, something out of the ordinary. Mobile security apps are also becoming very important. You know, last summer I was in Europe, where I work a lot of the year, and I lost my phone. Actually, I didn’t lose my phone; it stopped working, the screen stopped working. That’s the first time that’s ever happened to me, and I realized how much we rely on our phone. We rely on our phone for our identity, for our wallet, for our credit cards, our banking information, all of our login information for everything, our phone numbers, our contacts. Everything is on your phone. So making sure that your mobile device has some sort of security protection is becoming increasingly important. And then of course, I’ve already talked about this last piece, which is password managers with AI.
A lot of password managers now have AI. They’ll not only make sure your passwords are secure and encrypted, but they will also scan the dark web to check whether your passwords are being used anywhere else. And they’ll do all sorts of interesting things to keep you safe.
And then finally, I wanted to quickly talk about a breach response plan. Quite often we’ll get calls from new customers when, all of a sudden, their computers are starting to become encrypted; they’re in the middle of an attack, right? On an individual level, there are a few steps that you can take when you’re going through this. The first is freeze compromised accounts. If your account is being compromised, don’t call the local branch. Every bank and every platform has a fraud department. Call that department, have your account information ready, request an immediate freeze on all accounts, and get confirmation of those freezes. The next piece is notify institutions. If you believe that you’re in the middle of a breach, notify these institutions that you believe something malicious is happening on your account, and have them start to investigate what’s going on. While you’re doing that, it’s really important to continue to monitor all of your other services to make sure that nothing else is happening, that there’s no unauthorized access to those other accounts. And then finally, and maybe one of the most important pieces that people tend to forget, document everything. At Jolera, when we do a forensics analysis, we document every single action that we take: the date, the time, the specific action in detail. Why is that important? First, it’s important that you have a log of everything. Second, if you’ve lost money on your credit card or your banking, you’re going to need a complete log of everything that happened. With all of that information, you’re able to get refunded or get insurance or warranty claims passed through pretty quickly.
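The document-everything step doesn’t need special software. Even a tiny append-only log with one timestamped row per action will do; this sketch shows the idea, with the file name and column names being just my example, and in a real incident you’d keep the log somewhere the attacker can’t reach.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("breach_log.csv")  # example name; store it on a device you trust

def log_action(action, detail, log_file=LOG_FILE):
    """Append one timestamped entry per action taken during an incident."""
    new_file = not log_file.exists()
    with log_file.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp_utc", "action", "detail"])
        writer.writerow([datetime.now(timezone.utc).isoformat(), action, detail])
```

Calling log_action("freeze accounts", "called bank fraud department") after each step leaves you with exactly the dated, detailed record that insurers and banks will ask for.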
And so, that’s the end of the webinar. The message I’m trying to convey is that cyber attacks are getting very sophisticated, but it’s the common tasks, such as verification and enabling MFA, the basic things that people tend to forget, that give attackers their way into the environment. So I hope you can take away those basic habits: verifying, calling back before you trust, enabling MFA and keeping up with basic updates. Those are the simple things we can all do to make sure we’re protected in our day-to-day lives.
Darren Bahadur: Terence, thanks so much for this. You’re spot on: despite how complicated and sometimes scary cybersecurity and AI can be, the best offense is common sense. And you mentioned our personal emails. I can speak personally; so much of my life flows through my email, whether it’s Uber receipts or Amazon packages, and making sure those passwords are buttoned up is a simple solution. So thank you for that. A couple of questions have come in, so let’s pivot to those. I’m going to paraphrase a little. In your good [cyber] hygiene section, you spoke about backups and encryption of data. As we increasingly move away from paper and shift to cloud storage like Google Drive, iCloud or Microsoft OneDrive, should we be using these? Can they be trusted, and are they safer than saving files and data on our local hard drives?
Terence Persaud: Great question. Here’s the thing about hackers: when they drop ransomware into your environment and you have backups saved on your local drive, the first thing they do before encrypting your information is delete your backups, because once you’re encrypted, the first thing you’ll do is go to your local backups, right? We see this all the time in companies with on-site backups, on-site tapes or a USB drive; the first thing the hacker does is delete them. With online storage such as iCloud, G Suite or Microsoft 365 OneDrive, the providers are not only encrypting your information, they’re mandating controls like MFA. They also have highly sophisticated AI-based security tools monitoring where you’re logging in from, conditional access, how you get in. I have not yet seen a scenario where a customer has called us and said their data in Microsoft 365 or Google Drive has been hacked, encrypted or stolen. So I highly recommend, if you have the opportunity, that’s where you should be saving your data.
Darren Bahadur: Perfect, that’s helpful. The next question is a little more challenging. Large language models (LLMs), something like ChatGPT, are more or less democratizing cyber threats; it’s never been easier to generate a piece of nefarious code. What safeguards are companies like OpenAI, Google Gemini, Meta AI or Microsoft putting in place, and what role do you see regulation playing in the future?
Terence Persaud: It’s a great question. I think AI is still in the discovery phase; it’s only been in the consumer sector for the last few years. If I were to go on ChatGPT and say, “write me malware,” it would inherently say no. But as someone in the industry, I know that malware is composed of different pieces of code, so I could probably figure out a way to use some sort of AI to develop each piece of code that I need. Let’s take it as a two-part question. AI companies know that they hold a lot of responsibility for what happens on their platforms, so they are taking measures to disallow the creation of software used for malicious activities, and to [address] a lot of things that could be used for harm. They’re not going to catch everything right away; someone is eventually going to figure out a way to manipulate the system. In terms of regulation, I think we’ve seen the EU was the first jurisdiction to come out with an AI bill.
I think we’ve now seen an AI bill come out in the U.S. as well. The challenge I see is that AI is moving so fast that I’m not sure governments are going to be able to keep up with it. They are writing different regulations, and it’s going to have to be a regulated industry; there’s just no way around it. There are also certain ethical concerns around a lot of this, and all of it needs to be accounted for. It’s a very complex matter, and I think we’re all still trying to figure it out.
Darren Bahadur: Yeah, it’s fluid, right?
Terence Persaud: Yeah, exactly. It’s changing day by day, depending on the political administration.
Darren Bahadur: I bet. Next question. AI itself is pattern based, right? It’s always learning. What happens if it learns something that’s incorrect or starts to embed some sort of bias into its algorithm, if you will?
Terence Persaud: That’s something called AI drift. We work with OpenAI, with IBM and its watsonx platform, and a few other vendors such as Anthropic, and they all have something embedded in their AI platforms to measure drift. Drift measures the specific bias of an AI platform: maybe it’s measuring the responses it gives to a male versus a female, or the types of responses given to a younger person versus an older person. Drift is an important component of ethical and responsible AI. There is a baseline, a score, that these platforms use to figure out when they are becoming too biased one way or the other, and they are able to automatically adjust for that bias.
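One simple way to turn “a baseline, a score” into a number is the disparate-impact style ratio used in open-source fairness toolkits: compare the rate of favourable responses between two groups, where 1.0 means no gap. This is an illustrative sketch, not any particular vendor’s drift metric, and the monitoring numbers are hypothetical.

```python
def bias_score(rate_a: float, rate_b: float) -> float:
    """Disparate-impact style ratio: min rate over max rate (1.0 = no gap)."""
    if max(rate_a, rate_b) == 0:
        return 1.0  # no favourable responses for either group: no measurable gap
    return min(rate_a, rate_b) / max(rate_a, rate_b)


def drift_alert(score: float, baseline: float = 0.8) -> bool:
    """Flag drift when the score falls below the accepted baseline."""
    return score < baseline


# Hypothetical monitoring numbers: share of favourable responses per group
# (say, younger users vs. older users) over the last measurement window.
score = bias_score(0.70, 0.90)
print(round(score, 2))     # 0.78
print(drift_alert(score))  # True: below the 0.8 baseline, so adjust
```

A monitoring loop would recompute this score on each window of responses and trigger rebalancing whenever it slips under the baseline.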
Darren Bahadur: Great. One final question, and this one’s from me. A lot of great content today, from the evolution of AI and how it’s becoming more prevalent, to good [cyber] hygiene. If you could leave our audience with one final piece of advice, what would it be?
Terence Persaud: Use your instincts, and verify everything. As I mentioned, the vast majority of breaches we come across are not sophisticated through-the-firewall attacks. They are very personalized emails that sound legitimate, that sound like they’re from your mom, from your bank, from something you trust. But a little investigation, a slight pause, a quick hover over the email address, will sometimes end up saving you tens of thousands of dollars. That would be my best advice.
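That “hover over the email address” habit can be automated: compare what the display name claims against the domain the message actually came from. The sketch below is a hypothetical illustration; the `mybank.com` allow-list and the sample addresses are invented, and real mail filters do far more than this.

```python
from email.utils import parseaddr

# Hypothetical allow-list of domains you actually trust for this sender.
TRUSTED_DOMAINS = {"mybank.com"}


def looks_suspicious(from_header: str) -> bool:
    """Flag a From: header whose real address doesn't match a trusted domain.

    Mirrors the 'hover over the sender' habit: the display name may say
    'MyBank Support' while the actual address is something else entirely.
    """
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    claims_bank = "mybank" in display_name.lower()
    return claims_bank and domain not in TRUSTED_DOMAINS


print(looks_suspicious("MyBank Support <alerts@mybank.com>"))        # False
print(looks_suspicious("MyBank Support <alerts@mybank-secure.co>"))  # True
```

The second example is exactly the pattern Terence describes: a legitimate-sounding name wrapped around a look-alike domain, caught by a few seconds of checking where the mail really came from.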
Darren Bahadur: That’s great, Terence. Once again, thank you so much for this. A lot of great insights on cybersecurity, AI, and how they impact us on a day-to-day basis. For anyone in the audience, if you have any questions on today’s presentation, please contact your Beutel Goodman representative and they’ll be happy to follow up with you directly. You can also visit our website, beutelgoodman.com, where you’ll find a library of white papers, insight articles and previously recorded webinars on a variety of topics. And finally, with December around the corner, we wanted to wish everyone a warm and happy holiday. Thank you everyone for joining us.
©2024 Beutel, Goodman & Company Ltd. Do not copy, distribute, sell or modify this transcript of a recorded discussion without the prior written consent of Beutel, Goodman & Company Ltd. All information in this transcript represents the views of Beutel, Goodman & Company Ltd. as at the date indicated.
The information in this transcript and recording is not intended, and should not be relied upon, to provide legal, financial, accounting, tax, investment or other advice.