TORONTO – Denis Villeneuve has been working in cybersecurity for 15 years, but rarely have the threats he faces felt as personal as they do today.
Employees at his workplace, tech company Kyndryl, have received fake videos impersonating CEO Martin Schroeter, designed to trick them into handing over their login details to fraudsters.
Villeneuve also saw a friend who runs a small engineering firm get targeted when the friend’s wife received a voicemail that used what sounded like his voice to falsely claim he was in trouble and needed her to post a bail bond quickly.
“I thought, ‘Oh my God.’ This was close because this is a good friend of mine,” recalls Villeneuve, a cybersecurity and resilience practice leader at Kyndryl Canada.
The attacks were made possible by artificial intelligence-based software, which has become even more affordable, accessible and sophisticated in recent years.
But despite the threats to cybersecurity, Villeneuve – like much of the tech industry – is careful not to label AI as bad.
In the fight against cyber attackers, the reasoning goes, AI can help as much as it can harm.
“It’s a double-edged sword,” Villeneuve explained.
As AI improves, experts believe attackers will always find new and more innovative ways to get past a company’s defenses, but those defenses are also getting a boost from the technology.
“AI is ultimately much better for the defenders than for the attackers,” said Peter Smetny, regional vice-president of engineering at cybersecurity firm Fortinet Canada.
His reasoning lies in the sheer number of attacks some companies face and the resources required to address or deter them.
A 2023 EY Canada survey of 60 Canadian organizations found that four in five had experienced at least 25 cybersecurity incidents in the past year. Indigo Books & Music, London Drugs and Giant Tiger have all been victims of high-profile incidents.
While not all cyber attacks are successful, many companies see thousands of attempts to penetrate their systems every day, according to Smetny.
AI makes handling them more efficient.
“You may only have four or five people on your team and there are only so many alerts they can manually go through, but this helps them focus and know which ones to prioritize,” says Smetny.
Without AI, an analyst would have to manually check which Internet Protocol address each attack is tied to. An IP address is a unique identifier assigned to every device connected to the internet and can help pinpoint where an attack originated.
The analyst would also have to investigate whether the person behind the address was already known to the company and how far their attack reached.
With AI, an analyst can now query the software in plain language to quickly compile and present everything known about an attacker and their IP address, including where they managed to enter a system and what actions they took.
“It can really save you a lot of time and point you in the right direction so you can focus on the things that are important,” Smetny said.
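To make that workflow concrete, here is a minimal sketch in Python of AI-assisted alert triage, assuming a hypothetical in-house list of known attacker IP addresses. The Alert format, the KNOWN_ATTACKERS table and the triage helper are illustrative only, not Fortinet’s actual tooling.

```python
# Minimal sketch of alert triage: enrich an alert with what is already known
# about the source IP address and summarize it so an analyst can prioritize.
import ipaddress
from dataclasses import dataclass


@dataclass
class Alert:
    source_ip: str   # IP address the attempt came from
    action: str      # what the attacker tried, e.g. "credential stuffing"
    target: str      # system or account that was targeted


# Hypothetical threat-intelligence store: IPs the company has seen before.
KNOWN_ATTACKERS = {
    "203.0.113.7": {"first_seen": "2024-09-12", "prior_incidents": 14},
}


def triage(alert: Alert) -> str:
    """Compile what is known about an attacker's IP into a short summary."""
    ip = ipaddress.ip_address(alert.source_ip)  # validates the address format
    history = KNOWN_ATTACKERS.get(str(ip))
    if history:
        return (f"PRIORITIZE: {ip} first seen {history['first_seen']}, "
                f"{history['prior_incidents']} prior incidents; "
                f"latest action: {alert.action} against {alert.target}.")
    return f"No prior history for {ip}: {alert.action} against {alert.target}."


if __name__ == "__main__":
    print(triage(Alert("203.0.113.7", "credential stuffing", "VPN portal")))
```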
But attackers have the same tools in their arsenal.
Dustin Heywood, chief architect of IBM’s X-Force threat intelligence team, said anyone with malicious intent can turn to AI to collect data from various breaches and build a profile of a target.
For example, if the data shows that someone regularly shops at Toys “R” Us or Walmart for children’s products, an attacker could infer that the person recently had a child.
Sometimes the attackers resort to a practice known as “pig butchering” to fill in the missing information.
“You get a bot to talk to someone and build a bond using things like generative AI,” Heywood said. “They’ll make sure the person feels nice and trusted, and then they’ll… start extracting information.”
When attackers obtain financial information, a social insurance number, or enough personal details to gain access to an account, the data can be used to fraudulently apply for a credit card or sold to other criminals.
The potential damage is even greater if there is enough material to create a deepfake: a clip of someone doing or saying something they never did. The voicemail that seemed to come from Villeneuve’s friend is an example of this tactic.
For smaller targets, AI does the heavy lifting, giving attackers time to focus their attention on high-value victims.
“You can have a bot operator talk to 20 people at the same time,” says Heywood. “It used to be a farm of people in a third country, typing on mobile phones.”
He’s also heard of people using augmented reality glasses that instantly pull up information about someone, including their personal data for sale on the dark web, as soon as the wearer looks at them, and of others working on “jailbreaking” AI chatbots to extract the personal information users have entered.
The evolution of attacks has convinced him that AI is “changing the game.”
“In the 1990s, it was teenagers, kids and students breaking into websites to deface them,” he says. “And more recently, we made the transition to ransomware, where companies had their computers encrypted.”
Now the focus has shifted to taking over someone’s identity, a “really big undertaking,” Heywood said, and one that AI is helping to fuel.
The Canadian Anti-Fraud Centre has said there were 15,941 reported victims of fraud in the country in the first half of the year, with $284 million lost in those incidents. The year before, there were 41,988 victims and $569 million lost.
Heywood, Smetny and Villeneuve believe the fight against attackers is not futile and that companies are taking it seriously.
Their employers conduct exercises for companies such as banks and major retailers, simulating what it would be like if their businesses were attacked, and helping them prepare their workforces to tackle threats and locate and patch software vulnerabilities.
It’s not difficult to get companies to take action, Heywood says, because a cybersecurity breach costs companies an average of $6 million and can result in a drop in stock price, lost revenue and damaged relationships with customers.
Anything they can do to stop an attack is worth it, he added, because “trust is gained by an inch but lost almost immediately.”
This report by The Canadian Press was first published Oct. 20, 2024.
Tara Deschamps, The Canadian Press