Byte-sized bandits: AI is levelling up scammers but also helping Hongkongers fight back
Lian’s case is not unique in Hong Kong. The city saw 39,000 scam cases reported to the Hong Kong Police in 2023 alone, resulting in about HK$9 billion (US$1.16 billion) in losses – a significant increase from the HK$4.8 billion lost across 27,923 cases in 2022. Cases were up 31 per cent in the first half of this year, with HK$2.66 billion in losses, according to the Hong Kong Police Force.
Lian, who posted about her experience on the Chinese social media platform Xiaohongshu and asked to be identified only by her English name, said the WhatsApp message she received looked exactly like it had come from her landlord, complete with previous message history. That deception would not necessarily require AI, but a growing threat is the way in which GenAI helps fraudsters take on the likeness and speech patterns of others.
“It is widely recognised that GenAI is making it easier to create phishing [links] and gain access to or steal data, leading to an increase in scams,” said Ho Ling, a partner at the law firm Clifford Chance. The result is that hundreds of thousands of targeted messages like the one Lian received can be sent within a very short time.
One of the biggest threats to emerge just within the last few years comes from deepfakes, which use GenAI to create videos, images and audio mimicking a real person’s likeness. Some high-profile cases involving the technology have put regulators on alert.
“Deepfakes are becoming increasingly prevalent, posing significant risks to individuals,” said Matthew Chan, business director at Trend Micro Hong Kong, an American-Japanese cybersecurity firm that makes deepfake detection software.
“Deepfake videos can be created with AI apps and resources that are freely available, opening up opportunities to exploit the technology,” he said.
To help users distinguish real from fake participants in video conferences, Trend Micro launched Deepfake Inspector, a free tool for the public. The tool examines pixel values and spatial frequencies for signs of subtle manipulation in an image, according to Trend Micro, and also analyses elements of user behaviour. Chan said it has reached an accuracy rate of 94 per cent.
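Trend Micro has not published the exact method behind Deepfake Inspector, but the frequency-analysis idea can be illustrated in a few lines of code. The sketch below is a rough illustration rather than the company's algorithm: it assumes a greyscale video frame supplied as a NumPy array and measures how much of the frame's energy sits in high spatial frequencies, one of the signals detectors of this kind are reported to examine.

```python
import numpy as np

def high_frequency_ratio(frame: np.ndarray) -> float:
    """Fraction of spectral energy outside the low-frequency centre of a greyscale frame."""
    spectrum = np.fft.fftshift(np.fft.fft2(frame))
    energy = np.abs(spectrum) ** 2
    h, w = energy.shape
    ch, cw = h // 2, w // 2
    rh, rw = h // 8, w // 8
    # Sum the central low-frequency block, then report everything outside it.
    low = energy[ch - rh:ch + rh, cw - rw:cw + rw].sum()
    return float(1.0 - low / energy.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((256, 256))   # stand-in for a real video frame
    # A ratio far from a baseline established on genuine camera footage
    # would be one crude hint that a frame has been synthesised or retouched.
    print(f"high-frequency energy ratio: {high_frequency_ratio(frame):.3f}")
```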
A less dazzling but more common means of identity spoofing is the theft of what are known as machine identities: the credentials that software and automated systems use to prove an identity online, such as persistent login cookies that keep a person signed in to services like Google or Facebook after the browser is closed.
“The machine identity needs to access the operating system, access the database, access the networks to do all the automation,” said Billy Chuang, the solution engineering director of CyberArk, a Nasdaq-listed information security provider. “So leveraging one identity is very risky. If the hacker can compromise that identity, he gets all the access.”
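Why a single stolen machine identity is so dangerous is easy to see in code. The sketch below is a hypothetical example, not how Google, Facebook or CyberArk actually implement such credentials: it issues an HMAC-signed persistent login token, and anyone who copies that token string gets the same access as the legitimate user until it expires.

```python
import hmac, hashlib, time

SERVER_SECRET = b"example-signing-key"  # hypothetical; real services rotate and vault their keys

def issue_cookie(user_id: str, ttl_seconds: int = 30 * 24 * 3600) -> str:
    """Create a persistent, signed login token: user id, expiry and an HMAC signature."""
    expiry = int(time.time()) + ttl_seconds
    payload = f"{user_id}:{expiry}"
    sig = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_cookie(cookie: str) -> str | None:
    """Return the user id if the token is intact and unexpired, otherwise None."""
    user_id, expiry, sig = cookie.rsplit(":", 2)
    expected = hmac.new(SERVER_SECRET, f"{user_id}:{expiry}".encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected) and int(expiry) > time.time():
        return user_id
    return None

cookie = issue_cookie("alice")
print(verify_cookie(cookie))   # "alice" -- whoever holds the string is "alice" to the server
```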
“We received an increasing number of inquiries about machine identity, because [companies are] moving their infrastructure to the cloud,” said Sandy Lau, CyberArk’s district manager for Hong Kong and Macau.
Lau said using multiple cloud services means third and fourth parties are often involved in accessing sensitive data, which can be difficult to manage.
To address this, CyberArk launched an identity-centric secure browser in March. “This browser can secure both the in-house stuff, but also unmanaged devices,” Lau said. The browser separates work and personal applications and domains.
With the help of its Cora AI assistant, which launched in May, the browser is meant to detect and respond to abnormal situations and help users automate operations, increasing productivity at the same time, according to Lau.
However, no security solution is flawless, which is where “white hat” hackers come in. Companies offering this service will simulate attacks and provide cybersecurity assessments on the robustness of a company’s infrastructure.
“When a company asserts that its entire network, system, and data are secure, it’s essential to verify this claim through simulated cyberattacks,” said Lai Qian, assistant president of Integrity Technology, a mainland Chinese cybersecurity firm.
Founded in Beijing, the four-year-old company opened its international headquarters in January at Cyberport, a government-backed hi-tech hub in the southwest of Hong Kong.
An investigation by the Office of the Privacy Commissioner for Personal Data (PCPD) into a 2023 data breach at Cyberport concluded that the hub had weak security infrastructure and defences.
The park relied on a single antivirus programme to protect its vast network, according to the PCPD, and had no multi-factor authentication, which requires users to provide a second credential, such as a one-time code sent to their phone, in addition to a password.
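Those one-time codes are usually generated with the time-based one-time password (TOTP) scheme standardised in RFC 6238. The sketch below is a minimal, generic illustration rather than any vendor's product: the server and the user's phone share a secret and independently derive the same six-digit code for the current 30-second window, so a stolen password alone no longer opens the account.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time code from a shared Base32 secret (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)   # number of the current 30-second window
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation per the RFC
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

# Both the login server and the authenticator app hold the same secret and
# compute the same code, so the password by itself no longer unlocks the account.
print(totp("JBSWY3DPEHPK3PXP"))   # widely used demo secret, not a real credential
```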
“Cyberport is our landlord … At the same time, it is also our customer,” Lai said. “After this incident, we provided security services such as security checks to Cyberport.”
Integrity Technology provides different technical means and tools to examine security risks and vulnerabilities in enterprises’ online systems.
“Simulated cyberattacks and testing are one of the most important criteria in assessing the effectiveness of enterprise security,” said Lai. “We don’t do destructive behaviour to the system, but we take full advantage of the latest technical means, including known vulnerabilities, to test our clients’ digital system in a constant way.”
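Integrity Technology has not described its toolkit in detail, but the non-destructive spirit of such testing can be illustrated with a basic reconnaissance step: checking which network ports on a target answer at all, without sending any exploit. The host and ports below are placeholders, and real engagements only ever run against systems the client has authorised.

```python
import socket

def open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of ports that accept a TCP connection; no payload is sent."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connection was accepted
                found.append(port)
    return found

# Placeholder target: probe your own machine for a few common service ports.
print(open_ports("127.0.0.1", [22, 80, 443, 3306, 8080]))
```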
Integrity Technology has served about 30 clients in Hong Kong, most of them government institutions, such as the city’s newly established Digital Policy Office, the Hong Kong Police Force and Hospital Authority. Lai said Integrity aims to expand its “digital family doctor” services to the finance sector, universities and technology start-ups.
The growing use of cryptocurrencies has also created new vulnerabilities – often with irreversible damage. While Lian and the Arup employee were duped into transferring money manually, crypto can be stolen in the blink of an eye with a wrong click online, creating the need for more crypto-related security solutions.
Yu pointed to transaction confirmation delays and vulnerabilities in smart contracts as issues that could be exploited to steal funds.
In the first half of 2024, SlowMist recorded 223 crypto-related security incidents globally, resulting in losses of US$1.43 billion – a 55 per cent jump over the same period a year ago, according to the company’s Hacked Database.
In a shocking incident in June 2024, a user on OKX – among the world’s largest crypto exchanges by trading volume – claimed that a hacker broke into his account and stole more than US$2 million worth of crypto using an AI-generated deepfake video that bypassed the company’s security system.
Yu said emerging technologies have created new concerns. Crypto firms should operate on a “zero trust principle”, he said, meaning they should use multiple layers of access controls and maintain logs of access to key systems.
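Yu’s “zero trust principle”, never assuming a request is safe simply because it comes from inside the organisation, can also be sketched in code. The example below is hypothetical and not SlowMist’s or any exchange’s implementation: a sensitive operation is granted only if role, multi-factor and network checks all pass independently, and every decision is written to an audit log.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("audit")

@dataclass
class Request:
    user: str
    role: str
    mfa_passed: bool
    source_ip: str
    action: str

ALLOWED_ROLES = {"withdrawals": {"treasury-operator"}}   # hypothetical policy table
ALLOWED_IPS = {"10.0.5.21"}                              # hypothetical office allow-list

def authorise(req: Request) -> bool:
    """Grant access only if every independent check passes; log the decision either way."""
    checks = [
        req.role in ALLOWED_ROLES.get(req.action, set()),
        req.mfa_passed,
        req.source_ip in ALLOWED_IPS,
    ]
    granted = all(checks)
    audit.info("user=%s action=%s ip=%s granted=%s", req.user, req.action, req.source_ip, granted)
    return granted

print(authorise(Request("ops-01", "treasury-operator", True, "10.0.5.21", "withdrawals")))
```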
Companies have responded to these threats by developing tools to help law enforcement and exchanges enhance their cybersecurity. SlowMist, for example, offers a library of malicious blockchain addresses and services for tracing and recovering stolen crypto using AI.
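SlowMist’s tools are proprietary, but the basic idea of screening payments against a library of malicious addresses is simple to show. The sketch below assumes a hypothetical local blocklist file standing in for a curated threat feed; SlowMist’s actual database is not publicly available in this form.

```python
# Minimal pre-transaction screen against a local blocklist of flagged addresses.
# The file name and the destination address are hypothetical stand-ins.

def load_blocklist(path: str = "malicious_addresses.txt") -> set[str]:
    """Read one lowercase address per line into a set for constant-time lookups."""
    try:
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}
    except FileNotFoundError:
        return set()

def is_safe_destination(address: str, blocklist: set[str]) -> bool:
    """Refuse to sign a transfer whose destination appears in the blocklist."""
    return address.lower() not in blocklist

blocklist = load_blocklist()
destination = "0x000000000000000000000000000000000000dead"   # placeholder address
if not is_safe_destination(destination, blocklist):
    print("Blocked: destination is on the malicious-address list")
else:
    print("Destination not flagged; proceed with normal confirmation steps")
```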
One low-tech measure that can help over the long term is education: making people aware of these new avenues of attack.
“I encourage every business to recognise that this is an overwhelmingly likely event for them,” said Dave Russell, vice-president of enterprise strategy at Veeam, a back-up and data recovery software company. “It’s not determined by geography, company size, or any other factor – it could happen to your Hong Kong bank, a disgruntled customer, or even an internal employee.”
For some, though, the help will come too late.
Lian reported the incident to the police but has not recovered her money. Among the more than 250 comments on her Xiaohongshu post were many recommendations for tools such as Whoscall, a Taiwanese app that filters spam phone numbers. Others suggested she call the Hong Kong Police’s Anti-Scam Helpline, 18222.
This was the first time Lian had heard of such tools, she said, adding that she hopes the government can better publicise them in the future.