AI anxiety has hit Asia Pacific as a new study reveals that 41% of businesses in the region have suffered data breaches caused by AI in the last 12 months.
The study, conducted by leading connectivity cloud company Cloudflare and published on 8 October, also reveals that 87% of respondents are concerned about AI increasing the sophistication and severity of data breaches in the future.
Notably, 47% of businesses suffered more than 10 data breaches, with the figure rising in Construction and Real Estate (56%), Travel and Tourism (51%), and Financial Services (51%).
Data breaches on the rise
AI is a fast-evolving technology that businesses and individuals have widely adopted to increase efficiency and productivity.
However, there have long been concerns that it could also be put to malicious use, and the study suggests those concerns are not misplaced.
According to the study, AI-driven data breaches are on the rise in the Asia Pacific region, with threat actors mostly targeting user information.
Of the breaches reported, 67% targeted user data, 58% user access credentials, and 55% financial data.
Overall, 87% of respondents expressed concern that AI will make it easier for bad actors to initiate data breaches that cybersecurity professionals will find harder to contain.
Some respondents expect AI to be used to crack passwords or encryption, enhance phishing and social engineering attacks, amplify DDoS attacks, create deepfakes, and facilitate privacy breaches.
Concerns despite regulation
These concerns have prompted regulators to turn their attention to the industry in recent months, seeking to ensure that the technology is not exploited for harmful purposes.
So far, however, the burden of regulation appears to be falling mainly on businesses.
Data from the study shows that 43% of respondents spend more than 5% of their IT budget on regulatory and compliance requirements.
Meanwhile, 48% spend more than 10% of their work week keeping up with the latest industry regulatory and certification requirements.
Although these efforts have helped businesses improve their baseline privacy and security and the integrity of their technology and data, regulators may need to do more to curb the malicious use of AI.