Increase in Cybercrime and Data Leaks Driven by AI Arms Race

As artificial intelligence tools become more common in businesses, they are adding a new layer of cybersecurity risk to an already difficult defensive job. According to a recent report from Check Point Software Technologies, companies eager to adopt AI are largely unprepared for the security threats that come with it, from data leaks to more sophisticated phishing attacks.

Check Point surveyed more than 1,000 cybersecurity professionals and found that 47% of them had discovered employees uploading sensitive data to AI tools in the past year. The finding underscores a growing sense of urgency among Chief Information Security Officers (CISOs) and IT leaders, who are struggling to keep pace with the rapid adoption of generative AI before robust security measures are in place.

Sergey Shykevich, the threat intelligence group manager at Check Point, pointed out that many organizations are struggling to implement adequate controls to prevent data exposure. The line between innocent employee curiosity and a harmful data leak is becoming increasingly thin.

The growing adoption of AI has also opened new avenues for attack, with criminals adapting their tactics to exploit large language models and automation. A full 70% of survey respondents said bad actors are already using generative AI to carry out phishing and social engineering attacks, making their lures more sophisticated and believable.

The consequences of these security lapses are real. Approximately 16% of survey participants reported data leaks in the past year directly tied to their companies' use of generative AI applications. In some cases, employees unintentionally exposed confidential information such as customer records, source code, and sensitive documents by pasting it into external AI services.

Samsung, for example, revealed that it had banned employees from using ChatGPT after an engineer uploaded internal code to the platform. Companies in industries ranging from banking to defense have since issued similar directives, with some opting to build their own AI systems in-house rather than share data with external providers.

Even companies with strict policies struggle to monitor the flow of third-party AI tools into their networks. The Check Point report calls this phenomenon “Shadow AI”: employees bypassing official channels to experiment with AI models, sometimes leaving security considerations behind.
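
To illustrate what monitoring for shadow AI can look like in practice, here is a minimal sketch that scans outbound proxy logs for requests to known generative AI services. The domain watchlist and log format are illustrative assumptions, not drawn from the Check Point report or any specific product.

```python
# A minimal sketch of how an IT team might surface "shadow AI" usage by
# scanning outbound proxy logs for requests to known generative AI services.
# The domain list and log format below are illustrative assumptions.

from collections import Counter

# Hypothetical watchlist of external AI service domains.
AI_SERVICE_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_shadow_ai(log_lines):
    """Count requests per user to domains on the AI watchlist.

    Assumes each log line looks like: "<timestamp> <user> <domain> <path>".
    """
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        user, domain = parts[1], parts[2]
        if domain in AI_SERVICE_DOMAINS:
            hits[(user, domain)] += 1
    return hits

if __name__ == "__main__":
    sample = [
        "2024-05-01T09:12:03Z alice chat.openai.com /c/new",
        "2024-05-01T09:14:41Z bob intranet.example.com /wiki",
        "2024-05-01T10:02:17Z alice api.openai.com /v1/chat/completions",
    ]
    for (user, domain), count in flag_shadow_ai(sample).items():
        print(f"{user} -> {domain}: {count} request(s)")
```

In real deployments this kind of check typically runs against proxy or DNS telemetry and feeds alerts into existing monitoring, rather than being run as a standalone script.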

Enforcing these policies remains a challenge for many IT teams, since employees often find ways to use productivity-boosting AI tools even when doing so violates corporate guidelines. Only 28% of survey respondents said their organizations had comprehensive, up-to-date rules governing the use of generative AI, leaving security monitoring largely reactive rather than proactive.

As attackers continue to innovate, using AI-powered tools to create deepfake videos and clone voices for spear-phishing, detecting and attributing these threats becomes increasingly difficult. Regulators such as the U.S. Securities and Exchange Commission and lawmakers in the European Union are taking notice, pushing companies to disclose AI- and cyber-related risks and setting stricter standards for transparency and accountability.

To mitigate these risks, the report recommends educating employees on AI-specific threats, investing in data loss prevention technologies that can monitor how AI tools are used, and adopting approved AI solutions hosted within the organization so that sensitive data stays in secure environments. As AI technology evolves, experts say, defenses will need to evolve just as quickly.
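
To make the data loss prevention recommendation concrete, here is a minimal sketch of the kind of check such a tool might run on outbound text before it reaches an external AI service. The patterns, rule names, and blocking behavior are illustrative assumptions, not taken from the Check Point report or from any particular DLP product.

```python
# A minimal sketch of a DLP-style check that inspects text before it is sent
# to an external AI service. The patterns and the blocking behavior are
# illustrative assumptions; commercial DLP tools use far richer classification.

import re

# Hypothetical patterns for data that should never leave the organization.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_marker": re.compile(r"(?i)\b(confidential|internal use only)\b"),
}

def check_prompt(prompt: str):
    """Return the names of any rules the prompt violates."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def send_to_ai_service(prompt: str):
    violations = check_prompt(prompt)
    if violations:
        # Block (or log and alert) instead of forwarding the prompt.
        raise ValueError(f"Prompt blocked by DLP rules: {', '.join(violations)}")
    # ... forward the prompt to an approved, internally hosted model here ...
    return "ok"

if __name__ == "__main__":
    try:
        send_to_ai_service("Please summarize this CONFIDENTIAL customer list ...")
    except ValueError as err:
        print(err)
```

A check like this is only one layer; the report's broader point is that it works best alongside employee training and approved, internally hosted AI services.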