Social platforms have integrated mechanisms to show which content was created using artificial intelligence. However, the Kaspersky Security Bulletin 2025 Statistics report warns about the lack of consistent criteria and the ease with which cybercriminals can remove the labels implemented to alert users. As Kaspersky details, the development of “deepfakes” and the continued sophistication of generative artificial intelligence are among the biggest cybersecurity threats for 2026, shaping both attacks on and the defense of digital networks and systems.
According to the report published by Kaspersky, the rise of artificial intelligence will change the nature of digital attacks and give any user, even one without technical experience, access to advanced tools for creating highly realistic manipulated content such as fake images, video and audio. The company also expects deepfakes to keep evolving toward real-time manipulation, such as changing a person’s voice and face during a video call, increasing the risk of personalized attacks and other fraud involving identity theft.
Kaspersky emphasizes that although some social platforms and digital services do warn about the presence of artificial intelligence-generated content, these warnings leave users largely unprotected because common standards are lacking and the labels are easy to bypass or remove. Given this scenario, the company predicts that new initiatives will emerge over the next year aimed at introducing stricter regulations and technical solutions to close the detection gap.
According to Kaspersky, the ability to manipulate audiovisual content has improved, especially for audio. Deepfake generation programs are becoming increasingly user-friendly, which makes attacks based on this type of deception more accessible and more common. This could expand the malicious use of deepfakes beyond targeted cybercrime to other forms of manipulation on social networks and messaging platforms.
The Kaspersky report also notes that open-source artificial intelligence models have reached a level of performance comparable to the closed models of large technology companies. Because they lack equally strict controls and protection mechanisms, however, these open models pose an additional risk: they can be used for legitimate purposes as well as for criminal activity.
Some of the most elaborate practices include creating professional-looking emails to deceive victims, faking the visual identity of well-known brands, and designing highly credible “phishing” websites. The same report warns that the normalization of artificial intelligence-generated content in advertising campaigns will make it even harder for users to distinguish authentic information from manipulated material.
Another dimension Kaspersky highlights regarding artificial intelligence in cybercrime is its use across the entire attack chain: from generating malicious code and identifying new vulnerabilities to delivering malicious software, or “malware”. Malicious actors also try to hide any evidence that reveals the use of AI tools, complicating the work of forensic teams responsible for post-attack analysis.
In the area of digital defense, Kaspersky describes a parallel development: artificial intelligence will not only serve as a tool for attackers but also as a valuable resource for cybersecurity professionals. So-called “agent-based” tools will be able to continuously monitor technology infrastructures, identify security vulnerabilities and provide analysts with immediate contextual information. This automation will allow companies to respond to incidents more quickly and efficiently.
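The report describes these agent-based defensive tools only at a conceptual level. Purely as an illustration of that concept, the minimal Python sketch below shows an agent-style loop that polls an inventory, collects findings and hands analysts some context; every name in it (HOSTS, check_host, enrich, notify_analyst) is a hypothetical placeholder rather than part of any Kaspersky product or real API.

```python
# Illustrative sketch only: an agent-style monitoring loop. All names are
# hypothetical placeholders, not a real product interface.
import time

HOSTS = ["web-01.example.internal", "db-01.example.internal"]  # assumed inventory


def check_host(host: str) -> list[str]:
    """Placeholder scan: a real agent would query telemetry or a vulnerability feed."""
    return []  # e.g. a finding like "outdated TLS library exposed on port 443"


def enrich(host: str, finding: str) -> dict:
    """Attach the immediate context an analyst would need."""
    return {"host": host, "finding": finding, "seen_at": time.time()}


def notify_analyst(event: dict) -> None:
    """Stand-in for a ticketing or SIEM integration."""
    print(f"[alert] {event['host']}: {event['finding']}")


def monitor(poll_seconds: int = 300) -> None:
    """Continuously watch the inventory and surface findings with context."""
    while True:
        for host in HOSTS:
            for finding in check_host(host):
                notify_analyst(enrich(host, finding))
        time.sleep(poll_seconds)


if __name__ == "__main__":
    monitor()  # runs until interrupted
```

In practice such an agent would plug into real telemetry feeds and a ticketing or SIEM system rather than printing to the console; the loop is only meant to show the continuous monitor-enrich-alert pattern the report alludes to.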
Another trend covered in the report is the adoption of natural language interfaces for digital defense solutions. Instead of requiring complex commands or technical instructions, these tools will let operators issue simple requests in everyday language. This simplification promises to bring cyber protection within reach of a wider range of professionals and to make incident management easier.
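Kaspersky does not specify how such interfaces would work internally. As an illustration of the interaction pattern only, the toy sketch below maps an everyday question onto an invented structured log filter using simple keyword rules; a real natural language interface would rely on a language model, and the field names here are assumptions for the example, not any product’s query schema.

```python
# Toy sketch: translating a plain-language question into a structured log
# filter. Keyword rules stand in for a language model; field names are invented.
def to_query(question: str) -> dict:
    """Map an everyday question to a hypothetical structured log filter."""
    q = question.lower()
    query = {"event": "any", "last_hours": 24}
    if "failed login" in q or "login failure" in q:
        query["event"] = "auth_failure"
    if "last week" in q:
        query["last_hours"] = 24 * 7
    return query


# Usage: the operator types a question instead of a query-language expression.
print(to_query("Show me failed logins from the last week"))
# -> {'event': 'auth_failure', 'last_hours': 168}
```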
The report published by Kaspersky concludes that generative artificial intelligence and agent-based solutions will play a prominent role both in expanding cyberattacks and in strengthening protection techniques. Developing common standards and promoting technical initiatives will be essential to address the challenges that will dominate the digital landscape in the coming year, according to the security company.