
It’s not only social engineering. AI is also being used to create malware that adapts to its environment. In a recent example, researchers from Outflank used an open-source language model to help generate malware that could avoid detection from Microsoft Defender nearly 8 percent of the time. That may sound small, but it only takes one breach. With enough variations and automation, that detection rate becomes extremely dangerous.
By Gary Bernstein
The Good and the Bad of AI in Security
Another major concern in 2025 is prompt injection, where attackers trick an AI system into executing malicious commands. This threat is especially dangerous in AI-powered chatbots, virtual assistants, and automated workflows.
Start now. Make a full list of your AI systems. Review how they interact with users and data. Talk to your team about real risks. Set firm rules. Train people regularly.
Smarter Phishing and Social Engineering
In cybersecurity, speed is essential. But unchecked automation can become its own risk.
This framework helps avoid blind spots. And it keeps your human teams involved.
AI is already deeply embedded in modern security stacks. Security tools use machine learning to spot suspicious behavior, identify malware, and isolate threats before they spread. According to a recent study by Cybersecurity Insiders, over 80 percent of security professionals say AI has improved their ability to respond to threats. About two-thirds of businesses now use AI actively in their cybersecurity programs.
Adaptive Malware Built by AI
If an attacker embeds a hidden instruction in a user message or a web link, they may be able to hijack the AI’s behavior without needing access to the system itself.
These tactics are getting harder to detect, even for people trained to spot phishing.
Prompt Injection and Supply Chain Risks
AI is giving cybercriminals the ability to personalize phishing emails at scale. Using tools powered by large language models (LLMs), attackers can now write emails that mimic the tone, language, and behavior of real people. They scrape information from social media and internal documents to sound believable.
This expands the attack surface dramatically. And since AI often runs across cloud services and APIs, supply chain risks become even harder to control. If your vendor’s model is compromised, you might never know until the damage is done.
Cisco’s 2025 State of AI Security report highlights a sharp increase in AI-driven phishing and social engineering attacks. The report notes that attackers are now using artificial intelligence to craft highly personalized messages that mimic familiar communication patterns. As a result, phishing emails often appear to come from a trusted colleague or vendor, making them much harder to recognize and resist. Unfortunately, it’s not just email. Deepfake voice tools are being used in Business Email Compromise (BEC) scams to impersonate executives over the phone. In some cases, employees transferred funds or shared credentials with someone they thought was their boss.
The Rise of Agentic AI
You can’t afford to sit back. Here’s a practical approach:
The main point is this: trust is not enough. Structure matters. Oversight matters. You must assume that attackers are using every tool you are using. Possibly more.
AI security is not a checklist. It’s an ongoing process. You build it into your team, your tools, and your mindset.
What’s Actually Working
This is the new cybersecurity battleground. AI is both your shield and the adversary’s sword.
- Assess what AI tools and data you’re using
- Set policies on acceptable use and access control
- Build controls to protect data and prevent misuse
- Train people to recognize where AI might go wrong
Agentic AI refers to systems that can take action independently while analyzing data, making decisions, and executing tasks without human approval. These tools are starting to manage parts of incident response, like isolating devices or blocking IP addresses automatically.
A Few More Numbers
- 86 percent of security leaders say AI increased the speed of their team’s response
- 56 percent say it helped them prioritize threats
- 51 percent say AI contributed to at least one successful attack
- AI-driven cloud workloads are growing up to 22 percent annually
What to Do Next
AI-generated malware can test itself against common antivirus tools. It learns which lines of code trigger alarms. Then it rewrites itself until it passes. This iterative method is fast and cheap. And it’s growing.
- Know your AI systems. Make an inventory of every AI tool in use from chatbots, monitoring tools, third-party APIs.
- Set clear rules. Define what those tools can access and who controls them.
- Use AI to defend. Invest in AI-powered threat detection, but pair it with human oversight.
- Protect your inputs. Filter user inputs and monitor for prompt injection attempts.
- Train your team. Show them real examples of AI-generated phishing and deepfakes.
- Audit frequently. Review how your AI tools perform. Log every strange event.
In 2025, staying ahead means staying aware. Keep your systems secure while keeping your teams informed.
One Final Thought
AI is no longer a side tool. It is part of your defense. It is also part of the threat.
That helps reduce response time. But it also introduces new risk. If an AI agent misclassifies a threat or reacts too aggressively, it could shut down critical systems or block legitimate users. These are not theoretical problems. They’ve already happened.
But here’s the flip side. In that same study, 51 percent of respondents said AI had played a role in recent successful cyberattacks. The power of AI cuts in both directions. Attackers are not just using it, they’re building attack strategies around it.
In 2025, artificial intelligence is changing the rules of cybersecurity. You see it in real time. AI is helping organizations stop threats faster, sort through alerts, and protect user data. But on the flip side, attackers are using that same power to launch smarter and more dangerous campaigns. The tools that help defend your business can just as easily be used against it.
Despite the risks, AI is still helping. According to TechRadar, the best approach to AI in security follows four phases: