As cyberattacks grow faster, more automated and more convincing, technology alone is no longer enough to keep organizations secure. Many of today’s most damaging threats succeed not because digital systems fail, but because of human behavior: placing too much trust in AI and automation, letting sound security habits fall by the wayside, or succumbing to emotional manipulation.
Understanding where human behavior introduces risk and how small changes can strengthen resilience is increasingly critical as attackers refine their tactics. Below, members of Forbes Technology Council highlight human-centered cybersecurity risks leaders should not ignore, along with practical steps to reduce exposure and improve overall security posture.
Letting Organizational Culture Undermine Security
The biggest cyberthreat is an obsolete organizational culture, which creates severe security vulnerabilities. Risk-averse cultures block timely responses to the market, and “not my problem” attitudes create security issues like misconfigurations. We need to transform culture by making “everyone is a security engineer” a core value, shifting security from a retrospective gate to a shared, proactive responsibility. – Joey Ahnn, SSG.COM
Losing The Habit Of Professional Skepticism
Attackers leverage advanced technology to further their ability to deceive humans. Our most valuable human capability to counter this is to be professionally skeptical and feel empowered to challenge. Security culture is more than a “corporate speak” element of a program; it is increasingly foundational to our ability to operate in a rapidly changing world. – Kim Bozzella, Protiviti
Normalizing Risk In The Rush To Scale AI
In their urgency to scale AI quickly, leaders often rationalize unnecessary risks with sensitive data because they believe they must choose between competitive advantage and robust data protection. This trade-off between security and innovation leads to an ethical blind spot—they’ve essentially chosen speed over their duty to protect the data their users have entrusted to them. – Beena Jacob, Donoma Software
Overestimating Personal Ability To Spot Phishing
I’ve seen overconfidence in spotting phishing emerge as a major vulnerability. Employees often think they can identify scams, but attackers use highly personalized tactics. My advice is to implement ongoing simulation training and real-time reporting tools. This builds awareness, reinforces caution and creates a culture where verifying before clicking becomes second nature. – Laxmi Vanam
Falling For Emotionally Manipulative Phishing Tactics
One growing human-related vulnerability is emotional manipulation in AI-powered phishing, where attackers exploit urgency, fear or trust. Organizations must go beyond basic training and implement scenario-based simulations that evolve with threat patterns. Reinforcing critical thinking and emotional awareness can build a stronger frontline defense against automated social engineering. – Govinda Rao Banothu, Cognizant Technology Solutions
Trusting Voices Without Verifying Identity
Person-spoofing using voice replication will have a big impact on cybersecurity, leading to more sophisticated attacks that use voiceprints captured from just seconds of audio to mimic real people. Organizations need to add new training and more sophisticated voiceprint matching as defensive tools. – Kali Durgampudi, Apprio
Avoiding Simple Verification In Remote Work Settings
Remote work isolation makes verification awkward. In an office, you’d walk over and confirm a weird request. At home, people hesitate to “bother” colleagues over Slack or video. Attackers exploit that friction. Normalize quick verification calls. Make it culturally acceptable to say, “Let me just confirm this is really you” without it feeling accusatory. – Marc Fischer, Dogtown Media LLC
Oversharing Small Details That Enable Big Attacks
Underestimating small personal or company details shared on social media—like a badge photo or a post about tomorrow’s corporate event—creates hidden risks. Modern AI models can aggregate this microdata to launch personalized social engineering attacks. The solution: digital hygiene training and a clear corporate social media policy defining what employees may share publicly. – Illia Smoliienko, Waites
Relying On Users To Spot Email Deception
On a daily basis, email users are tempted to either give up their credentials or click on a malicious link. Email security needs to be revisited and, in many cases, replaced. Most system admins have no way to block credentials entered on a bogus site. Training users not to “click this” but to “click that” is not ideal. Companies still lack proper DNS protections to stop spoofing. – Robert Giannini, GiaSpace Inc.
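The “DNS protections” Giannini mentions are the SPF, DKIM and DMARC records a domain publishes so that receiving servers can reject forged mail. A minimal sketch of what those records look like, assuming a hypothetical domain example.com and placeholder key material:

```
; SPF: only the listed servers may send mail claiming to be example.com
example.com.                      IN TXT "v=spf1 include:_spf.example.com -all"

; DKIM: public key that receivers use to verify message signatures (placeholder key)
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64-public-key>"

; DMARC: reject mail failing SPF/DKIM alignment; send aggregate reports for review
_dmarc.example.com.               IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

A DMARC policy of p=reject is what actually stops direct domain spoofing; it is typically rolled out only after a monitoring period at p=none so that legitimate mail streams are not dropped.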
Ignoring Real Threats Due To Alert Fatigue
Alert fatigue is turning into a silent cyber weakness. After enough pings, prompts and “urgent” warnings, people stop thinking and start clicking. The fix isn’t more training—it’s less noise. Streamline alerts, sharpen approval flows and teach teams what truly matters. A quieter environment makes humans far tougher to deceive. – Balaji Adusupalli
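One practical way to cut that noise is deduplicating repeated alerts before they reach a person. A minimal sketch in Python; the alert fields and the fifteen-minute window are illustrative assumptions:

```python
import time

class AlertSuppressor:
    """Suppress duplicate alerts seen within a cooling-off window."""

    def __init__(self, window_seconds: int = 900):
        self.window_seconds = window_seconds
        self._last_seen: dict[str, float] = {}  # fingerprint -> last notify time

    def should_notify(self, source: str, rule: str) -> bool:
        fingerprint = f"{source}:{rule}"
        now = time.monotonic()
        last = self._last_seen.get(fingerprint)
        if last is not None and now - last < self.window_seconds:
            return False  # duplicate inside the window: stay quiet
        self._last_seen[fingerprint] = now
        return True

suppressor = AlertSuppressor(window_seconds=900)  # 15-minute window
if suppressor.should_notify("edr", "suspicious-login"):
    print("page the on-call analyst")
```

Pairing suppression like this with severity-based routing keeps the alerts that do get through worth a human’s attention.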
Accepting AI Suggestions Without Question
A rising vulnerability is employees relying too heavily on “smart suggestions” from AI tools and approving changes they don’t fully understand. It’s not phishing; it’s overhelping. I think we should teach teams to pause and question automated suggestions the same way they would question a human colleague. – Ajay Pandey, Tradeweb Markets
Assuming Automation Is Always Acting Correctly
A growing vulnerability is overtrust in “friendly” automation. Teams often assume bots, scripts and AI agents behave consistently, and humans rarely question their actions. Treat every non-human actor like a dynamic identity: Verify intent, validate permissions and use digital twins to simulate impact. Trust should be earned continuously, not assumed. – Peter Hill, Gathid
Operating On Autopilot When Managing Critical Workflows
One rising vulnerability is “workflow autopilot,” where employees approve tasks, invoices or access requests without rereading them because AI and automation make everything look routine. My advice: Add lightweight, mandatory verification steps for high-impact actions. A few seconds of friction prevent expensive mistakes. – Nidhi Jain, CloudEagle.ai
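What that lightweight friction can look like in practice is a guard that refuses to silently approve anything above an impact threshold. A minimal sketch in Python; the impact score, threshold and action fields are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str   # e.g., "approve invoice #1234"
    impact_score: int  # 0-100, higher means more damage if wrong

HIGH_IMPACT_THRESHOLD = 70  # illustrative cutoff

def execute(action: Action) -> None:
    print(f"executing: {action.description}")

def approve(action: Action) -> None:
    if action.impact_score >= HIGH_IMPACT_THRESHOLD:
        # Mandatory re-read: the approver must restate what they are approving.
        answer = input(f"Type APPROVE to confirm '{action.description}': ")
        if answer.strip() != "APPROVE":
            print("held for review")
            return
    execute(action)

approve(Action("grant admin access to contractor account", impact_score=95))
```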
Approving Actions Without Verifying AI Intent
Organizations underestimate the risk of humans acting on AI-generated instructions without verification. As agentic AI grows, the goal should be to keep unverified humans and agents out of authentication and approvals. Automate trust decisions with strong identity, policy enforcement and cryptographic proof so actions rely on validated intent rather than human judgment. – Jason Sabin, DigiCert Inc.
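One concrete pattern behind “cryptographic proof” is requiring every agent-issued action to carry a signature that the approval service verifies before anything runs. A minimal sketch using an HMAC shared secret; the key handling, payload fields and agent registry are illustrative assumptions (a production system would use asymmetric keys and a real identity provider):

```python
import hashlib
import hmac
import json

# Illustrative only: in production, keys come from a vault, not a literal.
AGENT_KEYS = {"deploy-agent": b"example-shared-secret"}

def sign_action(agent_id: str, payload: dict) -> str:
    """Agent side: sign the canonical JSON form of the requested action."""
    message = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(AGENT_KEYS[agent_id], message, hashlib.sha256).hexdigest()

def verify_action(agent_id: str, payload: dict, signature: str) -> bool:
    """Approval side: run nothing unless the signature checks out."""
    key = AGENT_KEYS.get(agent_id)
    if key is None:
        return False  # unknown agent: reject outright
    message = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

payload = {"action": "restart-service", "target": "billing"}
sig = sign_action("deploy-agent", payload)
print(verify_action("deploy-agent", payload, sig))                      # True
print(verify_action("deploy-agent", {**payload, "target": "auth"}, sig))  # False: tampered
```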
Offloading Thinking To AI
When AI takes over routine tasks, people stop practicing critical thinking and lose confidence in their own judgment. Over time, this creates dependency and weakens decision-making skills. My advice: Keep humans in the loop, rotate tasks, encourage manual checks and build a culture where humans actively validate AI outputs instead of blindly accepting them. – Vasanth Mudavatu, Dell Technologies
Lacking The Skills To Use AI Defensively
AI is a powerful tool in cybercriminals’ arsenal. Organizations need intelligent solutions to thwart AI-powered attacks. However, practitioners lack the skills to adopt defensive AI tools, with 48% of respondents to a global survey revealing that a lack of staff with AI expertise is the biggest challenge to implementing AI for cybersecurity. Specialized AI training is key for cyber defenders to stay ahead. – Michael Xie, Fortinet
Letting AI Browsers Act Without Oversight
Humans take the path of least resistance. AI browsers (and browser AI extensions) allow users to automate routine tasks without IT. Unfortunately, AI is enthusiastic and fast and lacks security training. It is prone to take actions that get the job done expediently without weighing risk. Deploy tools to control how a browser is used by both AI and people. – John Carse, SquareX
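Browser controls of the kind Carse describes are often pushed through managed policy. As one illustration, a Chrome enterprise policy fragment that blocks all extensions except an explicitly vetted one; the two policy names are real Chrome settings, while the allowlisted ID is a placeholder:

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": ["<vetted-extension-id>"]
}
```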
Encouraging Shadow AI Through Restrictive Policies
Nearly 60% of employees use unapproved AI tools at work. Overly restrictive or poorly implemented AI governance policies can have the unintended effect of pushing employees toward shadow AI to boost productivity. Provide sanctioned AI tools when possible and help team members easily vet new ones. Make the official path too convenient to be worth working around. – David Talby, John Snow Labs
Giving Employees More Access Than They Need
Organizations should be paying attention to employee access. We still see companies trying to employ all sorts of high-tech defense mechanisms while their employees have too much access to their systems and networks. This is especially true for admin access. Organizational leaders need to think long and hard about least privilege and not just use the term loosely. – Shane O’Donnell, Centric Consulting
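Least privilege becomes measurable when each grant names the exact actions and resources a role needs and nothing more. A minimal sketch in AWS IAM’s policy language; the bucket name is an illustrative placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyReportsBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-reports-bucket",
        "arn:aws:s3:::example-reports-bucket/*"
      ]
    }
  ]
}
```

Spelling access out this way means anything resembling “s3:*” or standing admin rights stands out immediately in a periodic access review.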
Trusting Platforms Without Understanding Data Exposure Risks
A growing vulnerability is the way attackers now operate like businesses. Even friendly freemium platforms quietly harvest first-, second-, third- and fourth-party data. They look harmless, yet they exploit the same data layers hackers automate. The fix is clarity: Teach teams how their data is collected and transformed so trust is earned, not assumed. – Doug Shannon