A recent survey of 500 security professionals by HackerOne, a security research platform, found that 48% believe AI poses the most significant security risk to their organization. Their greatest AI-related concerns include:
Leaked training data (35%).
Unauthorized usage (33%).
The hacking of AI models by outsiders (32%).
These fears highlight the urgent need for companies to reassess their AI security strategies before vulnerabilities become real threats.
AI tends to generate false positives for security teams
While the full Hacker Powered Security Report won’t be available until later this fall, a HackerOne-sponsored report from the SANS Institute revealed that 58% of security professionals believe security teams and threat actors could find themselves in an “arms race” to leverage generative AI tactics and techniques in their work.
Security professionals in the SANS survey said they have found success using AI to automate tedious tasks (71%). However, the same participants acknowledged that threat actors could exploit AI to make their operations more efficient. In particular, respondents “were most concerned with AI-powered phishing campaigns (79%) and automated vulnerability exploitation (74%).”
SEE: Security leaders are getting frustrated with AI-generated code.
“Security teams must find the best applications for AI to keep up with adversaries while also considering its existing limitations — or risk creating more work for themselves,” Matt Bromiley, an analyst at the SANS Institute, said in a press release.
The solution? AI implementations should undergo an external review. More than two-thirds of those surveyed (68%) chose “external review” as the most effective way to identify AI safety and security issues.
“Teams are now more realistic about AI’s current limitations” than they were last year, said HackerOne Senior Solutions Architect Dane Sherrets in an email to TechRepublic. “Humans bring a lot of important context to both defensive and offensive security that AI can’t replicate quite yet. Problems like hallucinations have also made teams hesitant to deploy the technology in critical systems. However, AI is still great for increasing productivity and performing tasks that don’t require deep context.”
Further findings from the SANS 2024 AI Survey, released this month, include:
38% plan to adopt AI within their security strategy in the future.
38.6% of respondents said they have faced shortcomings when using AI to detect or respond to cyber threats.
40% cite legal and ethical implications as a challenge to AI adoption.
41.8% of companies have faced pushback from employees who do not trust AI decisions, which SANS speculates is “due to lack of transparency.”
43% of organizations currently use AI within their security strategy.
AI technology within security operations is most often used in anomaly detection systems (56.9%), malware detection (50.5%), and automated incident response (48.9%).
58% of respondents said AI systems struggle to detect new threats or respond to outlier indicators, which SANS attributes to a lack of training data.
Of those who reported shortcomings with using AI to detect or respond to cyber threats, 71% said AI generated false positives.
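The false-positive problem the respondents describe often comes down to detectors trained on too narrow a baseline: anything outside the training window gets flagged, benign or not. The sketch below is a hypothetical, minimal illustration of that failure mode; the traffic values and threshold are invented for demonstration and are not drawn from either survey.

```python
import statistics

# Invented baseline of "normal" requests-per-minute observed during training.
baseline = [98, 102, 101, 99, 100, 103, 97, 100]
mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)

def is_anomalous(rate, threshold=3.0):
    """Flag any rate more than `threshold` standard deviations from the mean."""
    return abs(rate - mean) / stdev > threshold

# A legitimate traffic spike (say, a marketing campaign) lands far outside
# the narrow training window, so the detector flags it alongside a genuine
# flood -- a classic false positive caused by limited training data.
print(is_anomalous(180))  # benign spike, flagged anyway
print(is_anomalous(100))  # ordinary traffic, not flagged
```

A detector like this has no way to distinguish an unusual-but-benign event from an attack, which is why respondents point to a lack of training data as the root cause of both missed novel threats and noisy alerts.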
Anthropic seeks input from security researchers on AI safety measures
Generative AI maker Anthropic expanded its bug bounty program on HackerOne in August.
Specifically, Anthropic wants the hacker community to stress-test “the mitigations we use to prevent misuse of our models,” including trying to break through the guardrails intended to prevent AI from providing recipes for explosives or cyberattacks. Anthropic says it will award up to $15,000 to those who successfully identify new jailbreaking attacks and will provide HackerOne security researchers with early access to its next safety mitigation system.