- The Sophos report finds that companies are worried about generative AI
- 99% say AI capabilities are essential when choosing a cybersecurity vendor
- A human-first approach appears to be key, says Sophos
The rise of artificial intelligence is coming at the cost of an increase in cybersecurity threats, and companies are struggling to adapt, new research says.
A Sophos report revealed that nine in ten (89%) IT leaders are concerned that generative AI systems could harm their businesses' cybersecurity strategies.
Despite this, almost all (99%) IT leaders now consider AI capabilities essential when selecting a cybersecurity provider, in a perfect example of fighting fire with fire.
The role of AI in cybersecurity
Artificial intelligence has given threat actors new powers, turning unskilled attackers into more sophisticated code creators while making it harder for analysts to trace the origin of threats.
One in five respondents expected AI to help them improve protection against cyberattacks, and 14% hoped it would reduce employee burnout.
However, everything comes at a cost: four in five believe the new AI features integrated into their cybersecurity solutions will increase the price of those tools. Even so, 87% believe the savings will outweigh the initial costs.
“We haven’t really taught machines to think; we have simply provided them with the context to speed up the processing of large quantities of data,” said Sophos Global Field CTO Chester Wisniewski, adding that companies must “trust but verify” GenAI tools.
An overwhelming majority (98%) of the companies surveyed now have some form of AI integrated into their cybersecurity infrastructure, but 84% are concerned about pressure to reduce headcount due to over-reliance on the technology.
Wisniewski added: “The potential of these tools to accelerate security workloads is amazing, but they still require the context and comprehension of their human overseers for that benefit to be realized.”
Looking ahead, Sophos urges IT leaders to assess AI vendors on factors such as the quality and source of their training data, to set measurable outcomes they hope to achieve with AI, and to adopt a human-first approach.