You’ve undoubtedly seen some of the recent press about AI – sadly, much of it negative. People are rightly worried about generative AI and how it will impact jobs and creativity, the everyday effects of which are slowly starting to trickle through into reality.
Change is always scary, but the pace at which AI technology is advancing is sending many into a panic. Yet AI is neutral. It’s a productivity tool. Something to help us get things done quicker.
As cyber security experts, we can’t speak much to administrative roles, creative industries, or to others who may be feeling the AI pinch, but we can certainly see numerous silver linings in using AI in the fight against cybercrime.
How Does AI Currently Help Fight Cybercrime?
Though the public hype around AI has risen a few degrees lately, AI-powered cyber security tools are thankfully nothing new. There are many ways in which AI is already being tasked with fighting cybercrime – and doing a bloomin’ good job at it.
SonicWall’s Capture Labs have been using artificial intelligence in their threat research and protection for over a decade now. Sophos is particularly focused on AI, machine learning, and data science in information security. WatchGuard uses AI throughout their range of products.
Though the specific AI-driven crime-thwarting potential differs between each product, each brand, and each use case, all of these solutions (and many more) are already out there for us to use and are getting better every day.
How AI Will Help Security Professionals Fight Cybercrime
We foresee AI cybersecurity tools getting better and better – and rather optimistically, we see them eventually winning the war against cybercrime. Many of these predictions are rooted in things that cybersecurity AIs are already doing, though some of our hypotheses may seem a little sci-fi at the moment! All in all, they’re an indication of where we see AI taking us in terms of fighting cybercrime.
Large Scale Data Analysis
One area where AI has proven itself time and time again is large-scale data analysis. Machine learning (ML) algorithms can pore over swathes of data and uncover patterns, context, insights, anomalies, and priorities at a speed that would be impossible for humans.
This will surely be a real benefit to the security community. Whether an algorithm is set to work analysing network traffic for anomalous activity, flagging phishing emails at speed, detecting the spread of zero-day malware, or something else entirely, AI will be able to analyse and flag threats of all kinds in short order – freeing security analysts from having to manually trudge through data across the various domains of cybersecurity.
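To make that a little more concrete, here’s a minimal Python sketch of what flow-level anomaly detection might look like under the hood. The data, feature names, and contamination setting are purely illustrative assumptions, not taken from any particular product:

```python
# A minimal sketch of ML-driven anomaly detection over network flows,
# using scikit-learn's IsolationForest. All data and feature names here
# are hypothetical stand-ins for a real telemetry pipeline.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical flow records: four ordinary flows and one oddball.
flows = pd.DataFrame({
    "bytes_out":   [1_200, 900, 1_100, 980, 75_000_000],
    "packets_sec": [12, 9, 11, 10, 4_500],
    "dst_port":    [443, 443, 80, 443, 6667],
})

# Fit on the traffic and mark the most isolated (unusual) flows.
model = IsolationForest(contamination=0.2, random_state=42)
flows["anomaly"] = model.fit_predict(flows)  # -1 = anomalous, 1 = normal

print(flows[flows["anomaly"] == -1])  # surfaces the flow worth a closer look
```

The point isn’t this particular algorithm – it’s that a model can triage millions of such records long before a human analyst could open the first spreadsheet.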
Automated Malware Detection
In the bad old days of malware detection, antivirus tools relied heavily on matching known threat file “signatures” to the files present on a device. If the antivirus uncovered a file with a known virus signature (along with a few other telling characteristics), then the file was flagged as malicious.
This worked fine back when viruses were a rare, minor annoyance. However, it only works against known threats, and a file only needs to be tweaked slightly for its signature to change and evade detection. In our current world of zero-day threats and new malware variants popping up every day, signature-based detection is simply not enough.
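As a toy illustration of just how brittle that approach is, here’s a short Python sketch – the “signature database” is a made-up hash, not a real one. Change a single byte of the payload and the match fails, even though the behaviour is identical:

```python
# A toy illustration of classic signature-based detection.
# The "signature database" below holds a hypothetical SHA-256 hash.
import hashlib

KNOWN_BAD_HASHES = {
    hashlib.sha256(b"EVIL_PAYLOAD_v1").hexdigest(),  # pretend known sample
}

def is_known_malware(file_bytes: bytes) -> bool:
    """Flag a file only if its hash matches a known signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

print(is_known_malware(b"EVIL_PAYLOAD_v1"))   # True – exact match
print(is_known_malware(b"EVIL_PAYLOAD_v1!"))  # False – one byte changed,
                                              # same behaviour, undetected
```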
More modern threat-hunting tools rely on heuristic analysis, which determines whether something is malware based on its behaviour rather than its characteristics. If a file is behaving suspiciously, or simply doing something unexpected of it, then a heuristic antimalware tool will generally flag it and attempt to neutralise the threat. This neatly sidesteps the issue of needing to know about a threat’s existence (and its signature) before being able to defend against it.
Many endpoint protection tools already use some kind of AI-enhanced heuristic threat prevention, which serves to further analyse any nefarious behaviour that may be happening on a device just below the surface.
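For a feel of the behavioural idea, here’s a deliberately simplified sketch – the behaviours, weights, and threshold are our own illustrative assumptions, not any vendor’s actual detection rules:

```python
# A minimal sketch of behaviour-based (heuristic) detection: score what a
# process *does*, not what its bytes look like. Weights are illustrative.
SUSPICIOUS_BEHAVIOURS = {
    "encrypts_many_files_quickly": 5,   # classic ransomware tell
    "modifies_boot_records": 4,
    "disables_security_tooling": 4,
    "reads_browser_credentials": 3,
    "connects_to_unknown_host": 2,
}

def threat_score(observed_events: list[str]) -> int:
    """Sum the weights of every suspicious behaviour we observed."""
    return sum(SUSPICIOUS_BEHAVIOURS.get(event, 0) for event in observed_events)

events = ["connects_to_unknown_host", "encrypts_many_files_quickly"]
if threat_score(events) >= 5:  # illustrative threshold
    print("Quarantine: behaviour looks malicious, whatever the file's hash")
```

In real products the weighting is increasingly learned by ML models rather than hand-written, but the principle is the same: judge the behaviour, not the signature.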
As the tech develops, we see it, at the very least, boosting the speed at which on-device threats can be uncovered. At most, AI threat detection has the potential to put a massive dent in the entire concept of zero-day malware – possibly even in malware as a whole.
Think about it. Before we know that a zero-day threat is doing the rounds, it has to… well… do the rounds a bit. But once we are armed with effective ML-powered tools, there’s the potential to analyse and quash zero-day activity as soon as it rears its ugly head – and to report it back to the cybersecurity community at a speed not yet seen. It could even potentially serve to combat metamorphic and polymorphic malware that deliberately changes its appearance to obfuscate its presence.
Social Engineering & Spam Detection
To err is human – and according to research from the WEF, 95% of cyber breaches can be traced to human error. Given the rise of deepfake technology and the 135% rise in “novel” social engineering attacks (which could potentially involve AI), it would be easy to think the attackers’ AI will win the social engineering battle. But will it really?
We happen to agree with Eyal Benishti’s article in Forbes when he says “AI Is The Problem, And AI Is The Solution.”
Thanks to readily available generative AI platforms, we may well start to receive more in-depth, better-written, better-coded social engineering threats – and lots of them – all created with the help of generative AI. This wave can’t simply be stemmed with traditional cyber awareness training and the threat detection tools we know today. We need to fight AI fire with AI fire.
AI’s data-crunching ability can be put to good use here: it could analyse each email’s source, content, metadata, timing, and much more in order to gauge its risk. It could analyse trusted mail too, to help it discern good mail from bad.
A system like this could further be bolstered by an email reporting process that lets human workers flag potentially risky emails too – giving the AI even more insight into what humans deem benign and what we deem malicious.
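As a rough illustration of the triage side, here’s a minimal Python sketch trained on a tiny, made-up dataset – a production system would also fold in sender reputation, metadata, timing, and that analyst feedback loop:

```python
# A minimal sketch of ML-based phishing triage: learn from labelled
# "good" and "bad" mail, then score new messages. Dataset is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Quarterly report attached, see figures on page 3",       # benign
    "URGENT: verify your account now or it will be closed",   # phishing
    "Lunch on Friday? The usual place at noon",               # benign
    "You have won a prize, click here to claim immediately",  # phishing
]
labels = ["ham", "phish", "ham", "phish"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Score a new message; anything flagged goes to a human for review, and
# the analyst's verdict can be fed back in as fresh training data.
print(model.predict(["Please verify your password urgently"]))
```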
But Benishti makes another point we agree on: even with these incredibly powerful new tools freely at our disposal, training mustn’t fall by the wayside. When it comes to any kind of social engineering, your first line of defence will always be your human team members, their cybersecurity knowledge, and their secure habits.
IT Asset Management on Autopilot
Back in the day, when all of an organisation’s PCs were always on site and hard-wired into the network, keeping a lid on those assets was easy-peasy. If you wanted to check on a machine, you could toddle on down to its location and see it with your own eyes.
However, now we have WiFi; Internet of Things (IoT) devices; Bring-Your-Own-Device (BYOD) policies; smart appliances; cloud software and storage; and remote working to contend with. It’s harder than ever to ascertain where the boundaries of your own IT infrastructure begin and end. Pair this with the fact that keeping a lid on your own infrastructure is typically a reactive process rather than a proactive, strategic one, and it’s easy to see that this is an area of IT security and management crying out for a little extra help.
Thankfully for those in ITAM, using AI to analyse tech usage patterns doesn’t just have to be about threat hunting. We foresee asset managers using AI and automation to track tech inventory; optimise hardware and software usage; manage software licensing; and ensure every device on the network is equally well-served. This way, ITAM professionals will be in a much better place to focus on high-level tasks that require real human ingenuity.
AI could also help you reduce spending by streamlining the procurement process, finding the best prices, optimising software licensing spend, and more – all in all, taking a lot of the administrative headache out of tracking and logging ITAM data.
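For a flavour of the inventory-tracking side of that automation, here’s a minimal sketch that reconciles a (hypothetical) asset register against the devices actually seen on the network:

```python
# A minimal sketch of automated asset reconciliation. The device lists
# are hypothetical stand-ins for a CMDB export and a network scan.
registered = {"laptop-042", "printer-07", "server-db-01"}
seen_on_network = {"laptop-042", "server-db-01", "unknown-a1b2"}

rogue = seen_on_network - registered    # on the network, not on the books
missing = registered - seen_on_network  # on the books, not seen lately

print(f"Unregistered devices to investigate: {rogue}")
print(f"Registered assets not seen (lost or offline?): {missing}")
```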
Reducing False Positives & Alert Fatigue
False positives can be a real pain. They’re the “boy who cried wolf” of cybersecurity analysis. In fact, 81% of surveyed IT professionals said that over a fifth of the cloud security alerts they received were false positives.
The worst thing about false positives is that they can take considerable investigation, only for you to discover they were totally benign all along. It’s good that there was no danger, but that’s time and energy that could have been spent on something more productive!
Having to bat away too many false positives takes a team’s energy and focus away from the real, problematic alerts; adds to a sense of overwhelm; and ultimately results in the dreaded alert fatigue. When countless alerts clamour for your attention, it’s harder for the important stuff to rise to the top and get the attention it needs.
In terms of combating false positives, we have to turn, once more, to machine learning. There are already AI security tools out there that let analysts feed back which alerts were false positives, so the system learns not to treat similar events as alertable incidents in future.
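A simplified sketch of that feedback loop might look something like this – the alert features and verdicts are invented purely for illustration:

```python
# A minimal sketch of an analyst-feedback loop: alerts marked benign
# become training data, so similar alerts are down-ranked in future.
from sklearn.linear_model import LogisticRegression

# Each alert as [events_per_min, off_hours, new_destination] (assumed features)
alert_features = [[50, 1, 1], [3, 0, 0], [40, 1, 1], [2, 0, 0]]
analyst_verdict = [1, 0, 1, 0]  # 1 = real incident, 0 = false positive

model = LogisticRegression().fit(alert_features, analyst_verdict)

# New alert: low event rate, business hours, known destination.
probability_real = model.predict_proba([[4, 0, 0]])[0][1]
print(f"Priority score: {probability_real:.2f}")  # low = likely noise
```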
However, it’s false negatives that really keep security personnel up at night – when something malicious has taken place but has flown completely under the security system’s radar. Again, this is a problem AI may be able to help with in future: AI is great at detecting anomalies in vast swathes of data, so it may even become able to root out brewing security problems before they fully take hold.
Reducing Human Error & Bias
There’s a lot of data flying around in cybersecurity – and it’s increasing in volume every day. We’ve already touched upon how AI can make light work of huge amounts of data, but there is another issue that AI sidesteps entirely: human error and bias.
Humans can make biased choices in the data they choose to sample. Many humans’ eyes glaze over when faced with bucket loads of unstructured information. Humans misinterpret data, humans make assumptions, and humans understand things subjectively. Humans also sometimes selectively pick data that supports a given hypothesis or narrative.
A well-trained AI sidesteps many of these issues. It doesn’t tire, it doesn’t skim, it applies the same criteria to every record, and it gathers the information relevant to the task at hand at an enviable pace. (One caveat: an AI is only as impartial as the data it was trained on – but it won’t get bored, make typos, or cherry-pick on a whim.)
Do we think this makes the security or data analyst role obsolete? Absolutely not – AI is their friend! We see AI doing the manual data trudging so that analysts can focus on the higher-level, value-added work that really matters.
Predictive Analytics
With AI cybersecurity tools being fed all of this juicy data, another incredible possibility emerges: that AI will eventually be able to pre-empt and predict attacks before they occur.
Using ML’s self-learning capabilities and a whole treasure trove of data to trawl through, it’s likely that tools will get better and better at sensing risk, analysing potential attacks, and keeping their anomaly antennae out for new threat behaviours on the horizon – all of which can feed into a powerful predictive engine.
Increasing Speed & Efficiency of Response
Security operations centres are busy places. Any strides that security analysts can make in terms of improving reaction times, seeing threats on the horizon, and dealing with issues in a timely manner are very much welcome.
However, there is a huge, current threat to cybersecurity as we know it – it’s sort of the elephant in the room here. The security community will have a whole host of AI tools to help them work both smarter and harder – but the trouble is, the cybercriminals will be getting the same benefits from AI too.
The current AI revolution may lead to a wave of security threats like we’ve never seen before – self-adaptive malware, increased social engineering capabilities, and increased criminal efficiency are already happening “thanks” to AI.
Yet, armed with their own AI tools, the security community can make their anomaly-spotting, phish-detecting, data-analysing efforts all the more robust. And with AI taking care of some of the grunt work, human operatives will be freer to work on the kind of strategic IT security work that really moves the needle in the long run.
Curious about how AI-powered security tools can protect your organisation? Or maybe you’re wondering whether there are AI-enriched versions of the security tools you’re already using? Perhaps you’re worried about what AI has in store for your business in terms of cyber threats? The Just Cyber Security team would love to hear from you. Book a call with one of our technicians today!