Hackers are turning our AI security systems against us — but they can be stopped

Hackers are “making friends” with our systems – it’s time to break them up

With the use of AI growing in almost all areas of business and industry, we have a new problem to worry about – the “hijacking” of artificial intelligence. Hackers are turning the very techniques and systems that help us against us, using them to compromise our data, our security, and our way of life. We’ve already seen hackers attempt this, and while security teams have so far defended against these attacks successfully, it’s only a matter of time before the hackers succeed.

Catching them is proving to be a challenge, because the smart techniques we use to make ourselves more efficient and productive are being co-opted by hackers to stymie our advances. It seems that anything we can do, they can do – and sometimes they do it better.

Battling this problem and ensuring that advanced AI techniques and algorithms remain on the side of the good guys is going to be one of the biggest challenges for cybersecurity experts in the coming years. 

To do that, organizations are going to have to become more proactive in protecting themselves. Many install advanced security systems to guard against advanced persistent threats (APTs) and other emerging dangers – with many of these systems themselves utilizing AI and machine-learning techniques. Having done so, organizations often believe the problem has been taken care of: once an advanced defense is in place, they can sit back and relax, confident that they are protected.

However, that is exactly the attitude that almost guarantees they will be hacked. No matter how advanced the system they install, hackers are nearly always one step ahead. Conscientiousness, I have found, is one of the most important weapons in the cyber-protection arsenal.

Complacency, it’s been said many times, is the enemy, and in this case it’s an enemy that can lead to cyber-tragedy. Steps organizations can take include paying more attention to basic security, shoring up their AI-based security systems to better detect the tactics hackers use, and educating personnel about phishing and the other methods hackers use to compromise systems.

Hackers have learned to compromise AI

How are hackers co-opting our AI? In my work at Tel Aviv University, my colleagues and I have developed systems that use AI to improve the security of networks without violating individuals’ privacy.

Our systems are able to sense when an intruder tries to gain access to a server or a network. Recognizing the patterns of attack, our AI systems – based on machine learning and advanced analytics – alert administrators that they are under attack, enabling them to shut down the culprits before they go too far.

Here’s an example of a tactic hackers could use. Machine learning – the heart of what we call artificial intelligence today – gets “smart” by observing patterns in data and making inferences about what they mean, whether on an individual computer or across an entire network.

So, if a specific action takes place in a computer’s processor when specific processes are running, and that same pattern recurs across the network and/or on individual machines, the system learns that the action signals a cyber-attack, and that appropriate action needs to be taken.
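To make the idea concrete, here is a minimal sketch of that kind of pattern learning, using scikit-learn’s IsolationForest on invented process telemetry. The features, values, and alert logic are illustrative assumptions, not a description of any production system:

```python
# Minimal sketch of pattern-based anomaly detection. All telemetry
# values here are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical telemetry: each row is an observation; columns might be
# CPU usage (%), MB copied per minute, and number of running processes.
rng = np.random.default_rng(0)
normal_activity = rng.normal(loc=[30.0, 5.0, 120.0],
                             scale=[5.0, 1.0, 10.0],
                             size=(1000, 3))

# Train on what "normal" looks like; the model learns to flag deviations.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

# A burst of unusual copying activity stands out from the learned pattern.
suspicious = np.array([[85.0, 40.0, 300.0]])
if detector.predict(suspicious)[0] == -1:
    print("Alert: activity deviates from the learned baseline")
```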

But here is where it gets tricky. AI-savvy malware could inject false data for the security system to read, the objective being to disrupt the patterns the machine-learning algorithms use to make their decisions. Phony data could thus be inserted into a database to make it seem as if a process that is copying personal information is just part of the IT system’s regular routine, and can safely be ignored.
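As a rough illustration of how such poisoning could play out, the toy example below slips exfiltration-like records into the training data, after which the model rates the real exfiltration as far less anomalous. Every feature, value, and volume here is invented for demonstration:

```python
# Toy sketch of training-data poisoning: phony "normal" records teach
# the detector to tolerate exfiltration-like activity.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Legitimate telemetry (e.g., CPU % and MB copied per minute).
clean = rng.normal(loc=[30.0, 5.0], scale=[5.0, 1.0], size=(1000, 2))

# Attacker-injected rows resembling the exfiltration they plan to run.
poison = rng.normal(loc=[80.0, 45.0], scale=[2.0, 2.0], size=(150, 2))

exfiltration = np.array([[82.0, 44.0]])  # the attack itself

clean_model = IsolationForest(random_state=0).fit(clean)
poisoned_model = IsolationForest(random_state=0).fit(np.vstack([clean, poison]))

# Higher score_samples means "more normal": the poisoned model rates the
# exfiltration point as markedly less anomalous than the clean one does.
print("clean model score:   ", clean_model.score_samples(exfiltration))
print("poisoned model score:", poisoned_model.score_samples(exfiltration))
```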

Instead of trying to outfox intelligent machine-learning security systems, hackers simply “make friends” with them – using their own capabilities against them, and helping themselves to whatever they want on a server. 

There are all sorts of other ways hackers could fool AI-based security systems. It has already been shown, for example, that an AI-based image-recognition system can be fooled by changing just a few pixels in an image. In one famous experiment at Kyushu University in Japan, scientists were able to fool AI-based image-recognition systems nearly three quarters of the time, “convincing” them that they were looking not at a cat but at a dog, or even a stealth fighter.
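The underlying idea is easy to demonstrate on a toy model. The sketch below applies the classic gradient-sign trick to a made-up linear classifier – a drastic simplification of, not a reproduction of, the Kyushu team’s method – to show how tiny per-pixel changes can flip a prediction:

```python
# Toy adversarial-perturbation sketch on a linear "image" classifier.
# The weights and image are random stand-ins; real attacks target deep
# networks, but the principle is the same.
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=100)       # classifier weights for a flattened 10x10 image
image = rng.normal(size=100)   # the input "image"

def score(x):
    # Positive score -> "cat"; negative score -> "dog".
    return float(w @ x)

# For a linear model, the gradient of the score is just w. Nudging every
# pixel by a small epsilon against the current prediction shifts the
# score by epsilon * sum(|w|) – typically enough here to flip the label.
epsilon = 0.25
perturbation = -np.sign(score(image)) * epsilon * np.sign(w)
adversarial = image + perturbation

print("original score: ", score(image))
print("perturbed score:", score(adversarial))
print("max pixel change:", np.max(np.abs(perturbation)))  # exactly epsilon
```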

Another tactic involves what I call “bobbing and weaving,” in which hackers insert signals and processes that have no effect on the IT system at all – except to train the AI system to see them as normal. Once it does, hackers can use those routines to carry out an attack the security system will miss, because it has been trained to “believe” that the behavior is irrelevant, or even normal.
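Here is a deliberately crude sketch of that drift in action. The “detector” is just a self-updating moving-average baseline – far simpler than any real product, with invented numbers – but it shows how patient, sub-threshold steps can walk the baseline upward until the real attack no longer stands out:

```python
# Sketch of "bobbing and weaving": harmless-looking, sub-threshold
# signals gradually drift a self-updating baseline until genuinely
# malicious activity no longer looks unusual.
class DriftingBaselineDetector:
    """Deliberately naive detector: exponential-moving-average baseline."""

    def __init__(self, threshold: float, alpha: float = 0.2):
        self.mean = None
        self.threshold = threshold
        self.alpha = alpha

    def observe(self, value: float) -> bool:
        if self.mean is not None and abs(value - self.mean) > self.threshold:
            return True  # flagged as anomalous
        # Absorbed as normal: the baseline drifts toward the new value.
        self.mean = value if self.mean is None else (
            self.alpha * value + (1 - self.alpha) * self.mean)
        return False

detector = DriftingBaselineDetector(threshold=10.0)
for _ in range(50):
    detector.observe(50.0)  # legitimate traffic hovers around 50

# The attacker raises activity in steps that each stay under the
# threshold, repeating each step until the baseline catches up.
level = 50.0
while level < 120.0:
    level += 8.0
    for _ in range(20):
        assert not detector.observe(level)  # never flagged

# Activity the original baseline would have flagged immediately now
# sails through unnoticed.
print("flagged at 120?", detector.observe(120.0))  # False
```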

Yet another way hackers could compromise an AI-based cybersecurity system is by altering or replacing log files – or even just changing their timestamps or other metadata – to further confuse the machine-learning algorithms.

Ways organizations can protect themselves

Thus, the great strength of AI has the potential to be its downfall. Is the answer, then, to shelve AI? That’s definitely not going to happen, and there’s no reason for it, either. With the proper effort, we can get past this weakness and stop hackers in their tracks. Here are some specific ideas:

Conscientiousness: The first thing organizations need to do is increase their level of engagement with the security process. Companies that install advanced AI security systems tend to become complacent about cybersecurity, believing that the system will protect them and that, by installing it, they have assured their safety.

As we’ve seen, though, that’s not the case. Keeping a human eye on the AI that is ostensibly protecting organizations is the first step in ensuring that they are getting their money’s worth out of their cybersecurity systems.

Hardening the AI: One tactic hackers use is inundating an AI system with low-quality data in order to confuse it. To protect against this, security systems need to anticipate low-quality – and deliberately misleading – data, validating and filtering what they ingest rather than trusting it blindly.

Stricter controls on how data is evaluated – for example, examining the timestamps on log files more closely to determine whether they have been tampered with – could take away a weapon that hackers are currently using with success.
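As one concrete example of such a control, the sketch below flags log entries whose timestamps run backwards or post-date the file’s own modification time. The log format and the specific checks are assumptions for illustration, not a complete tamper-detection scheme:

```python
# Minimal timestamp-consistency check for a plain-text log file.
# Assumed format: each line starts with an ISO-8601 timestamp that
# includes a UTC offset, e.g. "2024-05-01T12:00:00+00:00 message...".
from datetime import datetime, timezone
from pathlib import Path

def suspicious_timestamps(log_path: str) -> list[str]:
    """Flag lines whose timestamps go backwards in time or post-date
    the log file's own modification time."""
    path = Path(log_path)
    mtime = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
    findings, previous = [], None
    for line in path.read_text().splitlines():
        stamp = datetime.fromisoformat(line.split(" ", 1)[0])
        if previous is not None and stamp < previous:
            findings.append(f"out-of-order entry: {line}")
        if stamp > mtime:
            findings.append(f"entry dated after file mtime: {line}")
        previous = stamp
    return findings
```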

More attention to basic security: Hackers most often infiltrate organizations using tried-and-true tactics – APTs or run-of-the-mill malware. By shoring up their defenses against these basic tactics, organizations can stop attacks of all kinds – including those using advanced AI – by keeping malware and exploits off their networks altogether.

Educating employees about the dangers of responding to phishing pitches – including rewarding those who avoid them and/or penalizing those who don’t – along with stronger basic defenses like sandboxes and anti-malware systems, and more intelligent AI defenses, can go a long way toward protecting organizations. AI has the potential to keep our digital future safer; with a little help from us, it will be able to avoid manipulation by hackers and do its job properly.