AI threat detection still needs humans


Artificial intelligence offers huge advantages for detecting cyber threats, but technology cannot do the job alone.

That was the main message during a session at the Ai4 Cybersecurity Summit 2022 featuring two government cybersecurity professionals – Garfield Jones, Associate Chief of Strategic Technology for the Cybersecurity and Infrastructure Security Agency (CISA), and Peter Gallinari, Data Privacy Officer for the State of Tennessee. The duo discussed the promise of AI threat detection and answered questions about what they see as the future of the technology, its potential challenges and how humans will fit into the picture.

Jones made it clear at the start of the panel that every cybersecurity system implementing AI will always require human involvement.

“My view on this is that AI definitely has a future in threat detection and response,” Jones said. “I have to caution with this, we will always need humans to be part of this. As we evolve these tools, and [they] start learning and operating faster for detection and response, a human will definitely be needed in the loop.

“With the rapidly changing threat dynamics, AI has really become more prevalent. The computing capacity and resources are definitely there to collect the data, train on the data, retrain on the data, retrain the algorithm on the data to provide us with a solid solution when looking for the best course of action for current detection.”

Gallinari noted that while AI can be and is being used for rapid threat detection and response, one of its best features is its ability to run adversarial tests against cybersecurity systems.

“The best thing to address right away is how quickly we can detect [and] how quickly we can solve a problem and see if there are problems downstream,” said Gallinari. “It also gives you the ability for the AI to impersonate a hacker, which is really nice. [AI models] can pretend to be a hacker to see what’s in a hacker’s mindset just by profiling the data, and they might get a better [incident response] account for what they have seen going in and what is going out.”

Jones said the AI relies heavily on input from the user and is then able to take the data and identify the most effective responses to a given threat.

“Once it learns the threats, it’s able to give you the best response and tell analysts, ‘That’s the best response for this type of threat,’” Jones said. “With machine learning, this will give you the best probability as to the response that will help mitigate this threat.”
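To make that concrete, here is a minimal, hypothetical sketch of how a machine learning model might rank candidate responses to a detected threat by probability, along the lines Jones describes. The features, response labels and model choice are illustrative assumptions, not a description of CISA’s actual tooling.

```python
# Hypothetical sketch: ranking candidate responses to a detected threat.
# Model, features and response labels are illustrative assumptions only.
from sklearn.ensemble import RandomForestClassifier

# Toy training data: feature vectors describing past incidents
# (severity score, lateral-movement flag, data-exfiltration flag)
# paired with the response that successfully mitigated each one.
X_train = [
    [9, 1, 1],  # severe, lateral movement, exfiltration
    [6, 0, 1],  # exfiltration from a single host
    [4, 0, 0],  # credential-stuffing attempt
    [2, 0, 0],  # low-severity anomaly
]
y_train = ["isolate_host", "block_ip", "reset_credentials", "monitor_only"]

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# For a new threat, the model yields a probability per candidate response,
# which an analyst reviews rather than executes blindly.
new_threat = [[8, 1, 0]]
for label, p in zip(model.classes_, model.predict_proba(new_threat)[0]):
    print(f"{label}: {p:.2f}")
```

The point of the probabilities is exactly the human-in-the-loop workflow both speakers insist on: the model recommends, and the analyst decides.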

Both speakers pointed out, however, that the data fed to the AI can often create problems. Gallinari, for example, pointed to the problem of “poisoned data”.

“[AI] uses data, lots of data, from different sources,” Gallinari said. “You have to think about the security issues around using AI from a data privacy perspective. Consider self-driving cars. AI tells the car where to go, how to go and when to stop. If that data has been poisoned or compromised, that car won’t stop. It will hit someone. You have to think about what you do with that data, how pure is that data, who controls that data, what’s around it?”
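One basic defense against this kind of tampering is to verify the integrity of training data before it is ever used to retrain a model. The sketch below assumes a hypothetical hash manifest recorded when the data was collected; it illustrates the idea and is not drawn from any system the panelists described.

```python
# Hypothetical sketch: verifying training-data integrity before retraining,
# one basic defense against the data poisoning Gallinari describes.
# File names and the manifest format are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file so any tampering changes its fingerprint."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_dataset(manifest_path: Path) -> bool:
    """Compare each data file against the hash recorded at collection time."""
    manifest = json.loads(manifest_path.read_text())
    ok = True
    for filename, expected in manifest.items():
        if sha256_of(Path(filename)) != expected:
            print(f"POISONING RISK: {filename} no longer matches its recorded hash")
            ok = False
    return ok

# Usage: refuse to retrain if any source file has changed since intake.
# if not verify_dataset(Path("training_manifest.json")):
#     raise SystemExit("Training aborted: dataset integrity check failed")
```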

While he sees the risks, Gallinari also discussed the great benefits and threat detection efficiencies that come with AI.

“We see things changing every day, not by the hour, but by the minute. We’re hit by over 600 malicious events per minute, on average – who could handle that?” said Gallinari. “We’d never get the insights we need fast enough if AI and machine learning weren’t part of the environment.

“So I think that’s going to play a big part in being able to distribute meaningful information, as long as it’s correct information, with the right inputs going in and the right outputs coming out. I think everyone benefits, and at the end of the day, we can react faster before it corrupts the rest of our environment.”

Jones explained how AI is already being used to bolster cybersecurity at CISA, particularly in terms of access control and authentication within a zero-trust network.

“For us, it’s basically tracking and monitoring every access request, which is one of the core zero-trust principles if you’re not familiar with them,” Jones said. “When you start looking at the behavior and the dynamics of the behavior, and where the boundaries are, the AI and the machine learning are really helpful. It’s going to give that likelihood of whether that person should have access to some part of that network, whether they really need access according to their profile.

“Yeah, we’re going to get inaccuracies from the AI, we’re going to get these false positives that I talked about, and we’re going to start getting scores and stuff like that. But I think zero trust and AI is a marriage that will come together very soon.”
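As a rough illustration of the kind of scoring Jones describes, the hypothetical sketch below fits an anomaly detector to one user’s historical access requests and flags requests that fall outside the learned behavioral boundary. The features, data and model are assumptions for illustration only, not CISA’s implementation.

```python
# Hypothetical sketch: scoring access requests against a user's behavioral
# profile, in the spirit of the zero-trust monitoring Jones describes.
from sklearn.ensemble import IsolationForest

# Toy history of one user's access requests:
# [hour of day, resource sensitivity 0-10, requests in the past hour]
history = [
    [9, 2, 3], [10, 2, 4], [11, 3, 2], [14, 2, 5],
    [15, 3, 3], [16, 2, 4], [9, 3, 2], [13, 2, 3],
]

profile = IsolationForest(random_state=0).fit(history)

# A 3 a.m. burst of requests to a highly sensitive resource falls
# outside the learned boundary and scores as anomalous.
request = [[3, 9, 40]]
score = profile.decision_function(request)[0]  # lower = more anomalous
print("allow" if score > 0 else "flag for analyst review", f"(score={score:.3f})")
```

Note that the anomalous request is flagged for review, not automatically denied, which matches the false-positive caveat Jones raises.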

Jones also noted that while AI can be used to identify specific threats, the world of cybersecurity is constantly changing and human analysts cannot be completely removed from the equation. Each new day brings a different type of threat, and Gallinari and Jones said AI threat detection needs to be updated and maintained by human analysts so it’s ready to handle any emerging threats.

At the end of the panel, Jones said that AI could be used not only for detecting threats and discovering malware once it enters a network, but also for predicting when an attack is about to hit.

“I think it’s going to get to the point where it’s not just detection that we’re looking at, but we’ll also have AI that, once it learns enough, gets us to that prediction part, where it’s threat prediction,” he said. “I think that’s where you’ll see the biggest upside.”
