Is AI/ML a game-changer for security, or overhyped?

VP, Director of Cyber Incident Response in Finance (non-banking)3 years ago
I think there's room for artificial intelligence and there's room for machine learning. Those things are super important. At my previous job, I was on an invention disclosure for an algorithm to detect heartbeats. It's easy to detect heartbeats from something that's on the network 24/7. But when you've got a mobile workforce, it gets a lot harder to detect those kinds of things, especially when you've got threat actors who use a heartbeat of 24 hours plus 10 minutes, so there's only one ping a day, or even worse, like the SolarWinds malware, which waits two weeks before it does anything malicious.
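The beaconing problem described above can be sketched as a simple periodicity check: group connection timestamps per destination and flag hosts whose inter-arrival times are nearly constant, which would catch a 24-hours-plus-10-minutes beacon that a simple frequency rule misses. This is an illustrative sketch, not the invention disclosure mentioned; all names and thresholds here are hypothetical.

```python
from statistics import mean, pstdev

def find_beacons(events, min_pings=4, max_jitter_ratio=0.02):
    """Flag destinations whose connection intervals are nearly constant.

    events: iterable of (dest, timestamp_seconds) pairs.
    A low coefficient of variation in inter-arrival times suggests
    an automated heartbeat rather than human-driven traffic.
    """
    by_dest = {}
    for dest, ts in events:
        by_dest.setdefault(dest, []).append(ts)

    beacons = []
    for dest, times in by_dest.items():
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if len(gaps) < min_pings - 1:
            continue  # not enough observations to judge periodicity
        avg = mean(gaps)
        if avg > 0 and pstdev(gaps) / avg <= max_jitter_ratio:
            beacons.append((dest, avg))
    return beacons

# Hypothetical C2 beacon: one ping every 24h + 10min (87,000 seconds)
evil = [("c2.example.net", i * 87_000) for i in range(5)]
print(find_beacons(evil))  # [('c2.example.net', 87000.0)]
```

The hard part the commenter points to is exactly what this sketch glosses over: a mobile workforce means gaps in observation, so the jitter tolerance and minimum ping count have to be tuned per environment.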

I can throw up a whole bunch of reasons why it's bad or why it's hard. Using math to decide which applications execute, so that computer systems only run the programs we think are supposed to run on those particular platforms, is great. But it is so hard to keep up with configuration management and the sheer variety of computing systems things are running on at enterprise scale. Move it into the cloud and you've magnified the problem exponentially, because it's not your computer. It's somebody else's computer that you're renting space on.
VP, Chief Security & Compliance Officer in Software3 years ago

Jeff, I agree with you. I think first of all, data is just too fluid at this point. I was on a call with SOC leaders and we were talking about next generation SOAR. What is that really going to get us?

CEO in Services (non-Government)3 years ago

I definitely agree with you, Jeff, that you are renting someone else's server. I have been raising a red flag within the electronics manufacturing industry about securing the device: whatever they're making, it should be secured in manufacturing. No disrespect to anybody in software, but there are certain things you can't do in hardware and need software for, and hardware is more difficult to hack. If you can build security in embedded, you're ahead of the game.

VP, Chief Security & Compliance Officer in Software3 years ago
I think for AI to work for cyber, the capability of AI to learn my environment has to be faster. I'm not moving at the speed of the actors. I am stuck having to protect a hybrid environment, which is the challenge for probably most of us. So my focus is constantly fractured, because I'm defending in the cloud. I'm trying to help continue to migrate off of antiquated platforms and systems, to the cloud. So I don't have the luxury that the actors have, of being single-focused. 

I think we need to get to the place where we are actually taking cyber threats more seriously at the board level, where they don't question the investment to complete a lifecycle migration. Then we can just be done with it, rather than spoon-feeding the migration, which keeps you from being able to defend at the same rate. When we look at the SolarWinds attack, it wasn't necessarily that they leveraged a vulnerability embedded in code. It was that they were able to go undetected for so long by mimicking legitimate traffic that our security tools aren't designed to flag. I think we have to free ourselves to move at the speed of the actors, and have that singular focus, to really start to win this battle. AI and machine learning are very important to doing that, but they have to learn my network faster. It can't take a year and a half to learn my network.
President and National Managing Principal in Software3 years ago

If the AI isn't getting to know your network, is it bad AI, or is it that your network isn't generating enough information to create a useful model, a set of patterns of normal activity?

Field CISO in Consumer Goods3 years ago

I wholly agree that AI needs to be faster at learning the intricacies of individual environments. Time-to-value is one of the biggest barriers to acceptance of AI as necessary within an organization. I also agree that much of the issue resides in the lack of meaningful input to feed the AI engine. Without external influence and integrations, as well as ongoing care and feeding, ROI often doesn't start until six months in, and true value at a year or so (if ever, without the right inputs). But is that so different from a human in the same role? There is a reason skilled cyber pros are hard to come by: they have years of experience to lean on. They can use information from past events and from their peers, and they can use their intuition to correlate seemingly dissimilar data. Dare I say that AI and ML are under-hyped in their ability to help organizations cut through the noise, expedite response time, and augment the human element today, but over-hyped when it comes to being the "silver bullet" of tomorrow?

CEO in Services (non-Government)3 years ago
Cyber threats are dynamic. You're never going to know when they first come in; you're only going to see them within a certain window, after they've already invaded. Edge brings a lot to the table in that respect. But with respect to AI in particular, I would offer this: if you don't know where the original data came from to build the model that trains the AI, you might as well not bother. The only way to do it is to gather the data, encrypt it, and keep it under lock and key while the 3-5 data scientists are building out that model. Make sure there's no way they can leak anything. Have them address bias just as much as security threats, because a security threat and a bias can be the same thing. You can manipulate the data that you want that AI to be defending against simply by the way the model is being built.

To wit, if a model is trained to pick up a pronoun in the wrong context, that's a very simple way to start breaking down NLP or other capabilities, whether it's email or content or something outside a structured environment. That's just one example. In manufacturing, it's ten times that, because you have sensors and actuators and PLCs and all sorts of equipment. In that environment, even a single 1 or 0 can become a weapon for a hacker, because the mechanics of "open a circuit, close a circuit, one and zero" can easily be triggered by a malevolent actor to go the wrong way. There goes $100,000 worth of product falling off the line.

But there's still something inside of me that says there is a way to do this properly. Maybe it's design for security, or design for privacy, or both, at the model level. Maybe the emphasis to the board is that if we build the models correctly, we can leverage them, because depending on whose model it is and how good the data scientists are, you don't need a year and a half. You need five cases that are germane and specific to the traffic flow of the environment. If I live in insurance, I need something insurance-related; if I live in manufacturing, I need something manufacturing-related, and so on.
President and National Managing Principal in Software3 years ago
You can't just throw AI at a problem, just like you can't throw technology at a problem, without a good, in-depth understanding of the inputs, the throughputs, and the outputs. From there, you can fully articulate what your use case is, and also who's going to be there to correct it. Back in the day, we had analysts looking at statistical modeling charts, tweaking the standard deviation rules and the alert mechanisms, because in one sense it was too chatty and in another it wasn't chatty enough. That's what people leave out of the whole AI discussion. You've got to have people there turning the knobs, dialing it back or forward, and adjusting it to their particular use case.
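The knob-turning described above can be sketched as a simple standard-deviation alert rule whose sensitivity an analyst tunes by hand. The threshold values, metric, and names here are illustrative assumptions, not taken from any particular product.

```python
from statistics import mean, pstdev

def alert(history, observed, k=3.0):
    """Flag `observed` if it deviates more than k standard deviations
    from the historical baseline. Lowering k makes the rule chattier;
    raising it quiets alerts at the cost of missing subtler anomalies."""
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return observed != mu  # no variance: any change is anomalous
    return abs(observed - mu) / sigma > k

# Hypothetical baseline: failed logins per hour over a normal day
logins_per_hour = [12, 15, 11, 14, 13, 12, 16, 14]
print(alert(logins_per_hour, 90))         # True: far outside baseline
print(alert(logins_per_hour, 15))         # False: within normal range
print(alert(logins_per_hour, 20, k=1.5))  # tighter threshold catches it
```

The point of the comment survives in the sketch: `k` is the knob, and there is no universally right value; someone has to watch the alert volume and adjust it to the environment.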
Sr. Director of Enterprise Security in Software3 years ago
Is it just me, or are many of these AI/ML solutions positioned as some sort of magic bullet, supposed to make up for the fact that your best practices are terrible? That's what I keep seeing; every new security product is designed around that. I'll take strong best practices in an organization over some magic bullet AI that's going to fill my gaps for me. When every new security startup pitches their solution, I don't know how they do it; they can't seem to reproduce anything outside of the demo they show me. I'm not really sure how this is going to help me.
VP, Chief Security & Compliance Officer in Software3 years ago

I agree with you, Joseph. At the heart of it, it's just good hygiene. But then the additional assurance is important.
