From the Expert: AI’s Impact on Data Security
AI’s rapid adoption will require cybersecurity that’s more sophisticated than ever before.
Ali Allage, CEO of BlueSteel Cybersecurity, spoke with Patent Drop about the security risks that AI poses and the scale of the problem as AI adoption continues at breakneck pace.
Patent Drop: How does AI present a big data security problem?
Allage: You’re essentially entrusting an algorithm to account for all different variables. And there’s a level of data manipulation that could occur due to error, misconfiguration of how the algorithm was supposed to be used, or modification of the data … It really depends on the AI itself.
PD: How does AI data security impact the average user?
Allage: It all comes down to data privacy concerns. For instance, the information I provide (to a service), what happens to it? How is the information I’m providing going to be used, and how does it sort of play a part in the greater application? And sometimes you might have a runaway train scenario.
PD: As AI adoption grows, is the security problem only going to get worse?
Allage: It depends. Does (an organization) have a history of deploying applications that are going to inherently contain bugs? If the answer is yes, then why would this be any different?
We’re still in a capitalistic environment of people rushing to market. AI is popular, you need funding, you have metrics that you have to fulfill. And we have scenarios in software development all the time where you push things that are highly risky and not necessarily fully tested, but you’re trying to meet a metric and a benchmark. And with AI right now, in terms of its development course, compliance and checkpoints haven’t really been fully established yet.
PD: Is that sense of urgency creating a potentially unsafe environment within tech?
Allage: The paranoid part of me says yes. I’d love to think that everyone has a standard that’s similar, but we don’t. We also don’t all have the same levels of pressure. In this environment that we’re in, I think back to all the prior industries that had pressures like this, data analytics being one. And during that period, we saw massive lawsuits with companies like Facebook because of this viewpoint of needing to get it to market, and users adopting without any understanding of what they’re signing up for. I don’t think we’ve grown up — what we’ve done is add more technologies. It’s one of those “do it now, then figure it out later” situations.
PD: So how do we fix this?
Allage: I think we need to have a framework of best practices. We need to have a benchmark to use. We don’t have that. If I were to fix something, I would go through the effort of establishing a benchmark that’s agreeable to the industry.
PD: What should tech firms bear in mind as they keep barreling towards adoption?
Allage: The big guys are going through their natural cycle of capturing any IP that they can, and doing their incubation and development to see whether or not technology sticks. And no one wants to go through the issues and challenges of data security.
My suggestion is to have a compliance mindset. And the nuts and bolts of it is that not everyone knows what they have. The advice would be to understand the assets and inventory that you have, and understand who has access to them. Classify your data: is any of it sensitive? Where are your storage areas? What are the assets that you have, and who are the people that have access to them? Get the clarity that you need there, and then when you’re adopting AI, understand where the data flow occurs when you’re using it.
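The inventory-and-classification exercise Allage describes can be sketched in a few lines of code. This is a hypothetical illustration, not a real tool: the asset names, classification labels, and the substring check for AI-bound data flows are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str
    classification: str                                # e.g. "public", "internal", "sensitive"
    location: str                                      # where the data is stored
    accessors: list[str] = field(default_factory=list) # who has access
    flows_to: list[str] = field(default_factory=list)  # downstream systems, incl. AI services

def review_before_ai_adoption(inventory: list[DataAsset]) -> list[str]:
    """Flag sensitive assets whose data flows into an AI-related destination.

    The "ai" substring match is a crude placeholder for a real
    data-flow review; it only serves to make the example concrete.
    """
    return [
        asset.name
        for asset in inventory
        if asset.classification == "sensitive"
        and any("ai" in dest.lower() for dest in asset.flows_to)
    ]

# Hypothetical inventory entries for illustration
inventory = [
    DataAsset("customer_records", "sensitive", "s3://prod-db-backups",
              accessors=["data-team"], flows_to=["analytics", "external-AI-summarizer"]),
    DataAsset("marketing_copy", "public", "cms",
              accessors=["marketing"], flows_to=["website"]),
]

print(review_before_ai_adoption(inventory))  # ['customer_records']
```

The point is the order of operations Allage outlines: know what assets exist, classify them, record who touches them, and only then trace where the data flows once an AI service enters the picture.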