What can we learn from our mistakes in automation, and what’s in store for this trend in the future? In 2017 we saw plenty of attacks using a wide variety of methods, striking hard against the poorly protected. We’re also seeing increasingly secure systems, and attacks that still manage to keep up with them.
As our security improves in step with increases in computing power, so does the variety and complexity of fraudsters’ attacks. But fraudsters don’t prey on early adopters with secure systems; they prey on the weak. This is why it’s so important for all of us to keep up with security improvements. A cybersecurity chain, like any other, is no stronger than its weakest link.
We know now that in order to keep our systems safe, we need to monitor not only the attacks, but also the very thing ensuring safety: the ever-increasing computing power of our systems and devices. I’ve previously written about how machine learning enables security automation and empowers the analyst to combat fraud. My focus today is on how these automated systems can come to overwhelm us with data, unless we’re supplied with enough information to leverage the data effectively.
You may be wondering if this will just be another hype piece about how machine learning and AI will make everything better. Not to worry. First of all, yes, AI is hyped: there are exaggerated claims about its applications, its usability, and how quickly it will reach the consumer. That said, in realistic applications, AI and machine learning let computers do what computers do well. In return, this frees up time for us to do what we do well. As basic as it may sound, this is exactly what everyone in technology is trying to do.
It’s not about making complex decisions with AI; rather, it’s about making thousands or millions of decisions with AI, knowing that each of these decisions will be simple for AI to solve. Each computer-solved decision is of course possible for humans to solve as well, but solving them all would require an awful lot of time. Instead, a scalable system can solve these problems nearly instantly and, as a result, allow us to focus on the select few decisions that come easily to us.
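The division of labor described above can be sketched as a simple triage loop: the system resolves clear-cut cases on its own and escalates only the ambiguous ones to a human analyst. The risk scores, event names, and thresholds below are hypothetical, chosen purely for illustration.

```python
def triage(events, low=0.2, high=0.8):
    """Split scored events into auto-approved, auto-blocked, and human-review.

    events: iterable of (event_id, risk_score) pairs, risk_score in [0, 1].
    low/high: hypothetical thresholds separating the clear-cut cases.
    """
    approved, blocked, review = [], [], []
    for event_id, risk_score in events:
        if risk_score <= low:        # clearly benign: approve automatically
            approved.append(event_id)
        elif risk_score >= high:     # clearly fraudulent: block automatically
            blocked.append(event_id)
        else:                        # ambiguous: a human decides
            review.append(event_id)
    return approved, blocked, review

# Hypothetical scored transactions: most resolve automatically,
# and only the ambiguous one lands on an analyst's desk.
events = [("tx1", 0.05), ("tx2", 0.95), ("tx3", 0.50), ("tx4", 0.10)]
approved, blocked, review = triage(events)
```

The point is not the thresholds themselves but the shape of the system: millions of easy decisions flow through the first two branches, and human attention is reserved for the third.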
In many ways, we are at our most capable when we’re assisted by these new processing tools. It doesn’t matter whether we’re using a map on a smartphone to skillfully navigate a city we’ve never visited before or searching for information online; our new tools are helping us be efficient. Sadly, this is also a reason why we’re seeing more sophisticated cyber-attacks.
Capable attackers are now using a multitude of attack vectors when looking for weakness. By using a combination of tools like bots and automated scripts, a skillful attacker can both identify and strike against weak systems. Our old rule-based systems simply aren’t capable of stopping clever human strategy executed with a computer-powered armada. The solution to this, as we’ve been told many times before, is the multi-layered, adaptable security system: a combination that’s certainly not impenetrable, but impenetrable enough to send attackers off in search of easier prey.
So now that we have automation protecting us against automation, does that mean we’re safe? Yes and no. Even with the best identity and access management systems combined with state-of-the-art behavioral analytics in an environment optimized for continuous risk assessment, we still have a big problem. How are we going to process all of the intelligence signals that our multi-layered security systems, and their automated subsystems, keep feeding us?
This potentially massive number of signals has created a whole new problem for us: score fatigue. We humans are simply not capable of making decisions effectively when provided with too much information, especially when signals we’ve come to trust contradict each other. If presented with a problem where half the information indicates a YES and the other half a NO, there’s no quick or easy decision to be had for us humans. This is where more data, counterintuitive as it may seem, can be used to simplify things. By adding a confidence rating to our information, we can delegate more of these decisions to our AI friends, freeing us up once again for the now limited number of problems that AI can’t solve as well as we can. If we allow ourselves to be assisted by a flexible decision system that is fed by intelligent signals with corresponding confidence factors, the problem of score fatigue disappears.
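To make the idea concrete, here is a minimal sketch of confidence-weighted decision making. It assumes each signal reports a verdict (+1 for fraud, -1 for legitimate) and a confidence in [0, 1]; the signal values, weights, and escalation threshold are all hypothetical, not a description of any particular product.

```python
def decide(signals, escalate_below=0.3):
    """Combine contradictory signals by confidence; escalate when unclear.

    signals: list of (verdict, confidence) pairs, verdict in {+1, -1}.
    Returns "block", "allow", or "escalate" (send to a human analyst).
    """
    total_confidence = sum(conf for _, conf in signals)
    if total_confidence == 0:
        return "escalate"  # no usable information at all
    # Confidence-weighted average verdict, in [-1, +1].
    score = sum(verdict * conf for verdict, conf in signals) / total_confidence
    if abs(score) < escalate_below:  # signals roughly cancel out: a human decides
        return "escalate"
    return "block" if score > 0 else "allow"

# Half the signals say YES and half say NO, but confidence breaks the tie:
# two high-confidence fraud signals outweigh two low-confidence benign ones.
signals = [(+1, 0.9), (+1, 0.8), (-1, 0.3), (-1, 0.2)]
print(decide(signals))  # prints "block"
```

The key design choice is that raw counts of YES and NO votes never reach the human; only the genuinely ambiguous cases, where weighted evidence nearly cancels out, are escalated.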
Enterprise systems change, aggregators come and go, and fraudsters continue to upgrade their tools. But humans assisted by automated decision systems that are fed with confidence-based signals can readily support and defend every system, at least for the foreseeable future.
Let 2018 be not only the year of the Dog but the year of Decisions as well.