GUEST ESSAY: How humans and machines can be melded to thwart email-borne targeted attacks

By Lomy Ovadia

Phishing emails continue to plague organizations and their users.

Related: Botnets accelerate business-logic hacking

No matter how many staff training sessions and security tools IT throws at the phishing problem, a certain percentage of users continues to click on malicious links and attachments or approve bogus payment requests.

A case in point: With business losses totaling a staggering $2.4 billion, Business Email Compromise (BEC) was the most financially damaging Internet crime for the seventh year in a row, according to the FBI’s 2022 Internet Crime Report.

BEC uses phishing to trick users into approving bogus business payments to attackers’ accounts. BEC succeeds despite years of training users to recognize and handle suspicious emails properly, and despite next-generation tools that harness AI, machine learning, and natural language processing to block phishing and BEC attempts.

The truth is that neither humans nor machines will ever be 100 percent successful at tackling the phishing and BEC challenge. Even harnessing both side by side has not proven 100 percent effective.

What is the answer? Meld humans and AI tools into a single potent weapon that can beat the clock and catch just about every phishing email and BEC that attackers throw at it. Let’s examine how each of these strategies works and why both working together stands the best chance of solving the problem.

Leveraging AI/ML

Most people have a pretty good idea of how phishing emails and BEC use social engineering to trick their unwitting victims. After extensive research and target identification, the attacker sends an innocent-looking email to the victim, who is often someone in the finance department.


The email appears to come from the CEO, CFO, or a supplier, who requests with great urgency that the recipient update a supplier, partner, employee, or customer bank account number (to the attacker’s) or pay a phony late invoice. Thanks to careful research, the invoice is likely to look very convincing.

Legacy secure email gateways (SEGs) miss these phishing emails because they lack the malicious attachments and links these tools typically look for. SEGs are also only good at identifying widely known threats, and they require a lot of time and resources to maintain.

Next-generation email security tools, a more recent alternative, use advanced AI/ML with natural language processing, visual scanning, and behavioral analysis to recognize potential phishing emails.

Machine learning identifies and even predicts advanced attacks simply by analyzing large data sets, including emails, for similarities, correlations, trends, and anomalies. It requires few instructions and little maintenance.
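To make the idea concrete, here is a minimal, purely illustrative sketch (not any vendor's actual technology) of how a statistical model can score an email by comparing its words against labeled phishing and legitimate messages. The tiny corpus, the naive Bayes-style scoring, and the smoothing are all assumptions for demonstration; real systems train on millions of messages and far richer features.

```python
from collections import Counter
import math

# Toy labeled corpus; real systems train on millions of messages.
PHISH = ["urgent wire transfer update bank account now",
         "invoice overdue pay immediately account change"]
LEGIT = ["meeting notes attached for review",
         "quarterly report draft ready for comments"]

def train(docs):
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts

phish_counts, legit_counts = train(PHISH), train(LEGIT)
vocab = set(phish_counts) | set(legit_counts)

def log_likelihood(words, counts):
    total = sum(counts.values())
    # Laplace smoothing so unseen words don't zero out the score.
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in words)

def phish_score(text):
    """Positive means the text looks more like phishing than legitimate mail."""
    words = text.lower().split()
    return log_likelihood(words, phish_counts) - log_likelihood(words, legit_counts)

print(phish_score("please update the bank account for this invoice urgently") > 0)  # True
```

Even this toy illustrates the essay's point: the model needs few instructions, yet a lure phrased differently enough from the training data will slip past it.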

As with many security tools, however, machine learning often fails to identify zero-day attacks (in this case, spear-phishing emails) if they are different enough from previous ones.

With new types of phishing emails released by millions of attackers daily, it’s no surprise that a few get past even the best-designed ML models. ML can catch 99 percent of phishing emails, but you need more help to catch the remaining one percent.

Human-machine melding

Fortunately, it turns out that while some people can be fooled by phishing emails, others are adept at spotting suspicious emails and the phishing attempts that ML often misses. Multiply that human capability by thousands across hundreds of organizations of all sizes and you can create a very valuable threat intelligence system.

Such a system could potentially feed new phishing information right back into the machine learning models in real time, so they can start identifying similar phishing exploits immediately. Obviously, a machine learning system trained on phishing information only seconds or minutes old will spot potential zero-day attacks much more competently and rapidly than a machine with information that is days or weeks old.

The key is to meld the capabilities of human and machine into one, because the two working side by side with no interaction cannot be nearly as effective. This melded process must constitute a constant feedback loop with an army of hundreds of thousands of human eyeballs.
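A toy sketch of that feedback loop, assuming nothing more than a simple word-overlap detector (no real product's design is implied): a single human report immediately teaches the model to flag a previously unseen lure.

```python
from collections import Counter

# Words seen in user-reported phishing; updated in real time as reports arrive.
reported_phish_words = Counter()

def report_phish(text):
    """A user flags a message; its words feed the model immediately."""
    reported_phish_words.update(text.lower().split())

def looks_phishy(text, threshold=0.5):
    """Flag a message when enough of its words match reported phishing."""
    words = text.lower().split()
    hits = sum(1 for w in words if reported_phish_words[w] > 0)
    return hits / len(words) >= threshold

zero_day = "urgent gift card purchase needed reply asap"

print(looks_phishy(zero_day))                     # False: not yet known
report_phish("urgent gift card scam reply asap")  # one human report arrives
print(looks_phishy(zero_day))                     # True: the variant is now caught
```

The threshold and matching rule here are placeholders; the point is the loop itself, where each human catch makes the machine sharper within seconds rather than days.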

The only way to solve a problem that grows exponentially is with a solution that grows exponentially as well. This is similar to the strategy used by Waze, Google Maps, and Uber to keep users out of heavy traffic and let them share rides.

No doubt phishing and BEC will continue to grow in both frequency and sophistication. Technology and humans cannot catch all of them alone but working tightly together they can come very close.

About the essayist: Lomy Ovadia is Senior Vice President of Research and Development at Ironscales, an Atlanta-based email security company.
