Industry Perspectives

The Changing Face of Insider Threats

  • by Spencer Lichtenstein, Casey Switzer

As businesses look for new ways to return to growth and profitability, concerns about the risks posed by insiders are heightened. Over the last two years, research shows a nearly 50% increase in insider threat incidents, with the average incident cost soaring to nearly 12 million USD.

This leads you to wonder: are the frequency and cost of insider threats increasing because of the explosion in the remote workforce?

Let’s establish a baseline

Defining the insider threat is not always a simple task. Carnegie Mellon’s CERT defines it as:

Insider Threat – the potential for an individual who has or had authorized access to an organization’s assets to use their access, either maliciously or unintentionally, to act in a way that could negatively affect the organization.

However, as you know, an insider could be …

  • A disgruntled employee sabotaging a corporate network
  • A former employee or contractor re-accessing a network to conduct espionage
  • A C-level executive who ignores security policy to drive faster results
  • An employee, contractor or vendor who unknowingly clicks a link in a phishing email, putting the organization at risk
  • A cybercriminal posing as an employee using compromised credentials

The security industry has traditionally implemented a layered approach to address insider threats. This includes technology, policy, physical security and even data science. Yet, insiders are still at the heart of a huge number of breaches. According to one research report, more than 20% of breaches are a result of human error, 25% involve phishing (inherently human), and almost 40% use credentials that are stolen or weak (fundamentally human).

Why do these problems persist?
The workforce is more remote. This means businesses are more vulnerable to human error and foundational insider issues.

As more of our workforce migrates out of the traditional corporate network, exposure to insider threats increases. Remote workforce risks are no longer isolated to a small percentage of on-call IT personnel or road warriors. This population now represents a significant portion of staff from all departments with varying cybersecurity awareness and hygiene.

Consider the following scenarios where the risk potential is elevated:

  • Laptop sharing with family members, who are not subject to the same cybersecurity awareness training and safe browsing habits
  • Laptop sharing among friends, who may be sharing login credentials or unknowingly inserting a malware-laden USB storage device
  • Laptop use on insecure home or public Wi-Fi networks
  • Frequent use of email on non-corporate devices, which have limited or no visibility at the endpoint

This shifting of people and their technology means security professionals must reevaluate how data is accessed and what risk exposure is acceptable. Visibility into this data is critical to understanding abnormal user behavior and to detecting and responding to an insider threat.

Insider threat detection is centered on behavior, yet behavior is hard to predict and identify using technology.

Monitoring and analyzing user behavior for every person and piece of data on a network is the critical component of early identification and resolution. The functional challenge is the volume of information and the complexity of analysis.

Enter machine learning and behavior analytics. More organizations are beginning to leverage machine learning to model behavior. Effective behavior modeling requires significant development and complex data science algorithms, which is why this technology is most commonly implemented by well-resourced Security Operations Centers (SOCs).
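To make the statistical idea concrete, here is a minimal sketch of baselining a user's historical behavior and flagging deviations with a z-score. The data (hypothetical daily megabytes transferred per user) and the threshold are illustrative assumptions, not any particular product's model; real behavior analytics use far richer features.

```python
from statistics import mean, stdev

def zscore_anomalies(baseline, observed, threshold=3.0):
    """Flag observations that deviate sharply from a user's historical baseline.

    baseline: list of past daily values (e.g., MB transferred per day)
    observed: dict of {day: value} to score against that baseline
    Returns {day: z-score} for days exceeding the threshold.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    return {
        day: round((value - mu) / sigma, 1)
        for day, value in observed.items()
        if sigma > 0 and abs(value - mu) / sigma > threshold
    }

# Hypothetical user: ~100 MB/day normally, then a sudden 2 GB spike
history = [95, 102, 98, 110, 90, 105, 99, 101]
alerts = zscore_anomalies(history, {"Mon": 97, "Tue": 2048})
# "Tue" is flagged; "Mon" is within normal variation
```

The point is the principle: per-user baselines turn "is this activity normal?" into a measurable question, though tuning thresholds across thousands of users is exactly the manual overhead discussed below.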

However, many organizations do not have employees who are well-versed in machine learning and who can interpret and fine-tune results. SOCs must also navigate increasing data privacy regulations, such as the GDPR and CCPA, while maintaining user privacy. Additional hurdles to successful implementation of behavioral learning systems include:

  • Significant manual overhead to tune and optimize
  • Limited number of use cases and data sources, resulting in significant blind spots
  • Investment that could outweigh the perceived value

This paradigm has started to shift as the industry matures. Many behavioral machine learning systems now come self-tuned and optimized out-of-the-box with broader analytics and shorter time-to-value. Some can correlate the data with threat intelligence and business context to uncover malicious activity before it leads to business disruption or data loss. Properly implemented advanced machine learning technology and statistical models are a force multiplier for security teams, enabling them to quickly detect malicious activity.
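The correlation idea described above can be sketched simply: weight a raw behavioral anomaly score by business context (user privilege, asset criticality) and threat intelligence so the SOC triages the riskiest activity first. The field names and multipliers here are illustrative assumptions, not any vendor's schema.

```python
def risk_score(anomaly_score, user, asset, intel_hit):
    """Combine a behavioral anomaly score with business context.

    anomaly_score: 0-100 output of a behavior model
    user/asset: context dicts (privilege level, asset criticality) - illustrative
    intel_hit: True if the activity matched threat intelligence
    """
    score = anomaly_score
    score *= 1.5 if user.get("privileged") else 1.0
    score *= {"low": 0.5, "medium": 1.0, "high": 2.0}[asset.get("criticality", "medium")]
    if intel_hit:
        score += 25  # known-bad indicator observed
    return min(round(score), 100)

# A moderate anomaly on a critical asset by a privileged user outranks
# a stronger anomaly on a low-value asset.
a = risk_score(40, {"privileged": True}, {"criticality": "high"}, False)
b = risk_score(70, {"privileged": False}, {"criticality": "low"}, False)
```

The design choice this illustrates: context does the prioritization work, so analysts spend time on the alerts most likely to cause business disruption rather than on the noisiest ones.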
Where are we headed?

The attack surface created by insiders has expanded exponentially, and technology is evolving quickly to adapt. The solution to this problem is multifaceted and must enable resource-constrained security teams to gain the upper hand. New behavioral technology can help security teams streamline response and improve mean time to detection while reducing false positives. This ultimately means the SOC can resolve issues faster and reduce an organization's risk profile.

Organizations must think about strategic technology investments that address both technology-driven and human-driven risks. This is crucial in addressing insider threats since both components work in unison. Security teams need versatile tools with quick time to value to act faster against these threats.

Join us for Part II of this series to explore more about the technologies needed to address these challenges.