Improve Vulnerability Management in 2023

March 2, 2023

Every enterprise must identify and manage vulnerabilities across the network. And the network continues to change – bare metal machines, virtual infrastructure, and the increasing prevalence of containers. But IT assets are more than just infrastructure – they include applications, users, code, and more.


As the enterprise has changed, so have the teams that manage it. And with new teams come new tools. And new tools generate new monitoring data and new alerts. Data is everywhere, but it's often unusable by the people who need it. So IT security teams turn to external, generic sources of objective data to compensate for the limited accessibility of their own. And that's a big problem.


For too long, IT security teams have relied on non-business-specific data points to make decisions about business risk. Honeypots are open assets without security controls, script kiddie forums only point to the low-hanging fruit you already know is problematic because it's easy to exploit, and what people are tweeting about is irrelevant. None of these data points reflect your existing security controls and network architecture, nor the business-specific risks most sensitive to your operations. To secure your network, you need to know your environment. The problem for most companies is that while they have the tooling to generate the data, significant barriers stand between that data and effective risk management.


Tools provide objective data. Analysts use that data to make subjective decisions. You already have tools. You don't have enough analysts. So to enable efficient security operations, we need to make data more accessible so analysts can make decisions.


I know what you're thinking: “We forward all that to the SIEM. It'll all be right there.”


SIEMs are the great aggregators of all logs. New tool? Forward the logs. More alerts? Reconfigure the SIEM. We'll reduce the alerts and centralize the data to disposition those that bubble up. But that's also impeded by data accessibility. The logs come in different formats with different data points, and each tool forwarding logs tracks different types of assets, access, users, and risk. So while the data is centralized, it's not converged. We still need to spend a ton of time figuring out what the data means just to make a simple decision. Just like home repairs…


My lawn mower wasn't starting. Maybe a gas change? (Did I use a stabilizer last winter?) I go to Home Depot and get the 4-cycle engineered fuel and tubing. I get back home, siphon off the old gas, and replace it. No luck. Maybe I just need to clear out the carburetor. I drive back to Home Depot. I get ethanol to spray in the intake. I get home. The engine cranks and doesn't go. Shoot. I should probably just replace the carburetor. I'll save time and money with Amazon and get the replacement part. I need YouTube to figure out which part I need (Home Depot doesn't seem to have it). 2-day shipping – nice! It shows up a day later than that, but I get my tools ready. The bolts are metric. I only have standard sizes. Back to Home Depot. Back home. In one week I emerge victorious: it only took three trips to the same place, a missed SLA, and four hours of research, disassembly, and reassembly. It would have taken me one hour if I had known what the issue was and what I needed to remediate it. But instead I had to manually look at all the components, run tests to figure out the issue, escalate repairs, and call in other services.


This is very similar to a security analyst trying to understand an issue and resolve it. Just a few trips to different tools, an escalation to another department or team, and after some delays, they finally have everything they need to make a decision. The expense of all this is time – the most important resource when considering attack surface management.


Especially when you consider I have just one lawn mower – IT security teams have thousands of alerts.


The SIEM has everything. You just can’t use it.


SIEMs rely on proprietary query languages that often require specialization to use. However, the SIEM specialist often doesn't understand the data or its purpose, resulting in incomplete results. The outcome is a series of back-and-forths to get the appropriate data sets, which takes time. Time means longer exposure durations and dwell times, and neither helps risk reduction or MTTR.


And even when you get your data sets, it's time to compile and converge them. But each monitoring or administration tool has its own data values, varying formats, and limitations. To make sense of this data, analysts then need to converge it. Luckily for analysts, they have Excel to complete these tasks. Unfortunately for analysts, they only have Excel. And most of us who go into cybersecurity, ourselves included, did not enter the field to become as proficient in Excel as financial analysts. Just like how I didn't grow a garden and a yard to become a small engine repair specialist.
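To make the format problem concrete, here's a minimal sketch of the same host as two different tools might record it. The field names and values are hypothetical, not any vendor's actual log schema; the point is that landing both records in one place doesn't make them comparable.

```python
# Two hypothetical records describing the SAME host, as two different tools
# might emit them. Field names and values are illustrative only.

vuln_scanner_record = {
    "ip": "10.20.30.40",
    "hostname": "srv-web-01",
    "os": "Windows Server 2019",
    "asset_type": "workstation",     # misclassified by the scanner
    "finding": "CVE-0000-00000",     # placeholder CVE identifier
}

firewall_record = {
    "src_addr": "10.20.30.40",       # the same machine, under a different key name
    "dst_addr": "203.0.113.7",       # an external address
    "dst_port": 443,
    "action": "allow",
}

# Both records land in the SIEM, but nothing ties "ip" to "src_addr" or
# reconciles "workstation" with what the IaaS console calls a server.
# An analyst (or a spreadsheet) still has to do that convergence by hand.
```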


Limited accessibility of data and the unusable state of non-normalized data keep security operations from being effective. Manual, time-consuming, error-prone analysis results in ineffective prioritization, which results in unneeded escalations and partial resolutions. Not every nuanced problem can be escalated when alerts arrive in volume.


Cybersecurity mesh architecture will change vulnerability management in 2023 and beyond


So why haven’t we solved the data problem before? 


Simply put, the analytical technology and the human capacity to manually make up for the gap didn't exist. Teams operated in silos with different tools applied for specific purposes rather than a common goal. It's why different environments face different methods of attack even though the security controls applied are generally consistent. But every organization already has investments in tooling and teams – what's missing is the great convergence. We need to take all of our tools and turn them into a Swiss Army knife, putting every capability we have within immediate reach to expedite investigations and remediation.


That's why Gartner's Cybersecurity Mesh Architecture (CSMA) is both innovative and compelling to enterprises that need to gain operational efficiency while, in parallel, reducing the risk and expense associated with a breach. All we need to do is mesh our tools and services together.


Cybersecurity mesh architecture is predicated on the interoperability of services. Tools need to effectively communicate with one another the types and formats of data they are programmed to ingest. To facilitate this goal, a converged and normalized data set must be established and maintained. Now, this isn't another SIEM or data lake. A centralized data plane holds the specific pieces of data IT and security need to understand their assets, know their environment, and reach decisions quickly, if not automatically. We've written about this before, covering both Cybersecurity Mesh Architecture and its impact on Vulnerability Management.
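As a rough sketch of the idea, converging that data means translating each tool's records into one shared vocabulary and collapsing them into a single asset entry. The source field names, the common schema, and the merge rule below are all assumptions for illustration, not appNovi's implementation.

```python
# A minimal sketch of converging tool-specific records into a common asset
# schema. The field names and mappings are hypothetical.

FIELD_MAP = {
    "vuln_scanner": {"ip": "ip_address", "hostname": "name", "os": "operating_system"},
    "iaas":         {"privateIp": "ip_address", "instanceName": "name", "platform": "operating_system"},
    "cmdb":         {"ip_addr": "ip_address", "ci_name": "name", "os_version": "operating_system"},
}

def normalize(source: str, record: dict) -> dict:
    """Translate one tool's record into the common asset schema."""
    mapping = FIELD_MAP[source]
    return {common: record[native] for native, common in mapping.items() if native in record}

def merge(records: list[dict]) -> dict:
    """Collapse normalized records for the same asset into a single entry."""
    asset: dict = {}
    for record in records:
        asset.update({key: value for key, value in record.items() if value})
    return asset

asset = merge([
    normalize("vuln_scanner", {"ip": "10.20.30.40", "hostname": "srv-web-01", "os": "Windows Server 2019"}),
    normalize("iaas", {"privateIp": "10.20.30.40", "instanceName": "srv-web-01", "platform": "Windows"}),
])
print(asset)  # one record, one set of field names, queryable by anyone
```

Once every tool's output lands in that one schema, the questions that matter – is this a server, is it exposed, who owns it – can be answered from a single place instead of five consoles.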


The TL;DR is that when we eliminate the hurdles of data access and consumability, analysts analyze and level up their security domain specializations instead of focusing on learning the newest features of Excel.


The experience without converged and accessible data for vulnerability management


Many servers are vulnerable – let's consider just one with a CVE score that prompts prioritization. But investigating it runs into the challenge below:


  1. My vulnerability scanner identifies the asset as a workstation
  2. My IaaS console indicates it's a Windows server
  3. My CMDB indicates the asset is part of an application
  4. My CMDB has two different owners for the asset
  5. IAM logs indicate neither owner has logged into the box in three months
  6. IAM shows another user SSHing into the box this week
  7. NetFlow telemetry shows the asset communicating with other assets
  8. The asset is communicating with public IPs


We have enough information to escalate and contact the employee who's logging in. It only took logging into our vulnerability scanner, IaaS console, CMDB, IAM console, and firewall management console. Or maybe we filed a ticket with the SIEM team and just had to wait until SLAs prompted a response before we got the data to review.


That's a lot of single panes of glass. And a lot of logins. And a lot of manual aggregation and data convergence. And still, a lot of work to understand what the data adds up to. It makes sense why so many threats are only partially resolved – we don't have enough information to make a clear decision with the assurance we aren't going to disrupt the business. Then incorporate SLAs and goals, and we have a metric-driven environment impeded by our tech stack's lack of interoperability.


The experience with converged and accessible data for vulnerability management


Data convergence eliminates all the swivel-chair analysis above by creating and maintaining a centralized data plane. We construct a query across assets and their relationships and provide the results to analysts, or route them to a ticketing or workflow system. Common criteria determine the outcomes for different risks. We only need to configure this once to gain prioritization (a code sketch of the same logic follows the list):

  • Show all assets
    • Type of asset is server
  • Related to ANY application
  • Receiving network traffic from the internet
  • Related to CVEs
    • CVE severity is Critical
    • CVE exploit is available
    • Network traffic matches the port and protocol for exploitation
  • NOT seen by our EDR solution
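Here's a rough sketch of those criteria expressed once in code over a converged asset data set. The asset field names below are hypothetical placeholders, and this isn't appNovi's query interface; it only illustrates that the prioritization logic is defined one time and then runs on its own.

```python
# A sketch of the prioritization query above, run against a converged asset
# data set. Field names (asset_type, applications, internet_facing, cves,
# open_ports, edr_agent_seen) are hypothetical placeholders.

def high_risk_servers(assets: list[dict]) -> list[dict]:
    """Return internet-exposed, exploitable application servers with no EDR coverage."""
    return [
        a for a in assets
        if a.get("asset_type") == "server"                     # type of asset is server
        and a.get("applications")                              # related to ANY application
        and a.get("internet_facing")                           # receiving traffic from the internet
        and any(
            cve.get("severity") == "Critical"                  # CVE severity is Critical
            and cve.get("exploit_available")                   # CVE exploit is available
            and cve.get("port") in a.get("open_ports", [])     # traffic matches the exploitable port/protocol
            for cve in a.get("cves", [])
        )
        and not a.get("edr_agent_seen")                        # NOT seen by our EDR solution
    ]
```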


What we just created is a pre-defined query that automatically searches across all of our assets and their network and business context to identify the most significant risks to our business. The results are application servers that are vulnerable, exploitable, and contextually exposed to the internet. And these results are calculated and even delivered automatically. Which of the outcomes below is preferred?


  • Email the results to the server team
  • Create tickets for each server owner in ITSM for patching
  • Identify recent admin login activity by users and assign tickets to them
  • Define conditional criteria that determine which user receives the ticket


Regardless of which options you choose, the result is more effective risk prioritization, automated alerts to stakeholders, and increased visibility into and accountability for remediation. And this level of automation requires configuration only once for each level of risk. The result is analysts empowered to spend their time on analysis, identifying risk earlier and responding faster.
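A minimal sketch of that one-time routing configuration might look like the following. The create_ticket and notify_team helpers and the asset fields are hypothetical stand-ins for whatever ITSM or notification integration you already use.

```python
# Hypothetical routing rules applied to each result of the query above.
# create_ticket and notify_team are placeholders for a real ITSM or email
# integration; the conditional logic itself is configured once.

def create_ticket(assignee: str, asset: dict) -> None:
    print(f"ticket -> {assignee}: patch {asset.get('name')}")

def notify_team(team: str, asset: dict) -> None:
    print(f"email -> {team}: review {asset.get('name')}")

def route(asset: dict) -> None:
    recent_admins = asset.get("recent_admin_logins", [])       # e.g. pulled from IAM logs
    owner = asset.get("owner")                                  # e.g. pulled from the CMDB
    if recent_admins:
        create_ticket(assignee=recent_admins[0], asset=asset)   # most recent hands-on admin
    elif owner:
        create_ticket(assignee=owner, asset=asset)              # fall back to the recorded owner
    else:
        notify_team(team="server team", asset=asset)            # no owner on record: notify the team
```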


Why we’re challenging the status quo of cybersecurity for more effective vulnerability management


Years ago, we faced the same problems today's practitioners face. Making a simple security decision was time-consuming, tedious, and often boring. At appNovi, we're putting data science back into the hands of practitioners. The democratization of high-fidelity security data only requires using the tools you already own. The result for enterprises is total asset visibility, complete asset context, and the ability to know your environment and make efficient, non-disruptive decisions. appNovi eliminates significant wastes of human resources while reducing project costs – watch the video below to learn more and start using appNovi today to make positive changes to your vulnerability management in 2023.