Docker image security scanning tools
Recent cybersecurity incidents point to an alarming trend that is becoming a day-to-day struggle for businesses. As increasingly sophisticated cyber-attacks threaten to undermine the very core of your business, cybersecurity professionals play a vital role in the attempt to thwart them. And the effort isn't getting any easier: data breaches and hacks occur at an unprecedented rate, in a dangerous threat landscape with an ever-growing volume of daily security alerts.
Many of these threat vectors emerge from cloud-based workloads and services that give businesses a highly reliable, low-cost way to build, ship, and run distributed applications at any scale. One such platform has established itself as an essential piece in driving that trend: Docker.
Docker is one of the leading containerization platforms, allowing for quick application builds, packaging, and testing. Docker containers come with everything your application needs to run, including libraries, system tools, code, and a runtime. Using Docker, you can deploy applications into any environment and be sure your code will run the same way. Sounds amazing, right? It definitely is, but it also provides an additional attack surface to cover. While Docker, Inc. claims it has the strongest default container isolation capabilities in the industry, there are still plenty of other factors to consider when securing application containers against malicious threat actors.
Docker images, which act as “blueprints” for application container instances, are a common source of the vulnerabilities that creep into our application containers. Usually such vulnerabilities are inherited from the parent images that “host” our application code and its dependencies, though they are seldom introduced by our own code. Fortunately, there are specialized tools that will do the heavy lifting for us, automatically scanning our images and reporting any security vulnerabilities they find. Some of these tools provide an additional layer of security in the form of custom policy-adherence rules, applied either at runtime or during static analysis, which you can enforce on Dockerfiles, configuration files, third-party libraries, and various other kinds of resources.
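To make the idea of static policy-adherence rules concrete, here is a minimal sketch of what two such checks might look like when applied to a Dockerfile. The rules and function name are illustrative inventions, not the rule syntax of any particular tool (real engines, such as Anchore's, express policies declaratively):

```python
# A toy static policy checker for Dockerfiles. Both rules below are
# hypothetical examples of the kind of constraints a policy engine enforces.

def check_dockerfile(text):
    """Return a list of policy violations found in a Dockerfile."""
    violations = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        stripped = line.strip()
        # Rule 1: base images must be pinned to a tag, never ':latest' or untagged.
        if stripped.upper().startswith("FROM"):
            image = stripped.split()[1]
            if ":" not in image or image.endswith(":latest"):
                violations.append(f"line {lineno}: unpinned base image '{image}'")
        # Rule 2: containers should not run as root.
        if stripped.upper().startswith("USER") and stripped.split()[1] == "root":
            violations.append(f"line {lineno}: container runs as root")
    return violations

dockerfile = """\
FROM debian:latest
USER root
"""
print(check_dockerfile(dockerfile))  # flags both lines
```

A real policy engine evaluates many more rule classes (exposed ports, secrets baked into layers, package blacklists), but the shape is the same: static rules over build artifacts, enforced before the image ever reaches a registry.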
There are numerous such tools and platforms, both proprietary and open source, but we'll focus on four major ones, one of which is proprietary.
Anchore Engine is an open-source image scanning tool. It provides a centralized service for inspection and analysis, and applies user-defined acceptance policies to allow automated validation and certification of container images. Its architecture comprises six components that can be deployed in a single container or scaled out:
- API Service: Central communication interface that can be accessed by code using a REST API or directly from the command line.
- Image Analyzer Service: Executed by “worker” nodes, this is the Anchore component that performs the actual Docker image scanning.
- Catalog Service: Internal database and system state service.
- Queuing Service: Organizes, persists, and schedules the engine tasks.
- Policy Engine Service: Policy evaluation and vulnerability-matching rules.
- Kubernetes Webhook Service: Kubernetes-specific webhook service to validate images before they are spawned.
Anchore Engine's performance and accuracy place it among the very top choices we reviewed. The flexible user-defined policies, breadth of analysis, rich API, and reporting capabilities make for a well-rounded, comprehensive Docker image scanning solution.
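In practice you would pull an image's vulnerability report from the Engine's API Service (for instance with `anchore-cli --json image vuln debian:jessie-slim all`) and post-process the JSON. The sketch below assumes a simplified response shape with a `vulnerabilities` list of `vuln`/`severity`/`package` records; treat the field names and sample data as assumptions for illustration:

```python
import json

# Post-processing a (simplified, assumed) Anchore-style vulnerability report:
# count findings per severity level.

def summarize(report):
    """Count vulnerabilities per severity level."""
    counts = {}
    for v in report.get("vulnerabilities", []):
        counts[v["severity"]] = counts.get(v["severity"], 0) + 1
    return counts

# Illustrative sample, not real scan output.
sample = json.loads("""
{"vulnerabilities": [
  {"vuln": "CVE-2017-1000082", "severity": "High",       "package": "systemd"},
  {"vuln": "CVE-2005-2541",    "severity": "Negligible", "package": "tar"},
  {"vuln": "CVE-2019-3843",    "severity": "High",       "package": "systemd"}
]}
""")
print(summarize(sample))  # {'High': 2, 'Negligible': 1}
```

A summary like this is also where a CI gate would plug in: fail the pipeline if the count for "High" or "Critical" is non-zero.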
Clair is an open-source project for the static analysis of vulnerabilities in appc and Docker containers. A high-level overview of the Clair image scanning cycle:
- At regular intervals, Clair ingests vulnerability metadata from a configured set of sources and stores it in the database.
- Clients use the Clair API to index their container images; this creates a list of features present in the image and stores them in the database.
- Clients use the Clair API to query the database for vulnerabilities of a particular image; correlating vulnerabilities and features is done for each request, avoiding the need to rescan images.
- When updates to vulnerability metadata occur, a notification can be sent to alert systems that a change has occurred.
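The key design point in that cycle is the split between indexing and matching: an image's features are extracted once, while correlation against vulnerability data happens at query time, so fresh vulnerability metadata never forces a rescan. A toy model of that split (names and data are illustrative, not Clair's actual API):

```python
# Toy model of Clair's index-once, match-per-query design.

indexed_features = {}   # image name -> set of package features (indexed once)
vuln_db = {}            # feature -> CVE (continuously updated from upstream)

def index_image(image, features):
    # Done once per image: extract and store its feature list.
    indexed_features[image] = set(features)

def query_vulns(image):
    # Done per request: correlate stored features against the *current*
    # vulnerability database. No rescan of the image is needed.
    return {vuln_db[f] for f in indexed_features[image] if f in vuln_db}

index_image("debian:jessie-slim", {"glibc-2.19", "openssl-1.0.1"})
vuln_db["openssl-1.0.1"] = "CVE-2016-2107"
print(query_vulns("debian:jessie-slim"))  # {'CVE-2016-2107'}

# New vulnerability metadata arrives; the next query reflects it
# immediately, without re-indexing the image.
vuln_db["glibc-2.19"] = "CVE-2015-7547"
print(query_vulns("debian:jessie-slim"))  # now matches both features
```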
This is another great contender for the top static vulnerability assessment pick. In terms of report depth and overall performance, it's on par with Anchore, if not slightly better. It's also very simple to install and set up (it doesn't require a dedicated image registry like Anchore does), with little to no configuration, which comes in handy for seamless CI/CD pipeline integration. The only downside to Clair is the lack of a customizable policy enforcement engine, which is a must if you have specific compliance requirements to fulfill.
Aqua Security’s MicroScanner lets you check your container images for vulnerabilities. If the scanner detects a high-severity issue in your image, MicroScanner can fail the image build, allowing for easy and seamless inclusion as a step in your CI/CD pipeline. The detection rate and performance seem decent, albeit not as comprehensive and detailed as those of Anchore Engine and Clair. Also, a fair-usage policy caps the number of scans you can run at a time.
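MicroScanner's documented usage pattern is to embed the scanner in the Dockerfile itself, so the scan runs as a build step and a high-severity finding breaks the build. A minimal sketch of that pattern (the token placeholder is yours to fill in after registering with Aqua for a free token):

```dockerfile
FROM debian:jessie-slim
# Fetch the scanner binary from Aqua's distribution point.
ADD https://get.aquasec.com/microscanner /
RUN chmod +x /microscanner
# Runs the scan during the build; by default a high-severity finding
# fails this step, and therefore the whole image build.
ARG token
RUN /microscanner ${token}
```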
Dagda is an open-source tool, written in Python, that performs static analysis of known vulnerabilities in Docker images and containers. It also helps you monitor running Docker containers for anomalous activity with the help of the bundled Sysdig Falco tool. Dagda draws on quite a few vulnerability databases.
Like all of the other scanners, it pulls CVEs (Common Vulnerabilities and Exposures) from the NIST NVD database, along with BIDs (Bugtraq IDs), RHSAs (Red Hat Security Advisories), RHBAs (Red Hat Bug Advisories), and known exploits from the Offensive Security database. It also uses the OWASP dependency checker to look for application vulnerabilities in Java, Python, Node.js, JavaScript, Ruby, and PHP, and it ships with the ClamAV antivirus scanner included.
To gauge the detection rates of the tools, I chose the most recent (as of the time of writing) slimmed-down Docker image of Debian Jessie (debian:jessie-slim), scanned it with all the tools, then collected and filtered the results down to the respective unique CVE IDs of the detected vulnerabilities. CVE is a list of entries for publicly known cybersecurity vulnerabilities, each containing an identification number, a description, and at least one public reference.
This method for comparing detection efficacy was selected because all of the tools include a CVE entry for every detected vulnerability, making it an easy metric for comparison. Some tools, like Dagda, support other vulnerability repositories and their referencing systems, such as BIDs, RHSAs, and RHBAs, but still included a CVE ID for most of the bugs in their scan reports.
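The normalization step amounts to pulling every CVE identifier out of a tool's raw report and deduplicating. A sketch of that filtering, over made-up report lines (the sample text mimics typical scanner output, it is not an actual report):

```python
import re

# Reduce a raw scan report to its set of unique CVE identifiers.
CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}")

def unique_cves(report_text):
    return set(CVE_RE.findall(report_text))

# Illustrative report lines; note the duplicate entry, which the
# set-based deduplication collapses.
report = """\
libc6 2.19-18+deb8u10  CVE-2017-12132  negligible
libc6 2.19-18+deb8u10  CVE-2017-12132  negligible
tar   1.27.1-2+deb8u2  CVE-2005-2541   negligible
"""
print(sorted(unique_cves(report)))  # ['CVE-2005-2541', 'CVE-2017-12132']
```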
Bear in mind, however, that the proposed metric is by no means perfect: empirically, there seem to be quite a few false positives among the results. It is, arguably, still indicative of vulnerability detection rates. These tools are static vulnerability scanners, so false positives come with the territory; they should be used in conjunction with other security tools, e.g. real-time behavior monitoring software and anti-malware software, in order to further shrink the attack surface.
There was a clear outlier among the tested tools in terms of the number of unique vulnerabilities detected: Clair. Clair found no fewer than 68 unique vulnerabilities, dating from as early as 2005 to the present. The other three tools, Anchore, MicroScanner, and Dagda, were similar in their detection rates: Anchore came in second with 24 unique CVEs, MicroScanner took third place with 19, just ahead of Dagda, which detected one fewer.
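Once each report is reduced to a set of unique CVE IDs, comparing tools is plain set arithmetic: intersections show consensus findings, differences show what only one tool reported. The tiny sets below are made-up stand-ins, not the actual scan results:

```python
# Comparing per-tool result sets with set arithmetic (illustrative data).
clair   = {"CVE-2005-2541", "CVE-2016-2107", "CVE-2017-12132", "CVE-2019-3843"}
anchore = {"CVE-2005-2541", "CVE-2017-12132"}

print(len(clair))               # unique CVEs found by Clair
print(sorted(clair & anchore))  # consensus: found by both tools
print(sorted(clair - anchore))  # reported only by Clair
```

CVEs reported by only one tool are exactly the candidates to inspect first when hunting for false positives.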
It would be interesting to perform some kind of dynamic vulnerability analysis on the vulnerabilities reported by Clair, in order to weed out false positives and get more precise, consistent readings. As it turns out, there's a lot of subtlety to the way vulnerabilities are detected in container images.