In this article I will discuss the ATT&CK™ Tools project (GitHub: https://github.com/nshalabi/ATTACK-Tools) in the form of multiple “use cases,” and explain how the project addresses each of them.
Emphasis On the Data Model
As of this writing, the ATT&CK-Tools Git repository consists of
- ATT&CK™ View: an adversary emulation planning tool
- ATT&CK™ Data Model: a relational data model for ATT&CK™ and STIX™ 2.0
However, I would like to focus more on the data model, as understanding the concept might enable others to derive their own model, or tailor the existing one to their specific requirements and needs.
Initially, I started working on a simple model for ATT&CK that, to some extent, was working fine. However, thinking about other scenarios where this data concept could be more useful (for example, cyber threat intelligence use cases) led me to revise the design to be based on STIX 2.0 objects. I ended up with the final data model being a STIX 2.0 relational data object model, with ATT&CK imported as data.
In a nutshell, the following are the benefits achieved from having a data model for ATT&CK/STIX 2.0:
- Having a data model facilitates integration between various security solutions and helps make existing controls ATT&CK-enabled. If you want to integrate your existing controls with ATT&CK, you don’t have to wait for (or get stuck with) your vendor providing this to you. On the other side of the fence, a data model can also help when there is an overlap in tool coverage.
- It helps derive a statistical model that works “for you.” Furthermore, this model can be enriched with additional relevant data to gain insights that might be valid only within the context of your business or operation.
- Flexibility in implementation: I recommend not getting caught up in the underlying technology at this stage (relational vs. NoSQL vs. your SIEM data store, or a combination of all). Technology selection usually comes at the end, hopefully after the design phase.
- There is no one size fits all. Having a data model, however, allows introducing (plugging in) more specific, contextual information that relates to organizations, operations, teams, etc.
- It enables (or facilitates) contribution to the framework, not just at the level of ATT&CK-specific data such as the ATT&CK techniques, but any information that builds on top of the framework, for example, adversary emulation plans.
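To make the idea of a relational model for STIX 2.0/ATT&CK concrete, here is a minimal sketch. It uses a tiny hand-crafted bundle standing in for the real ATT&CK STIX JSON published by MITRE, and a hypothetical two-table schema (the actual ATT&CK Data Model in the repository is richer); the object IDs are illustrative, not real ATT&CK identifiers.

```python
import json
import sqlite3

# Tiny stand-in for the real ATT&CK STIX 2.0 bundle; IDs are illustrative.
bundle = {
    "type": "bundle",
    "id": "bundle--0001",
    "objects": [
        {"type": "attack-pattern", "id": "attack-pattern--0001",
         "name": "Spearphishing Attachment"},
        {"type": "intrusion-set", "id": "intrusion-set--0001",
         "name": "APT3"},
        {"type": "relationship", "id": "relationship--0001",
         "relationship_type": "uses",
         "source_ref": "intrusion-set--0001",
         "target_ref": "attack-pattern--0001"},
    ],
}

db = sqlite3.connect(":memory:")
# One row per STIX object; the raw JSON is stored alongside the
# queryable columns, so nothing is lost in translation.
db.execute("""CREATE TABLE stix_object (
                  id TEXT PRIMARY KEY,
                  type TEXT NOT NULL,
                  name TEXT,
                  raw TEXT NOT NULL)""")
# Relationships get their own table so joins stay cheap.
db.execute("""CREATE TABLE stix_relationship (
                  id TEXT PRIMARY KEY,
                  relationship_type TEXT NOT NULL,
                  source_ref TEXT NOT NULL,
                  target_ref TEXT NOT NULL)""")

for obj in bundle["objects"]:
    if obj["type"] == "relationship":
        db.execute("INSERT INTO stix_relationship VALUES (?, ?, ?, ?)",
                   (obj["id"], obj["relationship_type"],
                    obj["source_ref"], obj["target_ref"]))
    else:
        db.execute("INSERT INTO stix_object VALUES (?, ?, ?, ?)",
                   (obj["id"], obj["type"], obj.get("name"),
                    json.dumps(obj)))

# "Which techniques does APT3 use?" becomes a plain SQL join.
rows = db.execute("""
    SELECT t.name
    FROM stix_object g
    JOIN stix_relationship r ON r.source_ref = g.id
                            AND r.relationship_type = 'uses'
    JOIN stix_object t ON t.id = r.target_ref
    WHERE g.type = 'intrusion-set' AND g.name = 'APT3'
""").fetchall()
print([name for (name,) in rows])  # ['Spearphishing Attachment']
```

Once ATT&CK lives in tables like these, integrating another security tool is just a matter of writing to (or joining against) the same schema.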
The first set of use cases relates to Adversary Emulation Planning.
Adversary Emulation Use Cases
(1) Measured plan coverage. Adversary emulation planning, from beginning to end and continuously, can be enriched with additional relevant information that acts as metadata or attributes; this data can aid in deriving measurements. Examples could be
- Tools used to test ATT&CK techniques
- Testing results
- Start and end dates (were there any new inputs during or after the testing window, for example new CTI reports, that might require re-evaluation of the testing plan?)
- Captured indicators
- Lessons learned
- Investment (Yes! Money, time and resources)
- Vulnerabilities (CTI)
As a simple example, the following screenshot shows the “coverage view” of the APT3 Adversary Emulation Plan that comes bundled with ATT&CK View (developed by MITRE – https://attack.mitre.org/wiki/Adversary_Emulation_Plans). The measurement criterion used here is simply the testing coverage of APT3 group techniques.
This view captures (a) techniques not planned for yet and (b) techniques that do not belong to the APT3 group (marked in a different color in the middle list). (c) It can also highlight planned vs. tested techniques; this last point is another example of how metadata can be associated with our data model, in this case helping with tracking and progress updates.
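The three categories above, and a coverage percentage, fall out of simple set arithmetic once the plan lives in the data model. A minimal sketch; the technique IDs are illustrative, not taken from the actual APT3 plan:

```python
# Illustrative technique IDs only; a real plan would pull actual
# ATT&CK technique IDs from the data model.
apt3_techniques = {"T1003", "T1059", "T1105", "T1027"}   # from CTI
planned = {"T1003", "T1059", "T1105", "T1547"}           # in the plan
tested = {"T1003", "T1059"}                              # tested so far

not_planned_yet = apt3_techniques - planned        # (a)
outside_group = planned - apt3_techniques          # (b)
planned_not_tested = planned - tested              # (c)

# Coverage: share of the group's techniques actually tested.
coverage = len(tested & apt3_techniques) / len(apt3_techniques)
print(sorted(not_planned_yet))      # ['T1027']
print(sorted(outside_group))        # ['T1547']
print(sorted(planned_not_tested))   # ['T1105', 'T1547']
print(f"{coverage:.0%}")            # 50%
```

Swapping in a different measurement criterion (say, weighting techniques by CTI prevalence) only changes the last calculation, not the model.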
(2) Organization. One adversary emulation plan might require multiple, frequent tests/iterations (perhaps due to the availability of CTI data, internal incident reports, etc.), which can quickly result in numerous, unrelated (logically detached) testing plans. Assigning tests under one “logically grouped” plan can help eliminate this inconsistency, and while I chose to group all tests under one plan, others might choose different grouping criteria: for example, the logical grouping could be per “Target,” or per “Client” in a multi-tenant SOC/SSP, etc. Whatever the chosen criterion is, the data model is flexible enough to support it.
As an example, have a look at the APT3 testing plan again using ATT&CK View. As previously mentioned, all tests (I will also refer to them as testing guidelines) were grouped under one plan, and each testing guideline can be associated with one or more ATT&CK techniques. Additionally, each testing guideline uses a specific toolset for testing and evaluation (built-in Windows OS tools, Cobalt Strike, and Metasploit) in an attempt to simulate the software used by adversaries.
One of the benefits of this organization already shows in the “Main View” as a visualization aid: the testing plan can be visualized by technique or by testing guideline, as shown next:
The first mode, [One] “ATT&CK Technique” maps to [One to Many] “Testing Guidelines,” this helps in visualizing the tests made to evaluate one ATT&CK Technique
The second mode, [One] “Testing Guideline” maps to [One to Many] “ATT&CK Techniques,” this helps in visualizing the techniques meant to be evaluated by a single testing guideline
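Taken together, the two modes describe a many-to-many relationship between techniques and testing guidelines. A minimal sketch of both views over the same links; the guideline and technique IDs are hypothetical:

```python
from collections import defaultdict

# Hypothetical guideline-to-technique links (many-to-many).
links = [
    ("GL-01", "T1059"),  # guideline GL-01 exercises two techniques
    ("GL-01", "T1105"),
    ("GL-02", "T1059"),  # technique T1059 is covered by two guidelines
]

# Mode 1: one ATT&CK technique -> the guidelines that test it.
by_technique = defaultdict(list)
# Mode 2: one testing guideline -> the techniques it evaluates.
by_guideline = defaultdict(list)
for guideline, technique in links:
    by_technique[technique].append(guideline)
    by_guideline[guideline].append(technique)

print(by_technique["T1059"])   # ['GL-01', 'GL-02']
print(by_guideline["GL-01"])   # ['T1059', 'T1105']
```

In a relational store this is simply a join table queried from either side; the two visualization modes are the same data pivoted differently.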
Additionally, the testing guidelines help in providing more context to the same ATT&CK technique on a per-plan basis. The only challenge, in this case, is how this could be captured in a way that is evident in a threat hunting exercise; using ATT&CK Tools, we could capture this by assigning “Tags,” for example.
Those tags can be utilized later in reporting, or when used in an analysis model to help derive new future testing plans.
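As a sketch of how such tags could feed reporting, here is a hypothetical tag assignment (the guideline IDs and tag names are illustrative, not from ATT&CK View itself):

```python
# Hypothetical tags attached to testing guidelines; in practice this
# would live alongside each guideline in the plan data model.
guideline_tags = {
    "GL-01": {"cobalt-strike", "lateral-movement"},
    "GL-02": {"metasploit", "lateral-movement"},
    "GL-03": {"builtin-tools"},
}

def guidelines_with_tag(tag):
    """Return the guidelines carrying a given tag, e.g. for reporting."""
    return sorted(g for g, tags in guideline_tags.items() if tag in tags)

print(guidelines_with_tag("lateral-movement"))  # ['GL-01', 'GL-02']
```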
(3) Knowledge base for all. Our data model can act as a shared knowledge base contributed to by Red/Blue/Purple teams. Organizations spend time, money, and resources on cybersecurity, and capturing this knowledge would be a good return on that investment.
This KB can be further extended by importing the work of others; for example, the current data model includes Atomic Red Team tests as a reference.
As an example, here is a quick search for the word “query” in ATT&CK View.
(4) Automation (work in progress). A testing guideline usually has an associated evaluation criterion, such as a flagged EDR event, an IPS alert, an AV notification, etc. The process of evaluating any test can be automated by capturing those alerts and matching them with a testing guideline’s evaluation criteria. Test evaluation is still human work, but automation can help the analyst focus on the hard work of eliminating false positives, instead of the process of collecting the test results (and hopefully facilitate red/blue team coordination).
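Since this part of the project is a work in progress, the following is only a sketch of the matching step, under the assumption that each guideline's evaluation criterion can be expressed as an expected (sensor, signature) pair; all names and signatures here are hypothetical:

```python
# Hypothetical evaluation criteria: each guideline expects a given
# alert signature from a given sensor (EDR, IPS, AV, ...).
criteria = {
    "GL-01": {"sensor": "EDR", "signature": "powershell-encoded-cmd"},
    "GL-02": {"sensor": "IPS", "signature": "smb-lateral-movement"},
}

# Alerts as they might arrive from the monitoring pipeline (illustrative).
alerts = [
    {"sensor": "EDR", "signature": "powershell-encoded-cmd"},
    {"sensor": "AV", "signature": "eicar-test-file"},
]

def evaluate(criteria, alerts):
    """Mark each guideline as triggered if a matching alert arrived.
    The analyst still reviews the results (false positives, etc.)."""
    seen = {(a["sensor"], a["signature"]) for a in alerts}
    return {g: (c["sensor"], c["signature"]) in seen
            for g, c in criteria.items()}

print(evaluate(criteria, alerts))  # {'GL-01': True, 'GL-02': False}
```

The automation only pre-sorts the evidence; the untriggered guideline (GL-02 above) is exactly where the analyst's attention should go.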
Again, we present ATT&CK data in our data model as STIX 2.0 objects. This means that CTI data related to specific adversaries can be integrated to help analysts re-evaluate their testing plan coverage: coverage reports can be auto-updated using CTI reports as input, “flagging” the gaps in previously developed testing plans in an automated fashion.
You can send me your feedback directly by emailing me at email@example.com, on Twitter at @nader_shalabi, or through the GitHub page of the project (https://github.com/nshalabi/ATTACK-Tools).