In this article and, hopefully, the following ones, I will discuss the ATT&CK™ Tools project (GitHub: https://github.com/nshalabi/ATTACK-Tools) through multiple “use cases,” explaining how the project (as a solution) addresses each of them. I chose this approach because it leaves more room for open discussion, which can feed back into enhancement decisions for the toolset. The use cases represent accumulated knowledge and input from the community; although they might not be valid outside the context of a particular organization, I figured that sharing the way I approach the problem to arrive at the solution might be useful to others.
Emphasis On the Data Model
As of this writing, the ATT&CK-Tools GitHub repository consists of:
- ATT&CK™ View: an adversary emulation planning tool
- ATT&CK™ Data Model: a relational data model for ATT&CK™ and STIX™ 2.0
However, I would like to focus on the data model, as understanding the concept might enable others to derive their own or tailor the existing one to their requirements and needs.
Initially, I started working on a simple model for ATT&CK that, to some extent, worked fine. However, thinking about other scenarios where this data concept could be more useful led me to revise the existing design within the context of STIX 2.0 (I will discuss this further in other use cases). I ended up with a final data model that is a STIX 2.0 relational data object model with imported ATT&CK data.
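To make the idea concrete, here is a minimal sketch of what “a relational data model holding STIX 2.0 objects” can look like. The table name, column names, and the STIX object id below are my own illustration, not the actual ATT&CK View schema; in STIX 2.0 terms, an ATT&CK technique is an `attack-pattern` object.

```python
import json
import sqlite3

# Hypothetical sketch: a minimal relational store for STIX 2.0 objects.
# Table and column names are illustrative, not the actual ATT&CK View schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE stix_objects (
        id    TEXT PRIMARY KEY,   -- STIX 2.0 object id
        type  TEXT NOT NULL,      -- e.g. 'attack-pattern', 'intrusion-set'
        name  TEXT,
        raw   TEXT NOT NULL       -- full STIX JSON, kept for round-tripping
    )""")

# An ATT&CK technique maps to a STIX 'attack-pattern' object (id is made up).
technique = {
    "type": "attack-pattern",
    "id": "attack-pattern--00000000-0000-0000-0000-000000000001",
    "name": "Credential Dumping",
}
conn.execute(
    "INSERT INTO stix_objects (id, type, name, raw) VALUES (?, ?, ?, ?)",
    (technique["id"], technique["type"], technique["name"], json.dumps(technique)),
)

# Relational queries now work over the CTI data: list all techniques by name.
rows = conn.execute(
    "SELECT name FROM stix_objects WHERE type = 'attack-pattern'"
).fetchall()
```

Keeping the full STIX JSON in a `raw` column alongside a few indexed columns is one simple way to get relational querying without losing the original objects.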
The benefits of having a data model for ATT&CK will be addressed through these use cases. In a nutshell:
- A data model facilitates integration between the various solutions and helps ATT&CK-enable existing controls. If you are looking to integrate with ATT&CK, you don’t have to wait (get stuck) until your vendor provides it. On the other side of the fence, a data model can also help when there is an overlap in tool coverage.
- It helps derive a statistical model that works “for you.” Furthermore, this model could be enriched with more relevant data for additional insight (we already deal with vast amounts of data).
- Flexibility in implementation: I recommend not getting caught up with the underlying technology at this stage (relational vs. NoSQL vs. your SIEM data store, or a combination of all); the technology selection usually comes at the end.
- There is no one-size-fits-all. Having a data model, however, allows for introducing (plugging in) more specific, contextual information that relates to organizations, operations, teams, etc.
- It enables automation (more on this shortly).
- It enables (or facilitates) contribution, not just at the level of ATT&CK-specific data, such as ATT&CK techniques, but of any information that builds on top of the framework, for example, adversary emulation plans (more on this in upcoming write-ups). This contribution could be public or internal.
So, without further ado, let’s start with the first set of use cases.
Adversary Emulation Use Cases
(1) Measured plan coverage: adversary emulation planning, from beginning to end and continuously, can be enriched with more relevant information that acts as metadata or attributes; this data can aid in deriving measurements. Examples include:
- Tools used to test ATT&CK techniques
- Testing results
- Start and end dates (were there any new CTI reports after the end date that might require re-evaluating the testing plan?)
- Captured indicators
- Lessons learned
- Investment (yes! money, time, and resources)
- Targets (and depending on requirements, the list can be longer)
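The attribute list above can be sketched as a simple record attached to each testing guideline. The field names below are my own illustration of those attributes, not the tool’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Hypothetical sketch of per-test metadata mirroring the attribute list above.
# Field names are illustrative, not the ATT&CK View database schema.
@dataclass
class TestingGuideline:
    technique_id: str                          # ATT&CK technique under test
    tools: List[str] = field(default_factory=list)      # tools used to test
    result: Optional[str] = None               # e.g. "detected", "missed"
    start: Optional[date] = None               # test start date
    end: Optional[date] = None                 # test end date
    indicators: List[str] = field(default_factory=list) # captured indicators
    lessons_learned: str = ""
    investment_hours: float = 0.0              # money/time/resources proxy
    targets: List[str] = field(default_factory=list)

# Example record for one planned test (all values illustrative).
test = TestingGuideline(
    technique_id="T1003",
    tools=["Mimikatz"],
    result="detected",
    start=date(2018, 10, 1),
    end=date(2018, 10, 2),
)
```

Once tests carry structured attributes like these, deriving measurements (coverage, cost, time-to-test) becomes a matter of simple aggregation queries.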
Example: referencing the APT3 plan, let’s take the “Coverage View” in the ATT&CK View application, which shows “reported” vs. “planned” vs. “tested” techniques (ATT&CK View ships with a ready-made APT3 plan developed by the MITRE team, source: https://www.mitre.org/publications/technical-papers/mitre-attack-design-and-philosophy).
There are 40 reported ATT&CK techniques associated with APT3; however, the developed plan covers only 16 of them, due to the availability of reported public information. If more information becomes available (publicly or privately), this view will help re-evaluate the plan coverage and highlight the gaps in assessment.
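The arithmetic behind such a coverage view is plain set difference. A minimal sketch, using placeholder technique IDs rather than the actual APT3 data:

```python
# Sketch of the "Coverage View" arithmetic: compare the set of techniques
# reported for a group against those planned and tested. The technique IDs
# are placeholders, not the real APT3 technique list.
reported = {f"T{n}" for n in range(1001, 1041)}   # 40 reported techniques
planned = set(sorted(reported)[:16])              # 16 planned for testing
tested = set(sorted(planned)[:10])                # subset actually executed

gaps = reported - planned      # reported but not yet planned (24 here)
untested = planned - tested    # planned but not yet executed (6 here)
```

When new CTI adds techniques to `reported`, the `gaps` set grows automatically, which is exactly the re-evaluation signal the view is meant to surface.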
(2) Organization: one adversary emulation plan might require multiple, frequent tests (iterations) based on the availability of CTI data (public or private), which can quickly result in numerous unrelated testing plans. Assigning tests under one “logically grouped” plan can help eliminate this inconsistency. (Others might decide that the logical grouping should be organized per “target” under testing, or per “client” in a multi-tenant MSSP; you decide.)
Example: back in the APT3 testing plan, you will find that one ATT&CK technique can have multiple testing guidelines, each using a specific toolset for testing and evaluation (built-in Windows tools, Cobalt Strike, and Metasploit).
Since we are on the topic: the content of the APT3 plan developed by the MITRE team is unchanged; what I did was restructure the tests so they could be imported into the database. As a bonus, in the “Main View” I created two ways to visualize the testing plan (using the “switch view” button):
- [One] “ATT&CK Technique” -> [One to Many] “Testing Guidelines”: this helps visualize the tests (planned iterations) made to test one ATT&CK technique
- [One] “Testing Guideline” -> [One to Many] “ATT&CK Techniques”: this helps visualize the techniques meant to be tested by a single testing guideline
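Both views are groupings over the same many-to-many relation between techniques and testing guidelines. A sketch with illustrative IDs (not the actual APT3 plan data):

```python
from collections import defaultdict

# Each (technique, guideline) pair is one planned test. The pairs below are
# illustrative, not taken from the APT3 plan.
pairs = [
    ("T1069", "TID-001"),  # tested with built-in Windows tools
    ("T1069", "TID-002"),  # tested again with another toolset
    ("T1087", "TID-002"),  # the same guideline also covers a second technique
]

# View 1: one technique -> many testing guidelines
by_technique = defaultdict(list)
# View 2: one testing guideline -> many techniques
by_guideline = defaultdict(list)
for technique, guideline in pairs:
    by_technique[technique].append(guideline)
    by_guideline[guideline].append(technique)
```

The “switch view” button simply picks which side of this relation becomes the grouping key.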
(3) False analysis: going back to the “Coverage View,” this time select APT28 instead of APT3. This kind of coverage reporting might not be accurate (it should have a context), as the tests reported as “planned” were developed in the context of the APT3 adversary emulation plan, not APT28. To eliminate this type of false indication (or ambiguity, if you like), ATT&CK View associates more metadata with testing guidelines in the form of “tags” (shown in the previous screen capture as gray circles; colored tagging is also supported).
(4) Knowledge base for all: our data model can act as a shared knowledge base contributed to by red/blue (and hopefully purple) teams. Organizations spend time, money, and resources on security; capturing this knowledge is a good ROI, in my opinion.
More: you can extend this knowledge base by importing the work of others, such as red team playbooks, and have it all in one place to be referenced.
Example: here is a quick search for the “dsquery” tool in ATT&CK View; from here, you can retrieve the full details of the testing guideline (and the plan).
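In spirit, such a lookup is just a keyword search over the stored guideline text. A minimal sketch with illustrative knowledge-base entries:

```python
# Sketch of a keyword search over knowledge-base entries, in the spirit of
# the "dsquery" search shown above. Entries and IDs are illustrative.
kb = [
    {"tid": "TID-001", "text": "Enumerate domain accounts using dsquery user"},
    {"tid": "TID-002", "text": "Dump credentials from LSASS memory"},
]

def search(keyword, kb):
    """Return the IDs of guidelines whose text mentions the keyword."""
    return [e["tid"] for e in kb if keyword.lower() in e["text"].lower()]
```

From the matching IDs, the full guideline and its parent plan can then be retrieved through the database relations.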
(5) Automation (work in progress): a testing guideline usually has associated evaluation criteria, such as a flagged EDR event, an IPS alert, etc. The process of evaluating any test can be automated by capturing those alerts and matching them against a testing guideline’s evaluation criteria (we are already using STIX 2.0 as a container). This means collecting planned test results can be automated. Test evaluation is still human work, but automation can help the analyst (red teams) focus on the hard work of eliminating false positives instead of the process of collecting the result data (purple!).
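The collection step described above can be sketched as a simple match between observed alerts and each guideline’s expected criterion. Everything below (field names, alert strings) is hypothetical; a real integration would consume actual EDR/IPS events:

```python
# Hypothetical sketch of automated result collection: pre-match incoming
# alerts against each testing guideline's expected evaluation criterion.
guidelines = {
    "TID-001": {"expected_alert": "credential-dumping-detected"},
    "TID-002": {"expected_alert": "lateral-movement-detected"},
}
observed_alerts = {"credential-dumping-detected"}

results = {
    tid: ("alert captured" if g["expected_alert"] in observed_alerts
          else "no alert - needs analyst review")
    for tid, g in guidelines.items()
}
# The analyst still evaluates the "no alert" cases (and weeds out false
# positives); automation only collects and pre-matches the data.
```

This keeps the human in the evaluation loop while removing the mechanical collection work.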
Remember, we are representing ATT&CK data in our data model as STIX 2.0 objects; this means that CTI data related to specific adversaries can be integrated to help the analyst re-evaluate their testing plan coverage. Coverage reports would be auto-updated to reflect any gaps in the testing plan. Closing those gaps is (also) still hard human work; the tool, however, can aid the analyst by flagging them.
The Overall Image
This is (initially) what our data model looks like (the solution); in upcoming write-ups, I will build on top of it for a more complete picture.
We started by building our ATT&CK data in the form of STIX 2.0 objects, then stacked our “adversary planning” data on top, then integrated the two through their references (AID: ATT&CK technique identifier, PID: emulation plan identifier, TID: testing guideline identifier).
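The join between the two layers can be sketched as follows; the AID/PID/TID names come from the article, while the concrete values and dictionary shapes are illustrative:

```python
# Sketch of linking the STIX layer and the planning layer through their
# identifiers (AID/PID/TID). Values and shapes are illustrative only.
techniques = {"T1003": {"name": "Credential Dumping"}}       # AID -> STIX data
plans = {"PID-1": {"name": "APT3 emulation plan"}}           # PID -> plan
tests = [{"tid": "TID-001", "aid": "T1003", "pid": "PID-1"}] # links both layers

# Resolving one test back to its technique and plan is a two-key lookup
# (in SQL terms, a join across the two layers on AID and PID).
resolved = [
    {"tid": t["tid"],
     "technique": techniques[t["aid"]]["name"],
     "plan": plans[t["pid"]]["name"]}
    for t in tests
]
```

Because the planning layer only stores identifiers, updating the imported ATT&CK/STIX data does not require touching the emulation plans themselves.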
For the sake of simplicity, I avoided showing the full database structure; the full details, however, are documented as embedded comments (starting with --) in the SQL script used to create the database (https://github.com/nshalabi/ATTACK-Tools/blob/master/attack_view_db_structure.sql) and on the GitHub main page (https://github.com/nshalabi/ATTACK-Tools).
Most of the input and feedback I received was related to additional data to be attached to the data model; some of it I have already captured in this blog entry, and some I am still evaluating. I will come back and make the needed edits accordingly.
You can send me your feedback directly by email at firstname.lastname@example.org, on Twitter at @nader_shalabi, or through the GitHub page of the project (https://github.com/nshalabi/ATTACK-Tools).