ATT&CK™, STIX™, a Data Model and ATT&CK-Tools

In this article and, hopefully, the following ones, I will discuss the ATT&CK™-Tools project (on GitHub) in the form of multiple “use cases,” and explain how the project, as a solution, addresses them. I thought that following this approach would allow more room for open discussion, which can feed back as enhancement decisions for the toolset. The use cases are accumulated knowledge and input from the community; although they might not be valid outside the context of a particular organization, I figured that sharing the way I approach the problem to arrive at the solution might be useful to others.

Emphasis On the Data Model

As of this writing, the ATT&CK-Tools Git repository consists of:

  • ATT&CK™ View: an adversary emulation planning tool
  • ATT&CK™ Data Model: a relational data model for ATT&CK™ and STIX™ 2.0

However, I would like to focus more on the data model, as gaining an understanding of the concept might enable others to derive their own or tailor the existing one to their requirements and needs.

Initially, I started working on a simple model for ATT&CK that, to some extent, worked fine. However, thinking about other scenarios where this data concept could be more useful led me to revise the existing design within the context of STIX 2.0 (I will discuss this further with other use cases). So I ended up with the final data model being a STIX 2.0 relational data object model with imported ATT&CK data.
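To make that concrete, here is a rough sketch (not the actual ATT&CK-Tools import code) of what an ATT&CK technique looks like once represented as a STIX 2.0 attack-pattern object; the UUID and dates are made up for illustration:

```python
# A minimal STIX 2.0 "attack-pattern" object standing in for an ATT&CK
# technique; the UUID and timestamps are illustrative only.
technique = {
    "type": "attack-pattern",
    "id": "attack-pattern--11111111-2222-4333-8444-555555555555",
    "created": "2017-05-31T21:30:19.735Z",
    "modified": "2017-05-31T21:30:19.735Z",
    "name": "Credential Dumping",
    "external_references": [{
        "source_name": "mitre-attack",
        "external_id": "T1003",  # the ATT&CK technique identifier
        "url": "https://attack.mitre.org/techniques/T1003",
    }],
    "kill_chain_phases": [
        {"kill_chain_name": "mitre-attack", "phase_name": "credential-access"},
    ],
}

# The external_id is the natural key for joining STIX objects with
# relational planning tables.
aid = technique["external_references"][0]["external_id"]
print(aid)  # T1003
```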

The benefits of having a data model for ATT&CK will be addressed through these use cases; in a nutshell:

  • Having a data model facilitates integration between the various solutions and helps ATT&CK-enable existing controls. If you are looking to integrate with ATT&CK, you don’t have to wait (get stuck) until your vendor provides this to you. On the other side of the fence, a data model can also help when there is an overlap in tool coverage.
  • It helps derive a statistical model that works “for you”; further, this model could be enriched by adding more relevant data for more insight (we already deal with vast amounts of data).
  • Flexibility in implementation: I would recommend not getting caught up with the underlying technology at this stage (relational vs. NoSQL vs. your SIEM data store, or a combination of all); the technology selection usually comes at the end.
  • There is no one size fits all; having a data model, however, allows for introducing (plugging in) more specific, contextual information that relates to organizations, operations, teams, etc.
  • Enabling automation (more on this shortly).
  • Enabling (or facilitating) contribution, not just at the level of ATT&CK-specific data, such as ATT&CK techniques, but any information that builds on top of the framework, for example, adversary emulation plans (more on this in upcoming write-ups). This contribution could be public or internal.

So, without further ado, let’s start with the first set of use cases.

Adversary Emulation Use Cases

(1) Measured plan coverage: adversary emulation planning, from beginning to end and continuously, can be enriched with more relevant information that acts as metadata or attributes; this data can aid in deriving measurements. Examples include:

  • Tools used to test ATT&CK techniques
  • Testing results
  • Start and end dates (were there any new CTI reports after that date that might require re-evaluating the testing plan?)
  • Captured indicators
  • Lessons learned
  • Investment (Yes! Money, time and resources)
  • Targets (and, depending on requirements, the list can be longer)

Example: referencing the APT3 plan, let’s take as an example the “Coverage View” in the ATT&CK View application, which shows “reported” vs. “planned” vs. “tested” techniques (ATT&CK View comes with a ready-made APT3 plan developed by the MITRE team).


There are 40 reported ATT&CK techniques associated with APT3; however, the developed plan says that only 16 were planned for, due to the availability of reported public information. In case more information becomes available (publicly or privately), this view will help re-evaluate the plan coverage and highlight the gaps in the assessment.
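That gap calculation is simple set arithmetic over technique identifiers; here is a minimal sketch mirroring the APT3 numbers above (the IDs themselves are placeholders, not the real APT3 technique list):

```python
# Sketch: coverage as set arithmetic over technique identifiers.
# The technique IDs are placeholders, not the actual APT3 list.
reported = {f"T{1000 + i}" for i in range(40)}  # 40 reported techniques
planned = set(sorted(reported)[:16])            # 16 planned for testing

gap = reported - planned                         # techniques with no test yet
coverage = len(planned) / len(reported) * 100

print(len(gap), f"{coverage:.0f}%")  # 24 40%
```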

(2) Organization: one adversary emulation plan might require multiple, frequent tests (iterations) based on the availability of CTI data (public or private), which can quickly end up as numerous unrelated testing plans. However, assigning tests under one “logically grouped” plan can help eliminate this inconsistency. (Others might decide that the logical grouping should be organized per “Targets” under testing, or per “Client” in a multi-tenant MSSP; you decide.)
Example: back to the APT3 testing plan, you will find that one ATT&CK technique can have multiple testing guidelines, each using a specific toolset for testing and evaluation (internal Windows tools, Cobalt Strike, and Metasploit).

Since we are on the topic: the content of the APT3 plan developed by the MITRE team is unchanged; what I did is restructure the tests so they can be imported into the database. As a bonus, in the “Main View” I created two methods to “visualize” the testing plan (using the “switch view” button):

  • [One] “ATT&CK Technique” -> [One to Many] “Testing Guidelines,” this helps in visualizing the tests (planned iterations) made to test one ATT&CK Technique
  • [One] “Testing Guideline” -> [One to Many] “ATT&CK Techniques,” this helps in visualizing the techniques meant to be tested by a single testing guideline
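Both directions fall out of a standard many-to-many relationship through a junction table; a minimal sqlite3 sketch (the table and column names are mine, not the actual ATT&CK-Tools schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE technique (aid TEXT PRIMARY KEY, name TEXT);
CREATE TABLE guideline (tid TEXT PRIMARY KEY, title TEXT);
-- junction table: one technique <-> many guidelines, and vice versa
CREATE TABLE technique_guideline (aid TEXT, tid TEXT);
INSERT INTO technique VALUES ('T1003', 'Credential Dumping');
INSERT INTO guideline VALUES
    ('TG-1', 'Dump creds with built-in Windows tools'),
    ('TG-2', 'Dump creds with Cobalt Strike');
INSERT INTO technique_guideline VALUES ('T1003','TG-1'), ('T1003','TG-2');
""")

# "ATT&CK Technique" -> "Testing Guidelines" direction; the reverse view
# is the same join filtered by tid instead of aid.
rows = con.execute("""
    SELECT g.title FROM guideline g
    JOIN technique_guideline tg ON tg.tid = g.tid
    WHERE tg.aid = 'T1003' ORDER BY g.tid
""").fetchall()
print(len(rows))  # 2
```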


(3) False analysis: going back to the “Coverage View,” this time select APT28 instead of APT3. This kind of “coverage reporting” might not be accurate (it should have a context), as the tests reported as “planned” were developed in the context of the APT3 adversary emulation plan, not APT28. To eliminate this type of false indication (or ambiguity, if you like), ATT&CK View associates more metadata with testing guidelines in the form of “Tags” (shown in the previous screen capture as gray circles; colored tagging is also supported).


(4) Knowledge base for all: our data model can act as a shared knowledge base contributed to by Red/Blue (and hopefully Purple) teams. Organizations spend time, money, and resources on security; capturing this knowledge is a good ROI in my opinion.
More: you can extend this KB by importing the work of others, such as red team playbooks, and have it all in one place to be referenced.
Example: here is a quick search for the “dsquery” tool in ATT&CK View; from here, you can retrieve the full details of the testing guideline (and the plan).


(5) Automation (work in progress): a testing guideline usually has associated evaluation criteria, such as a flagged EDR event, an IPS alert, etc. The process of evaluating any test can be automated by capturing those alerts and matching them with a testing guideline’s evaluation criteria (we are already using STIX 2.0 as a container). This means collecting planned test results can be automated. Test evaluation is still human work, but automation can help the analysts (red teams) focus on the hard work of eliminating false positives instead of the process of collecting the data (Purple!)
Remember, we are representing ATT&CK data in our data model as STIX 2.0 objects; this means that CTI data related to specific adversaries can be integrated to help the analyst re-evaluate their testing plan coverage. Coverage reports would be auto-updated to reflect any gaps in the testing plan, which is (also) still hard human work; the tool, however, can aid the analyst by “flagging” the gap.
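As a rough sketch of that matching step, assuming each testing guideline carries a simple set of expected alert signatures (the field names and signature format are hypothetical, not the project’s actual schema):

```python
# Sketch: automatically match captured alerts against each testing
# guideline's evaluation criteria; field names are hypothetical.
guidelines = [
    {"tid": "TG-1", "criteria": {"EDR:lsass-access", "IPS:cred-dump"}},
    {"tid": "TG-2", "criteria": {"EDR:beacon-spawn"}},
]
captured_alerts = {"EDR:lsass-access", "AV:generic-detection"}

def evaluate(guidelines, alerts):
    """Return, per guideline, which evaluation criteria were observed."""
    return {g["tid"]: g["criteria"] & alerts for g in guidelines}

results = evaluate(guidelines, captured_alerts)
print(sorted(results["TG-1"]), sorted(results["TG-2"]))
# ['EDR:lsass-access'] []
```

The analyst still judges whether a match is a true positive; the automation only removes the collection chore.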

The Overall image

This is (initially) what our data model looks like (the solution); in upcoming write-ups, I will build on top of it for a more complete picture.

We started by building our ATT&CK data in the form of STIX 2.0 objects, then stacked our “Adversary Planning” data on top, then integrated the two through their references (AID: ATT&CK Technique Identifier, PID: Emulation Plan Identifier, TID: Testing Guideline Identifier).


For the sake of simplicity, I avoided showing the full database structure; the full details, however, are documented as embedded comments (starting with --) in the SQL script used to create the database, and on the GitHub main page.

Final words

Most of the input and feedback I received was related to additional data to be attached to the data model; some of it I have already captured in this blog entry, and some I am still evaluating. I will come back and do the needed editing accordingly.

You can send me your feedback directly by email, on Twitter (@nader_shalabi), or through the GitHub page of the project.

Sysmon View 1.4 released!


My last blog entry was about Sysmon View 1.2; since then, Sysmon View has gone through many changes and updates related to bug fixes, enhancements, and recently, the addition of the new WMI events.

WMI Events and All Events View

Sysmon View can now import the WMI events (WMIFilter, WMIConsumer, and WMIBinding); however, there was no way to actually view those events in Sysmon View directly, because the first view was meant to focus on binaries logically grouped using the GUID field, and the second view was a geo-mapping of the IP addresses from network events. This was an issue for events like the WMI and “Driver loaded” events, which led to creating the third “All Events” view…


The third view works like a pivot table by grouping related events of the same type, or of the same session (GUID); it can sort by event time and supports a detailed search through any imported events. Furthermore, expanding events provides access to their IDs, which look like hyperlinks; by clicking an ID number (this is an ID from the database itself, not Sysmon-generated data) you can invoke the detailed view of that event, view related sessions, and query VirusTotal for more information (hashes and IP addresses).
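The grouping behind this view can be pictured as a pivot over the event type and session GUID fields; a small sketch with made-up event records (the field names are illustrative, not the tool’s schema):

```python
from collections import defaultdict

# Sketch: group flat Sysmon event records the way the "All Events" view
# does, by event type, after sorting by event time.
events = [
    {"type": "ProcessCreate", "guid": "g1", "time": "10:00:01"},
    {"type": "NetworkConnect", "guid": "g1", "time": "10:00:02"},
    {"type": "ProcessCreate", "guid": "g2", "time": "10:00:03"},
]

by_type = defaultdict(list)
for event in sorted(events, key=lambda e: e["time"]):  # sorted by event time
    by_type[event["type"]].append(event)

print(len(by_type["ProcessCreate"]), len(by_type["NetworkConnect"]))  # 2 1
```

Grouping by session instead is the same loop keyed on the GUID field.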

Here is a screenshot of an imported Sysmon log from a ransomware running session, with events grouped by type:


Searching for the word “delete” reveals the use of vssadmin.exe with the same word passed as an argument; from there, I was able to track back the entire sequence of events related to that session…


Open Database

Sysmon View generates an SQLite database for all the imported events; this database can be loaded by any instance of Sysmon View (for example, passed from another analyst). The database can also be read by any application or script; it contains summaries of hashes, executables, IP addresses, ports, geo mappings, and registry entries, which are all logically linked through a binary file name or a session (executable GUID).
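Since it is a plain SQLite file, any script can open it; here is a short sketch that enumerates its tables through the standard sqlite_master catalog (I am deliberately not assuming any specific table names, and the demo file below is a throwaway stand-in):

```python
import os
import sqlite3
import tempfile

def list_tables(db_path):
    """Enumerate table names in a Sysmon View database (or any SQLite file)."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
    ).fetchall()
    con.close()
    return [r[0] for r in rows]

# Demonstrated on a throwaway file; point db_path at a real Sysmon View
# database to inspect its actual tables.
path = os.path.join(tempfile.gettempdir(), "sysmonview_demo.db")
if os.path.exists(path):
    os.remove(path)
con = sqlite3.connect(path)
con.executescript("CREATE TABLE hashes (hash TEXT, image TEXT);")
con.close()
print(list_tables(path))  # ['hashes']
```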


In case the Sysmon View UI is not sufficient, another UI can be created on top of the database, with Sysmon View used as an import utility (work in progress to create a command-line interface).

Sysmon Shell – Release 1.1

I have just uploaded a new version of Sysmon Shell (v1.1)


Here is the list of updates:

  • Added new configuration options to include or exclude an entire event log (surprisingly missing in version 1.0), for example:
    <PipeEvent onmatch="include"/> or <PipeEvent onmatch="exclude"/>
  • If you are using Sysmon for malware analysis, you might find the last tab, marked “Logs Export,” useful, as it allows exporting Sysmon logs to an XML file (the exported XML log files can be loaded into Sysmon View for analysis and visualization). The export feature has 3 options:
    • Export only
    • Export and clear Sysmon event log (to mark new analysis starting point)
    • Export, backup evtx file, and clear the event log


  • In case you are applying the Sysmon configuration using Sysmon Shell and not Sysmon directly, the hash of the Sysmon image used to run the configuration command will be shown in the preview pane


The new version can be found on my GitHub.

Please contact me to report any bugs or suggestions

Visualizing & Tracking Sysmon events with Sysmon View 1.2

With Sysmon View 1.1, I was able to view Sysmon logs visually. However, the drawn image was somewhat incomplete, as I was unable to track the entire process hierarchy (maybe because I was busy laying down the foundation). With version 1.2, following a process through its hierarchy is now possible; additionally, when investigating an event, it is now easy to track back to all other events related (associated) to the same session.

Example: in the following image, let’s track the history of events related to “AcroRd32.exe” process


Double-clicking the “process create” event reveals the details of this event (notice that the “Parent process GUID” is highlighted as a hyperlink); the “event details” window shows “Explorer.exe” as the parent process…


New in Sysmon View 1.2: before proceeding further, let’s talk about the new event details window. In this window, you can retrieve all of an event’s data and query VirusTotal for hash information, as shown in the next screenshot (you will have to get an API key to enable VirusTotal queries). In the case of network events, you can query VirusTotal for IP and domain information, including whois data, in addition to “jumping” to the logged registry keys in regedit.


Now back to our topic: clicking the “Parent process GUID” link will bring up the parent process session (in this example, Explorer.exe) and all events associated with it


To dig deeper, repeat the same steps recursively: let’s go to the details of the “process create” event of “Explorer.exe”, which shows the parent process as “userinit.exe”


Again… let’s get the details of the “userinit.exe” parent process through the details of its “process create” event…


This reveals “winlogon.exe” as the parent process; let’s dig further behind the parent process “winlogon.exe” details…


You got the idea…

Now you might be asking what the “Process GUID” hyperlink does: it will re-draw (visualize) the same session under investigation again. So why the duplication? Well, it’s not duplication; this feature is needed for the “Map View”…


When selecting a destination country (the Map View will be available if you enabled the geo IP setting when importing the XML log data), all network events related to that “destination” will be listed; then, to track back to all events within the context of a running session, click the “Process GUID” field…


And from there, it’s easy to track that process hierarchy or any other event associated with it

For any questions or suggestions, please contact me by email.

Updated Sysmon View

Here is the latest list of updates to Sysmon View (v1.1); the tool incorporates much of the feedback received (thank you all), bug fixes, and new features:

  • Bug fixes related to internal database connectivity errors
  • Bug fixes related to the UI not being reset after resetting data
  • Bug fixes related to the way information about binary images (executables) is collected
  • Sysmon View design is now based on multiple visual modules (currently there are two modules)
  • “Process View” got an additional “filtering” option that shows images (executables) reported with specific selected event types. For example, to view the timeline of a process while excluding its network and “Image loaded” events, a filter can be applied to narrow down the results (as shown in the following screenshot), which in turn helps narrow down the number of listed binaries to be investigated.

  • Map View: the new view displays network events based on destination country (Geo IP lookup), this will work only if the “geo-location” option was selected during the import process. Selecting any country will display the relevant network events.

For any questions or suggestions, please contact me by email.

Sysmon View

Although the noise generated by Sysmon can be reduced through filters applied in its XML configuration, it is still somewhat too much to look at (I usually tend to log everything when doing reverse engineering).

The main idea behind Sysmon View is to aid in the analysis of Sysmon logs using “visual” reporting modules, which are based on specific (useful) use cases.

The utility is still in its initial stages; I am releasing it with the first reporting module, which can:

Step 1 – Filter binary images (executables) according to their file name

Step 2 – Further filter binary image files (selected from step 1) according to their path, which might be helpful in investigating anomalies in images location (Images with the same name running from multiple locations)

Step 3 – Visualize Sysmon events (related to binaries filtered through step 1 and 2), but per image logged session (this is Sysmon process GUID in action)

The utility can then help “visually” line up (sorted by time) the different events associated with a particular session.

To get started, you first need to export the Sysmon events to an XML file using WEVTUtil (I could have designed the tool to connect and pull the logs from the server directly, but Sysmon View was not designed to be a live log analysis tool):

WEVTUtil query-events “Microsoft-Windows-Sysmon/Operational” /format:xml /e:sysmonview > eventlog.xml

Once exported, run Sysmon View and import the generated file “eventlog.xml” (or the name you selected). Note that this might take some time, depending on the size of the logged data. This needs to be done once per log file; subsequent runs do not need any imports, and the data can be reloaded using the File -> Load existing data menu option, which will load the previously saved data again.
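If you prefer scripting over the UI, the exported XML can also be parsed directly; here is a sketch that counts events per EventID (the sample below is a trimmed, hand-written stand-in for a real WEVTUtil export, with the root element matching the /e:sysmonview switch shown above):

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Windows event XML namespace used by WEVTUtil exports.
NS = "{http://schemas.microsoft.com/win/2004/08/events/event}"

# Trimmed, hand-written sample standing in for a real export.
sample = """<sysmonview>
  <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
    <System><EventID>1</EventID></System>
  </Event>
  <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
    <System><EventID>3</EventID></System>
  </Event>
</sysmonview>"""

root = ET.fromstring(sample)
counts = Counter(
    event.find(f"{NS}System/{NS}EventID").text
    for event in root.iter(f"{NS}Event")
)
print(counts["1"], counts["3"])  # 1 1
```

Swap `ET.fromstring(sample)` for `ET.parse("eventlog.xml").getroot()` to run it against a real export.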


Sysmon View will build an internal database (which, by the way, is an SQLite database file); I will discuss its structure and how to utilize its content in upcoming posts.


Once the log file is imported, you can start searching through the collected binary images, which can be easily filtered



Double-clicking any of the binary images will show the path location(s) reported by Sysmon, which will help in identifying anomalies in path location at this stage as previously outlined


Double-clicking an image path entry will cause the tool to collect all sessions (again, this is the process GUID in action) for that image entry that was running from that location


Double-clicking any of the sessions entries will generate a tree of events sorted by event’s logged time


Double-clicking any event block will reveal more details in a floating window (you will notice some additional entries that do not exist in Sysmon XML schema, as previously mentioned, I will elaborate more on this and the internal database structure in upcoming write-ups)


Sysmon View can be downloaded (32 & 64-bit builds) from GitHub. For any questions or suggestions, please contact me by email.

Sysmon Shell

Sysmon Shell can aid in writing and applying Sysmon XML configurations through a simple GUI; it can also be used to learn about the Sysmon configuration options available with each release, instead of digging through the XML schema. In a nutshell:

  • Sysmon Shell can load Sysmon XML configuration files: with version 1.0, I am only supporting the latest schema, v3.30, for Sysmon v6.01; future updates to Sysmon will be supported. Also, the tool won’t load Sysmon’s configuration from the registry.
  • It can export/save the final XML to a file.
  • It can apply the generated XML by calling Sysmon.exe -c directly (creating a temp XML file in the same folder where Sysmon is installed); for this reason, it will need elevated privileges (a requirement inherited from Sysmon). The output of applying the configuration will be displayed in the preview pane (Sysmon output)
  • XML Configuration can be previewed before saving in the preview pane
  • The utility contains descriptions of all event types, taken from the Sysmon Sysinternals home page

What it won’t do: warn you about include/exclude conflicts or attempt to validate the rules itself. However, once the configuration is applied, the preview pane will display the output from Sysmon (this is the output of the Sysmon -c command), from which errors can be identified
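Conceptually, applying a configuration and surfacing Sysmon’s own output boils down to running Sysmon -c and capturing the process output; a hedged sketch of that idea (the paths, and Sysmon being installed, are assumptions, and this is not Sysmon Shell’s actual code):

```python
import subprocess

def apply_config(sysmon_path, config_path):
    """Run 'Sysmon -c <config>' and return (exit code, combined output).

    Needs elevation, exactly as running Sysmon directly would.
    """
    proc = subprocess.run(
        [sysmon_path, "-c", config_path],
        capture_output=True, text=True,
    )
    # Sysmon prints schema and rule errors to its output; keep all of it
    # so the caller can display it (as the preview pane does).
    return proc.returncode, proc.stdout + proc.stderr

# Example call (not executed here):
# rc, output = apply_config(r"C:\Tools\Sysmon.exe", "sysmonconfig.xml")
```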

Following is a screenshot of Sysmon Shell in action


Sysmon Shell can be downloaded from my GitHub.

Please contact me to report any bugs or suggestions