GoodFATR: A Platform for Automated Threat Report Collection and IOC Extraction

@article{Caballero2022GoodFATRAP,
  title={GoodFATR: A Platform for Automated Threat Report Collection and IOC Extraction},
  author={Juan Caballero and Gibran G{\'o}mez and Srdjan Matic and Gustavo S{\'a}nchez and Silvia Sebasti{\'a}n and Arturo Villacanas},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.00042}
}
To adapt to a constantly evolving landscape of cyber threats, organizations actively need to collect Indicators of Compromise (IOCs), i.e., forensic artifacts that signal that a host or network might have been compromised. IOCs can be collected through open-source and commercial structured IOC feeds, but they can also be extracted from a myriad of unstructured threat reports written in natural language and distributed through a wide array of sources such as blogs and social media. This work… 
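To make the extraction task concrete, the following is a minimal, hypothetical Python sketch of regex-based IOC extraction from unstructured report text. It is not GoodFATR's extractor; the IOC_PATTERNS table and the extract_iocs helper are illustrative assumptions, and a real tool must handle many more indicator types as well as "defanged" notations such as hxxp:// or 1.2.3[.]4.

```python
import re

# Illustrative regular expressions for a few common IOC types.
# A production extractor covers many more types and defanged forms.
IOC_PATTERNS = {
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5":    re.compile(r"\b[a-fA-F0-9]{32}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "url":    re.compile(r"\bhttps?://[^\s\"'<>]+", re.IGNORECASE),
}

def extract_iocs(text: str) -> dict[str, set[str]]:
    """Return the indicators of each type found in a report's text."""
    return {kind: set(rx.findall(text)) for kind, rx in IOC_PATTERNS.items()}

if __name__ == "__main__":
    # Toy report text; the hash is the well-known SHA256 of the empty string.
    report = (
        "The sample (SHA256 e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855) "
        "beacons to http://evil.example.com/gate.php hosted at 203.0.113.7."
    )
    for kind, values in extract_iocs(report).items():
        if values:
            print(kind, sorted(values))
```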


Watch Your Back: Identifying Cybercrime Financial Relationships in Bitcoin through Back-and-Forth Exploration

Back-and-forth exploration, a novel automated Bitcoin transaction tracing technique for identifying cybercrime financial relationships, uncovers a wealth of services used by the malware, including 44 exchanges, 11 gambling sites, 5 payment service providers, 4 underground markets, 4 mining pools, and 2 mixers.
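To give an intuition for what exploring the Bitcoin transaction graph in both directions might look like, here is a simplified sketch under assumed data structures, not the paper's implementation: back_and_forth, inputs_of, outputs_of, and max_hops are hypothetical names standing in for whatever blockchain interface is actually used.

```python
from collections import deque
from typing import Callable, Iterable, Set

def back_and_forth(
    seeds: Iterable[str],
    inputs_of: Callable[[str], Set[str]],   # addresses that sent funds to addr (backward step)
    outputs_of: Callable[[str], Set[str]],  # addresses that received funds from addr (forward step)
    max_hops: int = 2,
) -> Set[str]:
    """Explore the transaction graph backward and forward from the seed addresses.

    A plain breadth-first traversal: each frontier address is expanded in both
    directions, so related services (exchanges, mixers, markets, ...) that sit
    a few hops away from the seeds can still be reached.
    """
    seen: Set[str] = set(seeds)
    frontier = deque((addr, 0) for addr in seeds)
    while frontier:
        addr, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for neighbor in inputs_of(addr) | outputs_of(addr):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return seen
```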

References

Showing 1-10 of 52 references

IoCMiner: Automatic Extraction of Indicators of Compromise from Twitter

A new scalable framework, IoCMiner, is presented to automatically extract cyber threat intelligence (CTI), in particular Indicators of Compromise, from Twitter, using a combination of graph theory, machine learning, and text mining techniques.

#Twiti: Social Listening for Threat Intelligence

By analyzing the IOCs collected by Twiti from various angles, it is found that Twitter captures ongoing malware threats, such as Emotet variants and malware distribution sites, better than other public threat intelligence (TI) feeds.

Acing the IOC Game: Toward Automatic Discovery and Analysis of Open-Source Cyber Threat Intelligence

By correlating the IOCs mined from articles published over a 13-year span, this study sheds new light on the links across hundreds of seemingly unrelated attack instances, particularly their shared infrastructure resources, as well as the impacts of such open-source threat intelligence on security protection and the evolution of attack strategies.

ChainSmith: Automatically Learning the Semantics of Malicious Campaigns by Mining Threat Intelligence Reports

The effectiveness of different persuasion techniques used to entice users into downloading payloads is studied, finding that campaigns usually start from social engineering and that the "missing codec" ruse is a common persuasion technique, generating the most suspicious downloads each day.

Automatic Extraction of Indicators of Compromise for Web Applications

This paper proposes for the first time an automated technique to extract and validate IOCs for web applications, by analyzing the information collected by a high-interaction honeypot, and shows that this approach has several advantages compared with traditional techniques used to detect malicious websites.

Extractor: Extracting Attack Behavior from Threat Reports

The evaluation results show that Extractor can extract concise provenance graphs from CTI reports and show that these graphs can successfully be used by cyber-analytics tools in threat-hunting.

TTPDrill: Automatic and Accurate Extraction of Threat Actions from Unstructured Text of CTI Sources

This paper develops automated and context-aware analytics of cyber threat intelligence to accurately learn attack patterns (TTPs) from commonly available CTI sources, so that cyber defense actions can be implemented in a timely manner, and presents a novel threat-action ontology that is sufficiently rich to capture the specifications and context of malicious actions.
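As a rough, hypothetical illustration of mapping free-text report sentences to threat-action labels (a naive keyword stand-in, not TTPDrill's ontology-driven NLP pipeline), the sketch below tags sentences with ATT&CK-style technique labels; TECHNIQUE_KEYWORDS and tag_threat_actions are invented for this example.

```python
import re

# Toy keyword-to-technique map; a real threat-action ontology is far richer
# and is matched with NLP rather than plain substring search.
TECHNIQUE_KEYWORDS = {
    "T1566 Phishing":                           ["spearphishing", "phishing email", "malicious attachment"],
    "T1059 Command and Scripting Interpreter":  ["powershell", "cmd.exe", "shell command"],
    "T1041 Exfiltration Over C2 Channel":       ["exfiltrate", "upload stolen", "send data to c2"],
}

def tag_threat_actions(report_text: str) -> dict[str, list[str]]:
    """Return, per technique label, the sentences mentioning one of its keywords."""
    sentences = re.split(r"(?<=[.!?])\s+", report_text)
    hits: dict[str, list[str]] = {}
    for sentence in sentences:
        lowered = sentence.lower()
        for technique, keywords in TECHNIQUE_KEYWORDS.items():
            if any(kw in lowered for kw in keywords):
                hits.setdefault(technique, []).append(sentence.strip())
    return hits
```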

Reading the Tea Leaves: A Comparative Analysis of Threat Intelligence

This paper formally defines a set of metrics for characterizing threat intelligence data feeds, uses these measures to systematically characterize a broad range of public and commercial sources, and grounds its quantitative assessments with external measurements to qualitatively investigate issues of coverage and accuracy.
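As a hedged illustration of the kind of feed-level statistics such a comparison relies on (a minimal sketch of my own, not the paper's metric definitions), the snippet below reports the volume of each feed and the pairwise Jaccard overlap between feeds; feed_metrics and the sample data are assumptions.

```python
from itertools import combinations

def feed_metrics(feeds: dict[str, set[str]]) -> None:
    """Print volume and pairwise-overlap statistics for IOC feeds.

    `feeds` maps a feed name to the set of indicators it published over some
    observation window; overlap is measured as the Jaccard index of two feeds.
    """
    for name, iocs in feeds.items():
        print(f"{name}: {len(iocs)} indicators")
    for a, b in combinations(feeds, 2):
        union = feeds[a] | feeds[b]
        jaccard = len(feeds[a] & feeds[b]) / len(union) if union else 0.0
        print(f"overlap({a}, {b}) = {jaccard:.2%}")

if __name__ == "__main__":
    # Toy example with made-up indicator values.
    feed_metrics({
        "feed_a": {"1.2.3.4", "evil.example.com", "5.6.7.8"},
        "feed_b": {"1.2.3.4", "bad.example.net"},
    })
```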

Unifying Privacy Policy Detection

A toolchain to process website privacy policies and prepare them for research purposes is developed, using natural language processing and machine learning to automatically determine whether given texts are privacy or cookie policies.
...