Blacklist Ecosystem Analysis: Spanning Jan 2012 to Jun 2014

@inproceedings{Metcalf2015BlacklistEA,
  title={Blacklist Ecosystem Analysis: Spanning Jan 2012 to Jun 2014},
  author={Leigh Metcalf and Jonathan M. Spring},
  booktitle={Proceedings of the 2nd ACM Workshop on Information Sharing and Collaborative Security},
  year={2015}
}
Motivation: We compare the contents of 86 Internet blacklists to provide a view of the whole ecosystem of blocking network touch points and blacklists. We aim to formalize and evaluate practitioner tacit knowledge of the fatigue of playing "whack-a-mole" against resilient adversary resources.

Method: Lists are compared to lists of the same data type (domain name or IP address). Different phases of the study use different comparisons. Comparisons include how many lists an indicator is unique to…
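To make the comparison method concrete, the sketch below (a minimal illustration, not the authors' code; the feed names and entries are hypothetical) counts how many lists each indicator appears on and computes the pairwise overlap of two lists of the same data type:

```python
# Minimal sketch of the abstract's comparison method, assuming each
# blacklist has already been parsed into a set of indicators of one
# data type (domain names here). Feed names and entries are hypothetical.
from collections import Counter

def uniqueness_profile(blacklists: dict) -> Counter:
    """Tally how many indicators appear on exactly k lists."""
    appearances = Counter()
    for entries in blacklists.values():
        for indicator in entries:
            appearances[indicator] += 1
    return Counter(appearances.values())

def pairwise_overlap(a: set, b: set) -> float:
    """Jaccard overlap between two lists of the same data type."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

feeds = {
    "feed_a": {"evil.example", "bad.example"},
    "feed_b": {"bad.example", "worse.example"},
}
print(uniqueness_profile(feeds))  # Counter({1: 2, 2: 1}): 2 unique, 1 shared
print(pairwise_overlap(feeds["feed_a"], feeds["feed_b"]))  # 0.3333...
```

The uniqueness profile speaks directly to the question of how many lists an indicator is unique to.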

Citations

Blocklist Babel: On the Transparency and Dynamics of Open Source Blocklisting

A transparency and content analysis of 2,093 free and open-source blocklists sheds light on their nature, dynamics, and inter-provider relationships, and concludes with recommendations on transparency, accountability, and standardization.

A Lustrum of Malware Network Communication: Evolution and Insights

For the vast majority of malware samples, network traffic provides the earliest indicator of infection, several weeks and often months before the malware sample itself is discovered; network defenders should therefore rely on automated malware analysis to extract indicators of compromise, not to build early detection systems.

The Ecosystem of Detection and Blocklisting of Domain Generation

A repeatable evaluation and comparison of the available open source detection methods is presented, and it is recommended that Domain Generation Algorithm (DGA) detection be narrowly targeted to specific algorithms and specific malware families, rather than attempting general-purpose detection of machine-generated domains.
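To illustrate the recommendation, the sketch below contrasts a narrowly targeted, family-specific detector with a general-purpose entropy detector; the regex, the threshold, and the example domains are hypothetical stand-ins, not the detectors evaluated in the paper:

```python
# Hypothetical contrast between a narrowly targeted, family-specific DGA
# detector and a general-purpose one; neither is from the paper.
import math
import re
from collections import Counter

# Family-specific: match one (made-up) family's exact format, e.g.
# exactly 12 lowercase consonants under .net.
FAMILY_X = re.compile(r"^[bcdfghjklmnpqrstvwxz]{12}\.net$")

def detect_family_x(domain: str) -> bool:
    return FAMILY_X.match(domain) is not None

# General-purpose: flag high character entropy in the leftmost label,
# which also fires on benign machine-named domains (CDN hosts, hashes).
def shannon_entropy(label: str) -> float:
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def detect_generic(domain: str, threshold: float = 3.5) -> bool:
    return shannon_entropy(domain.split(".")[0]) > threshold

print(detect_family_x("bcdfghjklmnp.net"))     # True: matches the family format
print(detect_generic("xk7qz91p2mtv.example"))  # True: log2(12) ~= 3.58 > 3.5
```

The family-specific check yields a precise verdict tied to one malware family, while the entropy heuristic illustrates why general-purpose detection of machine-generated names is prone to false positives.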

Rotten Apples or Bad Harvest? What We Are Measuring When We Are Measuring Abuse

Abuse is found to be positively associated with the popularity of the websites hosted and with the prevalence of popular content management systems; the study suggests adopting similar analysis frameworks in all domains where network measurement aims to inform technology policy.

Cuckoo Prefix: A Hash Set for Compressed IP Blocklists

  • D. Allen, Navid Shaghaghi
  • Computer Science
    2020 30th International Telecommunication Networks and Applications Conference (ITNAC)
  • 2020
This paper proposes a new data structure, the cuckoo prefix, for blocking IPs quickly with relatively little space, and compares the throughput and memory usage of several modern hash set and hash table implementations to determine which provides the best throughput at the lowest memory cost.
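The cuckoo-prefix internals (in particular its prefix compression) are not described in this summary, so the sketch below shows only the underlying technique: a plain two-table cuckoo hash set keyed on IPv4 addresses.

```python
# A bare-bones cuckoo hash set for IPv4 addresses, to illustrate the
# general technique; this is standard cuckoo hashing, not the paper's
# "cuckoo prefix" structure, whose compression details are not given here.
import ipaddress

class CuckooSet:
    def __init__(self, capacity: int = 1024, max_kicks: int = 500):
        self.size = capacity
        self.t1 = [None] * capacity
        self.t2 = [None] * capacity
        self.max_kicks = max_kicks

    def _h1(self, x: int) -> int:
        return x % self.size

    def _h2(self, x: int) -> int:
        return (x * 2654435761) % self.size  # Knuth multiplicative hash

    def add(self, ip: str) -> None:
        x = int(ipaddress.IPv4Address(ip))
        if self.contains(ip):
            return
        for _ in range(self.max_kicks):
            i = self._h1(x)
            x, self.t1[i] = self.t1[i], x  # place x, evict old occupant
            if x is None:
                return
            j = self._h2(x)
            x, self.t2[j] = self.t2[j], x  # evicted item goes to table 2
            if x is None:
                return
        raise RuntimeError("table full; a real implementation would rehash")

    def contains(self, ip: str) -> bool:
        x = int(ipaddress.IPv4Address(ip))
        return self.t1[self._h1(x)] == x or self.t2[self._h2(x)] == x

s = CuckooSet()
s.add("192.0.2.1")
print(s.contains("192.0.2.1"), s.contains("198.51.100.7"))  # True False
```

Lookups probe at most two slots, which gives cuckoo-style structures their predictable throughput; per the summary, the paper's variant additionally targets compressed storage of IP blocklists.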

SIRAJ: A Unified Framework for Aggregation of Malicious Entity Detectors

SIRAJ is a novel framework for aggregating the detection output of intelligence sources such as anti-malware engines. Built on the pretrain-and-fine-tune paradigm, it uses self-supervised learning to train an embedding model that converts multi-source inputs into a high-dimensional embedding.

The role of graph entropy in fault localization and network evolution

A novel approach to improving the scalability of event processing using a mathematical property of networks, graph entropy, is outlined, and a constrained model of network evolution is presented that shows better quantitative agreement with real-world networks than the preferential attachment model.
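The summary does not specify which entropy definition is used, so the example below is an illustrative stand-in: the Shannon entropy of a graph's degree distribution, one common way to assign an entropy to a network.

```python
# One common notion of graph entropy: the Shannon entropy of the degree
# distribution, H(G) = -sum_k p(k) log2 p(k). The paper may use a
# different definition; treat this as an illustrative stand-in.
import math
from collections import Counter

def degree_entropy(edges: list) -> float:
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(deg)
    dist = Counter(deg.values())  # how many nodes have each degree
    return -sum((c / n) * math.log2(c / n) for c in dist.values())

# A star on 4 nodes: one hub (degree 3), three leaves (degree 1).
star = [(0, 1), (0, 2), (0, 3)]
print(round(degree_entropy(star), 3))  # 0.811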

Summary: C-Accel Pilot-Track A1 (Open Knowledge Network): Knowledge of Internet Structure: Measurement, Epistemology, and Technology (KISMET)

  • Computer Science
  • 2020
The conclusion is that the path to better security does not lie in proposals to make global changes to the Internet protocols, but in finding operational practices that regions of the Internet can implement to improve the security profile of those regions.

Understanding the Characteristics of Public Blocklist Providers

A measurement study of public blocklist providers (PBPs) is described, analyzing them in terms of lifespan, update frequency, entry bias, and user-interface metrics.

FeedRank: A tamper-resistant method for the ranking of cyber threat intelligence feeds

FeedRank’s key insight is to rank feeds according to the originality of their content and the reuse of entries by other feeds, modelled as a graph; this allows FeedRank to find temporal and spatial correlations without requiring any ground truth or operator feedback.
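A simplified approximation of the originality-and-reuse idea is sketched below; this is not the published FeedRank algorithm (which models reuse as a graph and needs no ground truth), and the feeds and timestamps are hypothetical:

```python
# Crude approximation of the FeedRank idea, not the published algorithm:
# feed A earns credit when another feed B later lists an indicator that
# A published first. All feed contents and timestamps are hypothetical.
from collections import defaultdict

def rank_feeds(first_seen: dict) -> dict:
    """first_seen[feed][indicator] = time the feed first listed it.
    For every indicator shared by two feeds, the earlier publisher
    earns one originality point from the later (reusing) one."""
    score = defaultdict(float)
    feeds = list(first_seen)
    for i, a in enumerate(feeds):
        for b in feeds[i + 1:]:
            for ind in first_seen[a].keys() & first_seen[b].keys():
                if first_seen[a][ind] < first_seen[b][ind]:
                    score[a] += 1.0  # b reused a's earlier entry
                elif first_seen[b][ind] < first_seen[a][ind]:
                    score[b] += 1.0
    return dict(score)

feeds = {
    "original": {"evil.example": 1, "bad.example": 2},
    "copycat":  {"evil.example": 5, "bad.example": 6},
}
print(rank_feeds(feeds))  # {'original': 2.0}
```

A copier can pad its feed but cannot retroactively publish earlier than the originator, which is the intuition behind calling the ranking tamper-resistant.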

References


Blacklist Ecosystem Analysis Update: 2014

The results suggest that each blacklist describes a distinct sort of malicious activity, and support the assertion that blacklisting is not a sufficient defense; an organization needs other defensive measures to add depth, such as graylisting, behavior analysis, criminal penalties, speed bumps, and organization-specific whitelists.

Modeling malicious domain name take-down dynamics: Why eCrime pays

An ad hoc model of this competition on large, decentralized networks, using a modification of Lanchester's equations for combat, indicates that defenders should not expect to eliminate or significantly reduce malicious domain name usage without employing new digital tactics and deploying new rules in the physical world.
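For reference, the classical (unmodified) Lanchester aimed-fire equations model two opposing forces A(t) and B(t), each attrited at a rate proportional to the other's size; the paper's take-down-specific modification is not reproduced here:

```latex
% Classical aimed-fire (square-law) form:
\frac{dA}{dt} = -\beta B, \qquad \frac{dB}{dt} = -\alpha A
% These imply the invariant \alpha A^2 - \beta B^2 = \text{const},
% i.e. fighting strength scales with the square of force size.
```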

Shades of grey: On the effectiveness of reputation-based “blacklists”

This paper performs a preliminary study of one type of reputation-based blacklist, namely those used to block unsolicited email (spam), and shows that, for the network studied, these blacklists exhibit non-trivial false positives and false negatives.

Characterization of Blacklists and Tainted Network Traffic

Nine different RBLs from three different categories are used to evaluate RBL-tainted traffic at a large regional Internet Service Provider.

Critter: Content-Rich Traffic Trace Repository

Critter connects end-users willing to share data with researchers and strikes a balance between privacy risks for a data contributor and utility for a researcher.

Manufacturing compromise: the emergence of exploit-as-a-service

DNS traffic from real networks is used to provide a unique perspective on the popularity of malware families, based on the frequency with which their binaries are installed by drive-by downloads, as well as the lifetime and popularity of domains funneling users to exploits.

Global adversarial capability modeling

A model of global capability advancement, the adversarial capability chain (ACC), is proposed to meet the need in cyber risk analysis to better understand the cost for an adversary to attack a system, which directly influences the cost to defend it.

Abuse of Customer Premise Equipment and Recommended Actions

Three recommendations are presented: provide for continuous software upgrades of CPE, implement source address validation, and encourage the community to incentivize manufacturers and providers to take responsibility for the results of poor configuration and design choices.

PREDICT: a trusted framework for sharing data for cyber security research

The Protected Repository for Defense of Infrastructure against Cyber Threats (PREDICT) is described; it establishes a trusted framework for sharing real-world security-related datasets for cyber security research.

On the Design of a Cyber Security Data Sharing System

An analysis of four of the major challenges to cyber security information sharing is presented, and technical solutions based on the current state of the art that would overcome them are highlighted.