M.Sc. student in information security at Gjøvik University College. Hobby data scientist! @pcbje
A Review of The UCS-RFID System
Delivered for evaluation in the course IMT4581 - Network Security at GUC. The PDF version is available here.
Achieving satisfactory security in Radio Frequency Identification (RFID) systems has been bothering researchers for years. Though numerous systems for securing RFID communications have been proposed, they all fall short on various aspects of security and privacy. The main issue is that the low-cost items used to identify objects, called tags, do not have the computational power to perform sophisticated operations such as random number generation. This paper will review a system proposed by Basel Alomair et al. called UnConditionally Secure RFID (UCS-RFID). The system is designed to enable RFID tags to use random numbers generated by the reader in order to ensure privacy and integrity during authentication. The paper will also review how the protocol is vulnerable to a type of attack where adversaries are able to trace tags, and thereby their owners.
1 INTRODUCTION
Though Radio Frequency Identification (RFID) systems have for quite some time been projected as the next technological revolution, the technology has not yet gained the commercial foothold many anticipated. There may be several reasons for this, but the lack of a generally accepted security solution ought to be one of them. RFID systems consist of three types of components: tags, readers, and databases [1]. The tags are typically silicon-based components that may be attached to other objects for unique identification. Such identification is achieved by assigning a unique identification number to each tag. The RFID readers are computationally powerful devices that identify RFID tags using radio waves. Because of their computational power, RFID readers are also a lot more expensive than the tags. Finally, the databases are connected to the RFID reader over a network and are used to retrieve information about a given tag. The scope of this paper is limited to the security of communication between a tag and a reader.
The applications of RFID systems seem limited only by our imagination. However, as more and more applications of RFID are suggested, the concerns about the security risks associated with the technology are also growing. When reviewing the security issues with RFID, there are three classes of risks: integrity, privacy, and availability. First, integrity in RFID systems means that the tags or readers are not altered or copied in such a way that the counterpart treats them as something they are not [7]. Second, privacy means that it is difficult for malicious readers to recognize tags they are not supposed to access [8]. Finally, availability in RFID systems means that an attacker is not able to disrupt the operation of benign systems. Such disruption may be caused by physically blocking the radio waves [3] or by altering the state in either readers or tags in such a way that they are not able to communicate with each other again. The latter attack is called a desynchronization attack [7].
RFID systems are not alone in facing these kinds of risks. What makes these systems special is that the low-cost tags for the time being lack the ability to perform sophisticated computational operations such as random number generation and public key cryptography.
With the presented classes of risks and the performance restrictions of RFID systems in mind, this paper will focus on a proposed protocol called UnConditionally Secure RFID (UCS-RFID) [10]. UCS-RFID enables tags to enjoy the computational power of the reader, without having to do any cryptographic operations on their own. However, as will be presented later in the paper, UCS-RFID is not without weaknesses. In order to review UCS-RFID and its potential flaws, the remainder of the paper is structured as follows: Section 2 presents an RFID primer, describing the general RFID system in more detail. Section 3 describes the UCS-RFID system and reviews its possible weaknesses and attacks. Finally, section 4 provides a summary of the paper and some suggestions for further research on the concept of unconditionally secure RFID.
2 BACKGROUND
This section will review some of the underlying concepts and limitations in the current RFID technology. The section starts with a review of the infrastructure, before describing why we should focus on RFID security, and what models we have to evaluate the security in proposed protocols.
2.1 The RFID technology
RFID is a technology that enables contact-less and non-line-of-sight identification of objects [1]. The RFID infrastructure consists of tags to be identified and readers that can interrogate the tags. The readers are typically connected to a backend database over a network. This database may contain information regarding either an individual tag or a group of tags. Sanjay E. Sarma and other researchers connected to the Auto-ID Center summarized back in 2003 the research efforts to design a complete RFID infrastructure. The design they presented was a minimalist one where information about each tag was retrieved from a database by the reader using an Object Name System (ONS). ONS has many similarities to the Domain Name System (DNS), which is used for locating the corresponding servers for a domain such as example.org [1]. In their summary there are essentially two types of tags. First, there are the active tags, which contain an internal power supply. The other type of tag is powered by harvesting energy from the radio waves sent from the reader. These are called passive tags. Active tags are usually more computationally powerful than passive ones, but also more expensive. In this paper, only the passive tags are considered. Sarma et al. present passive tags as cheap microchips made up of silicon and a small antenna [2]. They have a limited power supply and a relatively low number of gates for doing computation [1]. Another consequence of the limited power supply is that the antenna size is limited, and thereby also the maximum operating distance of these tags [4]. The RFID readers, on the other hand, are capable of processing several hundred tags each second [1]. The details of how the reader and the tags communicate using radio waves are beyond the scope of this paper, but a review of the details is given by Sarma et al. in [1].
Currently RFID is used mostly in industrial processes such as monitoring production chains. However, it seems fairly certain that the RFID technology at some point will be used by the community in general. When at a supermarket, it would be quite useful if you could check over the Internet what you currently have in your refrigerator, or be notified when an item is about to expire. RFID has also been proposed in the medical industry, where an RFID tag attached to an unconscious person could supply paramedics and doctors with information such as allergies, or what medicine he or she is currently using. Though the latter example may be far-fetched, it illustrates how important it is to develop good security systems that ensure the integrity of the data the tags represent, and the privacy of the owner.
2.2 Aspects of security in RFID
Finding a proper way to ensure the security of RFID systems is crucial if the technology is to become the significant part of society many have anticipated. A problem with securing these systems is that the tags must be produced as cheaply as possible. As stated earlier, RFID tags are usually made up of a small antenna attached to a silicon-based microchip. Though technology ought to push toward increasing the power of low-cost RFID tags, the number of gates available for security measures seems to remain unchanged at 200-2000 gates [5, 9]. In contrast, the well-known AES crypto algorithm requires about 20000-30000 gates [2].
When designing security protocols, researchers therefore have two options: they can either anticipate that the number of gates will increase so that strong cryptographic algorithms can be used, or they can design the security protocol without crypto algorithms, thereby needing fewer gates. As another consequence of the limited number of gates, RFID tags have a poor ability to generate pseudorandom numbers. Most security systems for RFID attempt to imitate, in some way, proper algorithmic mechanisms while keeping the number of gates needed as low as possible.
In order to measure the amount of security in an RFID protocol, we need to define models for the capabilities and goals of the attacker. There is also a need for a measurement to determine whether certain parts of the security have been compromised. Next, such a model of an adversary and measurement of privacy in RFID systems is presented.
2.3 Adversarial and privacy models for RFID systems
Avoine describes an adversarial model for RFID systems by separating the means and the goals of the attacker [6]. An adversary has essentially two goals: disrupting normal operation of RFID systems, and tracing tags, i.e., violating the privacy of the RFID tags and their owners. These goals are achieved by performing a set of operations on an accessible system. Avoine presents five different actions an attacker can execute in order to send messages to the tags or the reader, and to monitor the responses. These actions are Query, Send, two variants of Execute, and Reveal. The details of these can be reviewed in [6].
When considering privacy, the attacker can use these actions to try to identify a single tag several times in order to trace it. Avoine presents two types of traceability: existential and universal. There is an important difference between the two. If an attacker is able to recognize a tag at any two times, she has achieved universal traceability. If the attacker only under some conditions, or by coincidence, is able to recognize a tag, there is existential traceability [6].
Juels and Weis present in their work a model for describing and testing the privacy of an RFID system [8]. The model consists of two phases: learning and guessing. Before the learning phase can be initiated, a set of generated keys must be loaded into the tags and the database connected to the reader. In the learning phase, the attacker is able to perform up to a predefined number of available operations. These operations include interrogating the tags or the reader, and changing the keys of all but two tags. Then, in the guessing phase, the attacker randomly chooses two tags whose keys it did not change. The attacker is then presented with one of these two tags, and may again perform a predefined number of operations on the tags. The attacker may however not change the key of the tag it was presented with. The attacker now tries to guess which of the two tags it was presented with. If the system has leaked enough information to give the attacker a significant advantage in determining the correct tag, the privacy of the system is compromised.
Now that models for evaluating the security of RFID protocols in general have been presented, it is time to take a closer look at the UCS-RFID protocol.
3 THE UCS-RFID SYSTEM
This section will present the RFID system called UnConditionally Secure RFID (UCS-RFID) as Alomair, Lazos and Poovendran proposed it in [10]. First, an introduction to how mutual authentication between tags and a reader is achieved will be given. Then follows a summary of which assumptions the authors make on the adversary and on different aspects of security in the UCS-RFID system.
3.1 The UCS-RFID protocol
Their motivation for designing this system was to draw researchers' attention to low-cost systems where the tags can enjoy the computational power of the reader. For the computational capabilities of the tags, they assume that the tags are able to do bitwise operations in addition to modular multiplication and addition. They do not expect the tags to be able to perform complex operations such as hash functions. When a tag is produced, both the tag and the database connected to the reader are loaded with two values: a unique identifier A and a secret key K. K consists of five subkeys, which are used for different tasks during an authentication. See table 1 for a complete list of the parameters involved in UCS-RFID.
The most important part of the UCS-RFID system is to enable the reader to securely and secretly send a randomly generated nonce to the tag. If such a transfer were achieved, this would enable the reader and the tags to communicate while preserving message integrity and enable secret generation of new keys on the tags after authentication. Securely and secretly updating keys is crucial in order to preserve the privacy of the tag owner, as well as preventing adversaries from performing desynchronization attacks on the system.
Table 1: Parameters in the UCS-RFID system as Alomair et al. presented it in [10]. Zp denotes the finite ring of integers modulo p, and Z∗p denotes the multiplicative group modulo p, where all elements are non-zero and relatively prime to p.
The process of achieving mutual authentication between a tag and a reader starts by the reader sending a Hello message to the tag. A visualization of a complete UCS-RFID run can be reviewed in [10]. When a tag receives the Hello message from the reader, it responds with its current identifier A. Then, when the reader receives A from the tag, it loads the key K corresponding to A from the database. If no key is found, the authentication is terminated. If the key was retrieved, the reader generates the random nonce n and computes the values B and C:
B ≡ n + kb (mod p)    (1)
C ≡ n · kc (mod p)    (2)
Note that p is determined prior to the generation of K. When the tag receives B and C, it authenticates the reader by verifying that:
(B − kb) · kc ≡ C (mod p)    (3)
Again, if equation 3 is false, the authentication process is terminated. If equation 3 is true, this proves that the reader knows kb and kc, and the reader is authenticated. It is now time for the tag to authenticate itself to the reader. This is done by computing the value D and sending it to the reader. D is computed by:
D = nL ⊕ kd (4)
where ⊕ denotes bitwise XOR. Upon receiving D from the tag, the reader can confirm that the value is correct by computing its own D and comparing it to the received one. If the values are the same, it is proven that the tag knows kd, and the tag is authenticated. Now that mutual authentication has been achieved, both the reader and the tag update A(0) to A(1), and the corresponding K (ka(1), kb(1), kc(1), kd(1), ku(1)) using the subkey ku(0) and predefined formulas, which can be reviewed in [10]. A full UCS-RFID run is now successfully completed. Note that the subkey ku is never broadcast between the tag and the reader.
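To make the arithmetic concrete, below is a minimal Java sketch of the message computations in equations 1-4, using toy parameter sizes. Treating nL in equation 4 simply as the recovered nonce is an assumption made here for illustration; the exact definition of nL is given in [10].

import java.math.BigInteger;
import java.security.SecureRandom;

public class UcsRfidSketch {
    public static void main(String[] args) {
        SecureRandom rng = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(64, rng); // toy modulus; real parameters are larger
        BigInteger kb = new BigInteger(63, rng);                      // subkey used in equation 1
        BigInteger kc = new BigInteger(63, rng).max(BigInteger.ONE);  // subkey in Z*p (non-zero)
        BigInteger kd = new BigInteger(63, rng);                      // subkey used in equation 4

        // Reader side: generate the nonce and compute B and C (equations 1 and 2).
        BigInteger n = new BigInteger(63, rng);
        BigInteger B = n.add(kb).mod(p);
        BigInteger C = n.multiply(kc).mod(p);

        // Tag side: authenticate the reader by checking equation 3.
        boolean readerOk = B.subtract(kb).multiply(kc).mod(p).equals(C);
        System.out.println("Reader authenticated: " + readerOk);

        // Tag side: recover n from B and respond with D (equation 4, under the assumption above).
        BigInteger recovered = B.subtract(kb).mod(p);
        BigInteger D = recovered.xor(kd);
        System.out.println("Tag response D: " + D.toString(16));
    }
}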
Now it is time to look at the security features of the UCS-RFID system.
3.2 Security in UCS-RFID
Alomair et al. use an adversary and privacy model similar to those presented earlier in this paper [10]. In addition to determining the capabilities of an adversary, they provide a model for when the security of the mutual authentication is compromised, i.e., when the adversary can impersonate either a tag or a reader. The main idea is that the adversary can monitor legitimate authentications between tags and a reader, and then try to successfully authenticate as either a tag or a reader to the other party.
In their paper, Alomair et al. provide a proof using their adversary model that, given strong random number generation by the reader, the protocol is secure in terms of reader and tag authentication [10]. They claim that an adversary would have to observe about 2^40 consecutive successful authentications in order to determine K. However, this number has later been questioned by Abyaneh in [11]. They also describe how, under an adversary model where messages can be blocked from being received, UCS-RFID is resistant to desynchronization attacks.
Because a tag's key is only updated after a successful protocol run, it may be possible for an adversary to trace a tag between each key update. After the key is updated, the adversary has to interrogate the tag to get its new identifier A. This is only a problem if the adversary is capable of linking the received identifier to the corresponding tag. This could be achieved by physically accessing the tag, or by in some way becoming certain that no other tags could have yielded that identifier.
4 DISCUSSIONS AND CONCLUSIONS
This paper has presented the UnConditionally Secure protocol for RFID (UCS-RFID). In order to understand the various limitations and possibilities in low-cost RFID systems, a review of RFID technology and security was given. The aim of the UCS-RFID system is to ensure security by enabling cheap RFID tags to use random numbers for security purposes, without having to generate them on their own. Such random numbers enable enforcement of integrity and privacy in RFID systems. UCS-RFID provides privacy for the tag owner that should be sufficient for most uses, provided that the tags are authenticated often and the adversary does not have physical access to the tags.
It is a problem that a tag will broadcast the same identifier each time it is interrogated between completed authentications. In order to mitigate this, the tag would either have to be stateful or be able to generate strong pseudorandom numbers. Whenever an RFID system relies on the tag and the reader being in the same state, it is subject to desynchronization attacks, i.e., attacks that leave the reader and the tag in different states so that they are no longer able to communicate properly. Another option is to load the tags with many different identifiers. This significantly increases the search space when the tag is to be singulated, and also opens the possibility of denial-of-service attacks where an adversary extracts all identifiers from the tag, thereby preventing it from providing the reader with a valid identifier. Enabling the tags to generate their own random numbers demands increased computational power and therefore increased production cost. One can always assume that Moore's law some day will result in cheap tags that are able to perform sophisticated computational tasks such as random number generation and cryptographic algorithms. However, as these tags are meant to some day replace the barcodes on consumer articles, every cent saved in production matters, and it seems unlikely that any supplier will pay much extra for milk cartons to support public key cryptography.
There is a jungle of proposals for how RFID systems can be designed, and there does not seem to be much consensus among researchers on which path is the best to follow. Maybe the thoughts behind the UCS-RFID system can inspire others to create even better systems, bringing us closer to the next technological revolution. Even though several models for privacy in RFID systems have been designed, there has not been much focus on just how secure these systems have to be in order for the community to accept them. Trying to identify an acceptable end-state for low-cost RFID systems could therefore be an interesting subject for further research.
References
[1] S. E. Sarma; S. A. Weis; D. A. Engels, RFID Systems and Security and Privacy Implications, 2003.
[2] S. A. Weis; S. E. Sarma; R. L. Rivest; D. A. Engels, Security and privacy aspects of low-cost radio frequency identification systems, 2004.
[3] T. Dimitriou, A Lightweight RFID Protocol to protect against Traceability and Cloning attacks, 2005.
[4] R. Weinstein, RFID: A Technical Overview and Its Application to the Enterprise, 2005.
[5] A. Juels; S. A. Weis, Authenticating Pervasive Devices with Human Protocols, 2005.
[6] G. Avoine, Adversarial model for radio frequency identification, 2005.
[7] A. Juels, RFID security and privacy: A research survey, 2006.
[8] A. Juels; S. A. Weis, Defining Strong Privacy for RFID, 2007.
[9] L. Kulseng; Z. Yu; Y. Wei; Y. Guan, Lightweight Mutual Authentication and Ownership Transfer for RFID Systems, 2010.
[10] B. Alomair; L. Lazos; R. Poovendran, Securing low-cost RFID systems: An unconditionally secure approach, 2011.
[11] M. R. S. Abyaneh, Passive Cryptanalysis of the UnConditionally Secure Authentication Protocol for RFID Systems, 2012.
Visualization of votes in the Norwegian Parliament
The images below visualize how much the Norwegian political parties agree in the Parliament during two different periods.
An edge between two members indicates that they have voted identically in more than 50% of the cases within the given time frame, excluding cases where the vote was unanimous.
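As a minimal sketch of this edge criterion (not the actual code used to build the graphs), assuming votes are encoded as one integer per member and case:

public class VoteAgreement {
    // Returns true if two members voted identically in more than 50% of the
    // non-unanimous cases. The vote encoding (e.g., 1 = for, -1 = against) is assumed.
    static boolean shouldConnect(int[] votesA, int[] votesB, boolean[] unanimous) {
        int considered = 0, equal = 0;
        for (int c = 0; c < votesA.length; c++) {
            if (unanimous[c]) continue; // unanimous votes are excluded
            considered++;
            if (votesA[c] == votesB[c]) equal++;
        }
        return considered > 0 && 2 * equal > considered; // strictly more than 50%
    }
}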
November/December 2011 (25 cases).
March 2012 (17 cases).
The graphs are generated using Gephi.
Visualizing k-nearest neighbors with Gephi
In the course IMT4612 Machine Learning and Pattern Recognition 1 at GUC we were given an assignment to classify some objects into one of two classes using the k-nearest neighbors algorithm. We were given a training set consisting of 420 examples with class labels. The first value of each sample was the class label and the rest were attributes:
... 1 0.500000 0.250000 0.000000 0.250000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.250000 2 0.666667 0.666667 0.000000 0.000000 0.000000 0.333333 0.000000 0.000000 0.000000 0.000000 0.000000 1 0.000000 0.000000 0.000000 0.333333 0.333333 0.666667 1.000000 0.000000 0.000000 0.000000 0.000000 ...
I wanted to visualize the training set in Gephi, so I wrote a small plugin that added an edge to each of the k nearest neighbors for every sample in the training set. The process that added the edges had no knowledge of the class labels. I must admit that I consider the result to be pretty awesome!
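The neighbor computation itself can be sketched like this, assuming Euclidean distance over the attribute vectors (the actual plugin works against Gephi's graph and attribute APIs):

import java.util.Arrays;
import java.util.Comparator;

public class KnnEdgeSketch {
    // Returns the indices of the k nearest neighbors of sample i (excluding i itself).
    static int[] kNearest(double[][] samples, int i, int k) {
        Integer[] order = new Integer[samples.length];
        for (int j = 0; j < order.length; j++) order[j] = j;
        Arrays.sort(order, Comparator.comparingDouble(j -> distance(samples[i], samples[j])));
        int[] nearest = new int[k];
        for (int m = 0; m < k; m++) nearest[m] = order[m + 1]; // order[0] is i itself
        return nearest;
    }

    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int d = 0; d < a.length; d++) sum += (a[d] - b[d]) * (a[d] - b[d]);
        return Math.sqrt(sum);
    }
}

An edge is then added from sample i to each returned index, which is how the clusters of same-labeled nodes emerge without the labels ever being used.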
Using k=3:
The color of a node indicates its class label. Below are examples using other k-values.
k = 0
k = 1
k = 30
Programmatically merging columns to time interval in Gephi 0.8a
During the last month, I have been playing around with Gephi 0.8a. Gephi is a beautiful, open source, graph visualization platform written in Java. I discovered Gephi after writing a paper on Social Network Analysis last fall.
While implementing an import mechanism for remote data over HTTP, I realized that using the timeline with this data was harder than expected. Now, I might be way off target here, but it seems to me that Gephi is more fond of importing and exporting complete data sets into either GEXF-files, or neo4j. Both of these are of course useful, but I am more interested in the process of knowledge discovery, where nodes are added as they are referenced by existing nodes. A tool that is doing a great job at this is Maltego. However, Maltego is a commercial tool and the free edition they supply unfortunately has some major restrictions.
Programmatically adding nodes and edges on the fly in Gephi is pretty straightforward. However, I haven't been able to find a neat way to put these on the timeline in order to analyze how the graph evolves over time. My solution is to programmatically merge the columns containing the start and end values into an interval. This can be achieved by doing something like this:
import java.text.SimpleDateFormat;

import org.gephi.data.attributes.api.AttributeController;
import org.gephi.data.attributes.api.AttributeTable;
import org.gephi.datalab.api.AttributeColumnsMergeStrategiesController;
import org.openide.util.Lookup;

public void mergeDateColumnsToInterval(String startColumnID, String endColumnID,
        String dateFormat, String defaultStartTime, String defaultEndTime) {
    // Look up the node table holding the start and end date columns.
    AttributeTable table = Lookup.getDefault()
            .lookup(AttributeController.class).getModel().getNodeTable();
    AttributeColumnsMergeStrategiesController acmc = Lookup.getDefault()
            .lookup(AttributeColumnsMergeStrategiesController.class);
    // Merge the two date columns into a single time interval column.
    acmc.mergeDateColumnsToTimeInterval(
            table,
            table.getColumn(startColumnID),
            table.getColumn(endColumnID),
            new SimpleDateFormat(dateFormat),
            defaultStartTime,
            defaultEndTime);
}
Gephi will automatically detect the interval and bring up the timeline.
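For instance, assuming the node table already has date columns named start and end, a hypothetical call could look like:

mergeDateColumnsToInterval("start", "end", "yyyy-MM-dd", "2010-01-01", "2012-12-31");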
This method assumes that the columns referenced already exist. A drawback with this technique is that the method above has to be called every time nodes or edges are added to the graph. A complete example can be found here. The example requires the modules Attributes API, Data Laboratory API, Graph API, Lookup API and Visualization API. You can test the example by adding it somewhere inside the Gephi source code and right-clicking anywhere on a graph after rebuilding.
Detecting and Mitigating Fast-Flux Service Networks
Written as part of evaluation in IMT4561 Applied Information Security.
1. INTRODUCTION
The aim of this paper is to review state of the art techniques for detecting and mitigating Fast-Flux Service Networks (FFSN). This will be done by reviewing recently published work on the topic. It is however important to not only understand these techniques, but also to have an understanding of the context surrounding FFSNs. The reason for this is that to understand how something is used, one needs to understand why it emerged. Because of this, it is also the goal of this paper to provide an introduction to some of the topics closely related to FFSNs. This includes proxy servers, Content Distribution Networks (CDN), spoofing Domain Name System (DNS) servers, and how services can be taken down by force.
Over the last couple of years, numerous articles have been written regarding FFSNs and how they can be detected. Techniques for mitigating FF service networks seem to have received less attention, probably due to the fact that many of the techniques are common to botnets in general.
Much of the published work provides excellent descriptions of its respective topics, but it is my opinion that it does not fully provide the reader with an overview of FFSNs. Such an overview involves combining what FFSNs are, why they have emerged, and why they can be difficult to detect. What follows is a brief overview of previous work worth mentioning in regard to this article. These works are also the main sources of information gathered for this paper.
1.1. Related work
The Honeynet Project's article from 2007[3] is hard to miss when searching for information about FFSNs. It is cited by almost every publication on the topic to date and gives an excellent introduction to how these networks operate.
[6] and [1] give a further description of the attributes of known FFSNs, focusing on the lifespan and size, i.e., the number of compromised machines, of botnets utilizing FFSN. Extensive work has been put into finding a method to quickly and accurately decide whether a given domain belongs to a FFSN. [7] and [8] describe how a domain can be classified as part of a FFSN by regularly polling the DNS server to examine how a domain's records change over time. [4] goes even further by gathering data from active and passive sensors in order to identify FFSNs within minutes. These articles provide a good foundation for how FFSNs can be distinguished from the benign Content Distribution Network (CDN) and Open CDN.
The Internet Corporation for Assigned Names and Numbers (ICANN) published a warning concerning FFSNs as early as 2008[17]. The article defines some of the terminology concerning FFSNs, as well as giving an introduction to the anatomy and workings of FFSNs in general.
Finally, on how botnets can be mitigated, [10] gives a good and thorough description of the specific case of peer-to-peer botnets.
1.2. Outline
Section 2 provides the background of botnets, FFSNs and other concepts closely related to the topic. This includes Open Content-Distribution Networks, which, without the proper tools, are easy to confuse with FFSNs. It also reviews common applications of FFSNs and some of the mitigating techniques that apply to botnets in general.
Section 3 presents existing techniques for detecting FFSNs. It will review FFSN measurement with both active sensors, i.e., polling a DNS server, and passive sensors, i.e., listening to data traffic. The section will also review some of the techniques that can be applied to evade those kinds of measurement.
In Section 4 some of the techniques for mitigating FFSNs are presented. It presents a technique for locating the server hidden behind the proxies, popularly called the mothership. It also describes an interesting technique for avoiding being located.
Discussions, suggestions for further work and conclusions are presented in section 5.
2. BACKGROUND
Fast-Flux service networks are a technique for using compromised computers in a botnet to hide a server's location[4]. Compromised computers, or agents, that are part of a FFSN serve as proxies, relaying requests from clients to the hidden server and the responses back to the clients. There are essentially three ways to achieve this. The first is to point the Domain Name System (DNS) A records for a domain to the Internet Protocol (IP) addresses of compromised computers. This is usually referred to as Single-Flux service networks. Note that when a client requests an available IP for a given domain, multiple of these A records may be returned. When a client wants to load content from e.g., www.example.org, it first finds the reference to the nameserver that stores the A records. Fluxing (i.e., regularly changing) the IP address or addresses of this nameserver is called Double-Flux service networks. The third method is simply Single-Flux and Double-Flux combined[4]. The difference between Single-Flux service networks and Double-Flux networks can be observed in figures presented in[3]. In the remainder of this paper, there will not be much focus on the differences between Single-Flux and Double-Flux Service Networks.
In order to have a FFSN, the adversary has to obtain a botnet. Getting hold of a botnet can be quite easy, and you can rent a large botnet tailored to your needs if you know where to look. A botnet consists mainly of two parts. First, the adversary needs a mechanism to compromise computers. This is usually done by developing a trojan horse, which masquerades as a legitimate application, but in fact gives the adversary more or less complete control of the system. Next, the adversary needs a mechanism to communicate with the compromised computers, issue commands, and update the malware. This is usually referred to as the botnet's Command and Control mechanism (C&C). Traditionally, the chat protocol Internet Relay Chat (IRC) has been extensively used as a C&C platform. FFSN can be used with every protocol that uses DNS, but HTTP seems to be the most popular protocol for this use[4].
Compromised computers have the habit of being shut down from time to time. This forces the FFSN owner to change the A records regularly in order to prevent all A records from pointing to unavailable computers. Each A record has a property that tells the client how long a given A record is valid. This property is called Time-To-Live (TTL), and for regular domains the recommended value is several days[4]. For FFSNs however, this value is usually much lower, and TTLs of less than 15 minutes are not uncommon[3][4].
While it is easy to detect fluxing and low TTLs, other benign network types share the same properties. Such networks range from the simple load-balancing Round-Robin DNS algorithm to advanced high-availability network architectures such as content distribution networks[12]. When using the Round-Robin method, the user defines multiple A records for a given domain. When a client requests the IP of the domain, the first A record is returned. On a second request, the next A record is returned. This method enables load-balancing of requests across a number of different hosts or servers. It is important to note that the A records in Round-Robin DNS generally do not change, and that the IPs are usually in the same subnet. This makes it quite easy to distinguish Round-Robin from FFSN.
Round-Robin DNS (RRDNS) does provide some load-balancing, however it is quite ineffective on a larger scale. Advanced architectures have been developed for ensuring high availability and load-balancing of web services on a large scale. One of these architectures is called Content Distribution Network (CDN). The aim of a CDN is to move the content closer to the client. This can be done by replicating the service over several servers spread around the world. When a client then requests available IP addresses of the domain, the IP addresses of the servers closest to the client are returned[11]. This can reduce the average load time of the service, in addition to making the service more resilient to total unavailability[11]. If one of the servers in the CDN goes down there are others ready to take its place, and the service remains available. The reason why RRDNS and CDN are generally not used for hosting illegal content is that they are too easy to trace and shut down[8].
There are two major similarities between CDN and its evil twin brother FFSN. First, as mentioned, the TTL of their A records is low and the records are changed frequently. Second, the A records of a domain in these networks usually have a large IP geometry, i.e., the IP addresses for the domain are not in the same subnet[8]. Being able to identify the differences between FFSN and CDN has received a lot of attention over the last years, and should be considered one of the key factors enabling reliable detection of FFSNs. Some of the proposed methods for distinguishing FFSNs and CDNs will be reviewed later in this paper. It should be noted that CDNs and Round-Robin DNS are not suitable as alternatives to FFSN. This is due to the ease of identification and localization of the servers that are part of these networks.
One of the main motivations for an adversary to use FFSNs is, as with CDNs, to ensure high service availability. However, FFSNs utilize an additional quality that ensures this availability: the server is actually hidden from the client by the compromised computers. This makes it more difficult for law enforcement to identify where the service originates, which in turn makes it harder to shut down the service by force. An alternative method for taking down these types of services is to make the Internet Service Providers (ISP) block a specific domain known to be part of the FFSN. This is discussed later in the paper. Another related feature appreciated by adversaries using FFSNs is that it is not possible to contact the hidden server directly. For a client sending a request to a domain using FFSN, the compromised computer will appear as the final destination for the request. This makes it more difficult to identify the adversary behind the crime.
The most popular application of FFSNs seems to be hosting of illegal content. FFSN can however also be considered an application of botnets. Botnets can use FFSN to hide their C&C central, commonly referred to as the mothership. This makes it feasible for the adversary to remain hidden while the mothership issues commands to its compromised computers. The location of the mothership in a FFSN can however be estimated by utilizing a slightly modified version of a method called Constraint-Based Geolocation (CBG)[14]. The idea of CBG is to send requests to a server from multiple computers and measure how long it takes to receive the responses. All the computers used to send requests need to have known locations. A network of such computers with known locations can be found at www.planet-lab.org. It is possible to use this technique to locate the mothership behind a proxy. To do this, it has to be taken into account that the IP address where the request is sent isn't really the last stop. Though this fact adds uncertainty to the real response time, the large number of proxy hosts makes it feasible to do quite good estimations of the mothership's location. This will be further reviewed later in the paper.
Now that a brief introduction to the nature of FFSNs has been given, it is time to study how these networks may be detected. The techniques and results reviewed in the following sections are extracted from published scientific articles.
3. DETECTING FAST-FLUX SERVICE NETWORKS
This section will review some of the proposed methods for detecting Fast-Flux Service Networks. To enable the detection of FFSNs, some metrics on the nature of FFSNs have to be defined. The most relevant background on the attributes of FFSNs is given by Caglayan et al.[4][5][6], Holz et al.[12][1], Passerini et al.[7] and Campbell[8].
3.1. The observable attributes of Fast-Flux Service Networks
The works of Caglayan, Holz and Passerini have many similarities when it comes to describing the observable attributes of FFSNs. They do however have somewhat different views on the attributes, and some attributes are to some extent unique to the respective works. Next follows a review of the attributes that can be used to identify FFSNs.
Caglayan puts emphasis on four different attributes possessed by FFSNs: (1) Time-To-Live (TTL) is how long each of the A records is set to be valid. (2) The Fast-Flux Activity Index is a measure of how fast the A records for a domain flux (i.e., change IP). (3) The Footprint Index is a measure of how widely spread the IP addresses of a given domain are. (4) Guilt by association is used to identify IP addresses and nameservers that have previously been identified as part of a FFSN.
When wanting to distinguish FFSNs from other benign networks, it is observed that FFSN domains usually have much lower TTLs than regular domains[6]. This observation is however not enough, due to the fact that CDN networks also usually have low TTLs on their domains[12]. The main reason for setting the TTL low is high availability.
High availability can be enforced with two components: (1) a nameserver that stores which IP addresses are valid for a given domain, and (2) a piece of software that regularly polls the IP addresses valid for that domain. Imagine now that the owner of the domain example.com has two machines at her disposal: one powerful and expensive server, and one smaller and slower server. She wants to provide her users with the best possible experience, and sets the IP address of the powerful server as the only A record for the domain. At the same time a small program is running on the less powerful server. This software has two tasks: every minute it sends a request to the powerful server and uses the response to verify that it is available. If the server does not respond, it can be assumed that it is unavailable. This can be due to overcapacity, loss of power, etc. Now knowing that the primary server is unavailable, the small server changes the A record of the domain to its own IP address. If the TTL of the A record was set to a high number, e.g., 6 hours, chances are that the calling client would keep attempting to reach the now unavailable IP address for quite some time. However, if the TTL was low, e.g., 10 minutes, the service would become available on the less powerful server within reasonable time.
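A toy Java sketch of the monitoring software just described might look as follows. The addresses and the updateARecord stand-in are illustrative assumptions; a real deployment would push the change through the nameserver's own update interface.

import java.net.HttpURLConnection;
import java.net.URL;

public class FailoverMonitor {
    public static void main(String[] args) throws Exception {
        while (true) {
            // Poll the primary server once a minute (addresses are examples).
            if (!isAlive("http://203.0.113.10/")) {
                updateARecord("example.com", "203.0.113.20"); // fail over to the backup
            }
            Thread.sleep(60_000);
        }
    }

    static boolean isAlive(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(5000);
            return conn.getResponseCode() == 200;
        } catch (Exception e) {
            return false; // no response: assume the server is unavailable
        }
    }

    static void updateARecord(String domain, String ip) {
        // Hypothetical stand-in for a DNS update mechanism.
        System.out.println("Setting A record for " + domain + " to " + ip);
    }
}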
This is a very simple example with limited availability gain for the service owner, but the same concept applies to benign CDNs and malicious FFSNs. In FFSNs the unavailability of a given server is probably due to the victim having powered off his laptop or desktop. In CDNs, servers may become unavailable due to, for example, a large amount of network traffic. These factors force the owners of both FFSNs and CDNs to keep their TTLs relatively low. The Activity Index is a measure of how fast the IP addresses for a given domain are changing. This is related to the TTL, but in addition to detecting mere changes to the A records, the number of new, previously unknown IP addresses in the FFSN is also recorded. No claim is however made as to how the Activity Index can be used other than to merely support whether a domain is used in a FFSN or not.
The Footprint Index is a measure of how spread out the IP addresses for a domain are. Such spread can usually be determined by looking at the Internet Service Providers (ISP) of the IP addresses assigned to a given domain. Though both FFSNs and CDNs have low TTLs, the owners of CDNs usually have all their servers connected to the internet through the same, or a few, ISPs[12]. Because the IP addresses in FFSNs are connected to the internet through a wide range of ISPs, the Footprint Index provides a quite good measure for detecting FFSNs[6]. A factor making the Footprint Index less useful is, as Caglayan states, that it requires the FFSN to be monitored for a longer period of time in order to generate reliable results.
The works of Holz and Caglayan have many similarities, but they do take slightly different approaches to detecting FFSNs. Like Caglayan, Holz et al. define a set of distinguishing parameters based on two known restrictions of FFSNs[12]: (1) IP diversity. This is somewhat similar to Caglayan's Footprint Index, but Holz focuses on the fact that the attacker does not have control over which IP addresses are available for his domain. (2) No physical agent control. While owners of CDNs and other benign networks have control over the machines used to make the service available, the owners of FFSNs usually have no control over the availability of the compromised machines. This is quite related to the Activity Index and the issue of high availability and low TTL described in the previous section.
While these observations are quite similar to Caglayan's, Holz uses them differently in order to create parameters that can distinguish FFSNs from other networks. Holz has defined three distinguishing attributes: (1) Caglayan used the restriction of no physical agent control to measure how fast the A records were changing (Activity Index). Holz, on the other hand, observes that the lack of control over the availability of the machines usually results in the owner creating many A records for the domain. The argument of the FFSN owner can therefore be: if there are many valid IP addresses, one of them is likely to be available. (2) The number of nameservers used for a domain, which relates to the detection of Double-Flux Service Networks, described earlier. (3) The number of Autonomous System Numbers (ASN), which is similar to the Footprint Index. The number of ASNs indicates how many different origins the IP addresses for a domain have. This may for example be measured by the number of different ISPs represented among the IP addresses for the domain.
While the work of Passerini et al.[7] does include most of the attributes defined by Caglayan and Holz, they group the attributes in a different way. In addition, Passerini gives more attention to attributes that may help decide that a network is benign, rather than looking solely for suspicious behavior. When identifying attributes, or features, to be used for identifying FFSNs in their FluXOR application, Passerini et al. divide the attributes into three groups: (1) features characterizing the domain name to which the suspicious hostname belongs, (2) features characterising the degree of availability of the network that is potentially associated with the suspicious hostname, and (3) features characterising the heterogeneity of the potential agents of the network. In the first group, Passerini puts the attributes domain registrar and domain age. These two attributes are not considered by the other work reviewed in this paper. Though this may be considered opinion-specific, Passerini claims to have observed that some registrars are more commonly used than others in FFSNs, but unfortunately does not refer to any statistics backing up this claim. Passerini argues that the reason a few registrars are popular among FFSN owners is that they reside in countries with less strict laws against cyber crime. Concerning domain age, Passerini claims that benign domains usually get much older than malicious ones. While this claim may be easy to agree upon, no statistics on the age of domains are referenced.
In the second group of attributes for identifying FFSNs, Passerini puts the number of distinct A records associated with a domain and the TTL of these records. These attributes are closely related to the work of Caglayan and Holz. As mentioned earlier, the number of A records and the TTL alone may not be sufficient to distinguish FFSNs from CDNs. The attributes in the third group enable the FluXOR application to detect FFSN behavior. These attributes are also closely related to the work of Caglayan and Holz and include: (1) The number of different subnets in the network. This relates to the Footprint Index defined by Caglayan and the IP diversity defined by Holz. (2) The number of different autonomous systems. An autonomous system is defined by Hawkinson as a connected group of one or more IP prefixes run by one or more network operators which has a SINGLE and CLEARLY DEFINED routing policy[18]. This is somewhat similar to the Footprint Index and IP diversity, but is not limited to different IP addresses being on the same subnet. (3) The number of distinct resolved qualified domain names. Even though the A records for a domain return multiple different IP addresses, the qualified domain names for these IP addresses can be identical. Similar qualified domain names for the IP addresses, beyond the chance of luck, are assumed to be impossible for FFSNs. This attribute can therefore help to prove that a domain is benign. (4) The number of distinct assigned network names. This is the name given to the network by the registration authority, e.g., the ISP. As with the previous attribute, multiple IP addresses can share a network name, and similar network names can therefore be an indication that a network is benign. (5) The number of distinct organisations. Organizations can also own multiple networks, such as those described in (4). So even though different IP addresses belong to different networks, they may still belong to the same organization. This may also add to the belief that the domain pointing to these IP addresses is benign.
In contrast to Passerini et al., Campbell et al. do not describe FFSNs in the same level of detail[8]. Campbell simply monitors the A records of a domain and compares new IP addresses with previously registered subnets for that domain. If a new IP address is not part of any existing subnet, a statistical method called Random Walk is used in order to estimate whether the domain is performing flux activity or not. Campbell also utilizes dynamic black and white lists in order to quickly label subnets with the same value as their related subnets. This technique is similar to the guilt by association technique described by Caglayan.
3.2. Fast-Flux behavior
When describing the behavior of FFSNs, Caglayan et al. create four groups[6]: (1) short term behavior, (2) long term behavior, (3) organizational behavior, and finally (4) operational behavior. When describing the short term behavior of a FFSN, the TTL and especially the Activity Index are taken into account. Questions one might ask concerning the short term behavior are therefore whether a given domain is performing flux activity, i.e., changing IP addresses or nameservers, at a given time. While short term behavior focuses only on the individual actions in a FFSN, the long term behavior is more concerned with how FFSNs evolve over time. Key factors in long term behavior are the expected lifetime of a FFSN, and the number of IP addresses, nameservers and domains identified as part of the FFSN. Studies show that there is a strong relation between the number of domains involved in a FFSN and the expected lifetime of the network[6]. Note that the lifetime of a FFSN to a large degree depends on the type of illegal content the FFSN is used to conceal[6].
Nazario and Holz have also monitored the long term behavior of FFSNs[1]. While Caglayan collected data on live FFSNs for a year[6], Nazario and Holz monitored their FFSNs for five months. In their work they use a tool called ATLAS to perform DNS mining, and divide the long term behavior of FFSNs into five categories: (1) discovery, (2) lifetimes, (3) membership, (4) visibility, and (5) distinct botnets. Though not defined explicitly, the categories discovery and visibility are assumed to relate to the amount and intensity of Fast-Flux activity, such as the number of A records and the fluxing of these. Nazario describes the use of so-called sleeper domains, which are domains that stay inactive for a longer period of time before they take part in the FFSN. The lifetimes category relates to the lifetime of the monitored domains. Nazario defines the active timeline as the time between when data was first gathered on the domain and when the domain becomes inactive. The reason for a domain going inactive is usually that the domain has been taken down by force by the ISP, or that the A records for the domain have been changed to point to servers controlled by the ISP. The latter is commonly referred to as parking. Membership (3) relates to how many botnets a given IP address is associated with. In their study, the IP addresses associated with a botnet were on average also associated with 14 other FFSN domains[1, p. 5]. Finally, distinct botnets relates to the issue of mapping which domain belongs to which botnet.
An observation made by Campbell is that there is a different development in the number of subnets associated with FFSNs and CDNs. The number of observed subnets in a CDN increases quite rapidly while the CDN is young, and seems to even out after a while, when all the IP addresses in the network have been observed. Because FFSN owners don't pay for their servers and IP addresses, they have a much larger pool of IP addresses. The number of subnets therefore usually continues to increase for much longer than in CDNs[8].
With the term organizational behavior, Caglayan refers to how the FFSN and the associated botnet are organized. This behavior can be related to the Command and Control (C&C) of the botnet. The organizational behavior also affects the social network of the FFSN and thereby the guilt by association attribute of the network. The social network of a FFSN can be viewed as a multigraph where the domains, nameservers and compromised hosts are nodes, and the relations between them (e.g., IP addresses) are edges. Two nodes that are directly or indirectly connected to each other are said to be in the same cluster[6]. Relations between the nodes in the social network are detected by monitoring the suspected FFSN. This forces the adversary to choose a tactic balancing the need for availability against control of his botnet. Aggressive fluxing of domains, i.e., changing the A records of the domains, may provide higher availability than less frequent fluxing. A consequence of this, however, is that services monitoring the domains are able to create a larger social network over the botnet, and are thereby able to identify the malicious behavior faster.
The last group in Caglayan's definition of behavior in FFSNs is the operational behavior. Operational behavior relates to where the agents, or compromised computers, reside. This may be expressed as a geographical location or in terms of Autonomous System Numbers (ASN). An ASN is a unique number identifying an Autonomous System, as described earlier. In order to measure this operational behavior, Caglayan et al. look at the relation between the number of domains and the number of countries or ASNs[6].
3.3. Analyzing FFSN attributes
Having reviewed the attributes that can be used to identify FFSNs, it is time to take a look at how these attributes can be analyzed. The aim of this analysis is to determine whether a domain is part of a FFSN or not.
Holz et al. divide the total number of A records by the number of A records returned in a single lookup in order to determine the fluxiness of a domain. A low fluxiness score may be evidence of a traditional domain with a fixed set of IP addresses. A high fluxiness score may indicate the use of a CDN or a FFSN[12]. Holz also assigns a flux-score to the domains. The flux-score is calculated with
f(x) = w1 · nA + w2 · nASN + w3 · nNS
The w values represent different weights, nA is the total number of A records, nASN is the number of unique Autonomous System Numbers, and nNS is the number of different nameservers. Holz uses a bias value b as the threshold the flux-score has to exceed in order for the domain to be classified as a FFSN. The w values should, according to Holz, be adjusted periodically in order to adapt to adversary mimicry. The reason for this is that FFSN owners can adapt their networks to static values in order to disguise their domains among benign domains.
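A sketch of this decision rule follows below; note that the weight and bias values are hypothetical placeholders, not the tuned values from Holz's work:

public class FluxScore {
    // Flux-score decision as described above: f(x) = w1*nA + w2*nASN + w3*nNS > b.
    // The weights and the bias are hypothetical and would be fitted on labeled data.
    static final double W1 = 1.0, W2 = 10.0, W3 = 1.0;
    static final double B = 50.0;

    static boolean isFastFlux(int nA, int nASN, int nNS) {
        double score = W1 * nA + W2 * nASN + W3 * nNS;
        return score > B;
    }
}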
Caglayan et al. use techniques from artificial intelligence when they create a Bayesian belief network. This Bayesian belief network uses the TTL, the Activity Index and the Footprint Index to create a probabilistic assessment of whether a domain is associated with a FFSN or not[4].
When the FluXOR application by Passerini et al. is out detecting live FFSNs, it first analyses the attributes of the domains themselves. As mentioned earlier, these attributes are the domain age and the domain registrar. After a short time, FluXOR extracts the domains it considers suspicious and sends them to further analysis[7]. During this analysis, the attributes mentioned earlier in this section are analyzed. Based on the values found in the analysis, FluXOR determines whether it thinks the domain is associated with a FFSN by using a Bayesian classifier[7].
While Campbell also uses a probabilistic model for determining whether a domain belongs to a FFSN or not, his approach is rather different from the work of Caglayan and Passerini. Each time a change is detected in a monitored A record, its domain is checked for being associated with a FFSN. This is done by comparing the new A record's /12 subnet (i.e., the first 12 bits of an IPv4 address) to known subnets for that domain. If it does not match any known subnet, a statistical method is run based on the previous decisions for that domain. While the results of the previous random walks are an important part of the analysis, it is also recognized that given a change in an A record, there is a greater probability that the A record belongs to a FFSN domain[8]. Note that in their experiments they tried various sizes of such subnets, e.g., /8 and /16 subnets.
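The subnet-membership test at the core of this check can be sketched as follows, with the prefix length as a parameter since Campbell et al. experimented with several sizes:

public class SubnetCheck {
    // Returns true if the two IPv4 addresses share the same prefix of the given length.
    static boolean sameSubnet(String ipA, String ipB, int prefixLength) {
        int mask = prefixLength == 0 ? 0 : -1 << (32 - prefixLength);
        return (toInt(ipA) & mask) == (toInt(ipB) & mask);
    }

    // Packs a dotted-quad IPv4 address into a 32-bit integer.
    static int toInt(String ip) {
        int value = 0;
        for (String octet : ip.split("\\.")) {
            value = (value << 8) | Integer.parseInt(octet);
        }
        return value;
    }
}

For example, sameSubnet("10.16.1.1", "10.20.9.9", 12) returns true, since both addresses fall within 10.16.0.0/12.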
3.4. Gathering data on Fast-Flux Service Networks
Knowing what attributes to look for and how to analyze them in order to identify FFSNs, the focus now turns to how these data can be gathered. In their work, Caglayan et al. describe two main types of sensors for gathering data: active and passive sensors[6]. Caglayan implements three active sensors. These can be considered the applications of the TTL, the Activity Index and the Footprint Index, which were reviewed earlier. The difference between the active and the passive sensors is that the active sensors send requests to the respective domains. The passive sensors merely intercept requests initiated by other clients, using a technique called passive DNS replication[19]. When using passive sensors, the observer is not in control of the amount of data. This may result in an uneven distribution of data across the different intervals of the monitoring. Beyond this difference, the data from active and passive sensors are treated in the same manner.
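As an illustration of an active sensor, the hedged Java sketch below resolves a domain's A records repeatedly through JNDI's DNS provider and tracks how many distinct addresses appear over time; a steadily growing set is one of the fluxing indicators discussed above. The domain and the poll interval are placeholders.

import java.util.HashSet;
import java.util.Hashtable;
import java.util.Set;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class ActiveSensor {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");
        DirContext ctx = new InitialDirContext(env);

        Set<String> seen = new HashSet<>();
        for (int round = 0; round < 10; round++) {
            Attributes attrs = ctx.getAttributes("example.org", new String[] { "A" });
            Attribute a = attrs.get("A");
            if (a != null) {
                for (int i = 0; i < a.size(); i++) {
                    seen.add((String) a.get(i)); // record every distinct address observed
                }
            }
            System.out.println("Distinct IPs so far: " + seen.size());
            Thread.sleep(60_000); // placeholder poll interval of one minute
        }
    }
}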
When gathering data using passive sensors, the observer has to place the application intercepting the data at a place on the network where the data is available. In organizations this would typically mean that the interceptor is connected to the router responsible for connecting the organization's local intranet with the rest of the world. Another option is to intercept data packets transmitted between computers and the wireless router they are connected to, however this certainly raises some ethical and/or legal questions. While Campbell et al. were monitoring the network traffic of a laboratory during their experiments, Perdisci et al. were allowed to set up their passive sensors in front of the DNS servers of a large American ISP[9]. While such monitoring provides more realistic data, there might be a problem with a low number of requests for A records[7]. Campbell et al. try to mitigate this by multiplying each domain lookup request[8].
4. MITIGATING FAST-FLUX SERVICE NETWORKS
In the real world, it is usually not enough to just detect the presence of a Fast-Flux service network. It is also desirable to be able to do something about it. The techniques for taking down FFSNs are pretty much the same as for taking down any type of botnet. There are however certain techniques that relate to the fact that FFSNs are capable of concealing a server or a set of servers behind the compromised computers making up the botnet. This section will review some of the techniques that can be utilized in mitigating FFSNs, i.e., stopping the malicious activity.
The problem of mitigating FFSNs has not received as much attention as the problem of detecting them. Much of the reason for this may be that because FFSNs are a central part, or an application, of botnets, they can be mitigated by mitigating the botnet itself. Holz et al. have provided a short overview of the available methods for mitigating FFSNs in[13]. They divide the techniques into two main groups: (1) domain blacklisting, and (2) identifying the control node. The first group mitigates the FFSN by disabling the domain, or domains, associated with the network. There are essentially two ways of disabling a domain. Technically, both methods are fairly straightforward to implement, but they require some special privileges. First, the domain can be taken down by the registrar responsible for it. This requires the cooperation of the registrar, which can sometimes be quite difficult to get; the lack of such cooperation can be caused by both diplomatic and legal factors. The other way of disabling a domain is by blocking users from accessing it. This is usually done by an ISP and is therefore under national control. There also exist international treaties in which countries agree to cooperate on disabling domains serving illegal content; an example of such a treaty is the Cybercrime Convention from 2001[20]. Disabling a FFSN domain may however not be an efficient way of taking down the whole network. A single FFSN may be associated with several thousand domains[6], and it is estimated that well over one hundred thousand new domains are registered each day[21]. In addition, SSAC has observed FFSNs exploiting a common practice among registrars called domain tasting, where the user can try a domain for a few days and then return it to the registrar without being billed[17].
Clearly, there is a need for additional techniques. One efficient way of taking down an illegal service would be to confiscate the server providing the illegal content. Because there are generally far fewer content-providing computers than compromised computers and domains, taking these out would do more damage to the illegal service. In addition, replacing such a computer is much more expensive than acquiring new domains or new compromised computers. The problem is that taking out this computer, or these computers, requires physical access, and physical access requires knowing where the computer is, in addition to having a subpoena. Because the computer is hidden behind numerous compromised hosts, locating it is not easy. There are mainly two ways of locating the hidden computers. The first is to detect a hidden computer's IP address by monitoring the traffic going to and from one or several compromised hosts. This usually requires the cooperation of the ISPs of the compromised hosts. The second technique is to use a modified version of Constraint-Based Geolocation (CBG) to provide an estimate of the hidden computer's location. In the modified CBG, the distance between the compromised host and the hidden computer is considered one extra hop. Though there may be several network nodes between the two, the measurements from many compromised hosts combined provide a fairly good estimate. This estimate will not be precise enough on its own to pinpoint the exact position, but it might help identify which country or region the computer resides in[16]. This type of geolocation may also be made more difficult by utilizing techniques for evading the geolocation of an IP address. One way of doing this is by adding an extra delay to the responses. This may result in a more uncertain estimate of the hidden computer's location, but more advanced implementations of the technique may actually fake another location[14][15].
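To illustrate the constraint-based idea, the toy sketch below turns delay measurements from a few vantage points into distance upper bounds (using a conservative fraction of the speed of light, as in CBG) and checks which candidate locations satisfy all constraints. The vantage-point coordinates, delay values and the 4/9 c conversion factor are illustrative assumptions, not the actual procedure of Castelluccia et al.; in the modified CBG, each delay would first be reduced by the measured delay between the vantage point and the flux agent, since the hidden server is one hop further away:

    import math

    # Propagation speed in fiber is often approximated as (4/9) * c.
    KM_PER_MS = 299792.458 / 1000 * (4.0 / 9.0)  # roughly 133 km per millisecond

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometers."""
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Illustrative vantage points: (lat, lon, estimated one-way delay in ms).
    vantage_points = [
        (59.9, 10.7, 15.0),  # Oslo
        (52.5, 13.4, 12.0),  # Berlin
        (48.9, 2.4, 10.0),   # Paris
    ]

    def feasible(lat, lon):
        """A candidate location must lie within every distance upper bound."""
        return all(
            haversine_km(lat, lon, vlat, vlon) <= delay_ms * KM_PER_MS
            for vlat, vlon, delay_ms in vantage_points
        )

    print(feasible(50.1, 8.7))    # Frankfurt: inside all constraint disks
    print(feasible(40.7, -74.0))  # New York: outside, so it is ruled out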
5. DISCUSSION AND CONCLUSIONS
This article has reviewed some of the recently proposed techniques for detecting and mitigating FFSNs. It has been shown how such detection can be performed by identifying and measuring the observable attributes of FFSN domains. It has also been discussed how these attributes can be analyzed in order to classify a domain as part of a FFSN or not. In the discussion, a distinction has been made between the short-term and long-term behavior of FFSNs. Finally, some of the techniques which can be used for mitigating FFSNs, and botnets in general, were reviewed.
Based on the review done in this article, it may be concluded that there are three main techniques for classifying whether a domain is part of a FFSN or not. The first is to observe, over a period of time, how often the IP addresses for a domain change. Frequent changes may also indicate a benign high-availability service, so this technique is usually not sufficient by itself. The second technique is to observe who owns the IP addresses associated with a domain. If a domain changes its IP addresses often, and these addresses belong to different internet service providers all over the world, it is a good indication of a FFSN. The first two techniques combined may provide reliable and accurate detection of FFSNs, but because each domain has to be observed for a longer period of time, detection may prove too slow. In order to speed up detection we may use the third technique: domain blacklisting and whitelisting. When blacklisting we may utilize two facts: (1) an IP address associated with a FFSN domain is on average associated with several other FFSN domains, and (2) each FFSN domain is associated with a large number of IP addresses. These two facts enable us to build a social network of the domains and IP addresses, and analysis of this network may help us classify domains as parts of FFSNs much faster, as sketched below. However, there is a higher probability of false positives when relying on blacklisting and whitelisting alone, so the technique should be used together with the first two techniques. Whitelisting IP addresses and domains that have been confirmed as benign may also reduce the number of false positives. Whitelisting is limited by the number of domains and IP addresses we are able to confirm, and it also gives adversaries an opportunity to remain undetected by being erroneously put on the whitelist.
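The following is a minimal sketch of that graph idea, using plain dictionaries: domains and IP addresses form a bipartite graph, and a new domain is flagged if it shares addresses with already-blacklisted domains. The threshold, data and function names are illustrative assumptions:

    from collections import defaultdict

    # Bipartite mapping: ip -> set of domains observed resolving to it.
    domains_on_ip = defaultdict(set)
    blacklist = {"bad-flux.example"}

    def observe(domain, ips):
        """Record an observed domain -> IP resolution in the graph."""
        for ip in ips:
            domains_on_ip[ip].add(domain)

    def shares_flux_infrastructure(domain, ips, threshold=2):
        """Flag a domain if at least `threshold` of its IPs also serve blacklisted domains."""
        hits = sum(
            1 for ip in ips
            if any(d in blacklist for d in domains_on_ip[ip] if d != domain)
        )
        return hits >= threshold

    observe("bad-flux.example", ["198.51.100.7", "203.0.113.9", "192.0.2.33"])
    print(shares_flux_infrastructure("new-domain.example",
                                     ["198.51.100.7", "203.0.113.9"]))  # True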
Efficient mitigation of FFSNs is largely dependent on international legal cooperation. Having directives which enable national law enforcement to cooperate with foreign colleagues is crucial for efficiently taking down malicious services by force. In addition, some ethical questions need to be raised concerning how to deal with the compromised computers exploited by the botnet owners.
5.1. Further Work
As for further work, it would be very interesting to see some experimental social network analysis of the domains, nameservers and IP addresses in a FFSN. Tools such as Maltego and NetGlub enable automated structuring and visualization of such networks, but there will be a need for a tool that can gather DNS information and prepare it for network analysis. It would also be interesting to get a measure of just how efficient dynamic blacklisting and whitelisting of domains can be against FFSNs.
Much of the current research also uses subnets in order to classify domains as malicious or benign. This technique is suited for the still most popular IPv4 address space. Although not very widespread yet, it could be interesting to see how well suited such analysis is for the new IPv6 address space. Being immensely larger than IPv4, IPv6 could potentially become more popular. A current limitation is that few personal computers use IPv6 when connected to the internet. This is however likely to change in the time to come, as IPv4 is about to run out of available addresses.
REFERENCES
[1] Nazario, J.; Holz, T.; , As the net churns: Fast-flux botnet observations, Malicious and Unwanted Software, 2008. MALWARE 2008. 3rd International Conference on , vol., no., pp.24-31, 7-8 Oct. 2008
[2] Porras, P.; Saidi, H.; Yegneswaran, V.; , A Multi-perspective Analysis of the Storm (Peacomm) Worm, 2007. http://www.cyber-ta.org/pubs/StormWorm/
[3] The Honeynet Project. Know Your Enemy: Fast-Flux Service Networks, 2007, http://www.honeynet.org/book/export/html/130
[4] Caglayan, A.; Toothaker, M.; Drapeau, D.; Burke, D.; Eaton, G.; , Real-Time Detection of Fast Flux Service Networks, Conference For Homeland Security, 2009. CATCH ’09. Cybersecurity Applications & Technology , vol., no., pp.285-292, 3-4 March 2009
[5] Caglayan, A.; Toothaker, M.; Drapeau, D.; Burke, D.; Eaton, G.; , Behavioral analysis of fast flux service networks, Proceedings of the 5th Annual Workshop on Cyber Security and Information Intelligence Research: Cyber Security and Information Intelligence Challenges and Strategies, pp.1-4, 2009
[6] Caglayan, A.; Toothaker, M.; Drapeau, D.; Burke, D.; Eaton, G.; , Behavioral Patterns of Fast Flux Service Networks, System Sciences (HICSS), 2010 43rd Hawaii International Conference on , vol., no., pp.1-9, 5-8 Jan. 2010
[7] Passerini, E.; Paleari, R.; Martignoni, L.; Bruschi, D.; Zamboni, D.; , FluXOR: Detecting and Monitoring Fast-Flux Service Networks, Detection of Intrusions and Malware, and Vulnerability Assessment, pp.186-206, 2008
[8] Campbell, S.; Chan, S.; Lee, J.R.; , Detection of Fast Flux Service Networks, CiteSeer
[9] Perdisci, R.; Corona, I.; Dagon, D.; Lee, W.; , Detecting Malicious Flux Service Networks through Passive Analysis of Recursive DNS Traces, In Proc. of 25th ACSAC, 2009
[10] Holz, T.; Steiner, M.; Dahl, F.; Biersack, E.; Freiling, F.; , Measurements and mitigation of peer-to-peer-based botnets: a case study on storm worm, Proceedings of the 1st Usenix Workshop on Large-Scale Exploits and Emergent Threats, pp 1-9, 2008
[11] Ao-Jan Su; Choffnes, D.R.; Kuzmanovic, A.; Bustamante, F.E.; , Drafting Behind Akamai: Inferring Network Conditions Based on CDN Redirections, Networking, IEEE/ACM Transactions on , vol.17, no.6, pp.1752-1765, Dec. 2009
[12] Holz, T.; Gorecki, C.; Rieck, K.; Freiling, F.C.; , Measuring and Detecting Fast-Flux Service Networks, Intelligent Data Analysis, pp.24-31, CiteSeer, 2008
[13] Holz, T.; Gorecki, C.; Freiling, F.; Rieck, K.; , Detection and Mitigation of Fast-Flux Service Networks, In: Proceedings of the 15th Annual Network & Distributed System Security Symposium (NDSS 2008)
[14] Gill, P.; Ganjali, Y.; Wong, B.; Lie, D.; ,Dude, where’s that IP?: circumventing measurement-based IP geolocation, Proceedings of the 19th USENIX conference on Security, 2010
[15] Muir, J.A.; van Oorschot, P.C.; , Internet geolocation and evasion, Technical Report TR-06-05, Carleton University School of Computer Science, April 2006
[16] Castelluccia, C.; Kaafar, M.A.; Manils, P.; Perito, D.; Geolocalization of Proxied Services and its Application to Fast-Flux Hidden Servers, In Proc. of 9th IMC, 2009.
[17] ICANN Security and Stability Advisory Committee, SAC 025: SSAC Advisory on Fast Flux Hosting and DNS, March 2008.
[18] Hawkinson, J.; Bates, T.; , Guidelines for creation, selection, and registration of an Autonomous System (AS), RFC1930, 1996
[19] Weimer, F.; , Passive DNS Replication, In Proceedings of 17th Annual FIRST Conference on Computer Security Incident Handling, 2005.
[20] Convention on Cybercrime, http://conventions.coe.int/Treaty/en/Treaties/html/185.htm, 2001
[21] DomainTools.com: Domain Counts & Internet Statistics, http://www.domaintools.com/internet-statistics/