
Summary:
Data Range: Fri Feb 11 08:16:52 EST 2005 to Wed Feb 19 09:45:06 EST 2014
Total Tests: 20426
Unique IPs tested: 16200
Unique Routed Prefixes tested from: 8866
Unique ASes tested from: 2786
We used several levels of aggregation to simplify and clarify the data for these charts. For client IP addresses with multiple tests that produced conflicting results, we use only the most recent valid test, ignoring tests that could not determine whether spoofing was possible.

We map each IP address to its network prefix as seen in Route Views BGP tables (manually collected from the route-views.routeviews.org text dumps), and use the most recent 12 months of tests from IP addresses within any given prefix. Prefixes in which all tested client addresses yield the same status are labeled "spoofable" or "unspoofable"; prefixes with conflicting results from different IP addresses are labeled "inconsistent". We extrapolate our results to the entire announced address space by assigning each prefix's status to every IP address covered by that prefix.

To infer the status of ASes, we examine the status of each network prefix a given AS announces into the BGP table, and compute the fraction of tested prefixes from that AS that permit spoofing. Inconsistent ASes are subdivided into those with less than half of their prefixes considered spoofable (labeled "partly spoofable") and those with at least half spoofable (labeled "mostly spoofable").
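The two-level aggregation described above can be sketched as follows. This is a minimal illustration assuming hypothetical per-prefix result lists, not the project's actual code:

```python
def prefix_status(results):
    """results: 'spoofable'/'unspoofable' outcomes, one per client IP
    inside the prefix (most recent valid test per IP)."""
    statuses = set(results)
    if statuses == {"spoofable"}:
        return "spoofable"
    if statuses == {"unspoofable"}:
        return "unspoofable"
    return "inconsistent"          # conflicting results within the prefix

def as_status(prefix_statuses):
    """Classify an AS from the statuses of its tested prefixes, using
    the fraction of prefixes that permit spoofing."""
    spoofable = sum(1 for s in prefix_statuses if s == "spoofable")
    frac = spoofable / len(prefix_statuses)
    if frac == 0:
        return "unspoofable"
    if frac == 1:
        return "spoofable"
    # Inconsistent ASes split at the halfway point.
    return "mostly spoofable" if frac >= 0.5 else "partly spoofable"
```

For example, an AS with one spoofable and two unspoofable tested prefixes has a spoofable fraction of 1/3 and is labeled "partly spoofable".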

Spoofing over time
This graph plots the spoofability of prefixes, address space, and ASes over time. To compensate for the generally low rate of testing (and to prevent visual clutter), all tests from the 6 months preceding the specified date are included in the spoofability calculation, and all "inconsistent" prefixes, addresses, and ASes are counted as "spoofable". We do not use the same aggregation method as for the pie charts, because here we want to capture changes within prefixes and ASes rather than determine their current state. In addition, the number of tests shown is for the previous month only, not the previous 6 months as with the spoofability values.
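The sliding-window calculation described above can be sketched as follows, assuming a hypothetical list of (timestamp, prefix, outcome) test records; counting any prefix with at least one spoofable result in the window as spoofable also folds the "inconsistent" prefixes into the spoofable bucket:

```python
from datetime import datetime, timedelta

def spoofable_fraction(tests, as_of, window=timedelta(days=183)):
    """tests: iterable of (timestamp, prefix, outcome) tuples, where
    outcome is 'spoofable' or 'unspoofable'.  Returns the fraction of
    prefixes tested in the ~6-month window ending at as_of that had
    at least one spoofable result."""
    seen, spoofable = set(), set()
    for ts, prefix, outcome in tests:
        if as_of - window <= ts <= as_of:
            seen.add(prefix)
            if outcome == "spoofable":
                spoofable.add(prefix)
    return len(spoofable) / len(seen) if seen else 0.0
```

Sliding `as_of` forward and re-evaluating produces one point per date on the time-series plot.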
Source address filtering:
Each test run spoofs addresses from adjacent netblocks, beginning with a direct neighbor (IP address + 1) and extending to an adjacent /8. The following figure displays the granularity of source address filtering (typically employed by service providers) along the paths tested in our study. If filtering occurs on a /8 boundary, for instance, a client within that network can spoof 16,777,215 other addresses. Using the tracefilter mechanism, we measure filtering depth: where along the tested path (from each client to the server) filtering is employed. Depth represents the number of IP routers through which the client can spoof before being filtered.
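One way to enumerate a candidate spoof source at each netblock boundary, from the direct neighbor out to an address in the adjacent /8, is to flip successive bits of the client address. This is an illustrative reconstruction; the tester's actual probe schedule may differ:

```python
import ipaddress

def neighbor_spoof_sources(client_ip):
    """For each prefix length from /32 down to /8, return the address
    obtained by flipping bit (32 - plen) of the client address, i.e. an
    address in the netblock adjacent to the client's /plen block.  The
    /32 entry is the direct neighbor (IP address +/- 1)."""
    addr = int(ipaddress.IPv4Address(client_ip))
    sources = []
    for plen in range(32, 7, -1):
        flipped = addr ^ (1 << (32 - plen))
        sources.append((plen, str(ipaddress.IPv4Address(flipped))))
    return sources
```

If a path only filters at the /8 boundary, every candidate up to the /9 entry would be deliverable, which is how the figure's granularity buckets arise.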
AS Degree
Client tests originate within an autonomous system (AS), i.e., a service provider. Here we analyze the distribution of successful spoofing in relation to the size of the provider. Using DNS heuristics, we also analyze the distribution of results across different types of clients.
check = Source address filtering in place

  Private  Valid  NAT    Client Count
  check    check  check  7622
  check    check         5061
  check                  1269
           check         8
                         112
Domains
Each test run attempts to send IP packets with different spoofed addresses in order to infer provider filtering policies. Private sources are those defined in RFC 1918, e.g. the 10/8, 172.16/12, and 192.168/16 prefixes. Valid source addresses are those present in BGP routing tables. NAT sources are unable to spoof through their NAT setup.
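The source categories above can be illustrated with a toy classifier; the routed_prefixes argument is a hypothetical stand-in for a BGP table lookup, not the project's actual pipeline:

```python
import ipaddress

# The three RFC 1918 private ranges named above.
PRIVATE = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def source_category(ip, routed_prefixes):
    """Classify a spoofed source address: 'private' if it falls in an
    RFC 1918 range, 'valid' if covered by a routed prefix (given here
    as a list of prefix strings), else 'unrouted'."""
    a = ipaddress.ip_address(ip)
    if any(a in net for net in PRIVATE):
        return "private"
    if any(a in ipaddress.ip_network(p) for p in routed_prefixes):
        return "valid"
    return "unrouted"
```

A real classifier would consult a full routing table snapshot rather than a short prefix list.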
Geographic Distribution:
We assess the geographic distribution of clients in our dataset both to measure the extent of our testing coverage as well as to determine if any region of the world is more susceptible to spoofing. We use CAIDA's plot-latlong package to generate geographical maps.
Location of client tests Location of spoofable networks
Failed Spoofs:
Predictably, some percentage of machines will not be able to spoof IP packets regardless of filtering policies. Some reasons are described in our FAQ. We exclude failed clients from our summary results but characterize some of the underlying reasons for failures that we are able to detect below:
Total Completely Failed Spoof Attempts: 9750
Failed as a result of being Behind a NAT: 7622
Failed as a result of (non-Windows) Operating System block: 295
Failed as a result of Windows XP SP2: 542
Failed as a result of other reasons: 1291
IPv6 Spoofing:
We began IPv6 probing with version 0.8 of the tester client.
Unique IPv6 Sessions: 421
Spoofing rate (valid IPv6): 0.386%
Spoofing rate (bogon IPv6): 0.362%
Spoofing rate (link-local IPv6): 0.000%
About:
This report, provided by CMAND, is intended to give a current aggregate view of ingress and egress filtering and IP spoofing on the Internet. While the data in this report is the most comprehensive of its type that we are aware of, it remains an ongoing, incomplete project. The data here is representative only of the netblocks, addresses, and autonomous systems (ASes) of clients from which we have received reports. The more client reports we receive, the better: they increase our accuracy and coverage.

Download and run our testing software to automatically contribute a report to our database. Note that this involves generating a small number of IP packets with spoofed source addresses from your box. This has yet to trip any alarms or cause problems for our contributors, but you run the software at your own risk. The software generates a customized report displaying the filtering policies of your Internet service provider(s).

Feedback, comments, and bug fixes are welcome directly or on the Spoofer Mailing List. Contact Rob Beverly for more information. This page is regenerated six times daily. Last generated Wed Feb 19 09:45:06 EST 2014.

Individual clients are counted singly regardless of the number of tests performed.

Process Time: 0.126sec