1. Introduction

802.11-based access networks are popular and widely deployed in public-area wireless networks. Because of this widespread deployment, however, 802.11-based networks have become an attractive target for attackers.

The 802.11 protocols are decentralized, fair and efficient, which is why they are so widely used in wireless networks. If one or more of these characteristics is compromised, service to the network can be disrupted.

In this project we focus on the threats posed by denial-of-service (DoS) attacks against 802.11's MAC protocol. We show how vulnerabilities in the 802.11 MAC protocol allow an attacker to disrupt service to the network using relatively few packets and low power consumption.

The vulnerabilities exploited here are taken from technical papers on denial-of-service attacks against the 802.11 protocols; we have implemented them and analyzed the results. For background, refer to the paper "802.11 Denial-of-Service Attacks: Real Vulnerabilities and Practical Solutions" by John Bellardo and Stefan Savage.
2. IEEE 802.11 Architecture

For an understanding of the IEEE 802.11 architecture, we referred to the document "A Technical Tutorial on the IEEE 802.11 Protocol" by Pablo Brenner, Director of Engineering, which is available online. The discussion of the project below builds on the concepts in that document.
3. Media access vulnerabilities exploited

Here we modify the 802.11 protocol implementation on the compromised system so that it works as a jammer.

Attack on virtual carrier sense

As discussed in the IEEE 802.11 architecture, the virtual carrier-sense mechanism is used to mitigate collisions from hidden terminals. Each 802.11 frame carries a Duration field that indicates the number of microseconds for which the channel is reserved. This value, in turn, is used to program the Network Allocation Vector (NAV) on each node. Only when a node's NAV reaches 0 is it allowed to transmit.

As the attacker (jammer), we exploit this feature by asserting a large Duration value, thereby preventing well-behaved clients from gaining access to the channel. While it is possible to use almost any frame type to control the NAV, including an ACK, using the RTS has some advantages. Since a well-behaved node will always respond to an RTS with a CTS, the jammer may co-opt legitimate nodes to propagate the attack further than it could on its own. Moreover, this approach allows an attacker to transmit with extremely low power or using directional antennae, thereby reducing the probability of being located. The maximum value for the NAV is 32767, or roughly 32 milliseconds on 802.11b networks, so in principle the jammer need only transmit approximately 30 times a second to jam all access to the channel. Finally, it is worth noting that RTS, CTS and ACK frames are not authenticated in any current or upcoming 802.11 standard.
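The "approximately 30 times a second" figure above follows directly from the maximum Duration value. A minimal sketch of the arithmetic (not driver code):

```c
#include <assert.h>

/* The Duration field is in microseconds and its maximum value is
 * 32767 (15 bits).  To keep the channel reserved continuously the
 * jammer must refresh the NAV before it expires, i.e. at least
 * ceil(1e6 / 32767) times per second. */
#define NAV_MAX_US 32767u

static unsigned frames_per_second(void)
{
    return (1000000u + NAV_MAX_US - 1) / NAV_MAX_US; /* round up */
}
```

Evaluating this gives 31 refreshes per second, consistent with the "roughly 30 times a second" estimate in the text.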
Attack on fairness implemented in the protocol

We modified the Duration field to the maximum (32767) for all frames sent out from the jammer system, so that other systems are kept from transmitting for the longest possible time. However, that only ensures that other nodes will not transmit for the period of the NAV.
Recall that each station chooses a random number n between 0 and a given bound and waits that many slots before accessing the medium, always checking whether a different station has accessed the medium first.

Suppose there are two stations, A and B, contending for the channel. Station A chooses a random number, say 15, and station B chooses, say, 7; each sets its counter to the selected number. Both stations decrement their counters after each slot and wait until the counter reaches 0, without any prior knowledge of the slot in which the other station plans to transmit. On the 7th slot, station B senses the channel and, finding it free, starts transmitting. When station A hears station B's transmission, it sets its NAV according to the Duration field in the RTS/CTS frames and stops decrementing its counter. After the NAV period, station A resumes decrementing its counter from the value at which it stopped. This ensures that a station is not kept from transmitting indefinitely just because some other station keeps choosing a smaller backoff value; after a few attempts station A will get to transmit. This is how the protocol ensures fairness.
The random backoff number a station chooses lies between the CWMIN and CWMAX values maintained for each access category in the implementation. By modifying these CWMIN and CWMAX values, we can change the range from which the random number is chosen.

In the jammer we set CWMIN and CWMAX to zero, forcing the jammer to choose 0 as its random number, i.e. the first slot available after the end of the current transmission. The jammer system therefore always claims the very first available slot and never lets any other system transmit its data.
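The effect of forcing CWMIN = CWMAX = 0 can be seen in a toy model of backoff selection (a sketch, not the driver's code): a station picks a uniform slot in [0, cw], so with cw forced to 0 the jammer always picks slot 0, while a well-behaved client with cw = 15 picks somewhere in [0, 15].

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of DCF backoff selection: pick a uniform random slot
 * in [0, cw].  With cw forced to 0 the result is always slot 0,
 * so the jammer wins (or at worst collides on) every contention
 * round, defeating the fairness mechanism described above. */
static int pick_backoff(unsigned cw)
{
    return cw ? (int)(rand() % (cw + 1)) : 0;
}
```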
4. Understanding the ath5k source code

No detailed documentation of the ath5k source is available online, so we give an overview of the source code we traversed while studying the implementation of the 802.11 protocol in ath5k. While reading the code we focused on the parts we thought were relevant to our project.

We first discuss some important structures that are used frequently in the ath5k source, and then describe how we arrived at the implementation we wanted.

4.1. Important structures in ath5k

Below are some important structures and enums in the ath5k source code, each with a brief description.
struct ieee80211_hw :

This structure contains hardware configuration information such as the number of hardware transmit queues, the rate-control algorithm for the hardware, etc. It is passed from mac80211 to the driver when the ath5k_tx() function in mac80211-ops.c is called, to obtain "priv", a pointer to a private area that was allocated for driver use along with this structure.
struct ath5k_hw :

This structure represents the ath hardware: the device state associated with an instance of the device. It contains pointers to structures such as ieee80211_hw, ieee80211_channel, ath5k_desc, ath5k_txq and ath5k_txq_info, as well as function pointers for the transmit and receive descriptors.
struct ath5k_txq :

struct ath5k_hw has 10 queues of type "struct ath5k_txq", each denoting a queue in hardware. It has fields such as the queue number, the number of queued buffers, the maximum allowed number of queued buffers, etc. Each of the 10 queues in struct ath5k_hw is initialized in the ath5k_init() function in base.c. A pointer to this structure is also passed to the function ath5k_tx_queue() in base.c from ath5k_tx() in mac80211-ops.c; ath5k_tx_queue() uses it to check whether the queue is already at its maximum size.
struct ath5k_txq {
	unsigned int		qnum;
	u32			*link;
	struct list_head	q;
	spinlock_t		lock;
	bool			setup;
	int			txq_len;
	int			txq_max;
	bool			txq_poll_mark;
	unsigned int		txq_stuck;
};
enum ath5k_tx_queue_id :

It gives the queue numbers for each type of queue.

enum ath5k_tx_queue_id {
	AR5K_TX_QUEUE_ID_NOQCU_DATA	= 0,
	AR5K_TX_QUEUE_ID_NOQCU_BEACON	= 1,
	AR5K_TX_QUEUE_ID_DATA_MIN	= 0,
	AR5K_TX_QUEUE_ID_DATA_MAX	= 3,
	AR5K_TX_QUEUE_ID_UAPSD		= 7,
	AR5K_TX_QUEUE_ID_CAB		= 8,
	AR5K_TX_QUEUE_ID_BEACON		= 9,
};
Each element of the array "txqs[10]" in struct ath5k_hw represents one of the queues, with the index as the queue number.

struct ath5k_hw *ah
	txq[0] --> Data + Voice (VO)
	txq[1] --> Data + Video (VI)
	txq[2] --> Data + Best Effort (BE)
	txq[3] --> Data + Background data (BK)
	txq[4] -->
	txq[5] -->
	txq[6] -->
	txq[7] --> Urgent automatic power save data
	txq[8] --> After Beacon (CAB)
	txq[9] --> Beacon (BEACON)
struct ath5k_buf :

struct ath5k_buf {
	struct list_head	list;
	struct ath5k_desc	*desc;    /* virtual address of desc */
	dma_addr_t		daddr;    /* physical address of desc */
	struct sk_buff		*skb;     /* skbuff for buf */
	dma_addr_t		skbaddr;  /* physical address of skb data */
};

Once ath5k_tx_queue() finds that the queue has room available, the data to be transmitted is copied to the tx queues; struct ath5k_buf represents a single queued frame (buffer). It contains pointers to an sk_buff and an ath5k_desc.
struct sk_buff *skb;
struct ath5k_buf *bf;
bf->skb = skb;
struct ath5k_desc :

This is the Atheros hardware DMA descriptor. It consists of ds_link (the physical address of the next descriptor), ds_data (the physical address of the data buffer, i.e. the skb) and ud (a union containing the hw_5xxx_tx_desc structs and hw_all_rx_desc). It is read and written by the hardware. Once the data is copied to the tx queues, ath5k_txbuf_setup is called with the ath5k_buf pointer. The pointer to the ath5k_desc is then obtained from the ath5k_buf and passed to the function ath5k_hw_setup_4word_tx_desc, where the control words (the hw_5xxx_tx_desc field of struct ath5k_desc) are initialized.
struct sk_buff :

This is a socket buffer, into which data to be sent to the layer above or below is put. The data the driver receives from mac80211 arrives in an sk_buff. These buffers form a linked list, with each sk_buff holding pointers to the next and previous sk_buff.
struct ath5k_tx_status :

This is the TX status descriptor. It consists of the sequence number, timestamp, status code, final retry count, RSSI of the received ACK, etc. The TX status descriptor is filled in by the hardware on each transmission attempt. Thus, if we track the retransmission path, we find it being passed to the function ath5k_tx_frame_completed from ath5k_tx_processq. In ath5k_tx_processq the retry count is checked and a hardware function is called to update the ath5k_tx_status.
struct ieee80211_tx_info :

This structure contains skb transmit information. It is placed in skb->cb for three uses:

(1) mac80211 TX control - mac80211 tells the driver what to do
(2) driver internal use (if applicable)
(3) TX status information - driver tells mac80211 what happened

It consists of transmit info flags, a union for status data, the signal strength of the ACK frame, etc. In the ath5k_tx_frame_completed function called from ath5k_tx_processq, the status-data union in ieee80211_tx_info is filled from the ath5k_tx_status.

Example :
struct ieee80211_tx_info *info;
struct ath5k_tx_status *ts;
info->status.ack_signal = ts->ts_rssi;
struct ath5k_txq_info :

This structure holds a TX queue's parameters: queue type (enum ath5k_tx_queue), subtype (enum ath5k_tx_queue_subtype), cwmin, cwmax, aifs, transmission-queue flags, constant-bit-rate period, and the queue's waiting time after ready is enabled. It is populated in the function ath5k_txq_setup, which is called from the init function of the driver module, ath5k_init. Thus, when the driver module is inserted, the queue parameters are initialized.
enum ath5k_tx_queue :

Queue types are used to classify tx queues. The ath5k_tx_queue enumeration identifies the inactive queue, data queue, beacon queue, after-beacon queue and unscheduled automatic power save delivery queue.

enum ath5k_tx_queue_subtype :

Queue sub-types classify the normal data queues as background traffic, best-effort (normal) traffic, video traffic and voice traffic. These are the 4 Access Categories defined; 0 is the lowest priority and 3 is the highest. Normal data that hasn't been classified goes to the Best Effort AC.
4.2. Flow of the source code we traversed

After we identified the vulnerabilities that can be exploited to make the standard protocol implementation work as a Wi-Fi jammer, we started studying the ath5k source code. ath5k being an openHAL implementation, the source code for the ath5k driver ships with all recent Linux kernel sources, under the path "linux-<version>/drivers/net/wireless/ath/ath5k". Since our goal was to change the Duration field of the RTS/CTS frames transmitted by the system and to change the CWMIN and CWMAX parameters of the different access categories, we searched in a specific direction: how data is transmitted out of the system.

Before we start, we should understand how the driver handles the hardware resources it will use. When the system boots, the ath5k driver is loaded by default as a removable module. On module load, the function "ath5k_init()" in "base.c" is called. Let us now see what functions are executed when the module is loaded at system boot.
ath5k_init():

This is the first function called when the driver module is loaded at system boot. It calls a set of functions that initialize the hardware parameters, the transmission/reception descriptors, the function pointers used for various purposes throughout the driver code, etc. The function takes a "struct ieee80211_hw" pointer as an argument. Let us look at some of these functions one by one to get an overview of how these parameters are initialized.

When data is to be sent out from the system, system buffers are used to hold it temporarily before transmission. The outgoing data can be management traffic used to learn the network, or real user data sent to access the network's services. To make use of these buffers, ath5k initializes the queues as the protocol requires. The function "ath5k_beaconq_setup()" is called to allocate and set up the hardware transmit queue for beacon frames; it sets the queue parameters AIFS = 2, CWMIN = 15, CWMAX = 1023 for the beacon queue (AR5K_TX_QUEUE_BEACON). While looking up the value of this constant, we also found the enum that lists all the queues, i.e. data, inactive, beacon, CAB, etc. We found that "struct ath5k_hw" contains an array of type "struct ath5k_txq_info", which defines for each hardware queue its parameters: queue type (data, beacon, CAB, inactive), subtype (background traffic, best effort, video, voice), aifs, cwmin, cwmax, etc.
ath5k_txq_setup():

The function "ath5k_txq_setup()" is called for each of the queue sub-types that classify normal data queues. There are four such queue subtypes (listed under "enum ath5k_tx_queue_subtype") for the data queue type "AR5K_TX_QUEUE_DATA". The function takes a pointer to "struct ath5k_hw", the queue type (the data queue here) and the queue subtype. It initializes the AIFS, CWMIN and CWMAX parameters with default values for each of the data hardware queues, so whatever the queue subtype, these parameters start out with the same value. After initializing these defaults for the queue, the function calls "ath5k_hw_setup_tx_queue()", which actually initializes a transmit queue.
ath5k_hw_setup_tx_queue():

The code below walks the data queues until it finds one whose tqi_type is still AR5K_TX_QUEUE_INACTIVE, i.e. the first data queue that has not been set up yet, returning -EINVAL if every data queue slot is already in use.

for (queue = AR5K_TX_QUEUE_ID_DATA_MIN;
     ah->ah_txq[queue].tqi_type != AR5K_TX_QUEUE_INACTIVE; queue++)
{
	if (queue > AR5K_TX_QUEUE_ID_DATA_MAX)
		return -EINVAL;
}

The function then sets up the queue variable values for the queue it was called for and calls the function "ath5k_hw_set_tx_queueprops()".
ath5k_hw_set_tx_queueprops():

This function sets up the queue properties such as CWMIN, CWMAX, AIFS, CBR period, flags, etc. The function "ath5k_cw_validate()" validates the CWMIN or CWMAX value that was set, adjusting it to the nearest value of the form 2^n - 1 (one less than a power of two).
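The idea behind that validation can be sketched as follows (a sketch of the rounding rule only, not the driver's exact code; the clamp at 1023 matches the CWMAX default mentioned earlier):

```c
#include <assert.h>

/* Sketch of ath5k_cw_validate()'s rule: a contention-window value
 * must be of the form 2^n - 1 (all-ones in binary), so a requested
 * value is rounded up to the next such value, capped at 1023. */
static unsigned int cw_validate(unsigned int cw_req)
{
    unsigned int cw = 1;
    if (cw_req > 1023)
        cw_req = 1023;
    while (cw < cw_req)
        cw = (cw << 1) | 1;   /* 1, 3, 7, 15, 31, ... */
    return cw;
}
```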
After the call to "ath5k_hw_set_tx_queueprops()", control returns to "ath5k_txq_setup()", which, after setting a few "struct ath5k_txq" parameters, returns control to "ath5k_init()". In this way "ath5k_txq_setup()" is called for all four subtypes of the data queue type.

After the hardware queues are set up, "ath5k_init()" initializes the interrupt tasklets ah->rxtq, ah->txtq, ah->beacontq and ah->ani_tasklet. While initializing these tasklets, the function "tasklet_init()" sets the function pointers to the addresses of the functions that are to be called when the corresponding interrupts occur.
ath5k_tasklet_tx():

While looking for the transmission path of the data, we examined the function "ath5k_tasklet_tx()". Although we later realized that this function handles the re-transmission of frames, we briefly describe its implementation. As discussed for "struct ath5k_hw", that structure represents the ath hardware and has 10 queues of type "struct ath5k_txq", each denoting a hardware queue. The function is called by an interrupt when frames are to be re-transmitted. It scans all 10 queues and processes a queue only if the condition "if (ah->txqs[i].setup && (ah->ah_txq_isr_txok_all & BIT(i)))" holds, i.e. the queue's parameters are set up and bit i of ah->ah_txq_isr_txok_all is set. As we understood it, only queue numbers 0 and 2 are processed, because ah->ah_txq_isr_txok_all is 5 (binary 00000101), and BIT(0) = 00000001 and BIT(2) = 00000100 both make the condition true. The function "ath5k_tx_processq()" is then called to process the frames to be re-transmitted for that queue.
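The BIT(i) test above is a plain bitmap membership check, which can be verified in isolation (toy code, not the driver's):

```c
#include <assert.h>

#define BIT(n) (1u << (n))

/* Given the observed txok bitmap (5 = 0b101), which queue
 * indices pass the BIT(i) test used in ath5k_tasklet_tx()? */
static int queue_flagged(unsigned int isr_txok_all, int i)
{
    return (isr_txok_all & BIT(i)) != 0;
}
```

With isr_txok_all = 5, only queues 0 and 2 pass, matching the observation in the text.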
ath5k_tx_processq():

This function checks all buffer entries of the queue it is called for; any buffer (frame) that was not processed last time is re-transmitted via the function "ath5k_tx_frame_completed()". When a frame is successfully transmitted, its bf->skb pointer is set to NULL; hence, if bf->skb is not NULL, the buffer was not processed last time. Our initial understanding was that "ath5k_tasklet_tx()" is called to transmit frames in the first place, but after adding some debug messages to the code we understood that it is called for the re-transmission of frames.
To see the debug messages, we first have to enable them and then add them to the code. Debug messages are enabled in ath5k by executing the commands below.

cat /sys/kernel/debug/ieee80211/phy0/ath5k/debug
echo all > /sys/kernel/debug/ieee80211/phy0/ath5k/debug
cat /sys/kernel/debug/ieee80211/phy0/ath5k/debug

The output will show the debug level set to 0xffffffff, which means all debug messages are enabled.
DEBUG LEVEL: 0xffffffff
reset     + 0x00000001 - reset and initialization
intr      + 0x00000002 - interrupt handling
mode      + 0x00000004 - mode init/setup
xmit      + 0x00000008 - basic xmit operation
beacon    + 0x00000010 - beacon handling
calib     + 0x00000020 - periodic calibration
txpower   + 0x00000040 - transmit power setting
led       + 0x00000080 - LED management
dumpbands + 0x00000400 - dump bands
dma       + 0x00000800 - dma start/stop
ani       + 0x00002000 - adaptive noise immunity
desc      + 0x00004000 - descriptor chains
all       + 0xffffffff - show all debug levels
Debug messages can be added to the code with "printk" calls like the following.

printk(KERN_ALERT "message to be printed");
printk(KERN_INFO "message to be printed, hardware queue no = %d", queue);
The function "ieee80211_register_hw()" registers the hardware device for use by the ath5k driver.

After we had gone through all this code in "ath5k_init()", we still had not found any place where the frames are actually transmitted. We put debug messages at various places in the code we had traversed so far and tried to send data using iperf and ping, but we did not see the debug messages we expected.

Unable to find a way forward, we searched for any ath5k documentation and came across a document describing the transmission path in ath5k, available at www.campsmur.cat/files/mac80211_intro.pdf. From it we learned that there is a path for the data frames that we had never traversed. Though we found this path very late, we were happy to find it, and we started going through the functions on that path. The first function we traversed on it was "ath5k_tx()" in mac80211-ops.c.
ath5k_tx() :

This function receives the skb to be transmitted and finds the appropriate hardware queue for it. The queue number is assigned to the skb before it is passed to the driver, so we could not determine on what basis the skb is allocated to a particular queue. From this function there is a call to the function "ath5k_tx_queue()".
ath5k_tx_queue() :

This function first checks whether the queue has room for the skb. If the queue is unavailable for some reason, it stops the queue; it may also drop frames. When the queue is available and the skb has been successfully assigned to a "struct ath5k_buf" pointer, the function "ath5k_txbuf_setup()" is called, which prepares the frame for transmission.
ath5k_txbuf_setup() :

In this function we first saw the frame Duration value being populated. The function "ieee80211_rts_duration()" adds up the CTS duration, data-frame duration and ACK duration values calculated by calls to "ieee80211_frame_duration()", and assigns the sum to the Duration field. This matched the protocol specification, so we knew we were on the right track. "ieee80211_rts_duration()" is only called when the condition if (rc_flags & IEEE80211_TX_RC_USE_RTS_CTS) is true.
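The composition of the RTS Duration value can be sketched as follows. This is an illustrative model, not the mac80211 code: the real computation lives in "ieee80211_rts_duration()" and "ieee80211_frame_duration()", and the constants in the test are made-up example timings.

```c
#include <assert.h>

/* Sketch: the RTS Duration reserves the medium for the rest of
 * the exchange after the RTS itself, i.e. CTS + data + ACK, with
 * a SIFS gap before each of those three frames. */
static unsigned rts_duration_us(unsigned sifs_us, unsigned cts_us,
                                unsigned data_us, unsigned ack_us)
{
    return 3 * sifs_us + cts_us + data_us + ack_us;
}
```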
ath5k_hw_setup_4word_tx_desc() :

We found from the online "doxygen" documentation for ath5k that the function pointer ah->ah_setup_tx_desc points to the function "ath5k_hw_setup_4word_tx_desc()", so we examined its code. The function initializes a 4-word tx control descriptor found on MAC chips. Here we found that the word "txctl2" is assigned the Duration value that the code computed with "ieee80211_rts_duration()". The assignment "txctl2 |= rtscts_duration & AR5K_4W_TX_DESC_CTL2_RTS_DURATION" bitwise-ANDs the duration with the field mask AR5K_4W_TX_DESC_CTL2_RTS_DURATION, truncating it to the bits of the RTS-duration field, before OR-ing it into txctl2. As this is the value the underlying Atheros MAC chip uses for the RTS frames sent out from the system, we decided to tweak the code here.
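It is worth being precise about what `x |= dur & MASK` does, since it is easy to misread as taking the smaller of two values. A small sketch (toy code; we assume the RTS-duration field occupies the low 15 bits, consistent with the maximum of 32767 stated in the text):

```c
#include <assert.h>

/* Sketch: `txctl2 |= dur & MASK` does not select the minimum of
 * dur and MASK; it masks dur down to the field's bits and then
 * ORs the result into the control word. */
#define RTS_DURATION_MASK 0x7fffu   /* assumed 15-bit field */

static unsigned set_rts_duration(unsigned txctl2, unsigned dur)
{
    return txctl2 | (dur & RTS_DURATION_MASK);
}
```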
Whenever an skb buffer is passed to the MAC chip for transmission on the channel, the skb is transmitted with a descriptor attached to it. The MAC chip uses this descriptor to set the appropriate parameters, e.g. the Duration field, on the frame to be transmitted.

So we changed the txctl2 statement to "txctl2 |= AR5K_4W_TX_DESC_CTL2_RTS_DURATION". The value of AR5K_4W_TX_DESC_CTL2_RTS_DURATION is 32767, which is the maximum possible value for the Duration field. After making the change we recompiled the code, inserted the module and started testing with the setup we created. We were able to see the changes when we analyzed the frames sent out of the jammer system. How we tested and the results we obtained are discussed in the testing and observations section.
As mentioned when we identified the vulnerabilities of the 802.11 protocol standard, setting the Duration field to its maximum value ensures that the other systems in the network do not transmit for that long. But after the NAV period is over they contend for the channel again and, due to the fairness implemented in the protocol, get the opportunity to transmit after some attempts. To keep other systems from transmitting even after the NAV period, we planned to change the CWMIN and CWMAX values to 0, forcing the jammer system to choose 0 as its random backoff number and transmit in the very first available slot.
So far we had encountered the CWMIN and CWMAX parameters in only one place in the code: the function "ath5k_txq_setup()". So we changed the Data + Best Effort queue parameters there, setting CWMIN and CWMAX to 0. We tested again but saw no significantly better results over our previous tests. Possibly setting the Duration field to the maximum value was already keeping other systems from transmitting so effectively that the additional change made little visible difference.
By this time we had got in contact with a couple of members of the ath5k developers' forum and discussed our observations with them. One suggested that we would not see a difference unless we sent data from the jammer system at higher rates. Another said that the CWMIN and CWMAX changes we made in "ath5k_txq_setup()" are periodically refreshed, and that we would have to make the changes in the function that is called when the values are refreshed. As we were not sure about either approach and wanted results as soon as possible, we decided to do both together.
5. Implementation details

5.1. Network setup

The network configuration for the project is shown in the diagram below. As seen in the figure, the setup involves a jammer system, a client system, an access point to which the jammer and client are connected, and a monitor system to observe the transmissions.
Access Point : Since our aim is to implement a jammer, we need a network to test it against. Hence a network was set up with an AP (labAP) and 3 systems.

Jammer System : Connects to the access point (labAP) and jams the network by denying service to the client.

Client System : Connects to the access point and tries to send data to other systems in the network. The client is needed to check the performance of the jammer implementation, i.e. to see how well the jammer denies service to it.

Monitor System : Needed to view the changes the jammer makes in the frames it sends.
The following is the general configuration of all the systems mentioned above.

- Network Interface Card : AR5413 802.11abg (Atheros Communications Inc)
- Linux Distribution : Fedora 17
- Driver : ath5k - a completely FOSS Linux driver for Atheros wireless cards; ath5k now calls the hardware functions directly.
- Linux Stack : mac80211 - the Linux stack for 802.11 hardware that implements only partial functionality in hard- or firmware.
- Kernel Version : 3.4.4
Access Point Configuration

The Atheros AR5413 wireless network interface card is configured as an access point. The default driver for Atheros cards is currently ath5k. With ath5k as the driver, the following steps set up an access point.
1. Task : Turn off the network manager
   Commands :
     To turn it off temporarily : service NetworkManager stop
     To check the status : service NetworkManager status
     To turn it off permanently : chkconfig NetworkManager off

2. Task : Start the network service
   Command : service network start

3. Task : Create an ath_pci.conf file in /etc/modprobe.d/ with the following contents

   alias wifi0 ath_pci
   alias ath0 ath_pci
   options ath_pci autocreate=ap

In the current setup, the wired interface is connected to the internet and the wireless one is not. To share the same network segment, we need to bridge or add forwarding rules between the interfaces. Here we have used bridging.

4. Task : Install bridge-utils
   Command : yum install bridge-utils

5. Task : Create a configuration file ifcfg-br0 in /etc/sysconfig/network-scripts/ for the bridge interface with the following contents

   DEVICE=br0
   ONBOOT=yes
   BOOTPROTO=static
   IPADDR=192.168.1.50
   NETMASK=255.255.0.0
   GATEWAY=192.168.3.254
   TYPE=Bridge

6. Task : Create a configuration file ifcfg-ath0 in /etc/sysconfig/network-scripts/ for the Atheros wireless interface with the following contents

   TYPE=Wireless
   DEVICE=ath0
   HWADDR=90:a4:de:f7:3d:40
   BOOTPROTO=static
   ONBOOT=yes
   BRIDGE=br0
   USERCTL=yes
   ESSID="labAP"
   MODE=Master
   RATE=54M
   IWCONFIG="txpower 63mw nickname gateway"

7. Task : Create a configuration file ifcfg-wifi0 in /etc/sysconfig/network-scripts/ for the wifi interface with the following contents

   TYPE=Wireless
   DEVICE=wifi0
   ONBOOT=no
   HWADDR=90:a4:de:f7:3d:40
   BOOTPROTO=dhcp
   USERCTL=no

8. Task : Edit the configuration file of the wired interface (/etc/sysconfig/network-scripts/ifcfg-p3p1).

   UUID="318b5ff3-9cce-49f9-9ca3-baf01f9558fb"
   NM_CONTROLLED="yes"
   BOOTPROTO=none
   DEVICE="p3p1"
   ONBOOT="yes"
   HWADDR=00:15:60:9E:B4:21
   TYPE=Ethernet
   BRIDGE=br0
   IPV4_FAILURE_FATAL=no
   IPV6INIT=no
   NAME="System p3p1"

9. Task : Copy ifcfg-ath0, ifcfg-wifi0, ifcfg-br0 to /etc/sysconfig/networking/devices/

10. Task : Restart the network service
    Command : service network restart

11. Task : Test whether the bridge br0 is working
    Command : brctl show

    Expected output:

    bridge name    bridge id           STP enabled    interfaces
    br0            8000.0015609eb421   no             ath0
                                                      p3p1
Restart the system to see the access point working. To change the channel to 802.11a (5 GHz), the command is "iwconfig ath0 channel 36".
Monitor System Configuration

To configure the Atheros network interface card in monitor mode, the following steps are used.

1. Put the wlan0 interface down : ifconfig wlan0 down
2. Put the card in monitor mode : iwconfig wlan0 mode monitor
3. Put the wlan0 interface up : ifconfig wlan0 up
Client System and Jammer System Setup

Insert the AR5413 NIC cards in these systems, as in the AP and monitor systems. Assign an IP address and gateway to the wireless card interface and connect both of them to the 'labAP' access point that has been set up.
5.2. Testing and observations

After completing the setup for testing our changes, we started sending data out on the network. As the results below show, with virtual carrier sensing enabled and the Duration field (in the jammer module) kept at its maximum value, we can jam the network even while sending data at a low rate from the jammer system, which makes it an energy-efficient jammer.
Ensuring the sharing of the channel : Before actually testing the setup, we should be sure that it behaves normally and that all systems in the network share the channel bandwidth equally. To verify this, we first sent "iperf" traffic from one client system to the access point we had set up. The data could also be sent to any other system in the network, which is the preferred way to test this kind of project, but we could not build a setup in which the systems shared the bandwidth equally that way, so we continued with the setup we had.

We first ran the command "iperf -s -u -i 1" on the access point system and then "iperf -c 192.168.1.50 -u -t 100 -i 1 -b 80M" from the client system. Here 192.168.1.50 is the access point's IP address, where the iperf UDP server runs. The client started sending data to the iperf server at 192.168.1.50, and the client's iperf output showed a bandwidth utilization of around 32 to 34 Mbps. That is, the maximum available bandwidth is around 32 Mbps. We then added one more system to the network by connecting another client to the access point and running "iperf -c 192.168.1.50 -u -t 100 -i 1 -b 80M" on it. Now the bandwidth available to both systems dropped and fluctuated between about 5 Mbps and 20 Mbps, and the bandwidth was not being shared fairly between the systems. We then adjusted the positions of the machines (i.e. the Atheros cards' antennae) so that the bandwidth actually was shared, with both systems using around 15-16 Mbps. Though the two systems did not share the bandwidth perfectly, the sharing was fair enough that we considered the setup usable for testing our changes without any bias in the network.

We then tested the same setup with various bandwidth demands from both systems and made sure the bandwidth was shared as expected.
Testing procedure : We made our changes in the kernel source we had downloaded and then compiled the entire kernel source tree once. After compilation, the .ko (kernel object) files for the ath5k driver were created. These .ko files can be inserted directly into the running kernel as loadable modules. As per the system setup discussed above, we started one system as a client; on the system we planned to use as the jammer, we inserted the modified ath5k.ko module into the running kernel using the commands below:
ifconfig wlan0 down
insmod drivers/net/wireless/ath/ath5k/ath5k.ko
ifconfig wlan0 up
After the module is inserted, that client is configured as a jammer system. Any data sent out from this system now passes through the driver module we just inserted, so the changes take effect. To test this, we started sending data from the jammer system to the access point using the same iperf command as in the channel-sharing test, and looked for the RTS/CTS frames flowing between the jammer system and the access point using Wireshark. As explained in the "Environment setup" section, we put the monitor system into monitor mode so as to see frame transmissions in the network, but we could not see any RTS/CTS frames. The issue was that no RTS frames were being sent from the jammer system, because sending RTS frames (virtual carrier sensing) is not enabled by default in current implementations. To enable RTS frames, we ran the commands below on the jammer system:
ifconfig wlan0 down
iwconfig wlan0 rts 0
ifconfig wlan0 up
Using the iwconfig command above, we set the RTS threshold to 0, so that any frame larger than this threshold requires virtual sensing with an RTS frame. By keeping the RTS threshold at zero on the jammer system, we are effectively asking it to send an RTS for every frame it transmits.
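The threshold rule can be sketched as a simple predicate (our own illustration, not actual driver code; 2347 is the conventional "disabled" default in many drivers):

```python
def needs_rts(frame_len: int, rts_threshold: int) -> bool:
    """True when an RTS/CTS exchange precedes the frame, i.e. when the
    frame is larger than the configured RTS threshold."""
    return frame_len > rts_threshold

# With the common default threshold, ordinary data frames skip RTS/CTS;
# with the threshold forced to 0 (iwconfig wlan0 rts 0), every non-empty
# frame triggers virtual carrier sensing.
assert not needs_rts(1500, 2347)
assert needs_rts(1500, 0)
assert needs_rts(1, 0)
```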
After enabling virtual carrier sensing on the jammer system, we could see our code changes when analyzing the duration field of the RTS frames it sent out. The duration field (highlighted in the Wireshark screenshot below) was set to 32767, the maximum possible value (in microseconds) the duration field can carry. In the reply CTS frame sent by the access point, the value was the RTS duration minus the time it took for the RTS to reach the access point and be processed.
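This matches the standard NAV bookkeeping: per 802.11, the CTS carries the RTS duration minus one SIFS and the CTS transmission time. A small sketch with assumed timing values (the SIFS and CTS airtime below are illustrative, not measured from our setup):

```python
# Dur(CTS) = Dur(RTS) - SIFS - CTS airtime (802.11 NAV rule).
# SIFS and CTS airtime are assumed example values; they depend on the
# PHY and the basic rate, and are not measurements from our setup.

RTS_DURATION_US = 32767  # value our jammer forces into every RTS frame
SIFS_US = 16             # SIFS for 802.11a (5 GHz)
CTS_TIME_US = 44         # example airtime of a CTS at a low basic rate

cts_duration = RTS_DURATION_US - SIFS_US - CTS_TIME_US  # = 32707
```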
Now we were ready to test the effect of our changes to the 802.11 implementation on network performance.
Observation at 1.04 Mbps data rate :
For testing, we first sent iperf data from both client systems to the access point at the default data rate, i.e. 1.04 Mbps. The output was as expected: each client system was able to send data at 1.04 Mbps. Then we inserted the jammer module into one of the systems and sent iperf data from it with the same command as on the client system. We collected the iperf output for about 500 seconds from both systems, i.e. the client system and the jammer system, and plotted the graph with "Time in seconds" on the X-axis versus "Average bandwidth availability at the client system in Mbps".
Figure : Average Bandwidth versus Time in the client system (1 Mbps)
As per the graph, when the client system was sending data at around 1.04 Mbps in the presence of the jammer system, which was also sending at the same rate, the average bandwidth utilization of the client system went down by around 35-40% compared with when the client system was sending at 1.04 Mbps alone, without the jammer system interfering with the transmission.
Observation at 10 Mbps data rate :
We sent iperf data from both client systems to the access point at a 10 Mbps data rate. The output was as expected: the client system was able to send data at 10 Mbps. Then we inserted the jammer module into one of the systems and sent iperf data from it with the same command as on the client system. We collected the iperf output for about 500 seconds from both systems, i.e. the client system and the jammer system, and plotted the graph with "Time in seconds" on the X-axis versus "Average bandwidth availability at the client system in Mbps".
Figure : Average Bandwidth versus Time in the client system (10 Mbps)
As per the graph, when the client system was sending data at around 10 Mbps in the presence of the jammer system, which was also sending at the same rate, the average bandwidth utilization of the client system went down by around 60% compared with when the client system was sending at 10 Mbps alone, without the jammer system interfering with the transmission.
Observation at 20 Mbps data rate :
We sent iperf data from both client systems to the access point at a 20 Mbps data rate. The output was as expected: the client system was able to send data at around 16-18 Mbps. Then we inserted the jammer module into one of the systems and sent iperf data from it with the same command as on the client system. We collected the iperf output for about 500 seconds from both systems, i.e. the client system and the jammer system, and plotted the graph with "Time in seconds" on the X-axis versus "Average bandwidth availability at the client system in Mbps".
Figure : Average Bandwidth versus Time in the client system (20 Mbps)
As per the graph, when the client system was sending data at around 16-18 Mbps in the presence of the jammer system, which was also sending at the same rate, the average bandwidth utilization of the client system went down by around 80% compared with when the client system was sending at 16-18 Mbps alone, without the jammer system interfering with the transmission.
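As a quick arithmetic check on the degradations reported above (our own helper; the percentages are the ones observed in the experiments):

```python
def residual_mbps(baseline_mbps: float, drop_percent: float) -> float:
    """Client throughput left after the reported percentage drop."""
    return baseline_mbps * (1 - drop_percent / 100)

# Reported drops: ~35-40% at 1.04 Mbps, ~60% at 10 Mbps, ~80% at 16-18 Mbps.
low_rate = residual_mbps(1.04, 40)   # ~0.62 Mbps left
mid_rate = residual_mbps(10, 60)     # ~4 Mbps left
high_rate = residual_mbps(17, 80)    # ~3.4 Mbps left
```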
Observation at 40 Mbps data rate :
We first sent iperf data from the client system alone to the access point at a 40 Mbps data rate. The output was as expected: the client system was able to send data at 16-18 Mbps. Then we sent iperf data from the jammer system at the same rate as the client system. We collected the iperf output for about 500 seconds from both systems, i.e. the client system and the jammer system, and plotted the graph with "Time in seconds" on the X-axis versus "Average bandwidth availability at the client system in Mbps".
Figure : Average Bandwidth versus Time in the client system (40 Mbps)
As per the graph, when the client system was sending data at around 16-18 Mbps in the presence of the jammer system, which was also sending at the same rate, the performance of the client system went down by around 80% compared with when the client system was sending at 16-18 Mbps alone, without the jammer system interfering with the transmission.
Packet loss in the client system with the jammer system running versus not running :
At the end of a successful run, the iperf command reports the number of packets lost out of the total number sent from the system, along with the percentage of packets lost for the specified duration and data rate. We plotted the percentage of packets lost with the jammer system running against the percentage lost with it not running.
Figure : Packet loss percentage versus Bandwidth requirement
The total available bandwidth is around 26-28 Mbps. As per the graph, when the bandwidth requirement of each system, i.e. the client system and the jammer system, goes beyond the available bandwidth, around 95-100% packet loss can be seen in the client system, as against close to 0% packet loss when the jammer module was not inserted in one of the client systems.
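The near-total loss is consistent with simple overload arithmetic: UDP iperf keeps offering packets regardless of what the medium can deliver, so the implied loss follows directly from the offered and delivered rates (a sketch with hypothetical delivered rates, not measurements):

```python
def implied_loss_percent(offered_mbps: float, delivered_mbps: float) -> float:
    """Packet-loss percentage implied by offered vs. delivered throughput."""
    if offered_mbps <= 0:
        return 0.0
    return max(0.0, 100 * (1 - delivered_mbps / offered_mbps))

# If a client offers 40 Mbps while the jammer's NAV reservations leave it
# only ~2 Mbps of usable airtime, the implied loss is already ~95%:
assert abs(implied_loss_percent(40, 2) - 95.0) < 1e-9
assert implied_loss_percent(40, 0) == 100.0
```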
Packet loss created by the jammer system while both the client and jammer systems send data at the same data rate
The two major tasks in the implementation of the jammer are enabling virtual sensing and setting the duration field of RTS/CTS to its maximum (the jammer module). To measure the contribution of each to the overall effect, we compared the packet loss caused by virtual sensing alone with that caused by virtual sensing plus the jammer module. This experiment initially had both systems sending iperf data at the normal data rate of 1.04 Mbps, with only virtual sensing enabled on the jammer system and the jammer module not inserted. As seen in the table and graph, the effect of virtual sensing alone was nil, i.e. 0% packet loss in this scenario. Even when both systems increased their data rate to 10 Mbps, there was no effect on the network. Only when the data rate was increased to 20 Mbps and above did virtual sensing (with the normal RTS/CTS duration) have an effect on the client system.
Next, we inserted the jammer module into the jammer system and conducted a second round of experiments. When the jammer system (with virtual sensing enabled and the jammer module inserted) and the client system were made to send iperf data at the default data rate of 1.04 Mbps, a significant effect on the performance of the client system was observed. As seen in the table, the jammer was able to cause a 62% loss to the client system.
Table-1 : Packet loss evaluation for virtual sensing and Max. duration
Bandwidth requirement from both systems | Packet loss % with only virtual sensing and normal duration | Packet loss % with virtual sensing and max duration
1.04 Mbps | 0% | 62%
10 Mbps | 0.026% | 83%
20 Mbps | 85% | 90%
30 Mbps | 91% | 95%
40 Mbps | 97% | 99%
Figure : Packet loss percentage for virtual sensing and Max. duration
Thus, as seen in the graph, with only virtual sensing enabled on the jammer system we can jam the network significantly when both systems are sending at high data rates, but little harm is caused at lower data rates; with the duration field set to its maximum value, however, the jammer can jam the network even when both systems are sending at low data rates.
Packet loss created by the jammer system while sending data at the default data rate (1.04 Mbps)
As our goal is to build an energy-efficient jammer, we conducted experiments to see at what data rate the jammer could affect the network. We observed the packet losses created by the jammer system sending at the default data rate of 1.04 Mbps throughout, with the client system sending at 1.04 Mbps, 10 Mbps, 20 Mbps, 30 Mbps and 40 Mbps. As seen in the table, the jammer was able to create a significant packet loss of 92% for a client sending at 30 Mbps by just sending at 1.04 Mbps.
Table-2 : Packet loss percentage for virtual sensing with Max. duration
Bandwidth requirement by client system | Packet loss % with virtual sensing and max duration
1 Mbps | 67%
10 Mbps | 74%
20 Mbps | 88%
30 Mbps | 92%
40 Mbps | 99%
Figure : Packet loss percentage for
virtual sensing with Max. duration
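Table-2 can also be read as an efficiency figure: dividing the throughput denied to the client by the jammer's own transmit rate gives the "damage per Mbps spent" (our own derived metric, computed from the table's numbers):

```python
def damage_ratio(client_mbps: float, loss_percent: float, jammer_mbps: float) -> float:
    """Throughput denied to the client per Mbps the jammer transmits."""
    return client_mbps * (loss_percent / 100) / jammer_mbps

# From Table-2: a 1.04 Mbps jammer causing 92% loss on a 30 Mbps client
# denies roughly 26-27 Mbps of traffic per 1 Mbps it spends.
ratio = damage_ratio(30, 92, 1.04)
assert 26 < ratio < 27
```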
Conclusion :
By exploiting the vulnerabilities in the 802.11 MAC protocol, we can disrupt service to the network using relatively few packets and low power consumption.