Load Balancer – Connection Handling & SSL Termination

Hi guys,

Today I will be talking about load balancing and its features, focusing on how load balancers handle their connections.

Before going straight to the point, I will explain the basic concepts of load balancing, and then we can go further into the connection handling analysis.

Load Balancer

A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase the capacity (concurrent users) and reliability of applications. They improve the overall performance of applications by decreasing the burden on servers associated with managing and maintaining application and network sessions, as well as by performing application-specific tasks.

They are grouped into two categories: Layer 4 and Layer 7. Layer 4 load balancers act upon data found in network and transport layer protocols (IP, TCP, UDP), whereas Layer 7 load balancers distribute requests based upon data found in application layer protocols such as HTTP.

Most load balancers use one of the following algorithms to distribute traffic:

  • Round Robin
  • Weighted Round Robin
  • Least Connections
  • Least Response Time

Layer 7 load balancers can further distribute requests based on application specific data such as HTTP headers, cookies or data within the application message itself, such as the value of a specific parameter.
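On the Cisco ACE, which is the load balancer used in the examples later in this post, the algorithm is selected with the predictor command under the server farm (round robin being the default). The lines below are only an illustrative sketch with hypothetical names, and the exact predictor keywords can vary by platform and software version:

serverfarm host web
  predictor leastconns
  rserver lnx1 80
    inservice
  rserver lnx2 80
    inservice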

Connection Handling – Overview

I will try to describe how load balancers handle connections at Layer 4 and Layer 7. For Layer 4 connections, the LB receives a TCP packet from a client and load balances the connection to a server on the first packet. The SYN-ACK from the server matches an existing flow and the rest of the connection is handled in the fast path, which means the packets are accelerated by the hardware network processors. The ACE (Cisco's Application Control Engine, the load balancer used in the examples below) then completes the TCP handshake. This process applies to the following functions:

  • Basic Load Balancing
  • Source IP sticky
  • TCP/IP normalization

The figure below shows the Layer 4 Flow setup:

L4 Flow

 

For application layer flows (L7 flows), the ACE acts as a proxy: it intercepts the client request that matches an L7 rule and terminates the TCP connection. The ACE sends a SYN-ACK back to the client in response to the client's TCP SYN, and once the TCP session is established the L7 request is made (an HTTP GET or POST, for example). Look at the figure below:

L7 Flow.PNG

If you look at the figure above, you will see that the client can only communicate directly with the ACE, and the ACE is responsible for communicating with the servers, load balancing the traffic according to what has been configured.

If you compare it to real life: when you are renting an apartment, you are the end user (client), the ACE is the agency and the server is the landlord. You only talk to the agency, and the agency deals with the landlord. =)

SSL Termination

SSL Termination is also known as SSL Offloading: the client establishes a connection using HTTPS (SSL) to the VIP configured on the ACE. HTTPS causes the client's session to be encrypted between the browser and the ACE. Once the session reaches the ACE, the ACE decrypts it and forwards it to a real server in clear text (HTTP). The following image shows exactly how it works.

SSL Termination

Well, when do we need to do that? Most of the time this kind of configuration is used when you need to allow external users to access systems inside your network: you want the traffic crossing the outside network to be encrypted for security reasons, while the real servers are relieved of the decryption work.

Let's get to the cool part of the blog: the configuration steps.

Configuring SSL Offloading on Cisco ACE.

In order to be able to terminate SSL sessions, we need to configure both an SSL certificate and a corresponding SSL key. Once imported, these SSL files are associated with an SSL proxy service that is applied to the VIP to enable SSL termination.

The SSL termination configuration begins like the basic L4 load-balancing configuration, by defining a VIP and the corresponding server farm and rservers.

The configuration will be based on the diagram above.

ACE(config)# class-map match-all 102-vip
ACE(config-cmap)# match virtual-address 172.16.1.102 tcp eq 443

When adding the rservers to the server farm, consider the destination of the decrypted traffic; the ACE uses the rserver port defined in the server farm to properly translate the destination of the decrypted connection.

serverfarm host web
  rserver lnx1 80
    inservice
  rserver lnx2 80
    inservice
  rserver lnx3 80
    inservice
  rserver lnx4 80
    inservice
  rserver lnx5 80
    inservice

The VIP and server farm in this example allow the ACE to accept connections to the VIP on port 443 and forward them to a real server on port 80. Note that if the port is not provided, the VIP port will be preserved.
Most SSL termination configurations begin by importing the certificate and key into the ACE; the easiest way to do that is to place the two files on a secure FTP server and import them to the ACE.

ACE# crypto import ftp 172.25.91.127 cisco cert.pem cert.pem
ACE# crypto import ftp 172.25.91.127 cisco intermediate.pem intermediate.pem
ACE# crypto import ftp 172.25.91.127 cisco key.pem key.pem
ACE# crypto verify key.pem cert.pem

Once the SSL files have been verified, ACE can be configured with an SSL proxy service, which is a logical grouping of the certificates, key and SSL parameters used to define the characteristics of SSL termination.

ACE(config)# ssl-proxy service proxy-1
ACE(config-ssl-proxy)# cert cert.pem
ACE(config-ssl-proxy)# key key.pem
ACE(config)# crypto chaingroup intermed-1
ACE(config-chaingroup)# cert intermediate.pem
ACE(config)# ssl-proxy service proxy-1 
ACE(config-ssl-proxy)# chaingroup intermed-1

Within the ACE, all SSL termination is fully integrated. Therefore, there is no need to configure internal VLANs or IPs to handle decrypted traffic. All that is required to enable SSL termination is to attach the SSL proxy service configured above to a VIP in a service policy.

ACE(config)# policy-map multi-match client-vips 
ACE(config-pmap)# class 102-vip 
ACE(config-pmap-c)# ssl-proxy server proxy-1 

At this point the ACE should be configured with a working SSL termination configuration. Make a test connection to the VIP address using HTTPS in a web browser, and you should see a response from one of the real servers.

Checking – Show Commands
Below is a list of show commands that can be used to double-check this configuration:

ACE# show crypto files
ACE# show crypto certificate all
ACE# show crypto key all
ACE# show crypto session
ACE# show crypto hardware
ACE# show service-policy  detail

 

Configuring SSL Offloading on F5 BIG-IP
I will not explain all the theory again; I will just walk you through the steps so you can get the big picture of how to set up SSL Termination on the BIG-IP.

SSL Certificate and Key

Log in to the F5, then go to Local Traffic -> SSL Certificate List -> Import, which will show the following UI. Here, do the following:

  • Import Type: Select Certificate
  • Certificate Name: Select “Create New” and enter the certificate name.
  • Certificate Source: Select “Paste Text” and paste the content of your SSL certificate here.
  • Click Import.

Certificate - F5

Now you have to import the key; follow the instructions below:

Go to Local Traffic -> SSL Certificate List -> select the certificate you just created -> click on the Key tab and follow the steps below:

  • Import Type – Key
  • KeyName – It will display your certificate name
  • Key Source – Paste your key here.

Now, if you go back to the “Certificate List”, you’ll see the certificate you created, and under the “Contents” column it will say “Certificate and Key”, which indicates that you’ve uploaded both the certificate and the key.

SSL Profile

Go to “Local Traffic” -> Profiles -> SSL -> Client, which will display all the current SSL profiles.

Click on the “Create” button in the top right corner, which will display the following:

  • Name: Enter the SSL profile name.
  • Parent profile: Leave it default at clientssl.
  • If you have a passphrase for your key, select “Advanced” so you can enter it here; if not, the “Basic” view is enough.
  • Certificate: Select the certificate you created above.
  • Key: Select the key you created above.
  • Passphrase: The passphrase for the SSL key.
  • Leave all other fields default.

F5 Pool

After you create the SSL certificate/key and the SSL profile, it is time to create a pool and assign members to it.

Go to “Local Traffic” -> Pools -> Pool List, as in the figure below:

f5-pool-list

From here, click on the “Create” button in the top right corner, which will display the following:

  • Configuration: Leave it as “Basic”
  • Name: Enter the pool name.
  • Description: Enter some meaningful info here
  • Health Monitors: Select “tcp” from the “Available” list.
  • Load Balancing Method: Select “round robin”
  • New Members: Click on the “New Node” radio button, and enter the IP address of node 1.
  • Port: Select HTTP here, as the nodes themselves will be running only HTTP. (If you were doing HTTPS passthrough, you would select HTTPS here, but we are not doing that in our example.)
  • Add: Click on “Add” to add the node. Repeat the same process to add more nodes.
  • Once you’ve added all the nodes, click on “Finished”, which will create the new pool.

Create HTTPS Virtual Server

Now it is time to create the HTTPS virtual server that will use the pool we just created:

Go to “Local Traffic” -> Virtual Servers -> Virtual Server List. Click on the “Create” button in the top right corner, which will display the following:

  • Name: Enter the name of the virtual server.
  • Description:
  • Type: Select standard
  • Destination: Select “Host”, and enter the IP address of the virtual server (for example, 192.168.102.2). So, if someone comes to 192.168.102.2 over SSL, the request will be forwarded to one of the nodes in the pool.
  • Service Port: Select HTTPS, as incoming request to the virtual server itself will be in SSL.
  • SSL Profile (Client): select the profile from the list.
  • Leave everything else default on this screen and create the virtual server.

Once you have done all the setup mentioned above, you can go to the browser and try to connect to the page; the BIG-IP will handle the SSL encryption and decryption and forward the traffic to one of the HTTP servers.

I hope this post helped you to understand how load balancers encrypt and decrypt packets when doing SSL Termination.

Thanks for reading !!!

Renato Gentil

 

OSPF – LSA Types

Today I am going to talk about OSPF and its LSA types. I hope you enjoy the post.

First of all I will give you a brief explanation of how OSPF works so we can go ahead and look at its LSA types.

OSPF

OSPF stands for Open Shortest Path First. It is a link-state routing protocol, which means the routers exchange topology information with their nearest neighbors; this topology information is flooded throughout the AS, so that every router within the AS has a complete picture of the topology of the AS.

Once the routers have formed a neighborship with each other, on a multi-access segment they will elect a DR and a BDR. DR stands for Designated Router and BDR for Backup Designated Router. The DR serves as a common point for all adjacencies on the multi-access segment, whereas the BDR maintains adjacencies in case the DR fails.
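If you want to influence which router wins the DR election in your lab, a minimal sketch is to raise the OSPF priority on the interface facing the segment (the highest priority wins, and a priority of 0 removes the router from the election entirely); the interface name here is just an example:

interface GigabitEthernet0/0
 ip ospf priority 255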

Each OSPF router distributes information about its local state (usable interfaces, reachable neighbors and the cost to use each interface) to other routers using Link State Advertisement (LSA) messages, which are collected into the link-state database. From this database, each router calculates its own routing table using the Shortest Path First (SPF, or Dijkstra) algorithm. This routing table contains all the destinations the routing protocol knows about, each associated with a next-hop IP address and an outgoing interface.

OSPFv2 is used with IPv4. OSPFv3 has been developed for compatibility with the IPv6 128-bit address space. There are other changes and differences between OSPFv2 and OSPFv3, as detailed below:

  • Protocol processing per-link not per-subnet
  • Addition of flooding scope, which may be link-local, area or AS-wide
  • Removal of opaque LSAs
  • Support for multiple instances of OSPF per link
  • Various packet and LSA format changes (including the removal of addressing semantics).

Well, let's get back to our objective and talk about LSAs.

OSPF LSA Types

OSPF uses an LSDB (Link State Database) and fills it with LSAs; see the LSA list below:

  • LSA Type 1: Router LSA
  • LSA Type 2: Network LSA
  • LSA Type 3: Summary LSA
  • LSA Type 4: Summary ASBR LSA
  • LSA Type 5: Autonomous System External LSA
  • LSA Type 7: Not-so-Stubby Area LSA
  • LSA Type 8: External Attribute LSA for BGP

Detailed Explanation:

  • LSA Type 1 – Router LSA – Basically, it contains information about the directly connected links in the area to which the router belongs. These LSAs are flooded to all routers in that area. If the router is an ABR (Area Border Router), it generates a type 1 LSA for each area to which it is connected and sends those LSAs to its neighbors in the corresponding areas.
  • LSA Type 2 – Network Link – This LSA is generated by the DR and lists all routers on the multi-access segment. Type 2 LSAs are flooded within the area but do not leave the area in which they are generated.
  • LSA Type 3 – Network Summary – This type of LSA is generated by the ABR; it represents networks from one area and is advertised into the rest of the OSPF domain. Type 1 LSAs do not cross area boundaries, so the ABR uses type 3 LSAs to inform other areas about the networks learned in its own area. A type 3 LSA uses the network address as the link ID and the router ID of the advertising router as the ADV router in the OSPF database.
  • LSA Type 4 – ASBR Summary – Injected by an ABR into the backbone to advertise the presence of an ASBR within an area. This LSA tells the rest of the OSPF domain how to get to the ASBR, so that other routers in the OSPF domain can route to the external prefixes redistributed into OSPF by the ASBR. If we have no way to reach the ASBR that redistributed the route, we cannot reach the external routes.
  • LSA Type 5 – External Link – Generated by an ASBR and flooded throughout the AS to advertise a route external to OSPF. OSPF creates a type 5 LSA for a subnet that is injected into OSPF from an external source, which is by definition a router that connects to a non-OSPF routing domain and uses the redistribute command. Type 5 LSAs come in two types:
    • External Type 1 – Cost to the advertising ASBR plus the external cost of the route.
    • External Type 2 (default) – Cost of the route as seen by the ASBR.
  • LSA Type 7 – NSSA External Link – Generated by an ASBR in a not-so-stubby area (NSSA) and converted into a type 5 LSA by the ABR to be flooded to the rest of the OSPF domain. An NSSA makes use of type 7 LSAs, which are essentially type 5 LSAs in disguise. An NSSA can function as either a stubby or a totally stubby area. To designate a normal NSSA, all routers in the area must be configured with the “area X nssa” command. Type 3 LSAs will pass into and out of the area; unlike in a normal stubby area, the ABR will not inject a default route into an NSSA unless explicitly configured to do so. As traffic cannot be routed to external destinations without a default route, you will probably want to include one by appending the “default-information-originate” keyword. To expand the NSSA into a totally stubby area, eliminating type 3 LSAs, configure the “no-summary” parameter with the “area X nssa no-summary” command (see the short configuration sketch right after this list).
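To make the commands mentioned in the Type 7 bullet more concrete, here is a minimal sketch of how they could be applied (the process ID and area number are just examples):

router ospf 1
 area 1 nssa
!
! On the ABR only, if you also want a default route injected into the NSSA:
router ospf 1
 area 1 nssa default-information-originate
!
! On the ABR only, to turn the NSSA into a totally stubby NSSA (blocking type 3 LSAs):
router ospf 1
 area 1 nssa no-summary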

Remember, the objective of this post is not to show the configuration and the database on the router itself; the idea is to share the theory so you can set up your own lab and see it working.

To help you understand when each LSA is generated, I have made the following network diagrams for each case. I hope you enjoy them:

Standard Areas

Stub Areas

Totally Stubby Area

Not-So-Stubby Area

Guys, as I said, the objective of this post is only to explain each one of the LSA types; that's the reason I have not added any router output here. I hope you have enjoyed this post, and send your questions if you have any!

I really appreciate your visit to my blog!

Thanks!!!

Renato Gentil

Python – Script for Network Engineers

Today I’ll be talking about Python and how to develop a small script to help you save time during your working hours.
Why did I come up with this post? I’ve got a small project where I need to set up more than 300 routers and ship them to the customer sites; the tricky part is that the configuration will be the same on all of them except for the IP address, the hostname and the IP addresses in the access-list.

How long would it take you to build the configs for all the sites? What if you had to copy and paste every single hostname and swap the IP addresses and access-lists between notepads, making sure you did it without typos?
I developed a small Python script which will do it for you in about 40 seconds. Isn’t that great? You just run the script, blink your eyes and you’re done! The configuration will be there, ready to be placed into the router.

Let’s have a look at some basic Python concepts and then jump into the configuration section.

What is Python ?

Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Its high-level built in data structures, combined with dynamic typing and dynamic binding, make it very attractive for Rapid Application Development as well as for use as scripting or glue language to connect existing components together.

If you would like to learn Python, there are loads of sites on the internet. I really like this one, which is Python for non-programmers; it is a step-by-step guide.

https://wiki.python.org/moin/BeginnersGuide/NonProgrammers

Introduction – Creating the files

Let’s assume we have 500 routers to be configured; it could take a long time to prepare all the configurations, right? So let’s jump in and have a look at the script and how it can save you time.

The first step is to create some files to provide the information that Python will use to build the configurations; you have to create the files and save them on your PC.

You will then need a base config; in my case I created a file called “masterconfig.txt”. In this file I have all the parameters that are common to all routers, such as usernames, VLAN assignments, etc.
The important thing is that in the masterconfig file I have created 3 VARIABLES (placeholders) that Python will replace, according to the hostname and IP addressing of each router, to build the new configuration.
So, in my masterconfig I have the variables defined as below:

– hostname BRANCHOFFICE – BRANCHOFFICE is my variable.
– ip address IPADDR 255.255.255.0 – IPADDR is my variable.
– SUBNET is also a variable, used to match the subnet of each site in the access-list.

So Python will look for the BRANCHOFFICE, IPADDR and SUBNET variables and, based on my script, will make the necessary substitutions for each site.
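To make this concrete, a masterconfig.txt excerpt could look something like the sketch below. The interface name and the access-list number here are hypothetical examples; the only thing the script cares about is the BRANCHOFFICE, IPADDR and SUBNET placeholders:

hostname BRANCHOFFICE
!
interface GigabitEthernet0/0
 ip address IPADDR 255.255.255.0
!
access-list 10 permit SUBNET 0.0.0.255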

The second step is to create a file that Python will use as the information source to apply to the masterconfig. Example: I have created a file called “datafile.txt” which contains the hostname, IP address and subnet information to be used by Python to create the new files. My branch offices will have the following names, interface IP addresses and subnets:

Stamullen,192.168.100.1,192.168.100.0
Ballygawley,192.168.200.1,192.168.200.0
Wells,192.168.150.1,192.168.150.0
Terryglass,192.168.250.1,192.168.250.0

So, if you look at my datafile.txt you will find the hostname and IP address, as well as the subnet, for each of my remote sites. Python will use this information to build my config files.

Writing the script

Now that we have both files ready, we can start writing the script. The first part of my script looks like this:

# Change the DATAFILE and DATAPATH variables to reflect the new batch folder

import os      # needed later to delete the temporary .cfg files
import shutil  # needed later to copy the master config file

MASTERPATH = 'c:\\python27\\scripts\\'
DATAFILE = 'batch1\\datafile\\datafile.txt'
DATAPATH = 'batch1\\'
SRC = 'masterconfig.txt'
DST = '.cfg'

Let’s break this down:
MASTERPATH – Where the masterconfig file is saved.
DATAFILE – Where the datafile is saved (relative to MASTERPATH).
DATAPATH – Where the new configuration files generated by Python will be saved (relative to MASTERPATH).
SRC – The file name Python will read to build the configurations.
DST – The extension of the temporary .cfg copy that Python creates for each site before producing the final .txt file.

Now we are going to create a function to generate our config files.

def generate_configs():
    # Open the data file, parse each line and split into fields, BRANCHOFFICE, IPADDR and SUBNET
    # pass those variables to the parse_temp_cfg function
    input = open(MASTERPATH + DATAFILE,'r')
    for line in input:
        BRANCHOFFICE,IPADDR,SUBNET = line.split(',')
        # Pass the variables to parse function
        parse_temp_cfg(BRANCHOFFICE,IPADDR,SUBNET)

    input.close()
    return

So, the function above opens the datafile, analyses each line and breaks it into 3 values by splitting at the commas; it then stores each value in a variable (BRANCHOFFICE, IPADDR and SUBNET) and passes them to the parse function.

Now we have to create a new function that takes the temporary .cfg copy, replaces the variables, and saves the result as the .txt file that we are going to use as the router config.
Once the .txt file is created and saved, the function also removes the temporary .cfg file to tidy things up.

#Use the variables passed in to open the temp .cfg file and to create the output file with extension .txt
#Read the whole file into a variable named file, do a string replace on the entire file for variables in the file called BRANCHOFFICE, IPADDR and SUBNET
#save the file into the new output file with extension .txt, tidy up and get rid of the temp .cfg files

def parse_temp_cfg(BRANCHOFFICE, IPADDR, SUBNET):
    input = open(MASTERPATH + DATAPATH + BRANCHOFFICE + DST,'r')
    output = open(MASTERPATH + DATAPATH + BRANCHOFFICE + '.txt','w')
    
    file = input.read()
    file = file.replace('BRANCHOFFICE', BRANCHOFFICE.strip())
    file = file.replace('IPADDR', IPADDR.strip())
    file = file.replace('SUBNET', SUBNET.strip())
    output.write(file)

    input.close()
    # Tidy up, get rid of the temporary .cfg files
    os.remove(MASTERPATH + DATAPATH + BRANCHOFFICE + DST)
    output.close()
    return

#Open the data file with the list of branch offices/gateway/subnet information
#parse each line and split the data into fields BRANCHOFFICE, IPADDR and SUBNET, use the BRANCHOFFICE field to name the new file
#Copy the master config file, and save it as a new file with the name of the branch office .cfg

def copy_the_master_config_file():
    file = open(MASTERPATH + DATAFILE,'r')
    for line in file:
        BRANCHOFFICE,IPADDR,SUBNET = line.split(',')
        shutil.copyfile(MASTERPATH + SRC,MASTERPATH + DATAPATH + BRANCHOFFICE + DST)

    file.close()
    return

copy_the_master_config_file()

generate_configs()

So we have read the 3 values from the datafile, copied the masterconfig into a temporary .cfg file for each site, replaced the variable names with the datafile information, and saved the result as a .txt file. After that we deleted the temp files, and the new configs are sitting in the folder defined in DATAPATH. We are done!!!

I like to use PyScripter to write this kind of small script because it is easy, and when the script is ready you can just click the “play” button and the script will run.

Once the script has run correctly, you can go to your DATAPATH folder and you will see all the configuration files as .txt files. Open them, make sure the variable names have been replaced by the info provided in the datafile, and paste them into the router configuration.

It might seem crazy the first time you do it, but once you get used to it you will see how helpful Python can be.

Hope you have enjoyed this small post =)

Thanks

Renato Gentil =)

Clientless SSL VPN – WebVPN

Today I am going to show you how to configure a simple WebVPN using ASA 8.4 CLI.
I won’t show anything on ASDM because you can just figure out how to set up a VPN using the VPN Wizard tool.

Introduction

WebVPN is also known as Clientless SSL VPN, and to understand when to use it I’ll show an example: let’s suppose a user doesn’t have a laptop with a VPN client installed but still needs to access the corporate network. He is visiting a company, and that company doesn’t allow IPsec and IKE packets out to the internet. The user could instead use SSL VPN from a browser, which uses the SSL protocol on port 443, a port that is normally allowed out to the internet. From the web browser he will be able to access his portal and his tools, such as SharePoint, applications such as Citrix, SSH or Telnet sessions, or a company intranet. For this kind of access you can set up a WebVPN, and the user will be able to reach the company network from wherever he is.

VPN Basic Benefits

– Authentication: This can be achieved through the use of usernames and passwords, pre-shared keys, tokens, public key infrastructure (PKI) and digital certificates. The purpose of authentication is to make sure you are who you say you are.

– Confidentiality: Provided by encrypting user data before transmission through the established VPN Tunnel.

– Integrity: Provides a means to ensure data has not been tampered with along the path between the source and destination.

– Antireplay: The sending device adds a sequence number to each packet sent through the VPN, which allows the receiving end (the ASA) to determine whether a packet has been duplicated.

IPSec

IPsec is composed of a collection of protocols that together provide parameter negotiation, connection establishment, tunnel maintenance, data transmission and connection teardown.
Three protocols are used in the IPsec architecture to provide key exchange in addition to the integrity, encryption, authentication and antireplay features: IKEv1 or IKEv2, ESP and AH. Let’s have a look at IKEv1, ESP and AH for now; I won’t cover IKEv2 because we won’t use it for our VPN configuration today.

IKEv1

Provides a framework for the parameter negotiation and key exchange between VPN peers for the correct establishment of a Security Association (SA). To do it, IKEv1 uses two different protocols:

Internet Security Association and Key Management Protocol (ISAKMP) takes care of parameter negotiation between peers, for example DH groups, lifetimes, encryption and authentication.

Oakley provides the key-exchange function between peers using the DH protocol.

There are 2 phases, known as IKEv1 Phase 1 and Phase 2, that must be completed by each peer before a communications tunnel can be established between them and they are ready for successful data transmission.

IKEv1 Phase 1: During this phase, both peers negotiate parameters such as integrity, encryption algorithms, authentication methods to set up a secure and authenticated tunnel.

IKEv1 Phase 2: This second phase uses the parameters negotiated in Phase 1 to create the secure SAs. The IPsec SAs are unidirectional, which means a different session key is used for each direction (one for inbound, or decrypted, traffic, and one for outbound, or encrypted, traffic).

IKEv1 uses Main mode or Aggressive mode in Phase 1 to carry out the actions required to build a bidirectional tunnel, and uses Quick mode for Phase 2 operations.

  • Main Mode: Uses 3 pairs of messages between peers (making six messages in total):

  – Pair 1 – Consists of the IKEv1 security policies configured on the device: the initiator begins by sending one or more IKEv1 policies, and the receiving peer (the responder) responds with its choice from those policies.

 – Pair 2 – Includes the DH public key exchange. Using the DH group/algorithm agreed in pair 1, the peers exchange their DH public values, and each side combines the peer’s public value with its own private value to generate the shared secret keys that protect the rest of the exchange.

– Pair 3 – Used for ISAKMP authentication: each peer is authenticated and its identity validated by the other using pre-shared keys or digital certificates. These packets are exchanged encrypted and authenticated using the keys generated in pair 2 and the policies agreed in pair 1.

  • Aggressive Mode: Uses 3 messages rather than 6. The same information is exchanged between the peers; however, the process is carried out according to the following steps:

– The initiator sends the DH groups, identity information, IKEv1 policies and so on.

– The responder authenticates the packet and sends back the accepted IKEv1 policy, its key material and an identification hash required to complete the exchange.

– The initiator authenticates the responder’s packet and sends the authentication hash.

  • Quick Mode (Phase 2): The IKEv1 transform sets used for IPsec policy negotiation and SA creation are exchanged between peers. Regardless of the parameters/attributes selected within a transform set, the same 5 pieces of information are always sent:

– IPsec Encryption Algorithm such as DES, 3DES, AES.

– IPsec Authentication Algorithm such as MD5, SHA-1.

– IPsec Protocol such as AH or ESP

– IPsec SA Lifetime

– IPsec Mode such as Tunnel, Transport.

Authentication Header (AH) and Encapsulating Security Payload (ESP)

Both AH and ESP operate at the network layer of the OSI model and they have their own protocol number for identification in the VPN path.

The origin authentication provided by both AH and ESP can be carried out by one of the following hash algorithms:

– Message Digest 5 Algorithm (MD5)

– Secure Hash (SHA).

Because ESP and AH operate at the network layer, in transport mode the original source and destination IP addresses remain visible in the packet throughout the network, exposing them to potential attackers of the VPN connection; in tunnel mode a new outer IP header is added and the original header is encapsulated.

Below are the packet headers when using AH and ESP in Tunnel mode and Transport mode:

frame-header

 

The goal of this post is to teach you how to set up a WebVPN, and for that you need to know how SSL and TLS work, so let’s have a look at these protocols.

SSL

SSL provides message authentication, confidentiality and integrity through a combination of cryptographic protocols. SSL doesn’t have a mechanism for reliable packet delivery; therefore, the protocol relies on the underlying transport protocol and the VPN termination device for ordered and guaranteed delivery of packets. For this reason, TCP is the transport protocol of choice, with its sequencing, reordering and reliable delivery functionality.

SSL Tunnel Negotiation

SSL establishes a connection between the client (web browser) and the server by exchanging a number of encapsulated messages. There are 2 phases involved in building an SSL tunnel.

Phase 1 – The handshake phase: during this phase various parameters are negotiated between the client and the server.

Follow below an illustration of the SSL handshake process:

SSL_Handshake

After the server has received the ClientHello message, it responds with its own ServerHello message. This packet is similar to the original ClientHello message; however, the server generates and includes its own random number for the creation of the master key and chooses a compression scheme from the list of supported schemes it received from the client. Instead of sending the client a list of the cipher suites it supports, the server chooses a cipher suite and the highest protocol version it supports from the lists it received from the client.

If the session ID received from the client is not null and matches that of an existing session, the server resumes the existing session where possible. At this stage, after the ClientHello and ServerHello messages have been sent and received, and the protocol, encryption, hash and authentication algorithms have been negotiated, the server sends its certificate to the client, which contains a copy of the server’s public key.

When the client receives the server certificate, it checks that the root certificate authority exists in its own trusted root CA store, retrieves the root CA public key and validates the certificate’s digital signature using it. At this point the server sends the ServerHelloDone message to indicate to the client that it has finished sending all the information it has.

The client then sends a ClientKeyExchange to the server, including the protocol version number and a pre_master secret used by both sides to generate the master secret for encryption. The server decrypts the pre_master secret using the private key matching the public key from its certificate, and both client and server can now use the master secret for message encryption.

The client now sends a ChangeCipherSpec (CCS) message to the server as a sign that everything sent from now on will be encrypted using the keys and protocols established in the earlier messages, followed by a Finished message. The server also sends a ChangeCipherSpec message to indicate the same state, followed by a Finished message as well.

To reinforce your knowledge of these messages, I’ve captured the packets between the client and the firewall during the SSL connection; let’s take a look:
Note: I couldn’t upload the Wireshark file here, so I only added a screenshot. If you want to have a look at the Wireshark file, leave a comment and I’ll send it to you. =)

wireshark

Configuring WebVPN

Now that we have gone through the concepts behind the scenes, we can move on and prepare to set up the VPN.

Below are the items that you cannot forget when you are configuring the WebVPN.
Note: You don’t have to configure them in exactly this order; as long as you have the items below configured, your VPN will work.

1 - IP addressing
2 - Configure the tunnel-group
3 - Configure the group-policy
4 - Configure the username
5 - Configure optional settings, such as a web ACL to filter some traffic
6 - Enable WebVPN on the interface

Let’s jump in step by step. The topology for today is simple, as I just want to show you how the WebVPN works.

Clientless-topology

The first step is easy enough and I don’t need to show you how to configure an ip address in the interface.

The second step is to set up the tunnel-group, also called a Connection Profile. The connection profile is responsible for applying the pre-login policies; it is most often used to separate users based on departments and to provide connectivity settings such as DNS, AAA and DHCP servers, IP address pools if needed, filters, time-based access, etc.

To set up a connection profile use the configuration below; in my case WEBVPN_TUNNEL is the name of the connection profile:

ASA# 
ASA# show running-config tunnel-group
tunnel-group WEBVPN_TUNNEL type remote-access
tunnel-group WEBVPN_TUNNEL general-attributes
 default-group-policy WEBVPN_POLICY
tunnel-group WEBVPN_TUNNEL webvpn-attributes
 group-alias ADMIN enable

The next step is to set up the group policy. A group-policy object is a container for the various attributes and post-login parameters that can be assigned to VPN users and endpoints, such as IPv4 address pools, ACLs, DHCP, etc. In our scenario I set up a simple group-policy with a banner message.

ASA# 
ASA# show running-config group-policy
group-policy WEBVPN_POLICY internal
group-policy WEBVPN_POLICY attributes
 banner value WELCOME TO THE JUNGLE!!

Once the tunnel-group and group-policy are set up, you can move on and set up the username for the connection.

ASA# 
ASA# show running-config username
username mark password .8T1d6ik58/lzXS5 encrypted privilege 0
username mark attributes
 vpn-group-policy WEBVPN_POLICY
 vpn-tunnel-protocol ssl-clientless
 webvpn
  filter value WEB_VPN_ACL

As you can see, under the username configuration you specify the group-policy, which protocol to use and the filter list to be applied to that specific username.
So the user mark will be able to connect using clientless SSL WebVPN. Now let’s check the filter list to see where mark can go from his VPN portal.

 ASA# 
ASA#show running-config access-list
access-list WEB_VPN_ACL webtype deny url http://150.1.1.1/*
access-list WEB_VPN_ACL webtype permit url any

This is a webtype access-list, which means it is applied to the WebVPN session to filter traffic from the portal destined to specific networks. In this case I’m blocking all traffic to R1’s loopback1 and allowing everything else, which means the user mark will be unable to access 150.1.1.1.

Now we can enable the webvpn in the interface and test the connectivity from the client machine.

 ASA#
ASA(config)# webvpn
ASA(config-webvpn)# enable OUTSIDE
ASA(config-webvpn)#
ASA#

Once the user has been authenticated, you will see the following logs in the firewall:

%ASA-6-716001: Group User IP WebVPN session started.
%ASA-6-716038: Group User IP Authentication: successful, Session Type: WebVPN.

The banner message and the portal will look like below:
wevpn_banner

Portal

From the portal the user mark won’t be able to browse (HTTP) to loopback1 on R1 and will get a deny message; however, he will be able to browse to any other loopback on R1.
Let’s check this out!!!
Deny_loopback_r1

As you can see above, the client got a deny message, so our access-list is working =).
You can also see that we don’t have any way from the portal to SSH or Telnet to the loopbacks on R1. The reason for that is that we haven’t installed any plug-ins yet. With WebVPN you can install plug-ins on the firewall, and the firewall will then allow you to use them; there are plug-ins available on the Cisco website for SSH, Telnet, Citrix applications, etc.
I have downloaded the SSH plug-in to show you how to import it into the firewall. Once you have downloaded the JAR file from the Cisco website, you can just run the command below to import the plug-in, and the configuration is done =)!

ASA(config)# import webvpn plug-in protocol ssh,telnet tftp://192.168.100.100/ssh.12.21.2013.jar
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Now you can go back to the portal page, disconnect from the WebVPN and reconnect again, and you will see SSH and Telnet in the list, as below:
plugin

WebVPN is a way to connect users who don’t have a fixed location to the corporate network. You can have different users and group-policies based on departments such as Sales, Engineering and Support, and each one of them will have its own policies allowing traffic to specific networks and resources inside the corporate network.

Hope you enjoyed the post, and don’t forget: the best way to learn is to lab it up as much as you can, so go ahead and configure your WebVPN!! If you have questions you can leave a comment and I will try to help you!!

Thank you very much =)

Renato Gentil =).

Understanding QoS

Introduction

QoS stands for Quality of Service, and it is a set of features available on Cisco IOS devices to give preferential treatment to specific traffic, such as voice packets or other critical packets in a network. Basically, you classify traffic based on a protocol or a TCP/UDP port number and assign it a priority. When the traffic hits the interface of the router, the router forwards it according to that priority.

Congestion Management

Congestion Management is the way the router handles traffic when the offered load exceeds the link speed. There are several methods of prioritizing traffic onto an output queue. Below are some of the queuing tools:

  • First-in First-out (FIFO)
  • Priority Queuing (PQ)
  • Custom Queueing (CQ)
  • Flow-Based Weighted Fair Queuing (WFQ)
  • Class-Based Weighted Fair Queuing (CBWFQ)

Note: Queuing algorithms only take effect when congestion is experienced; if the link is not congested, there is no need to queue packets.

Let’s talk about each one of those above:

FIFO – Simplest way to forward the packets. When the network is congested it will forward the traffic in order of arrival. The first packet that comes in will be the first that will be forwarded out. This is the default queuing algorithm and there is no configuration required.

PQ – Priority Queuing can prioritize based on network protocol, incoming interface, packet size, source or destination address, etc. Each packet is placed in one of 4 different queues (High, Medium, Normal, Low). Packets that are not classified by the priority list are sent using the Normal queue. PQ is a powerful tool to ensure that mission-critical traffic flows through the WAN link with the proper priority treatment.
Note: By default, the queue-limit for each queue is as follows:
High – 20 packets
Medium – 40 packets
Normal – 60 packets
Low – 80 packets
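These defaults can be changed with the queue-limit option of the priority-list command. A minimal sketch (the list number and the values are just examples):

priority-list 1 queue-limit 30 50 80 100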

CQ – Custom Queuing allows you to guarantee bandwidth for specific traffic and leave the remaining bandwidth to other traffic. There are 16 static queues that must be manually defined. Custom queuing assigns a byte counter to every configured queue, with a 1500-byte default value, and services the queues in round-robin fashion. Every time a queue is serviced, each de-queued packet decrements the byte count by the packet size until the counter drops to zero, at which time the next queue is serviced.
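The per-queue byte counter can be tuned with the byte-count option. A minimal sketch (the list number, queue number and value are just examples):

queue-list 1 queue 1 byte-count 3000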

Flow-Based WFQ – Commonly referred to as WFQ, it is a flow-based queuing algorithm that creates bit-wise fairness by allowing each queue to be serviced fairly in terms of byte count. WFQ does this by weighing the traffic in each queue by byte count; for example, if queue 1 has 200-byte packets and queue 2 has 100-byte packets, WFQ will take two packets from queue 2 for every one packet from queue 1, which makes the service fair for each queue. The WFQ scheduler also provides more bandwidth to flows with higher IP precedence values.

Flow are identified using the following items in an IP Packet:

  • Source IP address
  • Destination IP address
  • Transport layer protocol (TCP or UDP)
  • TCP or UDP source port
  • TCP or UDP destination port
  • IP Precedence

Class-Based WFQ – CBWFQ provides greater flexibility. When you want to guarantee a minimum amount of bandwidth, use CBWFQ. It allows us to create minimum guaranteed bandwidth classes, instead of providing a queue for each individual flow. A class is defined that consists of one or more flows, and each class can be guaranteed a minimum amount of bandwidth. Let’s suppose you are setting up CBWFQ for video traffic on a T1 link: you can place the video stream in a class and tell the router to provide 768 kbps (half of a T1) of service for that class. The video traffic will then be forwarded using this class, and the rest of the flows will use the default class, which gets the remainder of the bandwidth of the T1 link.
You can also mark packets with IP precedence, drop packets, police, shape, set priority and apply a NetFlow sampler using CBWFQ.
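Just to make the T1 video example above concrete, a minimal CBWFQ sketch could look like the following (the ACL, class and policy names, the video port range and the interface are hypothetical):

ip access-list extended VIDEO
 permit udp any any range 16384 32767
!
class-map match-all CMAP_VIDEO
 match access-group name VIDEO
!
policy-map PMAP_WAN
 class CMAP_VIDEO
  bandwidth 768
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output PMAP_WAN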

Note: WFQ is used by default on all serial interfaces with bandwidths set at T1 and E1 speeds and lower.

Configuring FIFO:

Cisco IOS does not provide any explicit command to configure the FIFO queuing strategy. FIFO is effectively enabled on an interface when you turn off any other queuing method. For example, the default queuing method on serial interfaces running at E1 speed and slower is WFQ, so to enable FIFO you must first disable WFQ using the command “no fair-queue” in interface configuration mode. After enabling FIFO you can check it using the show interface command:

R1(config-if)#do show int s2/0
Serial2/0 is up, line protocol is up
  Hardware is M4T
  MTU 1500 bytes, BW 1544 Kbit/sec, DLY 20000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation HDLC, crc 16, loopback not set
  Keepalive set (10 sec)
  Restart-Delay is 0 secs
  Last input never, output 00:00:01, output hang never
  Last clearing of "show interface" counters 00:11:05
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo <<------------------FIFO IS ENABLE
  Output queue: 0/4096 (size/max)  <<--------QUEUE LENGTH CHANGED TO 4096.
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     0 packets input, 0 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     130 packets output, 4243 bytes, 0 underruns
     0 output errors, 0 collisions, 3 interface resets
     0 unknown protocol drops
     0 output buffer failures, 0 output buffers swapped out
     3 carrier transitions     DCD=up  DSR=up  DTR=up  RTS=up  CTS=up

To change the queue length you can use the command “hold-queue <number-of-packets> out” and check it by issuing show interface again. (The maximum is 4096 on 12.4.11 IOS on 18xx and 28xx routers.)
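A minimal sketch of doing that (the interface and the queue depth are just examples):

interface Serial2/0
 no fair-queue
 hold-queue 4096 out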

Configuring Priority Queue (PQ)
Now let’s play with PQ. I have created a simple diagram, shown below:

QoS
Now we need traffic in all 4 different queues; to do that we must generate a different type of traffic for each queue. I’ll generate the traffic using iperf from a PC connected to R1, as below:

  • High Queue – tcp traffic from R1 lan to R2 lan on port 5001.
  • Medium Queue – tcp traffic from R1 lan to R2 lan on port 5002.
  • Normal Queue – tcp traffic from R1 lan to R2 lan on port 5003.
  • Low Queue – icmp traffic from R1 lan to R2 lan.

The configuration is simple: you need to create access-lists to match the traffic, reference them in a priority-list, and apply the priority-list to the interface:

access-list 101 permit tcp host 172.16.1.1 host 172.16.2.2 eq 5001
access-list 102 permit tcp host 172.16.1.1 host 172.16.2.2 eq 5002
access-list 103 permit tcp host 172.16.1.1 host 172.16.2.2 eq 5003
access-list 104 permit icmp host 172.16.1.1 host 172.16.2.2

#The second step is to set up the priority-list and apply it to the interface.

priority-list 1 protocol ip high list 101
priority-list 1 protocol ip medium list 102
priority-list 1 protocol ip normal list 103
priority-list 1 protocol ip low list 104

interface Eth0/0
 priority-group 1

To check the priority queuing you can use “show interface eth0/0” or “show queueing priority”.
Now, when the flows start to be forwarded, the High queue is serviced first and tries to send the most traffic. If you look at the output of the iperf tests you will see that the TCP traffic on port 5001 received higher bandwidth than the other flows:

C:\>iperf -c 172.16.2.2 -P 1 -p 5001
------------------------------------------------------------
Client connecting to 172.16.2.2, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1900] local 172.16.1.1 port 1170 connected with 172.16.2.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[1900]  0.0-120.2 sec  7.38 MBytes   515 Kbits/sec


C:\>iperf -c 172.16.2.2 -P 1 -p 5002
------------------------------------------------------------
Client connecting to 172.16.2.2, TCP port 5002
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1912] local 172.16.1.1 port 1172 connected with 172.16.2.2 port 5002
[ ID] Interval       Transfer     Bandwidth
[1912]  0.0-60.4 sec  1.28 MBytes   178 Kbits/sec


C:\>iperf -c 172.16.2.2 -P 1 -p 5003
------------------------------------------------------------
Client connecting to 172.16.2.2, TCP port 5003
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1900] local 172.16.1.1 port 1171 connected with 172.16.2.2 port 5003
[ ID] Interval       Transfer     Bandwidth
[1900]  0.0-60.9 sec  1.36 MBytes   187 Kbits/sec

 
Configuring Custom Queue (CQ)
The configuration steps for CQ are pretty much the same as for PQ. You will need to create ACLs to match the traffic and then define which queue the traffic will be assigned to. I’ll use the same ACLs that I used for the PQ configuration:

access-list 101 permit tcp host 172.16.1.1 host 172.16.2.2 eq 5001
access-list 102 permit tcp host 172.16.1.1 host 172.16.2.2 eq 5002
access-list 103 permit tcp host 172.16.1.1 host 172.16.2.2 eq 5003
access-list 104 permit icmp host 172.16.1.1 host 172.16.2.2

#The second step is to set up the custom queue-list and apply it to the interface.

queue-list 1 protocol ip 0 list 101
queue-list 1 protocol ip 1 list 102
queue-list 1 protocol ip 2 list 103
queue-list 1 protocol ip 3 list 104

interface Eth0/0
 custom-queue-list 1

I’m assigning each traffic flow to a different queue. You can check it by issuing “show queueing custom” or “show interface eth0/0”.

Configuring Flow Based WFQ
The configuration of WFQ is simple. If you remember how to configure FIFO, you will remember how to configure WFQ: basically the command “fair-queue” must be enabled on the interface and the WFQ configuration is done. However, what is important to know here are the options available when you are configuring WFQ.
Let’s have a look at the options when I issue the question mark below:

R1(config)#int eth0/0
R1(config-if)#fair-queue ?
    Congestive Discard Threshold
  
R1(config-if)#fair-queue 1024 ?
    Number Dynamic Conversation Queues

R1(config-if)#fair-queue 1024 16 ?
    Number Reservable Conversation Queues

R1(config-if)#fair-queue 1024 16 32 ?

R1(config-if)#fair-queue 1024 16 32

The first option is the CDT (Congestive Discard Threshold), which can be between 1 and 4096. It is the depth of each queue: once a queue reaches the CDT, new packets for that queue are dropped.
The second option is the number of Dynamic Conversation queues, which is how many flow queues you can have; this can be up to 4096.
The last option is the number of Reservable Conversation queues, which are reserved for RSVP use.

Configuring Class-Based WFQ (CBWFQ)
The configuration of CBWFQ is more granular and gives you more options when setting things up. The steps are:

  • Set up class-map
  • Set up policy-map
  • Match the class-map
  • Limit/Mark the traffic
  • Apply policy-map on an interface

Still using the same diagram I’m going to show you the options available when setting up CBWFQ:

R1(config)#class-map CMAP_ICMP
R1(config-cmap)#match protocol icmp
R1(config-cmap)#exit

#####Once the class-map is matched, you have all option below

R1(config)#policy-map PMAP_ICMP
R1(config-pmap)#class CMAP_ICMP
R1(config-pmap-c)#?
Policy-map class configuration commands:
  bandwidth        Bandwidth
  compression      Activate Compression
  drop             Drop all packets
  exit             Exit from class action configuration mode
  fair-queue       Enable Flow-based Fair Queuing in this Class
  log              Log IPv4 and ARP packets
  netflow-sampler  NetFlow action
  no               Negate or set default values of a command
  police           Police
  priority         Strict Scheduling Priority for this Class
  queue-limit      Queue Max Threshold for Tail Drop
  random-detect    Enable Random Early Detection as drop policy
  service-policy   Configure QoS Service Policy
  set              Set QoS values
  shape            Traffic Shaping
  
R1(config-pmap-c)#drop

If you look at the output above, you can see that there are different actions you can apply to this traffic. For now I will set up R2 to drop the packets coming from R1’s loopback0 so you can see how it works. The configuration looks like this:

R2#show run | sec access-list
ip access-list extended ICMP
 permit ip host 150.1.1.1 host 150.1.2.2

R2#sh run | sec class-map
class-map match-all CMAP_ICMP
 match access-group name ICMP

R2#show run | sec policy-map
policy-map PMAP_ICMP
 class CMAP_ICMP
   drop
 class class-default

R2#show run int eth0/0
interface Ethernet0/0
 ip address 192.168.12.2 255.255.255.0
 service-policy input PMAP_ICMP
end
R2#

#Check the statistics:
R2#show policy-map interface eth0/0
 Ethernet0/0

  Service-policy input: PMAP_ICMP

    Class-map: CMAP_ICMP (match-all)
      5 packets, 570 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name ICMP
      Match: protocol icmp
      drop

    Class-map: class-default (match-any)
      21 packets, 1974 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
R2#

If you look at the show command and check the class-map CMAP_ICMP, you will see that 5 packets have been dropped; that was the ping from R1.
Now, if I remove the drop action and set the precedence instead, the ICMP will be permitted again and you will see how many packets have been marked.

R2#show run | sec policy-map
policy-map PMAP_ICMP
 class CMAP_ICMP
  set precedence 5
 class class-default
R2#

R2#show policy-map interface eth0/0
 Ethernet0/0

  Service-policy input: PMAP_ICMP

    Class-map: CMAP_ICMP (match-all)
      10 packets, 1140 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name ICMP
      Match: protocol icmp
      QoS Set
        precedence 5
          Packets marked 5

 
Using CBWFQ you have different actions to apply to a flow, depending on the requirements: you can shape, police, drop or mark a packet. The best way to get used to it is to play around and check the difference between them.
In the next post about QoS I’ll show you how to use the police and shape options, as well as the details of IP precedence, ToS and DSCP.

Hope you have enjoyed this post!

Thanks!
Renato Gentil =)

A group conversation. Getting started with GETVPN and crypto GDOI.

Over the past few years I’ve had a hard time catching up with all the VPN technologies that have been released. Among them, one got my attention because at the time I could not understand its purpose, nor where it would fit better than the other VPN technologies available. This post is about the so-called GETVPN (Group Encrypted Transport VPN) and its basic principles. Here we’ll cover what’s needed, the basic protocols that get it running and a basic GETVPN configuration. So let’s get started!

Overview

GETVPN stands for “Group Encrypted Transport VPN”. It’s a technology created to overcome the need for per-peer IPsec SAs or for an overlay infrastructure with double encapsulation in order to have multiple peers connected to each other. Instead of adding an additional overlay encapsulation like mGRE (multipoint GRE) does, it simply replicates the original IP packet’s header using IPsec tunnel mode. The benefit of this is that it keeps the possibility of using the network’s routing protocols and also of making use of traffic engineering in the core network. This characteristic makes GETVPN a perfect fit for operating in a private WAN built over MPLS (VPN in VPN), but at the same time it makes GETVPN not much of a fit for the internet if the customer is using private addressing (since there is no overlay encapsulation). This is the reason why GETVPN is positioned as an enterprise solution to be deployed over MPLS networks; for public networks such as the internet the best solution would usually be another technology such as DMVPN, but that is another topic.

Before getting deep inside the good stuff (=P), which for me is the configuration and examples of debugging and communication, let’s get an overview of the technology itself starting with protocols.

GDOI

GDOI stands for “Group Domain of Interpretation”. GDOI is an IETF standard that’s responsible for providing a set of cryptographic keys and policies to a group of devices. GDOI removes the need to configure tunnel endpoints as a key-server is responsible for distributing all keys and policies to the registered and authenticated members.

The figure below is a good example of how the authentication and the distribution of policies work. Red arrows represent the authentication and registration from the GMs (Group Members) to the KS (Key Server), and green arrows represent the policy distribution:

GETVPN1

 

IP Header Preservation and GET (Group Encrypted Transport):

Group Encrypted Transport mode relies on the existing routing infrastructure rather than on the traditional IPsec overlay of other VPN technologies. Data packets preserve their original IP source and destination addresses, allowing the use of the routing that’s already in place. Below is a comparison between the original IP packet, the traditional IPsec tunnel mode and the GET mode:

 

GETVPN 2

GM and KS Interaction:

Here is where the whole concept converges. All GMs register with the KS, establishing a Phase 1 IKEv1 SA (ISAKMP SA), and authenticate themselves with either PSK (pre-shared key) or RSA (digital signatures). After this the GDOI negotiation begins; all of the negotiation happens protected by the IKEv1 SA, providing each member with a KEK (Key Encryption Key) and a TEK (Traffic Encryption Key). The KEK is used to encrypt and protect the control-plane message exchange (GDOI rekeying) and the TEK is used to protect the data-plane traffic (phase 2). Both keys are common to all members and are changed by the KS from time to time. The IKEv1 SA established initially from each GM to the KS will expire in time, leaving only the GDOI SA in place to negotiate group policies; the only purpose of the initial SA is to provide all the GMs with the KEK and TEK over a protected channel. The KS can use either multicast or unicast to refresh the KEK and TEK.

Hint: a Cisco ASA running in multiple-context mode does not support multicast routing, so you either have to use unicast rekeying OR establish a GRE tunnel between the GMs and the KS to carry the multicast rekeys past the firewall (see the sketch below).
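For the GRE option, a minimal sketch of the tunnel on a GM; the addresses and interface names here are illustrative assumptions, and multicast routing/PIM would still need to be enabled across the tunnel:

! Illustrative GRE tunnel from a GM towards the KS to carry multicast rekeys
interface Tunnel0
 ip address 10.0.0.2 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 200.1.18.1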

Configuration:

Now the good stuff! Finally! Concepts suck sometimes! For this example I’m going to use unicast rekeying because it has no requirements beyond what’s already set up. The topology used is the one below. Here we go:

GETVPN3

 

KS Configuration:

! --->>> Here we start with a simple phase-1 configuration. We could match the PSK with a keyring and use
! an isakmp profile to segment other VPN traffic and run other technologies on the same router, but
! let's keep it simple:
!
crypto isakmp policy 100
authentication pre-share
encryption 3des
hash md5
!
crypto isakmp key CISCO address 0.0.0.0 0.0.0.0
!
!---->>> Here we'll configure phase-2 parameters:
!
crypto ipsec transform-set TSET-GETVPN esp-3des esp-md5-hmac
!
crypto ipsec profile IPSEC-GETVPN_PFL
set transform-set TSET-GETVPN
!
!---->>> Now we define an RSA key that the KS will use to sign the rekey messages, plus an ACL to define
! the interesting traffic:
!
crypto key generate rsa general-keys label RSA-GETVPN_KEYS modulus 1024
access-list 100 permit ip 192.168.50.0 0.0.0.255 any
!
!---->>> We wrap it up by configuring the GDOI group on the KS. This defines the GDOI SA parameters as well
! as the rekeying process. Each GDOI group needs its own RSA key and identity, and an IPsec profile must be
! defined to specify the data-plane protection settings:
!
crypto gdoi group GP-GETVPN_GROUP
identity number 123
server local
rekey authentication mypubkey rsa RSA-GETVPN_KEYS
rekey transport unicast
address ipv4 192.168.50.1
sa ipsec 1
profile IPSEC-GETVPN_PFL
match address ipv4 100
replay time window-size 5

The GM configuration is waaaay simpler; there aren’t many steps to follow:

crypto isakmp policy 10
authentication pre-share
encryption 3des
hash md5
!
crypto isakmp key CISCO address 200.1.18.1
crypto gdoi group GP-GETVPN_GROUP_GM
identity number 123
server address ipv4 200.1.18.1
!
crypto map CRMAP-GETVPN local-address GigabitEthernet0/0
crypto map CRMAP-GETVPN 10 gdoi
set group GP-GETVPN_GROUP_GM
!
interface GigabitEthernet0/0
crypto map CRMAP-GETVPN

Verification:

After you finish the configuration on the KS and the GMs, you should see messages like the ones below:

R1#  
%GDOI-5-GM_REKEY_TRANS_2_UNI: Group GETVPN_GROUP_GM transitioned to Unicast Rekey.
%CRYPTO-5-GM_REGSTER: Start registration to KS 200.1.38.1 for group GP-GETVPN_GROUP_GM using address 200.1.38.3
%GDOI-5-GM_REGS_COMPL: Registration to KS 200.1.38.1 complete for group GP-GETVPN_GROUP_GM using address 200.1.38.3
%GDOI-5-GM_INSTALL_POLICIES_SUCCESS: SUCCESS: Installation of Reg/Rekey policies from KS 200.1.38.1 for group GETVPN_GROUP_GM & gm identity 200.1.38.2

See that both GMs registered with the KS:

R1#show crypto gdoi ks members

Group Member Information : 

Number of rekeys sent for group GETVPN_GROUP : 0

Group Member ID    : 200.1.38.2  GM Version: 1.0.3
 Group ID          : 123
 Group Name        : GP-GETVPN_GROUP
 Key Server ID     : 200.1.38.1
 Rekeys sent       : 0
 Rekeys retries    : 0
 Rekey Acks Rcvd   : 0
 Rekey Acks missed : 0

 Sent seq num : 0       0       0       0
Rcvd seq num :  0       0       0       0

Group Member ID    : 200.1.38.3  GM Version: 1.0.1
 Group ID          : 123
 Group Name        : GP-GETVPN_GROUP
 Key Server ID     : 200.1.38.1
 Rekeys sent       : 0
 Rekeys retries    : 0
 Rekey Acks Rcvd   : 0
 Rekey Acks missed : 0

 Sent seq num : 0       0       0       0
Rcvd seq num :  0       0       0       0

Let’s check the KEK and TEK settings on the KS:

R1#show crypto gdoi ks policy
Key Server Policy:
For group GP-GETVPN_GROUP (handle: 2147483650) server 200.1.38.1 (handle: 2147483650):

  # of teks : 1  Seq num : 0
  KEK POLICY (transport type : Unicast)
    spi : 0xB82B8E09081C62CFD9B5FB8D434459B0
    management alg     : disabled    encrypt alg       : 3DES      
    crypto iv length   : 8           key size          : 24      
    orig life(sec): 86400       remaining life(sec): 85379     
    sig hash algorithm : enabled     sig key length    : 162     
    sig size           : 128       
    sig key name       : GETVPN_KEYS

  TEK POLICY (encaps : ENCAPS_TUNNEL)
    spi                : 0x7010693A
    access-list        : 100
    transform          : esp-3des esp-md5-hmac
    alg key size       : 24            sig key size          : 16        
    orig life(sec)     : 3600          remaining life(sec)   : 2580      
    tek life(sec)      : 3600          elapsed time(sec)     : 1020      
    override life (sec): 0             antireplay window size: 64  

After all that, check on the GMs themselves that they registered and authenticated with the KS and received the KEK/TEK and the ACL:

R2#show crypto gdoi
GROUP INFORMATION

    Group Name               : GP-GETVPN_GROUP_GM
    Group Identity           : 123
    Crypto Path              : ipv4
    Key Management Path      : ipv4
    Rekeys received          : 0
    IPSec SA Direction       : Both

     Group Server list       : 200.1.38.1

    Group member             : 200.1.38.2       vrf: None
       Version               : 1.0.3 
       Registration status   : Registered
       Registered with       : 200.1.38.1
       Re-registers in       : 2718 sec
       Succeeded registration: 1
       Attempted registration: 1
       Last rekey from       : 0.0.0.0
       Last rekey seq num    : 0
       Unicast rekey received: 0
       Rekey ACKs sent       : 0
       Rekey Received        : never
       allowable rekey cipher: any
       allowable rekey hash  : any
       allowable transformtag: any ESP

    Rekeys cumulative
       Total received        : 0
       After latest register : 0
       Rekey Acks sents      : 0

 ACL Downloaded From KS 200.1.38.1:
   access-list  permit ip 192.168.50.0 0.0.255.255 192.168.50.0 0.0.255.255

KEK POLICY:
    Rekey Transport Type     : Unicast
    Lifetime (secs)          : 85647
    Encrypt Algorithm        : 3DES
    Key Size                 : 192     
    Sig Hash Algorithm       : HMAC_AUTH_SHA
    Sig Key Length (bits)    : 1024    

TEK POLICY for the current KS-Policy ACEs Downloaded:
  GigabitEthernet0/0.28:
    IPsec SA:
        spi: 0x7010693A(1880123706)
        transform: esp-3des esp-md5-hmac
        sa timing:remaining key lifetime (sec): (2848)
        Anti-Replay : Disabled
!
!
R3#show crypto gdoi
GROUP INFORMATION

    Group Name               : GP-GETVPN_GROUP_GM
    Group Identity           : 123
    Rekeys received          : 0
    IPSec SA Direction       : Both

     Group Server list       : 200.1.38.1

    Group member             : 200.1.38.3       vrf: None
       Registration status   : Registered
       Registered with       : 200.1.38.1
       Re-registers in       : 3046 sec
       Succeeded registration: 1
       Attempted registration: 3
       Last rekey from       : 0.0.0.0
       Last rekey seq num    : 0
       Unicast rekey received: 0
       Rekey ACKs sent       : 0
       Rekey Received        : never
       allowable rekey cipher: any
       allowable rekey hash  : any
       allowable transformtag: any ESP

    Rekeys cumulative
       Total received        : 0
       After latest register : 0
       Rekey Acks sents      : 0

 ACL Downloaded From KS 200.1.38.1:
   access-list  permit ip 192.168.50.0 0.0.255.255 192.168.50.0 0.0.255.255

KEK POLICY:
    Rekey Transport Type     : Unicast
    Lifetime (secs)          : 86034
    Encrypt Algorithm        : 3DES
    Key Size                 : 192     
    Sig Hash Algorithm       : HMAC_AUTH_SHA
    Sig Key Length (bits)    : 1024    

TEK POLICY for the current KS-Policy ACEs Downloaded:
  FastEthernet0/0.38:
    IPsec SA:
        spi: 0x7010693A(1880123706)
        transform: esp-3des esp-md5-hmac
        sa timing:remaining key lifetime (sec): (3151)
        Anti-Replay :  Disabled

Check that the GDOI IPsec SAs were installed successfully:

R2#show crypto gdoi ipsec sa

SA created for group GETVPN_GROUP_GM:
  GigabitEthernet0/0.28:
    protocol = ip
      local ident  = 192.168.50.0/24, port = 0
      remote ident = 192.168.50.0/24, port = 0
      direction: Both, replay: Disabled
!
R3#show crypto gdoi ipsec sa

SA created for group GETVPN_GROUP_GM:
  FastEthernet0/0.38:
    protocol = ip
      local ident  = 192.168.50.0/24, port = 0
      remote ident = 192.168.50.0/24, port = 0
      direction: Both, replay: Disabled

Multicast – PIM Sparse Mode

I decided to write a post about multicast, specifically about PIM Sparse Mode (PIM-SM). It’s an important topic on the CCIE R&S exam, so if you intend to sit the exam you have to know PIM-SM and you can’t miss this post. Today I’m going to show you how PIM-SM works, how to configure it and how to troubleshoot it.

I’m assuming that you already have a good knowledge of routing protocols to implement and troubleshoot a multicast network.

Introduction

IP multicast is a technique for one-to-many communication over an IP network. A good example is IPTV: when you change the channel you are watching, you are basically changing the multicast group you are joined to. That’s how multicast works; a host has to join a specific group to receive the information it needs.

Multicast uses class D addresses for router-to-router and host-to-router communication. The main scopes are listed below:

Link-Local Addresses – 224.0.0.0/24 (224.0.0.0 – 224.0.0.255) – Used by network protocols such as OSPF (224.0.0.5/6) and VRRP (224.0.0.18).

Source Specific Multicast (SSM) – 232.0.0.0/8 (232.0.0.0 – 232.255.255.255) – SSM uses the shortest path tree only, which means there will be only (S,G) entries in the multicast routing table.

Administratively Scoped – 239.0.0.0/8 (239.0.0.0 – 239.255.255.255) – Considered the private address range of multicast.

*Note: The scope of these addresses is not automatically enforced by the router, so if you don’t want 239.0.0.0/8 to be forwarded out of a particular interface, you have to configure that manually (see the boundary sketch below).
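As an illustration, a hedged sketch of such a boundary; the ACL number and interface name are arbitrary choices, not taken from the lab:

! Block administratively scoped groups from leaving this interface
access-list 10 deny   239.0.0.0 0.255.255.255
access-list 10 permit 224.0.0.0 15.255.255.255
!
interface Ethernet0/1
 ip multicast boundary 10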

Basic Concepts

Before we start talking about PIM-SM, we need to know some important concepts and terms that will be mentioned later in this post.

Receiver – In this lab the receiver is the router responsible for joining the multicast group and receiving the packets from the senders.

Sender – The router responsible for sending traffic to the multicast group. In our example we’re going to use different routers to reach the receiver using ICMP or Telnet.

RP (Rendezvous Point) – A router configured to be the root of the shared tree for a multicast group. Join messages from receivers are sent towards the RP and data from the senders is sent to the RP, so that receivers can discover who the senders are and start receiving traffic destined to the group.

RPF Neighbor – RPF stands for Reverse Path Forwarding. The RPF check prevents loops in the data plane by checking the source IP address against the incoming interface: if the interface the multicast packet arrived on is the same interface the router would use to send unicast traffic back to the source, the RPF check passes; if it’s a different interface, the RPF check fails and the packet is dropped. I’ll show in the lab how the RPF check is done and you’ll understand better what I’m talking about.

DR (Designated Router) – A DR is elected on every multi-access segment, such as Ethernet, where multiple routers share the same subnet. The election is based on the highest priority and then the highest IP address. The purpose of the DR is to register active sources on the segment with the RP: when the DR hears a multicast packet on the segment, it checks whether the destination group has an RP and, if so, encapsulates the packet in a special PIM Register message and sends it towards the RP.

*Note: The DR election process is preemptive; a new router with a better priority will take over from the previous DR. The priority can be tuned per interface, as shown below.
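For completeness, a minimal sketch of influencing that election; the interface name and priority value are illustrative:

interface Ethernet0/2
 ip pim dr-priority 200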

*Note: PIM Register messages are subject to RPF checks. If a Register message is received on a non-RPF interface, the check fails and the packet is dropped.

MRIB – Multicast Routing Information Base – This is the multicast topology table, which is typically derived from the unicast routing table. In PIM-SM, the MRIB is used to decide where to send the Join and Prune messages and provide routing metrics for destination addresses.

Source-Based Tree (SPT) – Used by both PIM Dense Mode and PIM Sparse Mode; it follows the shortest path from the sender to the receiver.

Shared Tree – Also known as the RP tree; traffic follows the shortest path from the sender to the RP and then the shortest path from the RP to the receiver. It’s used to eliminate flooding and pruning and to keep the multicast routing table more scalable.

PIM Sparse Mode (PIM-SM)

PIM-SM uses an explicit join model, meaning it won’t forward traffic unless someone asks for it; in other words, it won’t forward traffic to you if you don’t send a join for that specific group. If you know PIM-DM you will see the difference here, because PIM-DM floods traffic everywhere unless you explicitly ask not to receive it.

Basically, the RP is responsible for tying the trees towards the source and towards the receivers together, so the RP knows both the sources and the receivers of the traffic.

There are some important PIM Messages that we need to know about:

PIM Hello Message – Hello messages are sent periodically on each PIM-enabled interface. They allow a router to learn about the neighboring PIM routers on each interface, they are the mechanism used to elect the Designated Router (DR), and they negotiate additional capabilities. A router must record the Hello information received from each PIM neighbor. Hello messages must be sent on all active interfaces, including physical point-to-point links, and are multicast to the group address 224.0.0.13 for IPv4 and FF02::D for IPv6.

PIM Register Message – When a sender forwards traffic towards the group, the attached router hears that traffic and sends a unicast Register message to the RP. If the RP accepts this message, it acknowledges it with a Register-Stop message and inserts an (S,G) entry into the multicast table. The (S,G) notation stands for (Source, Group).

PIM Join Message – The last-hop router (connected to the receiver) receives an IGMP Report message and generates a PIM Join towards the RP. At this point the RP knows about both the source and the receivers and can forward traffic from the source towards the receiver.

Now that we have covered the basic concepts, let’s build a lab to see how it works. Since I’m assuming you have a good understanding of routing protocols, I won’t go through the IGP configuration in this post.

My topology is shown below:

I’ve created a loopback interface on every router with a 150.1.x.x IP address: R1 is 150.1.1.1, R2 is 150.1.2.2, and so on. I ran a Tcl script from R1 pinging every loopback to make sure the network is fully converged; as I’m running OSPF across the entire topology, every node can reach every other node (see the sketch below).
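If you want to run the same convergence check, here is a minimal Tcl sketch for IOS tclsh; the exact list of loopbacks (150.1.1.1 through 150.1.8.8) is an assumption based on the numbering scheme above:

tclsh
foreach addr {
150.1.1.1 150.1.2.2 150.1.3.3 150.1.4.4
150.1.5.5 150.1.6.6 150.1.7.7 150.1.8.8
} { ping $addr repeat 2 }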

The first step to set up a multicast network is enabling CEF as well as multicast routing; without them your multicast network will never work.

For now I’ll enable PIM on routers R4, R1, R7, R8, R5 and R2 using the command “ip pim sparse-mode” at interface level, as in the sketch below.
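A minimal sketch of what goes on each of those routers; the interface name is illustrative and varies per router:

ip cef
ip multicast-routing
!
interface Ethernet0/2
 ip pim sparse-mode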

To check on which interfaces PIM is enabled, issue the command “show ip pim interface”:

R1#show ip pim interface
Address       Interface    Version/Mode    Nbr   Query   DR
                                           Count Intvl
192.168.10.1  Ethernet0    v2/Sparse-Dense  1    30    192.168.10.2
192.168.9.3   Ethernet1    v2/Sparse-Dense  1    30    192.168.9.5

To check the PIM adjacencies, issue the command “show ip pim neighbor”; it will list the neighbors:

R1#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface            Uptime/Expires    Ver   DR
Address                                                        Prio/Mode
192.168.14.4      Ethernet0/2          00:05:48/00:01:21 v2    1 / DR S P G
192.168.18.8      Ethernet1/1          00:02:40/00:01:32 v2    1 / DR S P G
R1#

As we can see, the neighbors are up on Ethernet0/2 and Ethernet1/1. We can also see the uptime and when the neighbor entry will expire if no further Hellos are received.

Once the neighbors are up, the next step is to set up the RP address. You can configure it statically with “ip pim rp-address x.x.x.x”, or you can use a protocol such as Auto-RP or BSR; today we’re going to set the RP statically. To configure the RP address, issue the command “ip pim rp-address x.x.x.x” in global configuration mode. To double-check it, issue “show ip pim rp mapping”; the output should look like the one below:

R1#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s): 224.0.0.0/4, Static
    RP: 150.1.8.8 (?)
R1#

I know you’re asking yourself why there is a question mark next to the RP address. Don’t worry: it just means the name couldn’t be resolved, i.e. we don’t have DNS running in our network.

Our RP is mapped, so we can keep going; the next step is to join R2 to a multicast group and start the tests.
On R2, to join a multicast group, issue the command “ip igmp join-group x.x.x.x” under the receiving interface; a minimal sketch follows right after this paragraph. In this case R2 joins the group 239.10.10.10 (remember to stick to the administratively scoped range here). If you’re not sure which groups the router has joined, you can verify it with “show ip igmp membership”, whose output follows the sketch:
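A minimal sketch of the join on R2; the interface, Ethernet0/2, is taken from the membership output further below:

interface Ethernet0/2
 ip igmp join-group 239.10.10.10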

R2#show ip igmp membership
Flags: A  - aggregate, T - tracked
       L  - Local, S - static, V - virtual, R - Reported through v3
       I - v3lite, U - Urd, M - SSM (S,G) channel
       1,2,3 - The version of IGMP, the group is in
Channel/Group-Flags:
       / - Filtering entry (Exclude mode (S,G), Include mode (G))
Reporter:
        - last reporter if group is not explicitly tracked
       /      -  reporter in include mode,  reporter in exclude

 Channel/Group                  Reporter        Uptime   Exp.  Flags  Interface
 *,239.10.10.10                 192.168.52.2    00:02:35 stop  2LA    Et0/2
R2#

As you can see above, the router has joined the group 239.10.10.10 on Ethernet0/2.
Now I’m going to show you how the RPF check works in a multicast network.

I just sent a ping from our sender (R4) to the group that the receiver (R2) joined, sourcing it from R4’s loopback interface 150.1.4.4, and I got 100% replies, which is good.
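A hedged sketch of that test ping; the repeat count is arbitrary and the source interface name Loopback0 is an assumption based on the 150.1.4.4 addressing:

R4#ping 239.10.10.10 source Loopback0 repeat 5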

Now, if we have a look at the multicast routing table on R1, we’ll notice the RPF check is OK and R1 now has the (S,G) entry, i.e. source and group, in its mroute table:

R1#sho ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(150.1.4.4, 239.10.10.10), 00:00:48/00:02:41, flags: T
  Incoming interface: Ethernet0/2, RPF nbr 192.168.14.4
  Outgoing interface list:
    Ethernet1/1, Forward/Sparse, 00:00:48/00:02:41
R1#

Basically, the RPF check looks at which interface the packet came in on and compares it with the unicast route back to the source. In our topology the packet is sent by 150.1.4.4 and arrives on Eth0/2 at R1, so R1 does the equivalent of a “show ip route 150.1.4.4”; if that output points out Eth0/2 to reach R4’s loopback, the RPF check passes and R1 adds the outgoing interface for 239.10.10.10, which is Eth1/1. If we have a look at the debug below, we can understand a little better what I’m talking about:

R1#debug ip mpacket 239.10.10.10

IP:s=150.1.4.4 (Ethernet0/2), d=239.10.10.10 (Ethernet1/1),len 88, mforward 
IP:s=150.1.4.4 (Ethernet0/2), d=239.10.10.10 (Ethernet1/1),len 88, mforward 
IP:s=150.1.4.4 (Ethernet0/2), d=239.10.10.10 (Ethernet1/1),len 88, mforward 
IP:s=150.1.4.4 (Ethernet0/2), d=239.10.10.10 (Ethernet1/1),len 88, mforward 

S = Source of the packet
Ethernet0/2 = Interface the packet arrived on
D = Destination of the packet
Ethernet1/1 = Outgoing interface
Len 88 = Number of bytes in the packet; this value may vary depending on the application and the media
mforward = The packet has been forwarded
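As a side note, IOS also has a dedicated command that reports the RPF interface, the RPF neighbor and the route the lookup was based on for a given source; a minimal example (output omitted):

R1#show ip rpf 150.1.4.4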

The multicast traffic flow follows the black line in the topology below:

I hope you’ve understood so far how the RPF check works, as well as how to build a simple multicast topology.

Now, according to the topology above, I’ve added a new link between R4 and R6 and enabled OSPF on both interfaces. The reason for doing this is to change the OSPF cost seen by R1 so that R1 reaches R4 via R6. This way the RPF check will fail, because the multicast packet will still arrive on the Ethernet0/2 interface while R1 is now using Ethernet0/1 to reach R4; that’s why multicast can be so tricky sometimes. Let’s check!!! (A sketch of the cost change follows.)
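A hedged sketch of one way to skew the path on R1; the interface and cost value are assumptions, since the topology image isn’t reproduced here:

! Make the direct link towards R4 less attractive than the path via R6
interface Ethernet0/2
 ip ospf cost 1000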

Once R1 starts reaching R4’s loopback interface via R6, the RPF check fails and R4’s traffic no longer reaches the destination:

R1#debug ip mpacket 239.10.10.10

IP(0):s=150.1.4.4 (Ethernet0/2) d=224.0.1.39 ttl=8, len=52(48), not RPF interface
IP(0):s=150.1.4.4 (Ethernet0/2) d=224.0.1.39 ttl=8, len=52(48), not RPF interface

The output above looks like the debug we did previously; the only difference is “not RPF interface” instead of “mforward”. “Not RPF interface” means the packet didn’t arrive on the interface that R1 is using to reach 150.1.4.4; R1 is now using Eth0/1 via R6, and that’s why the RPF check fails.

The same issue could happen on R5 if a multicast packet arrives on an interface that R5 is not using as the unicast route back to the source. (One way to work around a failure like this is shown below.)
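One way to deliberately decouple the multicast RPF check from the unicast table, or to repair a failure like the one above, is a static multicast route. A minimal sketch, using this lab’s addresses purely as an illustration:

! On R1: force RPF for source 150.1.4.4 to point back towards 192.168.14.4 (out Eth0/2)
ip mroute 150.1.4.4 255.255.255.255 192.168.14.4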

Now we know how PIM-SM works. Below are some tips to troubleshoot it (with a short command recap after the list):

1 – Ensure the RP information is propagated throughout the topology. Make sure there is an RP for group “G” using the command we learned earlier in this post; check every single router in the topology and make sure all of them know the RP.

2 – Ensure the receivers have joined the group using the command “show ip igmp groups”. If you don’t have a receiver or don’t have access to it, you can simulate one on a router.

3 – Ensure the DR can reach the RP with unicast packets; ping the RP address from the DR.

4 – Send ICMP echoes to the multicast group from the routers attached directly to the sources and make sure you receive responses from the receiver.

5 – Use IOS tools such as mtrace to track down the multicast packets, use debugs to find out whether packets are being dropped somewhere in the network, review your IGP topology and build a good lab to practise.
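For reference, a short recap of the commands behind those tips; the group, RP and loopback addresses are the ones used in this lab, and the source interface name is an assumption:

show ip pim rp mapping                        ! tip 1 - is there an RP for the group?
show ip igmp groups                           ! tip 2 - have the receivers joined?
ping 150.1.8.8                                ! tip 3 - can the DR reach the RP?
ping 239.10.10.10 source Loopback0 repeat 5   ! tip 4 - end-to-end test from the sender
mtrace 150.1.4.4 150.1.2.2 239.10.10.10       ! tip 5 - trace the multicast path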

I hope you have enjoyed this post!!

Thanks!
Renato Gentil =)