Lab 2: Link and Network with Software Defined Networking

In this lab you will learn about Software Defined Networking (SDN). Using Multipass (for virtualization), Mininet (for network emulation), and POX (an OpenFlow controller), you will build simple networks using SDN primitives.

  1. First you will learn to use Mininet, an SDN-enabled network emulator.
  2. For the second portion you will be using POX to implement a simple L2 static firewall.
  3. For the third portion you will be building an entire network, with multiple switches capable of handling ARP and other traffic in a static topology.
  4. Lastly, you will be modifying your part 3 solution to implement an actual L3 IP router that handles ARP and routes traffic dynamically.

Background

Software Defined Networking and OpenFlow

Software-Defined Networking (SDN) is a networking paradigm in which the data and control planes are decoupled from one another. One can think of the control plane as the network's "brain": it is responsible for making all decisions, such as how to forward data, while the data plane is what actually moves the data. In traditional networks, the control and data planes are tightly integrated and implemented in the forwarding devices that comprise a network. In SDN, the control plane is implemented by the "controller" and the data plane by the "switches". The controller acts as the brain of the network, sending commands ("rules") to the switches that tell them how to handle traffic. OpenFlow has emerged as the de facto SDN standard; it specifies both how the controller and the switches communicate and the rules that controllers install on switches.

Mininet

Mininet is a software stack that creates a virtual network on your computer/laptop. It accomplishes this by creating host namespaces (h1, h2, etc.) and connecting them through virtual interfaces. So when we run ping between the Linux namespaces h1 and h2, the ping runs from h1's namespace through a virtual interface pair created for h1 and h2 before it reaches h2. If h1 and h2 are connected through a switch, as shown in the Python code in the Mininet walkthrough, the ping will transit multiple virtual interface pairs. The switches we will be using run Open vSwitch (OVS), a software-defined networking stack. Mininet connects an additional virtual interface pair between each virtual port on the switch and the corresponding connected host. Each host namespace sees the same file system, but operates as its own process, running separately from the other host processes. The OVS version running on the Ubuntu image supports OpenFlow.

POX

POX is a research/academic implementation of an OpenFlow controller. It provides Python hooks to program Mininet-based software-emulated networks.


Assignment

Part 1: Mininet Primer

Mininet VM Installation

Mininet is a tool for building emulated network topologies on a Linux host for experimentation with SDN. To ensure everyone is using the same environment, and to prevent you from messing up the networking on your own computer, we'll run Mininet inside of a virtual machine.

If you are on macOS, you first need to download XQuartz (this is only required for macOS). The XQuartz project is an open-source effort to develop a version of the X.Org X Window System that runs on macOS. You can download it from this link.

To manage the virtual machine, we'll be using a tool called Multipass. If you're using a flavor of Linux or macOS, you may instead be able to install it via your package manager or Homebrew, respectively. You can find a link to install Multipass here.

Using mininet

Mininet's documentation is here, and a walkthrough of using Mininet is here. You can run it by typing: sudo mn
Inside of the mininet CLI, try running other commands like help, net, nodes, links, and dump. Mininet starts with a default network that you can poke at. Find the MAC addresses of the hosts, the ethernet ports connected, and the hostnames in the system.

Programming Mininet Topologies

Mininet is also programmable using the Python programming language. We have provided some sample topologies here, which should already be downloaded in the VM and unzipped into ~/461_mininet. In this directory you'll find two subdirectories: topos and pox. Ignore the pox directory for now (it's used in part 2). The topos folder contains a variety of Python files, each defining the topology for one of the lab portions. Run the part 1 file with sudo python3 461_mininet/topos/part1.py or sudo mn --custom 461_mininet/topos/part1.py --topo=part1. It will drop you into the CLI with the network topology defined in the Python script.

Your task in part one is to modify part1.py to represent the following network topology:

              [h1]-----{s1}------[h2]
              [h3]----/    \-----[h4]
              

Where [x] means you create a host named x, {y} means a switch named y, and --- means there is a link between the node and the switch.
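For reference, here is a minimal sketch of the Topo-subclass shape part1.py uses. The addHost, addSwitch, and addLink methods are the real Mininet Topo API; the class name Part1Topo and the stand-in fallback class are assumptions added only so the sketch can be read and exercised on a machine without Mininet installed. Your actual changes go in the provided part1.py.

```python
# Sketch of a custom Mininet topology. addHost/addSwitch/addLink are the
# real Mininet Topo API; the stand-in class below exists only so this
# sketch runs without Mininet installed.
try:
    from mininet.topo import Topo
except ImportError:
    class Topo:  # minimal stand-in that just records what was added
        def __init__(self, *args, **kwargs):
            self.hosts_, self.switches_, self.links_ = [], [], []
            self.build(*args, **kwargs)
        def addHost(self, name):
            self.hosts_.append(name)
            return name
        def addSwitch(self, name):
            self.switches_.append(name)
            return name
        def addLink(self, a, b):
            self.links_.append((a, b))

class Part1Topo(Topo):
    """Four hosts hanging off a single switch, as drawn above."""
    def build(self):
        s1 = self.addSwitch('s1')
        for name in ('h1', 'h2', 'h3', 'h4'):
            self.addLink(self.addHost(name), s1)

# 'mn --custom ... --topo=part1' looks topologies up in a dict like this:
topos = {'part1': Part1Topo}
```

In real Mininet, Topo.__init__ calls build() for you, and sudo mn --custom ... --topo=part1 instantiates the class through the topos dictionary.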

Deliverables

After creating the above topology, provide the following two items in a part1 folder inside your compressed submission file:

  1. Your modified part1.py file
  2. Screenshots of the iperf, dump, and pingall commands (from mininet), in PDF format.

Part 2: SDN Controllers using POX

In part 1, we experimented with Mininet using its internal controller. In this (and future) parts, we will instead be using our own controller to send commands to the switches. We will be using the POX controller, which is written in Python.

For this assignment you will create a simple firewall using OpenFlow-enabled switches. The term "firewall" comes from building construction: a firewall is a wall placed in a building to stop a fire from spreading. In networking, it refers to providing security by not letting specified traffic pass through. This is good for minimizing attack vectors and limiting the network "surface" exposed to attackers. For this part, we provide you with the Mininet topology, part2.py, to set up your network; it assumes a remote controller listening on the default IP address and port number, 127.0.0.1:6633. You do not need to (and should not) modify this file. The topology that this script sets up is shown below. Note that h1 and h4 are on the same subnet, and a different one from h2 and h3.

            [h1@10.0.1.2/24][h2@10.0.0.2/24][h3@10.0.0.3/24][h4@10.0.1.3/24]
            \                \               /               /
             \                \             /               /
              \                \----{s1}---/               /
               \-------------------/ |  \-----------------/
                                     |
                                (controller)
            

For part 2, we also provide a skeleton POX controller: part2controller.py. This file is where you will make your modifications to create the firewall. To run the controller, place 461_mininet/pox/part2controller.py in the ~/pox/pox/misc directory. You can then launch the controller with the command sudo ~/pox/pox.py misc.part2controller. Then, in a separate terminal on the same VM (run multipass shell again), run the command sudo python3 ~/461_mininet/topos/part2.py or the command sudo -E mn --custom 461_mininet/topos/part2.py --topo=part2 --controller=remote,ip=127.0.0.1,port=6633 to start mininet.

The rules s1 will need to implement are as follows:

              src ip      dst ip      protocol    action
              any ipv4    any ipv4    icmp        accept
              any         any         arp         accept
              any ipv4    any ipv4    -           drop

Basically, your firewall should allow all ARP and ICMP traffic to pass, while dropping any other type of traffic. It is acceptable to flood the allowed traffic out all ports. Be careful! Flow tables match a packet against the rule with the highest priority first (priority is a field you set on each rule you install). When you create a rule in the POX controller, you also need to have POX "install" the rule in the switch; this makes the switch "remember" what to do for a few seconds. Do not handle each packet individually inside of the controller! Hint: look up ofp_flow_mod.
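The rule table above maps naturally onto OpenFlow flow entries: each row becomes a rule with a match, a priority, and an action (in POX, one ofp_flow_mod message). As a library-free sketch of how the switch then evaluates traffic (the Rule and lookup names below are illustrative, not POX API):

```python
# Library-free model of the part 2 flow table: the switch applies the
# highest-priority matching rule. Each Rule here corresponds to one
# flow-table entry with a match, a priority, and either a flood action
# or no action (no action == drop). Not POX API; just the logic.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    priority: int
    dl_type: Optional[int]   # 0x0800 = IPv4, 0x0806 = ARP, None = wildcard
    nw_proto: Optional[int]  # 1 = ICMP (only meaningful for IPv4)
    action: str              # 'flood' or 'drop'

RULES = [
    Rule(30, 0x0800, 1,    'flood'),  # ICMP over IPv4: accept
    Rule(20, 0x0806, None, 'flood'),  # ARP: accept
    Rule(10, 0x0800, None, 'drop'),   # any other IPv4: drop
]

def lookup(dl_type, nw_proto=None):
    """Return the action of the highest-priority rule matching the packet."""
    for rule in sorted(RULES, key=lambda r: -r.priority):
        if rule.dl_type is not None and rule.dl_type != dl_type:
            continue
        if rule.nw_proto is not None and rule.nw_proto != nw_proto:
            continue
        return rule.action
    return 'controller'  # table miss: the packet is sent to the controller
```

In the real controller, each Rule above would roughly become an ofp_flow_mod whose match specifies the dl_type/nw_proto fields, with a flood output action appended for the accept rows and no actions for the drop row.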

The OpenFlow tutorial (specifically "Sending OpenFlow messages with POX") and the POX Wiki are both useful resources for understanding how to use POX.

Deliverables:

  1. A screenshot of the pingall command. Note that h1 and h4 should be able to ping each other (h2 and h3 as well), but not across subnets. Also, the iperf command should fail (as you're blocking IP traffic). This is realized as the command hanging.
  2. A screenshot of the output of the dpctl dump-flows command. This should contain all of the rules you've inserted into your switch.
  3. Your part2controller.py file.

Part 3: A real network

In part 2 you implemented a simple firewall that allowed ICMP packets but blocked all other IP packets. In part 3, you will expand on this to implement routing between subnets and firewalls for certain subnets. The idea is to simulate an actual production network.

We will be simulating the network of a small company. The company has a three-floor building, with each floor having its own switch and subnet. Additionally, there is a switch and subnet for all the servers in the data center, and a core switch connecting everything together. Note that the names and IPs are not to be changed. As with prior parts, we have provided the topology (461_mininet/topos/part3.py) as well as a skeleton controller (461_mininet/pox/part3controller.py). As with part 2, you need only modify the controller.

            [h10@10.0.1.10/24]--{s1}--\
            [h20@10.0.2.20/24]--{s2}--{cores21}--{dcs31}--[serv1@10.0.4.10/24]
            [h30@10.0.3.30/24]--{s3}--/    |
                                           |
                                [hnotrust1@172.16.10.100/24]
            

Your goal is to allow traffic to be transmitted between all the hosts. In this assignment, you are allowed to flood traffic on the secondary routers (s1, s2, s3, dcs31) in the same way you did in part 2 (using a destination port of of.OFPP_FLOOD). However, for the core router (cores21) you will need to specify specific ports for all IP traffic. You may do this however you choose; you may find it easiest to determine the correct destination port using the destination and source IP addresses, as well as the switch port on which the packet arrived. Additionally, to protect our servers from the untrusted Internet, we will block all IP traffic from the untrusted host to Server 1. To keep the Internet from discovering our internal IP addresses, we will also block all ICMP traffic from the untrusted host. It may also be helpful to check the contents of a host's ARP table by running <hostname> arp (e.g., h10 arp) in the mininet shell. In summary, your goals are:

  • Create a Pox controller (as per part 2) with the following features: All nodes able to communicate EXCEPT
    • hnotrust1 cannot send ICMP traffic to h10, h20, h30, or serv1.
    • hnotrust1 cannot send any IP traffic to serv1.
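One way to structure the cores21 port logic described above is a static map from destination subnet to output port. A minimal sketch follows; the port numbers here are assumptions for illustration, so read the real ones off part3.py (or the net command in the mininet CLI) before hardcoding anything.

```python
# Sketch of static port selection on cores21: pick the output port from
# the destination IP's subnet. Port numbers are illustrative assumptions.
import ipaddress

PORT_FOR_SUBNET = {
    ipaddress.ip_network('10.0.1.0/24'):    1,  # toward s1 / h10
    ipaddress.ip_network('10.0.2.0/24'):    2,  # toward s2 / h20
    ipaddress.ip_network('10.0.3.0/24'):    3,  # toward s3 / h30
    ipaddress.ip_network('10.0.4.0/24'):    4,  # toward dcs31 / serv1
    ipaddress.ip_network('172.16.10.0/24'): 5,  # toward hnotrust1
}

def output_port(dst_ip: str) -> int:
    """Output port on cores21 for a destination IP address."""
    dst = ipaddress.ip_address(dst_ip)
    for subnet, port in PORT_FOR_SUBNET.items():
        if dst in subnet:
            return port
    raise KeyError('no route for ' + dst_ip)
```

In the controller, each (subnet, port) pair would become one flow rule matching the destination prefix with an output action, rather than a per-packet Python lookup.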

Deliverables:

  1. A screenshot of the pingall command. All nodes but hnotrust should be able to send and respond to pings.
  2. A screenshot of the iperf hnotrust1 h10 and iperf h10 serv1 commands. Though not shown by these two commands, hnotrust should not be able to transfer to serv1; it should be able to transfer to the other hosts.
  3. A screenshot of the output of the dpctl dump-flows command. This should contain all of the rules you've inserted into your switches.
  4. Your part3controller.py file.

Part 4: A learning router

For part 4, you will extend your part 3 code to implement an actual layer-3 (L3) router out of the cores21 switch. Copy your part3controller.py file to part4controller.py; there is no new skeleton. For the topology, we again provide a file (part4.py). The difference between the part3.py and part4.py topologies is that each host's default route was changed from (e.g.) 'h10-eth0' to 'via 10.0.1.1', where '10.0.1.1' is the IP address of the gateway (i.e., router) for that particular subnet. This effectively changes the network from a switched network (with hosts sending to a MAC address) into a routed network (hosts sending to an IP address). Your part3controller should not work on this new topology! To complete the assignment, cores21 will need to:

  • Handle ARP traffic across subnets (without forwarding); and
  • Forward IP traffic across link domains (changing the ethernet header);
This must be done in a learning fashion: you may not install static routes on cores21 at startup. Instead, your router must learn IP addresses from the ARP messages hosts send (this type of learning is normally done at the MAC layer, and there are already several implementations of that in mininet) and install the corresponding routes into the router dynamically. Imagine this as an alternative form of DHCP where hosts instead inform the router of their addresses (conflicts be damned!). You may handle each of the individual ARP packets in the controller (i.e., not with flow rules) for part 4. The IP routing, however, must be done with flow rules. The other switches (e.g., s1) do not need to be modified and can continue to flood traffic. You may hardcode the drop rules for hnotrust on cores21.

Here's an example scenario of how cores21 might learn a host's MAC address. In the beginning, each host's ARP table is empty. Suppose h10, with MAC address hw1, wants to send an IP packet to h30.

  1. h10 checks that h30 is outside its subnet, so it knows it must send the packet to its default gateway.
  2. h10 checks its ARP table for its default gateway (10.0.1.1 in this case) and cannot find the default gateway's MAC address.
  3. It therefore sends an ARP request for the default gateway.
  4. This packet eventually arrives at cores21. Since we have not installed any rules, it is forwarded to the controller.
  5. From this ARP packet, the controller learns that h10 has MAC address hw1. It can also learn, via event.port, the port on which the unmatched packet arrived. You can use this information to install the appropriate rule on cores21.
  6. Finally, you will need to send an ARP reply back to h10. Ideally, the reply would contain the MAC address of the default gateway. However, we can actually send cores21's MAC address, or even a random MAC address. Why is this the case?
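The learning step in the scenario above can be sketched without POX as a table the controller fills in from ARP packets. All names below are illustrative assumptions, not POX API; in the real controller, the learning function is where you would build and send the flow rule described in step 5.

```python
# Library-free sketch of the part 4 learning step: harvest (IP, MAC,
# ingress port) from an ARP packet that reached the controller, remember
# it, and model the header rewrite the installed rule performs.

routes = {}  # learned routes on cores21: host IP -> (switch port, host MAC)

def learn_from_arp(src_ip, src_mac, in_port):
    """Remember where src_ip lives and which MAC it has (steps 4-5).
    In the real controller this is where a flow rule matching the
    destination IP would be built and sent to cores21."""
    routes[src_ip] = (in_port, src_mac)

def rewrite_for(dst_ip, frame, router_mac):
    """Model of the rewrite an installed rule applies to IP traffic:
    set the source MAC to the router's, the destination MAC to the
    learned host's, and output on the learned port."""
    port, dst_mac = routes[dst_ip]  # KeyError -> destination not learned yet
    return dict(frame, dl_src=router_mac, dl_dst=dst_mac, out_port=port)
```

This also explains why early pings can fail: until a host's ARP packet has been seen and its route learned, traffic toward it has no matching rule.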

Deliverables:

  1. A screenshot of the pingall command. All nodes but hnotrust should be able to send and respond to pings. Note that some pings will fail as the router learns routes (why would that be the case?).
  2. A screenshot of the iperf hnotrust1 h10 and iperf h10 serv1 commands. hnotrust should not be able to transfer to serv1, but should be able to transfer to the other hosts.
  3. A screenshot of the output of the dpctl dump-flows command. This should contain all of the rules you've inserted into your switches.
  4. Your part4controller.py file.

Turn-in

When you're ready to turn in your assignment, do the following:
  1. The files you submit should be placed in a directory named lab2. There should be no other files in that directory.
  2. Create a README.txt file that contains the names and UW netids of the member(s) of your team.
  3. Inside of the lab2 directory, create subdirectories for each of the lab parts (part1/,part2/,...).
  4. Inside of each part directory, place your topo file (e.g., part1.py), controller file (part1controller.py) if they exist (no controller for part 1), and your screenshots.
  5. Archive all the materials (lab2 folder and everything in it) in a single .zip file named lab2.zip.
  6. Submit the lab2.zip file to Gradescope.