# CS 219: Network Verification - Course Review


George is a fantastic teacher with very engaging lectures. I think his secret sauce includes the following:

• Look at everything from two views: Zoe (ζωή, the big picture) versus Bios (βίος, the details).
• Selectively focus on the most important techniques and examples, ignoring unnecessary points. Students feel they have learned a lot without having to memorize too many dry concepts.
• Have his own methodology for the creative process. Students experience "Aha" moments while following his introduction.

The trade-offs may be as follows (a very biased personal view; don't take it too seriously):

• (+) Attending lectures is always pleasant.
• (+) Students can learn things quickly and apply them to their own research.
• (-) His lectures may give the false impression that creating things is as easy as fusing ideas. This is not true: one must have a broad view to know what to borrow, and there are tedious stretches of trial and error and of non-trivial adaptation and modification of existing methods. He omits these in his lectures.

I would strongly recommend that everyone interested in networking take his CS 216 and CS 219.

In this post I won't include the Bioi unless they are interesting. Stated in well-defined mathematical terms, the Zoai are easy to express and thus very short.

# Mathematical Preparation

## Poset Topology

To understand a network we need a language that describes sets of IP addresses. Although every paper we read in 219 uses its own notation, the underlying idea is the same. To have a unified notation for this post, I use the concept of a regular open algebra. (See also Givant and Halmos, Ch. 10.)
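As a concrete stand-in for "sets of IP addresses" (my own minimal notation, not taken from any one paper), headers can be written as ternary wildcard strings over {0, 1, x}, in the style of Header Space Analysis. Intersection is then bitwise, and an empty result means the two sets are disjoint:

```python
def intersect(a, b):
    """Bitwise intersection of two wildcard strings; None if empty."""
    out = []
    for x, y in zip(a, b):
        if x == 'x':
            out.append(y)
        elif y == 'x' or x == y:
            out.append(x)
        else:
            return None  # a 0 meets a 1: the sets are disjoint
    return ''.join(out)

# 4-bit "prefixes" for brevity:
print(intersect('10xx', '1x0x'))  # '100x'
print(intersect('10xx', '01xx'))  # None (disjoint)
```

Unions of such wildcard strings give the finite set algebra that most of the papers below manipulate under one notation or another.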

# Data Plane Testing

Data plane testing means sending packets with specific headers into the deployed network and verifying that it behaves as expected.

## ATPG

This work uses HSA to generate an all-pairs reachability table {Header Space, Ingress Port, Egress Port, Rule History}. The table induces equivalence classes of packets (together with ports), and ATPG picks one packet from each class. To further reduce the number of packets, it selects a subset of them that covers every rule in the network. The idea of rule coverage comes from path coverage in software verification: if packets are test cases, then rules are exactly if-branches.

I think Atomic Predicates should also work here: pick one packet from each principal ideal, then select a covering subset.
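The packet-minimization step above is an instance of set cover, which ATPG approximates greedily. A hedged sketch (the names and data layout are illustrative, not from the paper's implementation):

```python
def min_cover(packet_rules):
    """Greedy set cover: map of packet name -> set of rule ids it exercises.
    Returns a near-minimal list of packets covering all rules."""
    uncovered = set().union(*packet_rules.values())
    chosen = []
    while uncovered:
        # Pick the packet covering the most still-uncovered rules.
        best = max(packet_rules, key=lambda p: len(packet_rules[p] & uncovered))
        chosen.append(best)
        uncovered -= packet_rules[best]
    return chosen

packets = {
    'p1': {'r1', 'r2'},
    'p2': {'r2', 'r3', 'r4'},
    'p3': {'r4'},
}
print(min_cover(packets))  # ['p2', 'p1'] covers r1..r4
```

Greedy set cover gives a ln(n)-approximation, which is good enough here since the goal is merely a small regression suite, not a provably minimal one.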

## Software Dataplane Verification

This work verifies only the implementation of the Click software router; nothing in it is specific to networking. I think it could equally have been submitted to a software engineering or compiler design conference.

# Control Plane

## BGP-RCC

This work does static checking of BGP configurations. I don't fully understand it. At a high level, they parse router configurations and check whether certain beliefs (such as route validity and path visibility) hold.

## Batfish

Batfish works as follows:

1. Parse OSPF and BGP configurations from routers.
2. Compute the resulting data plane from the configuration.
   • LogiQL, a Datalog variant, is used here.
3. Analyze the data plane.
4. Report the result.

## Efficient Network Reachability Analysis (ERA)

Data plane reachability depends on route reachability. A packet cannot reach B from A unless, on every node along the path:

• there is a route from B that reaches A (a "route" here is an abstraction of a route advertisement, encoded as a bitvector), and
• there is no ACL that drops this packet.

How a router handles routes is a program over bitvectors. We can model sets of routes with BDDs/ZDDs and let routers act on them.
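A toy version of the bitvector encoding, with a made-up field layout (the real ERA layout packs destination prefix, protocol, and other attributes): here the low 8 bits hold a destination prefix and bit 8 marks a BGP-learned route, and a router's transfer function masks and tests bits.

```python
PREFIX_MASK = 0xFF   # low 8 bits: destination prefix (toy layout)
VIA_BGP = 1 << 8     # bit 8: route was learned via BGP (toy layout)

def router_transfer(route, accepts_bgp):
    """One router's action on a route bitvector: filter or pass through."""
    if (route & VIA_BGP) and not accepts_bgp:
        return None  # route filtered; unreachable through this node
    return route

def propagate(route, path_policies):
    """Push a route through a chain of routers; None if filtered anywhere."""
    for accepts_bgp in path_policies:
        route = router_transfer(route, accepts_bgp)
        if route is None:
            return None
    return route

r = 0x0A | VIA_BGP                     # prefix 10, learned via BGP
print(propagate(r, [True, True]))      # survives: the bitvector itself
print(propagate(r, [True, False]))     # None: second router filters BGP
```

ERA's actual gain comes from running such transfer functions on whole *sets* of routes at once (via BDDs), rather than one concrete bitvector at a time as this sketch does.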

## Synthesis: Propane

Propane takes user input describing the network topology and policies, compiles it into state machines, and generates a BGP configuration for every router. Propane thus enables centralized configuration and distributed implementation simultaneously.

## Verification: Minesweeper

Minesweeper parses router configurations and encodes the routing protocols as SMT constraints. It can then check whether user-specified beliefs are violated. Minesweeper scales well, but returns only one counterexample per query.
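The query shape is: find an assignment satisfying the network's constraints but violating the belief. Minesweeper hands this to an SMT solver; as a self-contained stand-in, this sketch enumerates boolean assignments instead (the variables, constraint, and belief are invented for illustration):

```python
import itertools

def find_counterexample(variables, constraints, belief):
    """Return an assignment satisfying all constraints but violating the
    belief, or None. Like one SMT query, it yields a single witness."""
    for values in itertools.product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(c(env) for c in constraints) and not belief(env):
            return env
    return None

# Toy model. Constraint: reachable implies link_up.
# Belief to check: link_up implies reachable.
vars_ = ['link_up', 'reachable']
constraints = [lambda e: (not e['reachable']) or e['link_up']]
belief = lambda e: (not e['link_up']) or e['reachable']
print(find_counterexample(vars_, constraints, belief))
# {'link_up': True, 'reachable': False}
```

The exponential enumeration is exactly what an SMT solver avoids; the point of the sketch is only the counterexample-driven query shape, including why a single query yields a single counterexample.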

# Lessons Summarized

As introduced in CS 216, an idea arises when an impacting stream hits the main stream. Through these works we can clearly see the influence of (or similarity with) other fields on networking:

• Programming languages and formal verification
  • Use of automated proof tools (Datalog, SMT)
  • Logical modeling (boolean-valued models, LTL)
  • Use of existing compilers (LLVM IR)
  • Use of existing data structures (BDDs)
• Software testing
  • Branch coverage -> rule coverage
  • Source-code annotations -> beliefs
• Hardware design
  • EDA -> synthesis
  • Programmable hardware (FPGA) -> SDN, P4
• Algebra
  • Exploiting symmetries

More methods, tools, and concepts from other fields will be applied to networking. Algorithms and data structures can be used to build routers; dynamical systems and graph theory can be used to study packet dynamics; software architecture has parallels with cloud/distributed architecture; cryptography shapes network security. I believe the networking field is still far from mature, and it will not stop growing, or borrowing from other fields.

# References

• Givant, S., and P. R. Halmos, 2009: Introduction to Boolean Algebras. Springer.
• Kazemian, P., G. Varghese, and N. McKeown, 2012: Header Space Analysis: Static Checking for Networks. 9th USENIX Symposium on Networked Systems Design and Implementation (NSDI 12).
• Kazemian, P., et al., 2013: Real Time Network Policy Checking Using Header Space Analysis. 10th USENIX Symposium on Networked Systems Design and Implementation (NSDI 13).
• Yang, H., and S. S. Lam, 2013: Real-time verification of network properties using Atomic Predicates. 2013 21st IEEE International Conference on Network Protocols (ICNP).
• Lopes, N. P., et al., 2015: Checking Beliefs in Dynamic Networks. 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI 15).
• Mai, H., et al., 2011: Debugging the data plane with Anteater. SIGCOMM Comput. Commun. Rev. 41, 4 (August 2011), 290–301.
• Plotkin, G. D., et al., 2016: Scaling network verification using symmetry and surgery. SIGPLAN Not. 51, 1 (January 2016), 69–83.
• Zeng, H., P. Kazemian, G. Varghese, and N. McKeown, 2012: Automatic test packet generation. In Proceedings of the 8th international conference on Emerging networking experiments and technologies (CoNEXT ‘12), 241–252.
• Dobrescu, M. and K. Argyraki, 2014: Software Dataplane Verification. 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI 14).
• Feamster, N. and H. Balakrishnan, 2005: Detecting BGP configuration faults with static analysis. In Proceedings of the 2nd conference on Symposium on Networked Systems Design & Implementation - Volume 2 (NSDI’05), 43–56.
• Fogel, A., et al., 2015: A General Approach to Network Configuration Analysis, 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI 15).
• Fayaz, S. K., et al., 2016: Efficient Network Reachability Analysis Using a Succinct Control Plane Representation, 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16).
• Beckett, R., et al., 2016: Don’t Mind the Gap: Bridging Network-wide Objectives and Device-level Configurations, SIGCOMM 2016.
• Beckett, R., et al., 2017: A General Approach to Network Configuration Verification. In Proceedings of the Conference of the ACM Special Interest Group on Data Communication (SIGCOMM ‘17), 155–168.
