Networking concepts

From Gender and Tech Resources

Distant, abstract and idealised, principles have been drafted, ideas floated, and suggestions made about how best to use the enormous, networked tool. But the very idea of "neutrality" where "communications" and "investment" come together (these two not only get together in the FCC, they get together in lots of standardisation committees run by military and corporate interests), where information is key and the battle for access fundamental, suggests the fictional character of the effort.

Dandy it may be to speak about such "access" entitlements to internet power, till one realises the range of forces at work seeking to limit and restrict its operations. They come from governments and their agencies. They come from companies and their subsidiaries. The internet, in other words, is simply another territory of conflict, and one filled with fractious contenders vying for the shortest lived of primacies. Forget neutrality – it was never there to begin with. Just ask the lawyers getting their briefs ready for the next round of dragging litigation. ~ The FCC, the Internet and Net Neutrality [1]

Note: On this page I frequently use the word digital device. A computer is a digital device. So is a router.



Network topology

A network consists of multiple digital devices connected using some type of interface, each having one or more interface devices such as a Network Interface Card (NIC) and/or a serial device for PPP networking. Each digital device is supported by network software that provides server and/or client functionality.

Centralised vs Distributed

We can make distinctions in type of networks according to centralisation vs distribution:

  • In a server based network, some devices are set up to be primary providers of services. These devices are called servers and the devices that request and use the service are called clients.
  • In a peer-to-peer (p2p) network, various devices on the network can act both as clients and servers. Like a network of switchers. :D

Social p2p processes are interactions with a peer-to-peer dynamic. A peer can be either a device or a human. The term comes from the P2P distributed computer application architecture, which partitions tasks or workloads between peers. P2P has inspired new structures and philosophies in many areas of human interaction. Its human dynamic affords a critical look at current authoritarian and centralized social structures. Peer-to-peer is also a political and social program for those who believe that in many cases, peer-to-peer modes are a preferable option.

Flat vs Hierarchical

In general there are two fundamental design relationships that can be identified in the construction of a network infrastructure: flat networks versus hierarchical networks. In a flat network every device is directly reachable by every other device. In a hierarchical network the world is divided into separate locations and devices are assigned to a specific location. The advantage of hierarchical design is that devices interconnecting the parts of the infrastructure need only know how to reach intended destinations without having to keep track of individual devices at each location. Routers make forwarding decisions by looking at that part of the station address that identifies the location of the destination.
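The hierarchical forwarding decision described above can be sketched in Python with a hypothetical routing table (the prefixes and next-hop names below are made up): the router picks the most specific prefix that contains the destination address, without keeping track of individual devices at each location.

```python
import ipaddress

# Hypothetical routing table: each destination prefix (a "location")
# maps to a next-hop name. Entries are illustrative only.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "router-a",
    ipaddress.ip_network("10.1.0.0/16"): "router-b",  # a more specific location
    ipaddress.ip_network("0.0.0.0/0"): "default-gw",  # everything else
}

def next_hop(destination: str) -> str:
    """Pick the most specific (longest) prefix containing the address."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.1.2.3"))   # router-b: the /16 wins over the /8
print(next_hop("192.0.2.7"))  # default-gw
```

The "longest prefix wins" rule is what lets the more specific location override the broader one.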

Physical wiring

The network topology describes the method used to do the physical wiring of the network. The main ones are:

  • Bus networks (not to be confused with the system bus of a computer) use a common backbone to connect all devices. Both ends of the network must be terminated with a terminator. A barrel connector can be used to extend it. A device wanting to communicate with another device on the network sends a broadcast message onto the wire that all other devices see, but only the intended recipient actually accepts and processes the message. Bus networks are limited in the number of devices they can serve due to the broadcast traffic they generate.
  • Ring networks connect each device to the next in a closed ring, so every device has exactly two neighbors. A data token is used to grant permission for each computer to communicate. All messages travel through a ring in the same direction, either "clockwise" or "counterclockwise". A failure in any cable or device breaks the loop and can take down the entire network, so there are also rings that double up on networking hardware and carry information both "clockwise" and "counterclockwise".
  • Star networks use a central connection point called a "hub node", a network hub, switch or router, that controls the network communications. Most home networks are of this type. Star networks are limited by the number of connection points on the hub.
  • Tree networks join multiple star topologies together onto a bus. In its simplest form, only hub devices connect directly to the tree bus, and each hub functions as the root of a tree of devices.
  • Mesh networks use routing. Unlike the previous topologies, messages sent on a mesh network can take any of several possible paths from source to destination. The most prominent example is the internet.
                                                   +-----+        +-----+                               +-----+ 
+-----+       +-----+       +-----+                |     |--------|     |                               |     |
|     |       |     |       |     |                +-----+        +-----+                               +-----+
+-----+       +-----+       +-----+                  /                \                                    |
   |             |             |                +-----+              +-----+            +-----+         +-----+         +-----+ 
   -----------------------------                |     |              |     |            |     |---------|     |---------|     |
   |             |             |                +-----+              +-----+            +-----+         +-----+         +-----+
+-----+       +-----+       +-----+                  \                /                                 /     \
|     |       |     |       |     |                 +-----+        +-----+                       +-----+        +-----+ 
+-----+       +-----+       +-----+                 |     |--------|     |                       |     |        |     |
                                                    +-----+        +-----+                       +-----+        +-----+
            Bus topology                                Ring topology                                Star topology

Hardware connections

Network interface card (NIC)

A Network Interface Card (NIC) is a circuit board or chip which allows a computer to communicate with other computers. When connected to a cable or to another method of transferring data, such as infrared or the ISM bands, it lets computers share resources, information and hardware. A collective, for example, can maintain a shared library, exchange e-mail internally, or share hardware devices such as printers.

Each network interface card (NIC) has a built-in hardware address programmed by its manufacturer. This 48-bit address should be unique for each card and is called the media access control (MAC) address.
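The 48-bit address is conventionally written as six colon-separated hexadecimal bytes, with the first three bytes identifying the manufacturer. A minimal sketch (the address below is made up):

```python
def format_mac(raw: bytes) -> str:
    """Render a 48-bit (6-byte) hardware address in colon notation."""
    assert len(raw) == 6, "a MAC address is exactly 6 bytes"
    return ":".join(f"{b:02x}" for b in raw)

# Example 6-byte address (values are made up): the first three bytes
# are the manufacturer's identifier, the last three are per-card.
mac = bytes([0x00, 0x1A, 0x2B, 0x3C, 0x4D, 0x5E])
print(format_mac(mac))  # 00:1a:2b:3c:4d:5e
```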

Network cabling


You can connect two digital devices (computers) directly by running a cross-over cable between their network cards; a straight network jumper cable will not work, because the transmit pins on one card would be wired to the transmit pins on the other.

Common network cable types:

  • In Twisted Pair cables, wire is twisted to minimize crosstalk interference. It may be shielded or unshielded.
    • Unshielded Twisted Pair (UTP)
    • Shielded twisted pair (STP)
  • Coaxial cables are two conductors separated by insulation. Coax cable types of interest:
    • RG-58 A/U - 50 ohm, with a stranded wire core.
    • RG-58 C/U - Military version of RG-58 A/U.
  • With fiber-optic cables, data is transmitted using light rather than electrons. Usually there are two fibers, one for each direction. They are not subject to electromagnetic interference. Two types of cables are:
    • Single mode cables for use with lasers.
    • Multimode cables for use with Light Emitting Diode (LED) drivers.

Hubs and switches

A network hub is a hardware device to connect network devices together. The devices will all be on the same network and/or subnet. All network traffic is shared and can be sniffed by any other node connected to the same hub.

An uplink is a connection from a device or smaller local network to a larger network. Uplink does not have a crossover connection and is designed to fit into a crossover connection on the next hub. This way you can keep linking hubs to put more computers on a network. Because each hub introduces some delay onto the network signals, there is a limit to the number of hubs you can sequentially link. Also the computers that are connected to the two hubs are on the same network and can talk to each other. All network traffic including all broadcasts is passed through the hubs.

A network switch is like a hub but creates a private link between any two connected nodes when a network connection is established. This reduces the amount of network collisions and thus improves speed. Broadcast messages are still sent to all nodes.
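The difference between a hub and a switch can be sketched as a toy "learning switch". Assuming frames are represented as simple (source, destination, ingress port) values, the switch learns which port each MAC address lives on and floods, hub-style, only while the destination is still unknown. The addresses below are shortened placeholders.

```python
# Toy learning switch. MAC addresses here are shortened placeholders.
mac_table: dict[str, int] = {}  # MAC address -> port it was last seen on

def forward(src: str, dst: str, in_port: int) -> str:
    mac_table[src] = in_port             # learn where the sender lives
    if dst in mac_table:
        return f"port {mac_table[dst]}"  # private link to one port only
    return "flood all ports"             # unknown destination: act like a hub

print(forward("aa:aa", "bb:bb", 1))  # flood all ports (bb:bb not learned yet)
print(forward("bb:bb", "aa:aa", 2))  # port 1 (aa:aa was learned above)
```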

If you have a machine (device, computer) with two network cards, eth0 connected to an outbound hub and eth1 connected to another hub that only links local machines, and it is not configured as a router or bridge, then the two networks are considered separate. If no other machine on the network that eth0 is connected to acts as an outbound device, none of the devices on that network can reach the outside.

Three hubs, A, B and C, with uplink connections. A can be connected to the internet. As depicted it is not, and as a result all computers are not.

Wireless media

Transmission of waves takes place in the electromagnetic (EM) spectrum. The carrier frequency of the data is expressed in cycles per second, called hertz (Hz). Low frequency signals can travel long distances through many obstacles but cannot carry a high bandwidth of data. High frequency signals travel shorter distances through fewer obstacles but can carry a wide bandwidth of data. The effect of noise on the signal also decreases as the power of the radio transmitter increases, as with all FM transmissions. The three broad categories of wireless media are:

Radio frequency (RF) refers to the frequencies of radio waves, the part of the electromagnetic spectrum that ranges from 3 Hz to 300 GHz. Radio waves are radiated by an antenna, produced by the alternating currents fed to it. RF is used in many standard as well as proprietary wireless communication systems, and has long been used for radio and TV broadcasting, wireless local loop, mobile communications, and amateur radio. It is broken into many bands, including the AM, FM, and VHF bands. The Federal Communications Commission (FCC) regulates the assignment of these frequencies. Frequencies for unlicensed use are:

  • 902 - 928 MHz - cordless phones, remote controls
  • 2.4 GHz
  • 5.72 - 5.85 GHz

Microwave is the upper part of the RF spectrum, i.e. the frequencies above 1 GHz. Because of the larger bandwidth available in the microwave spectrum, microwave is used in many applications such as wireless PAN (Bluetooth), wireless LAN (Wi-Fi), broadband wireless access or wireless MAN (WiMAX), wireless WAN (2G/3G cellular networks), satellite communications and radar. But it became a household name because of its use in the microwave oven.

  • Terrestrial - Used to link networks over long distances but the two microwave towers must have a line of sight between them. The signal is normally encrypted for privacy.
  • Satellite - A satellite orbiting at 22,300 miles above the earth stays in a fixed position relative to the rotation of the earth. This is called a geosynchronous orbit. A station on the ground sends and receives signals from the satellite. Because of the distances involved, the ground-satellite-ground path adds a propagation delay of roughly a quarter of a second each way.

Infrared light is the part of the electromagnetic spectrum with wavelengths shorter than radio waves but longer than visible light. Its frequency range is between 300 GHz and 400 THz, corresponding to wavelengths from 1 mm down to 750 nm. Infrared has long been used in night vision equipment and TV remote controls. Infrared is also one of the physical media in the original wireless LAN standard, IEEE 802.11. Infrared use in communication and networking was defined by the Infrared Data Association (IrDA). Using IrDA specifications, infrared can be used in a wide range of applications, e.g. file transfer, synchronization, dial-up networking, and payment. However, IrDA is limited in range (up to about 1 meter) and requires the communicating devices to be in line of sight (LOS) within a 30-degree beam cone. A light emitting diode (LED) or laser is used to transmit the signal. The signal cannot travel through objects, and ambient light may interfere with it. Some types of infrared are:

  • Point to point - Transmission frequencies are 100 GHz - 1,000 THz. Transmission is between two points and is limited to line-of-sight range. It is difficult to eavesdrop on the transmission.
  • Broadcast - The signal is dispersed so several units may receive the signal. The unit used to disperse the signal may be reflective material or a transmitter that amplifies and retransmits the signal. Installation is easy and cost is relatively inexpensive for wireless.

LAN radio communications

  • Low power, single frequency is susceptible to interference and eavesdropping.
  • High power, single frequency requires FCC licensing and high power transmitters. It is susceptible to interference and eavesdropping.
  • Spread spectrum uses several frequencies at the same time. Two main types are:
    • In Direct sequence modulation the data is broken into parts and transmitted simultaneously on multiple frequencies. Decoy data may be transmitted for better security.
    • In Frequency hopping the transmitter and receiver change predetermined frequencies at the same time (in a synchronized manner).
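Frequency hopping synchronization can be illustrated with a shared pseudo-random seed: if transmitter and receiver derive the hop schedule from the same seed, they change channels together without ever exchanging the schedule. The channel count below (79, as in classic Bluetooth) is only an example.

```python
import random

CHANNELS = list(range(79))  # e.g. 79 hop channels, as in classic Bluetooth

def hop_sequence(seed: int, hops: int) -> list[int]:
    """Derive a pseudo-random channel schedule from a shared seed."""
    rng = random.Random(seed)
    return [rng.choice(CHANNELS) for _ in range(hops)]

tx = hop_sequence(seed=1234, hops=5)  # transmitter's schedule
rx = hop_sequence(seed=1234, hops=5)  # receiver's schedule
print(tx == rx)  # True: both sides stay synchronized
```

An eavesdropper who does not know the seed cannot predict which frequency comes next, which is part of the security appeal of spread spectrum.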

TCP/IP ports and addresses

The part of the network stack that does the job of transporting and managing the data across the "normal" internet is called TCP/IP, which stands for Transmission Control Protocol (TCP) and Internet Protocol (IP). The IP layer requires a 4-byte (IPv4) or 16-byte (IPv6) address to be assigned to each network interface card on each computer. This can be done automatically using network software such as the Dynamic Host Configuration Protocol (DHCP) or by manually entering static addresses.

Port numbers

The TCP layer requires what is called a port number to be assigned to each message. This way it can determine the type of service being provided. These are not ports that are used for serial and parallel devices or for computer hardware control, but reference numbers used to define a service (RFC 6335).


Addresses are used to locate computers, almost like a house address. Each IPv4 address is written in what is called dotted decimal notation: four numbers, each separated by a dot, where each number represents a one-byte value with a possible range of 0-255.
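Dotted decimal is just a human-readable spelling of four bytes, which the standard library can show directly (a sketch using Python's socket and struct modules; the address is arbitrary):

```python
import socket
import struct

# inet_aton packs dotted decimal into its 4-byte network representation.
packed = socket.inet_aton("192.168.1.10")
print(list(packed))  # [192, 168, 1, 10] - one byte per dotted number

# The same four bytes read as a single big-endian 32-bit integer:
value, = struct.unpack("!I", packed)
print(value)  # 3232235786
```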

Network protocol levels

Protocols are sets of standards that define all operations within a network and how devices outside the network can interact with the network. Protocols define everything: basic networking data structures, higher level application programs, services and utilities.

The International Standards Organization (ISO) has defined the Open Systems Interconnection (OSI) model for current networking protocols, commonly referred to as the ISO/OSI model (ISO standard 7498-1). It is a hierarchical structure of seven layers that defines the requirements for communications between two computers. It was conceived to allow interoperability across the various platforms offered by vendors. The model allows all network elements to operate together, regardless of who built them. By the late 1980s, ISO was recommending the implementation of the OSI model as a networking standard. By that time, TCP/IP had been in use for years. TCP/IP was fundamental to ARPANET and the other networks that evolved into the internet. For differences between TCP/IP and ARPANET, see RFC 871. Only a subset of the whole OSI model is used today.


Protocols are outlined in Request for Comments (RFCs). The RFCs central to the TCP/IP protocol:

  • RFC 1122 - Defines host requirements of the TCP/IP suite of protocols covering the link, network (IP), and transport (TCP, UDP) layers.
  • RFC 1123 - The companion RFC to 1122 covering requirements for internet hosts at the application layer
  • RFC 1812 - Defines requirements for internet gateways which are IPv4 routers

ISO/OSI model


7. The Application layer provides a user interface by interacting with the running application. Examples of application layer protocols are Telnet, File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP) and Hypertext Transfer Protocol (HTTP).

6. The Presentation layer transforms data it receives from and passes on to the Application layer and Session layer. MIME encoding, data compression, data encryption and similar manipulations of the presentation are done at this layer. Examples: converting an EBCDIC-coded text file to an ASCII-coded file or from a .wav to .mp3 file, or serializing objects and other data structures into and out of XML. This layer makes the type of data transparent to the layers around it.

5. The Session layer establishes, manages and terminates the connections between local and remote applications. The OSI model made this layer responsible for "graceful close" of sessions (a property of TCP), and session checkpointing and recovery (usually not used in the internet protocol suite). It provides for duplex or half-duplex operation, dialog control (who transmits next), token management (who is allowed to attempt a critical action next) and establishes checkpointing of long transactions so they can continue after a crash, adjournment, termination, and restart procedures.

  • Full Duplex allows the simultaneous sending and receiving of packets.
  • Half Duplex allows the sending and receiving of packets in one direction at a time only.

4. The Transport layer provides end-to-end delivery of data between two nodes and is responsible for the delivery of a message from one process to another. It converts messages into Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Stream Control Transmission Protocol (SCTP), etc. Some protocols are stateful and connection-oriented, allowing the transport layer to keep track of packets and retransmit those that fail: the layer divides data into segments before transmitting it, and on receipt the segments are reassembled and forwarded to the next layer. If data is lost in transmission or arrives with errors, this layer recovers the lost data and retransmits it.

3. The Network layer handles logical addressing and routing: it translates network addresses into physical MAC addresses and performs network routing, flow control, segmentation/desegmentation, and error control functions. The best known example of a layer 3 protocol is the Internet Protocol (IP).

2. The Data Link layer is responsible for moving frames from one hop (node) to the next. The main function of this layer is to convert the data packets received from the upper layer(s) into frames, to establish a logical link between the nodes, and to transmit the frames sequentially. The addressing scheme is physical, as MAC addresses are hard-coded into network cards at the time of their manufacture. The best known example is ethernet (the IEEE 802.x data transfer methods). IEEE divided this layer into the two following sublayers:

  • Logical Link Control (LLC) maintains the link between two computers by establishing Service Access Points (SAPs) which are a series of interface points. See IEEE 802.2.
  • Media Access Control (MAC) is used to coordinate the sending of data between computers. See the IEEE 802.3, 4, 5, and 12 standards.

1. The Physical layer coordinates the functions required to transmit a bit stream over a physical medium. It defines all the electrical and physical specifications for devices. This includes layout of pins, voltages, cable specifications, etc. Hubs, repeaters and network adapters are physical-layer devices. Popular protocols at this layer are Fast Ethernet, ATM, RS232, etc. The major functions and services performed by the physical layer are:

  • Establishment and termination of a connection to a device.
  • Participation in the process whereby resources are effectively shared among multiple users.
  • Modulation, or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a channel. These are signals operating over the physical cabling or over a radio link.

TCP/IP model

4. The Application layer includes all the higher-level protocols such as TELNET, FTP, DNS, SMTP, SSH, etc. The TCP/IP model has no session or presentation layer. Their functionalities are folded into its application layer, directly on top of the transport layer.

                                                           |  Application data  |    Application packet

3. The Transport layer provides datagram services to the Application layer. This layer allows host and destination devices to communicate with each other for exchanging messages, irrespective of the underlying network type. Error control, congestion control, flow control, etc., are handled by the transport layer. The protocols used are the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). TCP gives a reliable, end-to-end, connection-oriented data transfer, while UDP provides unreliable, connectionless data transfers.

The data necessary for these functions is added to the packet. This process of "wrapping" the application data is called data encapsulation.

                                            |  TCP header  |  Application data  |    TCP packet

2. The Internet layer (alias Network layer) routes data to its destination. Data received by the link layer is made into data packets (IP datagrams), containing source and the destination IP address or logical address. These packets are sent and delivered independently (unordered). Protocols at this layer are Internet Protocol (IP), Internet Control Message Protocol (ICMP), etc.

                              |  IP header  |  TCP header  |  Application data  |    IP packet

1. The Network Interface layer combines OSI's Physical and Data Link layers.

          |  Ethernet header  |  IP header  |  TCP header  |  Application data  |    Ethernet packet
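The nesting shown in the diagrams above can be sketched as plain byte concatenation: each layer prepends its own header to whatever the layer above handed down. The header contents here are placeholder tags, not real protocol fields.

```python
# Toy encapsulation: headers are placeholder tags, not real formats.
app_data = b"GET / HTTP/1.1\r\n"            # application packet
tcp_packet = b"[TCP hdr]" + app_data        # TCP packet
ip_packet = b"[IP hdr]" + tcp_packet        # IP packet
ethernet_frame = b"[Eth hdr]" + ip_packet   # ethernet packet
print(ethernet_frame)
```

On the receiving side, each layer strips its own header and hands the remainder upward.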

Data link layer

The IEEE 802 standards define the two lowest levels of the seven layer network model and primarily deal with the control of access to the network media [2]. The network media is the physical means of carrying the data such as network cable. The control of access to the media is called media access control (MAC).

Network access methods

All clients talking at once doesn't work. What ways have been developed so far to avoid this?

  • Contention
    • Carrier-Sense Multiple Access with Collision Detection (CSMA/CD) used by ethernet
    • Carrier-Sense Multiple Access with Collision Avoidance (CSMA/CA)
  • Token Passing - A token is passed from one computer to another, which provides transmission permission.
  • Demand Priority - Describes a method where intelligent hubs control data transmission. A computer will send a demand signal to the hub indicating that it wants to transmit. The hub will respond with an acknowledgement that allows the computer to transmit. The hub will allow computers to transmit in turn.
  • Polling - A central controller, also called the primary device, will poll computers, called secondary devices, to find out if they have data to transmit. If so, the central controller will allow them to transmit for a limited time, then the next device is polled.

Ethernet uses CSMA/CD, a method that allows network stations to transmit any time they want. Network stations sense the network line and detect if another station has transmitted at the same time they did. If such a collision happened, the stations involved will retransmit at a later, randomly set time in hopes of avoiding another collision.
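The "randomly set time" is, in ethernet, a truncated binary exponential backoff: after the n-th collision a station waits a random number of slot times in the range 0 to 2^n - 1, with the doubling capped after 10 collisions. A sketch:

```python
import random

def backoff_slots(collisions: int, rng: random.Random) -> int:
    """Slots to wait after the given number of consecutive collisions."""
    n = min(collisions, 10)       # ethernet stops doubling after 10
    return rng.randrange(2 ** n)  # uniform pick from 0 .. 2**n - 1

rng = random.Random(42)
for attempt in range(1, 5):
    print(f"collision {attempt}: wait {backoff_slots(attempt, rng)} slots")
```

Because each station picks its delay independently, the chance of two stations colliding again shrinks with every retry.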

Data encapsulation

1. One computer requests to send data to another over a network.

2. The data message flows through the Application layer and is handed, via a TCP or UDP port, to the Transport layer. The Transport layer's data unit has one of two names, segment or datagram. If the TCP protocol is being used, it is called a segment. If the UDP protocol is being used, it is called a datagram.

3. The data segment obtains logical addressing at the Internet layer via the IP protocol, and the data is then encapsulated into a datagram. The requirements for IP to link layer encapsulation for hosts on an Ethernet network are:

  • All hosts must be able to send and receive packets defined by RFC 894.
  • All hosts should be able to receive a mix of packets defined by RFC 894 and RFC 1042.
  • All hosts may be able to send RFC 1042 defined packets.

Hosts that support both must provide a means to configure the type of packet sent and the default must be packets defined by RFC 894.

4. The datagram enters the Network Access layer, where software will interface with the physical network. A data frame encapsulates the datagram for entry onto the physical network. At the end of the process, the frame is converted to a stream of bits that is then transmitted to the receiving computer.

5. The receiving computer removes the frame, and passes the packet onto the Internet layer. The Internet layer will then remove the header information and send the data to the Transport layer. Likewise, the Transport layer removes header information and passes data to the final layer. At this final layer the data is whole again, and can be read by the receiving computer if no errors are present.

Encapsulation formats

Ethernet (RFC 894) message format:

          |  preamble  |  destination address  |  source address  |  type  |  application, transport and network data  |  CRC  |    Ethernet packet
  • 8 bytes for preamble
  • 6 bytes for destination address
  • 6 bytes for source address.
  • 2 bytes of message type indicating the type of data being sent
  • 46 to 1500 bytes of data. The maximum length of an Ethernet frame is 1526 bytes. This means a data field length of up to 1500 bytes.
  • 4 bytes for cyclic redundancy check (CRC) information
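The 14-byte header that follows the preamble (destination, source, type) can be packed with Python's struct module; the source address below is made up.

```python
import struct

dst = bytes.fromhex("ffffffffffff")  # 6-byte destination: broadcast
src = bytes.fromhex("001a2b3c4d5e")  # 6-byte source (made-up address)
eth_type = 0x0800                    # 2-byte type: 0x0800 means IPv4

# "!" = network (big-endian) byte order, "6s" = 6 raw bytes, "H" = 2 bytes.
header = struct.pack("!6s6sH", dst, src, eth_type)
print(len(header))  # 14
```

The 46 to 1500 bytes of data and the 4-byte CRC would follow this header on the wire.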

IEEE 802 (RFC 1042) message format:

          |  preamble  |  SFD  | destination address  |  source address  |  length  |  DSAP  |  SSAP  |  control  |  info  |  FCS  |    IEEE 802.3 packet

IEEE 802.3 Media Access Control section used to coordinate the sending of data between computers:

  • 7 bytes for preamble
  • 1 byte for the start frame delimiter (SFD)
  • 6 bytes for destination address
  • 6 bytes for source address
  • 2 bytes for length - The number of bytes that follow not including the CRC.

Additionally, IEEE 802.2 Logical Link control establishes service access points (SAPs) between computers:

  • 1 byte destination service access point (DSAP)
  • 1 byte source service access point (SSAP)
  • 1 byte for control

Followed by Sub Network Access Protocol (SNAP):

  • 3 bytes for org code.
  • 2 bytes for message type which indicates the type of data being sent
  • 38 to 1492 bytes of data
  • 4 bytes for cyclic redundancy check (CRC) information named frame check sequence (FCS)

Trailer encapsulation

This link layer encapsulation is described in RFC 1122 and RFC 893. It is not used very often today, though it may be interesting for further experimentation.

TCP/IP network protocols

The Transmission Control Protocol/Internet Protocol (TCP/IP) uses a client - server model for communications. The protocol defines the data packets transmitted (packet header, data section), data integrity verification (error detection bytes), connection and acknowledgement protocol, and re-transmission.

TCP/IP Time To Live (TTL) is a countdown mechanism that limits how long a packet remains valid before it reaches its destination. Each time a TCP/IP packet passes through a router, the router decrements the packet's TTL count. When the count reaches zero, the packet is dropped by the router. This ensures that misrouted packets and packets caught in routing loops will not flood the network.
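The TTL countdown can be sketched as a loop over hops: every router decrements the counter, and a packet whose counter hits zero is dropped instead of circulating forever.

```python
def traverse(ttl: int, hops: int) -> str:
    """Simulate a packet crossing a number of routers."""
    for hop in range(1, hops + 1):
        ttl -= 1           # each router decrements the TTL
        if ttl == 0:
            return f"dropped at hop {hop}"
    return f"delivered with TTL {ttl} remaining"

print(traverse(ttl=64, hops=10))  # delivered with TTL 54 remaining
print(traverse(ttl=3, hops=10))   # dropped at hop 3
```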

TCP/IP includes a wide range of protocols which are used for a variety of purposes on the network. The set of protocols that are part of TCP/IP is called the TCP/IP protocol stack or the TCP/IP suite of protocols.

  ISO/OSI layer(s)                     TCP/IP layer       TCP/IP protocol examples
  Application, session, presentation   Application        NFS, NIS, DNS, RPC, LDAP, telnet, ftp, rlogin, rsh, rcp, RIP, RDISC, SNMP, and others
  Transport                            Transport          TCP, UDP, SCTP
  Network                              Internet           IPv4, IPv6, ARP, ICMP
  Data link                            Data link          PPP, IEEE 802.2, HDLC, DSL, frames, network switching, MAC address
  Physical                             Physical network   Ethernet (IEEE 802.3), Token Ring, RS-232, FDDI, and others

Application protocols

FTP, TFTP, SMTP, Telnet, NFS, ping, and rlogin provide direct services to the user. DNS provides address-to-name translation for locations and network cards. RPC allows a remote computer to perform functions on other computers. RARP, BOOTP, DHCP, IGMP, SNMP, RIP, OSPF, BGP, and CIDR enhance network management and increase functionality.

  • Hypertext Transfer Protocol (HTTP) is the protocol that facilitates transfer of data. Typically, data is transferred in the form of pages, or HTML markup. HTTP operates on TCP port 80.
  • Secure HTTP (HTTPS) uses TCP port 443 to securely transfer HTTP data via SSL (Secure Socket Layer). TLS is the newer successor to SSL.
  • File Transfer Protocol (FTP) operates on TCP ports 20 (data)/21(transmission control). It is used in simple file transfers from one node to another without any security (transferred in cleartext). Secure (SFTP) is a version of FTP that uses SSH to transfer data securely, using whichever port SSH uses (usually 22).
  • Trivial FTP (TFTP) is a UDP version of FTP that uses UDP port 69. It is called "trivial" because it is relatively unreliable and inefficient and so is more often used for inter-network communication between routers.
  • Telnet (Telecommunications Network) is an old protocol used to remotely connect to a node. All communications with telnet are in cleartext (even passwords for authentication). Telnet operates on TCP port 23. Except in lab situations, it is no longer in use.
  • Secure Shell (SSH) is a secure replacement of Telnet. SSH allows terminal emulation in cipher text, which equates to enhanced and increased security. SSH usually operates on TCP port 22.
  • Network News Transfer Protocol (NNTP) is a protocol used by client and server software to carry USENET (newsgroup) postings back and forth over a TCP/IP network. NNTP operates on TCP port 119.
  • Lightweight Directory Access Protocol (LDAP) is a "Directory Services" protocol allowing a server to act as a central directory for client nodes. LDAP operates on TCP and UDP port 389.
  • Network Time Protocol (NTP) allows for synchronizing network time with a server. NTP operates on UDP port 123.
  • Post Office Protocol (POP3) is the mailbox protocol allowing users to download mail from a mail server. Once you access it, your client software will download all of your incoming mail and wipe it from the server. POP3 operates on TCP port 110.
  • The Internet Message Access Protocol (IMAP) allows for server-based repositories of sent mail and other specialized folders. When using IMAP4 instead of POP3 as your incoming mail protocol, you download very minimal information to your local machine and when you want to access actual incoming mail, you are pulling this directly from the mail server. This allows you to access your mail from virtually anywhere. IMAP4 operates on TCP port 143.
  • Simple Mail Transfer Protocol (SMTP) used in conjunction with POP3 or IMAP4 allows for sending/receiving of email. Without it you will only be able to receive mail. SMTP operates on TCP port 25.
  • Domain Name System (DNS) resolves easy to read domain names into computer readable IP addresses and operates on UDP port 53.
  • Simple Network Management Protocol (SNMP) manages devices on IP networks, such as modems, switches, routers, or printers. Default works on UDP port 161.
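
The port numbers listed above can be collected into a small lookup table. This is a minimal sketch, not a real API: the dictionary name and function are mine, and the entries simply restate the ports given in this section.

```python
# Well-known ports as listed in this section (service -> (port, transport)).
WELL_KNOWN_PORTS = {
    "ftp-data": (20, "tcp"),
    "ftp": (21, "tcp"),
    "ssh": (22, "tcp"),
    "telnet": (23, "tcp"),
    "smtp": (25, "tcp"),
    "dns": (53, "udp"),
    "tftp": (69, "udp"),
    "pop3": (110, "tcp"),
    "nntp": (119, "tcp"),
    "ntp": (123, "udp"),
    "imap4": (143, "tcp"),
    "snmp": (161, "udp"),
    "ldap": (389, "tcp"),
}

def port_for(service):
    """Return (port, transport) for a service name, or None if unknown."""
    return WELL_KNOWN_PORTS.get(service.lower())
```

On Unix-like systems the authoritative version of this table lives in /etc/services.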

Transport protocols

Transport protocols control the management of services between computers. Based on the port values in TCP and UDP messages, a server knows which service is being requested.

  • Transmission Control Protocol (TCP) is a reliable connection oriented protocol used to control the management of application level services between computers.
  • User Datagram Protocol (UDP) is an unreliable, connectionless messaging protocol used to control the management of application level services between computers.
  • The Stream Control Transmission Protocol (SCTP) is message-oriented like UDP and ensures reliable, in-sequence transport of messages with congestion control like TCP. In the absence of native SCTP support in operating systems it is possible to tunnel SCTP over UDP, as well as mapping TCP API calls to SCTP calls.
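
UDP's connectionless nature can be seen directly with two datagram sockets on the loopback interface: a message is delivered without any connection setup (with TCP, a connect()/accept() handshake would be required first). A minimal sketch using the standard socket module:

```python
import socket

def udp_round_trip(payload: bytes) -> bytes:
    """Send one datagram over loopback and return what was received."""
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
    receiver.settimeout(5)
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # No handshake: the datagram is simply sent to the receiver's address.
        sender.sendto(payload, receiver.getsockname())
        data, addr = receiver.recvfrom(4096)
        return data
    finally:
        sender.close()
        receiver.close()
```

Over loopback the datagram arrives reliably, but UDP itself gives no such guarantee on a real network.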

Internet protocols

ARP operates between layers, using broadcasts to let one layer obtain information (such as hardware addresses) needed by another layer. IP and ICMP manage the movement of messages and report errors (including routing).

  • Internet Protocol (IP) provides the mechanism to use software to address and manage data packets being sent to computers. Except for ARP and RARP all protocols' data packets will be packaged into an IP data packet.
  • Address resolution protocol (ARP) enables packaging of IP data into ethernet packages. It is the system and messaging protocol that is used to find the ethernet (hardware) address from a specific IP number. Without it, the ethernet package can not be generated from the IP package, because the ethernet address can not be determined.
  • Internet Control Message Protocol (ICMP) provides management and error reporting to help manage the process of sending data between computers.

Network interface protocols

Allows messages to be packaged and sent between physical locations.

  • Serial line IP (SLIP) is a form of data encapsulation for serial lines.
  • Point to point protocol (PPP) is a form of serial line data encapsulation that is an improvement over SLIP.
  • Ethernet provides transport of information between physical locations on ethernet cable. Data is passed in ethernet packets.

Network devices

Repeaters

As signals travel along a network cable (or any other medium of transmission), they degrade and become distorted in a process that is called attenuation. If a cable is long enough, the attenuation will finally make a signal unrecognizable by the receiver. A repeater retimes and regenerates the signals to proper amplitudes and sends them to the other segments, enabling signals to travel longer distances over a network.

To pass data through the repeater in a usable fashion from one segment to the next, the packets and the Logical Link Control (LLC) protocols must be the same on each segment. This means that a repeater will not enable communication, for example, between an 802.3 segment (Ethernet) and an 802.5 segment (Token Ring). Repeaters do not translate anything.

Bridges

Bridges work at the Data Link layer. This means that all information contained in the higher levels of the OSI model is unavailable to them, including IP addresses. Bridges read the outermost section of data on the data packet to tell where a message is going.

Bridges do not distinguish between one protocol and another and simply pass all protocols along the network. Because all protocols pass across the bridges, it is up to the individual computers to determine which protocols they can recognise. As traffic passes through the bridge, information about the computer addresses is then stored in the bridge's RAM. The bridge will then use this RAM to build a routing table based on source (MAC) addresses. To determine the network segment a MAC address belongs to, bridges use one of:

  • In Transparent Bridging the bridge builds a table of addresses (the bridging table) as it receives packets. If the address is not in the bridging table, the packet is forwarded to all segments other than the one it came from. This type of bridge is used on ethernet networks.
  • In Source Route Bridging the source computer provides path information inside the packet. This is used on Token Rings.
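
The transparent-bridging behaviour described above can be sketched in a few lines: the bridge learns which port each source MAC address was seen on, and floods frames whose destination it has not learned yet. The class and method names here are illustrative, not a real API.

```python
class TransparentBridge:
    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}                  # MAC address -> port (the bridging table)

    def receive(self, frame_src, frame_dst, in_port):
        """Return the set of ports the frame is forwarded out of."""
        self.table[frame_src] = in_port            # learn the source address
        if frame_dst in self.table:
            out = self.table[frame_dst]
            # Destination is on the segment the frame came from: drop it.
            return set() if out == in_port else {out}
        return self.ports - {in_port}              # unknown: flood everywhere else
```

As traffic flows, the table fills in and flooding gives way to targeted forwarding.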

Bridges can be used to:

  • Expand the distance of a segment.
  • Provide for an increased number of computers on the network.
  • Reduce traffic bottlenecks resulting from an excessive number of attached computers.

Routers

In an environment consisting of several network segments with different protocols and architecture, a bridge may not be adequate for ensuring fast communication among all of the segments. A complex network needs a device which not only knows the address of each segment, but can also determine the best path for sending data and filter broadcast traffic to the local segment. Such a device is called a router. Routers work at the Network layer of the OSI model, meaning that they can switch and route packets across multiple networks.

A router is used to route data packets between two networks. It reads the information in each packet to tell where it is going. If it is destined for an immediate network it has access to, it will strip the outer packet, readdress the packet to the proper ethernet address, and transmit it on that network. If it is destined for another network and must be sent to another router, it will re-package the outer packet to be received by the next router and send it to the next router.

Gateways

Gateways make communication possible between different architectures and environments. They repackage and convert data going from one environment to another so that each can understand the other's data. Most gateways operate at the application layer, but they can also operate at the network or session layer of the OSI model.

A gateway strips information until it gets to the required level, repackages the information to match the requirements of the destination system, and works its way back toward the hardware layer of the OSI model: it decapsulates incoming data through the network's complete protocol stack and encapsulates the outgoing data in the complete protocol stack of the other network to allow transmission. A gateway links two systems that do not use the same:

  • Communication protocols
  • Data formatting structures
  • Languages
  • Architecture

ARP and RARP address translation

Address resolution refers to determining the address of a device at one protocol level from its address at another protocol level, for example finding an Ethernet address from an IP address.

The Address Resolution Protocol (ARP) provides a completely different function to the network than Reverse Address Resolution Protocol (RARP). ARP is used to resolve the ethernet address of a NIC from an IP address in order to construct an ethernet packet around an IP data packet. This must happen in order to send any data across the network. Reverse Address Resolution protocol (RARP) is used for diskless computers to determine their IP address using the network.

In IPv6, ARP and RARP are replaced by the Neighbor Discovery Protocol (NDP), which operates as part of ICMPv6, the Internet Control Message Protocol for IPv6.

ARP

A Media Access Control address (MAC address) is the network card address used for communication with other network devices on the subnet. This information is not routable. The ARP table maps a (global internet) TCP/IP address to the hardware address on the local network. The MAC address uniquely identifies each node of a network and is used by the Ethernet protocol.

To determine a recipient's physical address, a device broadcasts an ARP request on the subnet that contains the IP address to be translated. The machine with the relevant IP address responds with its physical address.

To make ARP more efficient, each machine maintains a table of resolved addresses in memory, reducing the number of broadcasts.
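
The cache-then-broadcast behaviour described above can be sketched as follows. This is illustrative only: the broadcast is simulated by a callback, and all names are mine.

```python
class ArpCache:
    def __init__(self, broadcast_resolve):
        self.cache = {}                      # IP address -> MAC address
        self.broadcast_resolve = broadcast_resolve
        self.broadcasts = 0                  # how many ARP requests were "sent"

    def resolve(self, ip):
        """Answer from the cache; broadcast an ARP request only on a miss."""
        if ip not in self.cache:
            self.broadcasts += 1
            self.cache[ip] = self.broadcast_resolve(ip)   # request/reply cycle
        return self.cache[ip]
```

A second lookup for the same address is answered from memory, with no broadcast (on a real system, cached entries also expire after a timeout).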

RARP

The RARP mechanism allows a device to be identified as a target on the network by broadcasting a RARP request. The servers receiving the message examine their tables and reply. Once the IP address is obtained, the machine stores it in memory and no longer uses RARP until it is reset.

Network address translation (NAT)

IP-Masquerading translates internal IP addresses into external IP addresses. This is called Network Address Translation (NAT). From the outside world, all connections will seem to be originating from the one external address.

One to one NAT

1:1 NAT (Network Address Translation) is a mode of NAT that maps one internal address to one external address. For example, if a network has an internal server on a private address, 1:1 NAT can map it to an additional external IP address provided by an internet service provider (ISP).

One to many NAT

The majority of NATs map multiple private hosts to one publicly exposed IP address. In a typical configuration, a local network uses one of the designated "private" IP address subnets (RFC 1918). A router on that network has a private address in that address space. The router is also connected to the internet with a "public" IP address assigned by an internet service provider (ISP). As traffic passes from the local network to the internet, the source address in each packet is translated on the fly from a private address to the public address. The router tracks basic data about each active connection (particularly the destination address and port). When a reply returns to the router, it uses the connection tracking data it stored during the outbound phase to determine the private address on the internal network to which to forward the reply.
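
The connection-tracking bookkeeping described above can be sketched as a translation table keyed by the public port the router allocates. This is a simplification with made-up names and addresses; a real NAT also tracks the protocol, destination endpoint, and timeouts.

```python
class Nat:
    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.table = {}                  # public port -> (private ip, private port)

    def outbound(self, private_ip, private_port):
        """Rewrite a private source endpoint; return the public endpoint used."""
        public_port = self.next_port
        self.next_port += 1
        self.table[public_port] = (private_ip, private_port)
        return (self.public_ip, public_port)

    def inbound(self, public_port):
        """Map a reply arriving on the public address back to the private host."""
        return self.table.get(public_port)
```

Replies to ports with no table entry have nowhere to go, which is why unsolicited inbound connections fail without a static (port-forwarding) entry.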

Static NAT

Most NAT devices allow for configuring static translation table entries for connections from the external network to the internal masqueraded network. This feature is often referred to as static NAT. It may be implemented in two types: port forwarding which forwards traffic to a specific external port to an internal host on a specified port, and designation of a DMZ host which receives all traffic received on the external interface on any port number to an internal IP address, preserving the destination port. Both types may be available in the same NAT device.

Basic TCP/IP addressing

An IP Address is a logical numeric address that is assigned to every single computer, printer, switch, router or any other device that is part of a TCP/IP-based network.

Until the introduction of Classless Inter-Domain Routing (CIDR) in 1993, adopted to slow the growth of routing tables on routers across the internet and to help slow the rapid exhaustion of IPv4 addresses, classful networks were used. You can still find them in tutorials, some networks, and in archeological artifacts such as the default subnet mask. In classful addresses, the first one or two bytes (depending on the class of network) generally indicate the number of the network, the third byte indicates the number of the subnet, and the fourth number indicates the host number.

Most servers and personal computers use Internet Protocol version 4 (IPv4). This uses 32 bits to assign a network address, as defined by the four octets of an IP address. Each octet is converted to a decimal number (base 10) from 0–255 and the numbers are separated by a period (a dot). This format is called dotted decimal notation.

For example the IPv4 address:

192.168.3.24
is segmented into 8-bit blocks:

11000000 | 10101000 | 00000011 | 00011000

Each block is converted to decimal:

2^7 + 2^6 | 2^7 + 2^5 + 2^3 | 2^1 + 2^0 | 2^4 + 2^3

128 + 64 = 192 | 128 + 32 + 8 = 168 | 2 + 1 = 3 | 16 + 8 = 24

The adjacent octets 192, 168, 3 and 24 are separated by a period: 192.168.3.24
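
The octet-by-octet conversion above can be done in a couple of lines: split the 32-bit string into four 8-bit blocks and convert each block between base 2 and base 10. A minimal sketch:

```python
def binary_to_dotted(bits32: str) -> str:
    """Convert a 32-bit binary string to dotted decimal notation."""
    assert len(bits32) == 32
    octets = [bits32[i:i + 8] for i in range(0, 32, 8)]      # four 8-bit blocks
    return ".".join(str(int(block, 2)) for block in octets)  # base 2 -> base 10

def dotted_to_binary(addr: str) -> str:
    """Convert dotted decimal notation to a 32-bit binary string."""
    return "".join(format(int(octet), "08b") for octet in addr.split("."))
```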

Internet Protocol version 6 (IPv6) was designed to address the exhaustion of the IPv4 address pool. IPv4 address space is 32 bits, which translates to just above 4 billion addresses. IPv6 address space is 128 bits, translating to billions and billions of potential addresses. The protocol has also been upgraded to include new quality of service features and security, but it has its vulnerabilities too [3] [4]. IPv6 addresses are represented as eight groups of four hexadecimal digits separated by colons, for example 2805:F298:0004:0148:0000:0000:0740:F5E9, but methods to abbreviate this full notation exist.
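
Python's standard ipaddress module applies the usual abbreviation rules to the example address above: leading zeros in each group are dropped and the longest run of all-zero groups is collapsed to "::".

```python
import ipaddress

# The full-notation example address from the text.
addr = ipaddress.ip_address("2805:F298:0004:0148:0000:0000:0740:F5E9")
short = addr.compressed   # the abbreviated form of the same address
```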

Routing tables

To minimise unnecessary traffic load and provide efficient movement of frames from one location to another, interconnected hosts are grouped into separate networks. As a result of this grouping (determined by network design and administration functions) it is possible for a router to determine the best path between two networks. A router forms the boundary between one network and another network. When a frame crosses a router it is in a different network. A frame that travels from source to destination without crossing a router has remained in the same network. A network is a group of communicating machines bounded by routers.

The router will use some of the bits in the IP address to identify the network location to which the frame is destined. The remaining bits in the address will uniquely identify the host on that network that will ultimately receive the frame. There are bits to identify the network and bits to identify the host. The sender of a frame must make this differentiation because it must decide whether it is on the same network as the destination or on a different network.

  • If the sender is on the same network as the destination, it will determine the data link address of the destination machine. Then it will send the frame directly to the destination machine.
  • If the destination is on a different network, then the originator must send the frame to a router and let the next router in line forward the frame on to the ultimate destination network. At the ultimate destination network the last router must determine the data link address of the host and forward the frame directly to that host on that ultimate destination network.

When a router receives an incoming data frame, it masks the destination address to create a lookup key that is compared to the entries in its routing table. The routing table indicates how the frame should be processed. The frame might be delivered directly on a particular port on the router. The frame might have to be sent on to the next router in line for ultimate delivery to some remote network.

The routing table is created by direct configuration by an administrator, dynamically through the periodic broadcasting of router update frames, or by a combination of both. Protocol frames from Routing Information Protocol (RIP), Open Shortest Path First (OSPF) and Interior Gateway Routing Protocol (IGRP) are sent from all routers at periodic intervals. As a result, all routers become aware of how to reach all other networks.

The specific behavior that is expected from an IP router is discussed in RFC 1812, a lengthy document (like this page) providing a complete discussion of routing in the IPv4 network environment.
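
The table lookup described above can be sketched with the standard ipaddress module: the destination address is compared against each network entry and the most specific (longest-prefix) match wins. The example routes and action strings are made up for illustration.

```python
import ipaddress

ROUTES = [
    (ipaddress.ip_network("192.168.3.0/24"), "deliver on local port 1"),
    (ipaddress.ip_network("192.168.0.0/16"), "forward to router A"),
    (ipaddress.ip_network("0.0.0.0/0"), "forward to default gateway"),
]

def route(destination: str) -> str:
    """Return the action for the most specific matching route."""
    dst = ipaddress.ip_address(destination)
    matches = [(net, action) for net, action in ROUTES if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]   # longest prefix wins
```

The catch-all 0.0.0.0/0 entry plays the role of the default gateway: it matches everything but loses to any more specific route.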

Subnet masking

The IP Address Mask is a configuration parameter used by a TCP/IP end-node and IP router to differentiate between that part of the IP address that represents the network and the part that represents the host.

A router uses the mask value to create a key value that is looked up in the router table to determine where to forward a frame. An end-node uses the mask value to create the same key value but the value is used to compare the destination address with the end-node address to determine whether the destination is directly reachable (on the same network) or remote (in which case the frame must be sent to a router and can not be sent directly to the destination).

The mask value can be assigned by default or it can be specified by the installer of the end-node or router software. The destination IP address and the mask value are combined with a Boolean AND operation to produce the resultant key value.

For example:

An end-node is assigned the IP address 164.25.74.131 and a mask value of 255.255.0.0. This end-node wants to send a frame to 164.7.9.2.

  • If 164.7.9.2 is on the same network as 164.25.74.131 then the end-node will broadcast an ARP (Address Resolution Protocol) frame to determine the data link address of the destination and it will then send the frame directly to the destination.
  • If 164.7.9.2 is on a different network then the workstation must send the frame to a router for forwarding to the ultimate destination network.

All the dotted-decimal notation must be converted to the underlying 32-bit binary numbers to understand what is taking place:

    End node 	=  10100100 	00011001 	01001010 	10000011
    Mask 	        =  11111111 	11111111 	00000000 	00000000
    Destination    	=  10100100    	00000111    	00001001    	00000010

When the IP address of the end node is AND'ed with the mask we get:

    End node 	=  10100100 	00011001 	01001010 	10000011   
    Mask 	        =  11111111 	11111111 	00000000 	00000000
    Boolean AND         	=  10100100 	00011001 	00000000 	00000000
    In dotted-decimal   	=  164. 	25. 	        0.      	0

When the destination IP address is AND'ed with the mask we get:

    Destination         =  10100100 	00000111 	00001001 	00000010   
    Mask                =  11111111 	11111111 	00000000 	00000000
    Boolean AND         =  10100100 	00000111 	00000000 	00000000
    (In dotted-decimal) =  164. 	        7. 	        0.      	0

Since the results (164.25.0.0 and 164.7.0.0) are not equal, the end node concludes that the destination must be on a different network and the frame is sent to a router. A router masks the destination address in an incoming frame and the result is used as a lookup key in the routing table.
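
The same masking comparison can be done in code: AND both addresses with the mask and compare the resulting network numbers. The dotted-decimal values here are the ones from the binary worked example above.

```python
import ipaddress

def same_network(a: str, b: str, mask: str) -> bool:
    """Boolean-AND each address with the mask and compare the results."""
    to_int = lambda dotted: int(ipaddress.ip_address(dotted))
    m = to_int(mask)
    return (to_int(a) & m) == (to_int(b) & m)
```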

Address classes

The origins of the current implementation of the Internet Protocol (IPv4) and its associated classes of IP addressing can be found in RFC 791: IP addresses were to be of fixed, 32-bit (4 octet) length, composed of a Network Number and a Local Address or Host Number. The resulting range of addresses was then divided into three broad groupings or “Classes”, based on the bit values within the first octet:

  • Class A: high order bit is “0”, the remaining 7 bits are the network, and the last 24 bits are the host.
  • Class B: high order two bits are “10”, the remaining 14 bits are the network, and the last 16 bits are the host.
  • Class C: high order three bits are “110”, the remaining 21 bits are the network, and the last 8 bits are the host.

Two additional classes of IPv4 addressing, Class D and Class E, were specified in subsequent RFCs:

  • Class D: high order four bits are “1110”, the remaining 28 bits identify the multicast group.
  • Class E: high order four bits are “1111”, the remaining bits are reserved for experimental use.

Implied within RFC 791, was the concept of Masking, to be used by routers and hosts:

  • Class A mask = 255.0.0.0
  • Class B mask = 255.255.0.0
  • Class C mask = 255.255.255.0

These masks were applied by default based on the value of the leading bits in the IP address. If an address started with a binary 0, then stations assumed Class A masking. The starting bits 10 indicated Class B, and 110 indicated Class C. Consequently, the class of addressing masking being used could be determined by looking at the first octet in the address:

  • Class A starts with 0 and ends with 0111 1111 (the smallest value in the first octet is decimal 0 and the largest value is 127 yielding a potential range of 0-127).
  • Class B starts with 10 and ends with 1011 1111 (the smallest value in the first octet is decimal 128 and the largest value is 191 yielding a potential range of 128-191).
  • Class C starts with 110 and ends with 1101 1111 (the smallest value in the first octet is decimal 192 and the largest value is 223 yielding a potential range of 192-223).
  • Class D starts with 1110 and ends with 1110 1111(the smallest value in the first octet is decimal 224 and the largest value is 239 yielding a potential range of 224-239).
  • Class E starts with 1111 and ends with 1111 1111 (the smallest value in the first octet is decimal 240 and the largest value is 255 yielding a potential range of 240-255).

This division of addressing allows for the following potential number of addresses.

For example: Class A has 8 bits in the Network part (with the leading bit fixed at 0) and 24 bits in the Host part, meaning 2^7 = 128 possible values in the Network part and 2^24 = 16,777,216 possible values in the Host part.
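
The class of an address follows directly from the value of its first octet, per the ranges listed above. A minimal sketch:

```python
def address_class(addr: str) -> str:
    """Return the classful-addressing class ("A".."E") of a dotted IPv4 address."""
    first = int(addr.split(".")[0])
    if first <= 127:
        return "A"       # leading bit 0,    range 0-127
    if first <= 191:
        return "B"       # leading bits 10,  range 128-191
    if first <= 223:
        return "C"       # leading bits 110, range 192-223
    if first <= 239:
        return "D"       # leading bits 1110, multicast, range 224-239
    return "E"           # leading bits 1111, experimental, range 240-255
```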

Creating subnets

To subnet a network is to create logical divisions of the network, for example arranged by floor, building or geographical location. Each device on each subnet is to have an address that logically associates it with the others on the same subnet. This also prevents devices on one subnet from getting confused with hosts on another subnet. Subnetting is done by borrowing bits from the host portion of the IP address. In a sense, the IP address then has three components: the network part, the subnet part and the host part. We create subnets by taking bits from the host component of the address and using them to identify the subnets required.

To make learning subnetting easier there are tutorials that build up from no prior knowledge and tutorials that start from existing knowledge about addresses.

Also, these mental shortcuts:

    Mask   # of subnets   Slash   Class A hosts   Class B hosts   Class C hosts   Class C subnet
    255    1 or 256       /32     16,777,214      65,534          254             invalid, 1 address
    254    128            /31     33,554,430      131,070         510             invalid, 2 addresses
    252    64             /30     67,108,862      262,142         1,022           2 hosts, 4 addresses
    248    32             /29     134,217,726     524,286         2,046           6 hosts, 8 addresses
    240    16             /28     268,435,454     1,048,574       4,094           14 hosts, 16 addresses
    224    8              /27     536,870,910     2,097,150       8,190           30 hosts, 32 addresses
    192    4              /26     1,073,741,822   4,194,302       16,382          62 hosts, 64 addresses
    128    2              /25     2,147,483,646   8,388,606       32,766          126 hosts, 128 addresses
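
The host counts in cheat sheets like the one above follow from the number of host bits: 2^(32 - prefix) addresses, minus the network and broadcast addresses. A minimal sketch:

```python
def usable_hosts(prefix: int) -> int:
    """Usable host addresses in an IPv4 prefix (network/broadcast excluded)."""
    host_bits = 32 - prefix
    total = 2 ** host_bits
    # /31 and /32 leave no room for the conventional network/broadcast pair.
    return total - 2 if total > 2 else 0
```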

Internet Protocol (IP)

Internet Protocol (IP) provides support at the network layer of the OSI model. All transport protocol data packets such as UDP or TCP packets are encapsulated in IP data packets to be carried from one host to another. IP is a connectionless, unreliable service, meaning there is no guarantee that the data will reach the intended host. The datagrams may arrive damaged, out of order, or not at all. IP is defined by RFC 791, so the layers above IP, such as TCP, are responsible for making sure correct data is delivered. IP provides for:

  • Addressing
  • Type of service specification
  • Fragmentation and re-assembly
  • Security

IP packet format

          0           4          8                   16                                         31
          |  Version  |  Length  |  Type of Service  |              Total Length                 |
          |               Identification             |   Flags   |     Fragmentation Offset      |
          |     Time to Live     |     Protocol      |             Header Checksum               |
          |                                   Source Address                                     |
          |                                Destination Address                                   |
          |                                      Options                                         |
          |                                       Data                                           |
  • Version (4 bits): The IP protocol version, currently 4 or 6.
  • Header length (4 bits): The number of 32-bit words in the header.
  • Type of service (TOS) (8 bits): Only 4 bits are used: minimize delay, maximize throughput, maximize reliability, and minimize monetary cost. Only one of these bits can be on; if all bits are off, the service is normal. Some networks allow a set of precedences to control the priority of messages. The bits are as follows:
    • Bits 0-2 - Precedence:
      • 111 - Network Control
      • 110 - Internetwork Control
      • 101 - CRITIC/ECP
      • 100 - Flash Override
      • 011 - Flash
      • 010 - Immediate
      • 001 - Priority
      • 000 - Routine
    • Bit 3 - A value of 0 means normal delay. A value of 1 means low delay.
    • Bit 4 - Sets throughput. A value of 0 means normal and a 1 means high throughput.
    • Bit 5 - A value of 0 means normal reliability and a 1 means high reliability.
    • Bit 6-7 are reserved for future use.
  • Total length of the IP data message in bytes (16 bits)
  • Identification (16 bits) - Uniquely identifies each datagram. This is used to re-assemble the datagram. Each fragment of the datagram contains this same unique number.
  • Flags (3 bits):
    • Bit 0 - reserved.
    • Bit 1 - The Do Not Fragment (DF) bit. A value of 0 means the packet may be fragmented while 1 means it cannot be fragmented. If this bit is set and the packet needs further fragmentation, an ICMP error message is generated.
    • Bit 2 - The More Fragments (MF) bit. This bit is set on all fragments except the last one, since a value of 0 means this is the last fragment.
  • Fragment offset (13 bits): The offset in 8 byte units of this fragment from the beginning of the original datagram.
  • Time to live (TTL) (8 bits): Limits the number of routers the datagram can pass through. Usually set to 32 or 64. Every time the datagram passes through a router this value is decremented by a value of one or more. This is to keep the datagram from circulating in an infinite loop forever.
  • Protocol (8 bits): Identifies which protocol is encapsulated in the next data area. This may be one of TCP (6), UDP (17), ICMP (1), IGMP (2), or OSPF (89). A list of these protocols and their associated numbers may be found in the /etc/protocols file on Unix or Linux systems.
  • Header checksum (16 bits): For the IP header, not including the options and data.
  • Source IP address (32 bits): The IP address of the card sending the data.
  • Destination IP address (32 bits): The IP address of the network card the data is intended for.
  • Options:
    • Security and handling restrictions
    • Record route - Each router records its IP address
    • Time stamp - Each router records its IP address and time
    • Loose source routing - Specifies a set of IP addresses the datagram must go through.
    • Strict source routing - The datagram can go through only the IP addresses specified.
  • Data: Encapsulated hardware data such as ethernet data.

The message order of bits transmitted is 0-7, then 8-15, in network byte order. Fragmentation is handled at the IP network layer and the messages are reassembled when they reach their final destination. If one fragment of a datagram is lost, the entire datagram must be retransmitted. This is why fragmentation is avoided by TCP. The data on the last line is ethernet data, or other data, depending on the type of physical network.
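
The header layout above can be exercised with the standard struct module: pack a minimal 20-byte IPv4 header (header length 5 words, no options) and read the fields back. The field values used here are made up for illustration, and the checksum is left at zero for simplicity.

```python
import socket
import struct

# "!BBHHHBBH4s4s" = version/IHL, TOS, total length, identification,
# flags/fragment offset, TTL, protocol, checksum, source, destination.
HEADER_FMT = "!BBHHHBBH4s4s"

def build_header(src, dst, ttl=64, protocol=6, ident=1):
    """Pack a minimal (options-free) IPv4 header in network byte order."""
    version_ihl = (4 << 4) | 5            # version 4, header length 5 words
    return struct.pack(HEADER_FMT,
                       version_ihl, 0, 20, ident, 0, ttl, protocol, 0,
                       socket.inet_aton(src), socket.inet_aton(dst))

def parse_header(raw):
    """Unpack the fixed 20-byte part of an IPv4 header into a dict."""
    v_ihl, tos, total, ident, frag, ttl, proto, csum, src, dst = \
        struct.unpack(HEADER_FMT, raw[:20])
    return {"version": v_ihl >> 4, "ihl": v_ihl & 0x0F,
            "ttl": ttl, "protocol": proto,
            "src": socket.inet_ntoa(src), "dst": socket.inet_ntoa(dst)}
```

Round-tripping a header this way makes the bit positions in the diagram concrete.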

Type of service specification

RFC 791 defined a field within the IP header called the Type Of Service (TOS) byte. This field is used to specify the quality of service desired for the datagram and is a mix of factors. These factors include fields such as Precedence, Speed, Throughput and Reliability. In normal conversations you would not use any such special alternatives, so the Type of Service byte typically would be set to zero. With the advent of multimedia transmission and emergence of protocols such as Session Initiation Protocol (SIP), this field is coming into use.

The IP Type of Service Byte:

                        0     1     2     3     4     5     6     7
                    |    Precedence    |  D  |  T  |  R  |  0  |  0  |
  • Bits 0-2: Precedence.
  • Bit 3: Delay (0 = Normal Delay, 1 = Low Delay)
  • Bit 4: Throughput (0 = Normal Throughput, 1 = High Throughput)
  • Bit 5: Reliability (0 = Normal Reliability, 1 = High Reliability)
  • Bits 6-7: Reserved for Future Use.

The three bit Precedence field is defined as follows:

Precedence bits 	Definition
111 	Network Control
110 	Internetwork Control
101 	CRITIC/ECP
100 	Flash Override
011 	Flash
010 	Immediate
001 	Priority
000 	Routine

Session Initiation Protocol (SIP)

This protocol is an application-layer control protocol used for creating, modifying and terminating sessions with one or more participants. Some examples of such activities include Internet multimedia conferences, Internet telephone calls and multimedia distribution. SIP is intended to support communications using Multicast, a mesh of Unicast relations, or a combination of both.

Fragmentation and reassembly

There are a number of differing network transmission architectures, each having a physical limit on the number of data bytes that may be contained within a given frame. This physical limit is described in numerous specifications and is referred to as the Maximum Transmission Unit (MTU) of the network. As a block of data is prepared for transmission, the sending or forwarding device examines the MTU for the network the data is to be sent or forwarded across. If the size of the block of data is less than the MTU for that network, the data is transmitted in accordance with the rules for that particular network.

There are two situations in which MTU becomes important:

  • The size of the block of data being transmitted is greater than the MTU.
  • Data must traverse across multiple network architectures, each with a different MTU.

IPv4 Fragmentation Fields

The three fields concerned with IP Fragmentation are:

  • Identification (16 bits, offset 18-19): a unique number used to identify the frame and any associated fragments for reassembly. Alias: Identification Field.
  • Flags (3 bits, offset 20): flags that specify the function of the frame in terms of whether fragmentation has been employed, additional fragments are coming, or this is the final fragment. Alias: Fragmentation Flags.
  • Fragmentation Offset (13 bits, offset 20-21): the position of a particular fragment's data in relation to the first byte of data (offset 0). Alias: Fragment Offset.

Identification Field

With increasing interconnection and complexity of networks, fragments from multiple blocks of data might travel along different paths to the destination, possibly arriving out of sequence in relation to one another. That is, a fragment from block number one might arrive intermixed with the data stream for block number 2 or vice versa. The function of the Fragment Offset Field is to identify the relative position of each fragment, and it is the Identification Field that serves to allow the receiving device to sort out which fragments comprise what block of data. Each fragment from a particular data stream will have the same Identification Field, uniquely identifying which block it belongs to. If one or more fragments are lost, the buffer of the device performing the reassembly process will time out and discard all of the fragments. In the event of such a time out, the data will then have to be retransmitted by the sending device.

Fragmentation Flags

Bit Indicator Definition
0xx Reserved
x0x May fragment
x1x Do not fragment
xx0 Last fragment
xx1 More fragments

When a receiving station processes each frame, one of the operations it performs is to review the Flags field. Depending on the value indicated by this field, several possible actions are then initiated, including:

  • (xx1) More Fragments - Indicates that there are additional IP Fragments that comprise the data associated with that specific Identification Field. The receiving device will allocate buffer resources for reassembly and pass all frames containing that unique Identification Field to the buffer.
  • (xx0) Last Fragment - Indicates that this fragment is the final frame for the data block identified by the Identification Field. The receiving device will now attempt to reassemble the fragments in the order specified by the Fragment Offset field.
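
As an illustrative sketch (not taken from any particular IP stack), the following Python function decodes the 16-bit word that carries the flags and the Fragment Offset: the Do Not Fragment bit is 0x4000, the More Fragments bit is 0x2000, and the offset is stored in 8-byte units.

```python
def parse_frag_word(word):
    """Decode the 16-bit flags/fragment-offset word of an IPv4 header.

    Bit 15: reserved, bit 14: Do Not Fragment (DF),
    bit 13: More Fragments (MF), bits 12-0: offset in 8-byte units.
    """
    dont_fragment = bool(word & 0x4000)
    more_fragments = bool(word & 0x2000)
    offset_bytes = (word & 0x1FFF) * 8   # the offset counts 8-byte units
    return dont_fragment, more_fragments, offset_bytes

# A fragment with MF set and an offset of 1480 bytes (185 * 8):
print(parse_frag_word(0x2000 | 185))   # (False, True, 1480)
```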

Fragment Offset

Because it is possible that the fragments that comprise a block of data might travel along different paths to the destination, it is possible they might arrive out of sequence. While the Identification Field serves to mark which IP fragments belong to which block of data, it is the Fragment Offset Field, sometimes referred to as the Fragmentation Offset Field, that tells the receiving device which order to reassemble them in.

During the IP Fragmentation Reassembly process, if a particular fragment is found to be missing, as indicated by the Fragmentation Offset count, the buffer will enter a wait state until either the missing piece(s) are received or a time out occurs. In the event of such a time out, the buffer simply discards the fragments.

IP Fragmentation

Regardless of what situation occurs that requires IP Fragmentation, the procedure followed by the device performing the fragmentation must be as follows:

  1. The device attempting to transmit the block of data will first examine the Flag field to see if the field is set to the value of (x0x or x1x). If the value is equal to (x1x) this indicates that the data may not be fragmented, forcing the transmitting device to discard that data. Depending on the specific configuration of the device, an Internet Control Message Protocol (ICMP) Destination Unreachable -> Fragmentation required and Do Not Fragment Bit Set message may be generated.
  2. Assuming the Flag field is set to (x0x), the device computes the number of fragments required by dividing the amount of data by the MTU. This will result in "X" number of frames, with all but the final frame being equal to the MTU for that network.
  3. It will then create the required number of IP packets and copies the IP header into each of these packets so that each packet will have the same identifying information, including the Identification Field.
  4. The Flag field in the first packet, and all subsequent packets except the final packet, will be set to "More Fragments." The final packet's Flag field will instead be set to "Last Fragment."
  5. The Fragment Offset will be set for each packet to record the relative position of the data contained within that packet.
  6. The packets will then be transmitted according to the rules for that network architecture.
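
The fragmentation steps above can be sketched as a toy model in Python. This is an illustration only: it assumes a fixed 20-byte header, ignores header copying and the Identification Field, and returns hypothetical (offset, more-fragments, data) tuples.

```python
def fragment(payload, mtu, header_len=20):
    """Split a payload into IP-style fragments for a link with the given MTU.

    Each fragment carries at most (mtu - header_len) bytes, rounded down to
    a multiple of 8 because the Fragment Offset counts 8-byte units.
    Returns (offset_in_bytes, more_fragments, data) tuples.
    """
    max_data = (mtu - header_len) // 8 * 8   # data per fragment, 8-byte aligned
    fragments = []
    for start in range(0, len(payload), max_data):
        chunk = payload[start:start + max_data]
        more = (start + max_data) < len(payload)   # last fragment clears MF
        fragments.append((start, more, chunk))
    return fragments

frags = fragment(b"x" * 4000, mtu=1500)
print([(off, more, len(data)) for off, more, data in frags])
# [(0, True, 1480), (1480, True, 1480), (2960, False, 1040)]
```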

IP Fragment Reassembly

If a receiving device detects that IP Fragmentation has been employed, the procedure followed by the device performing the Reassembly must be as follows:

  1. The device receiving the data detects the Flag Field set to "More Fragments".
  2. It will then examine all incoming packets for the same Identification number contained in the packet.
  3. It will store all of these identified fragments in a buffer in the sequence specified by the Fragment Offset Field.
  4. Once the final fragment, as indicated by the Flag Field, is set to "Last Fragment," the device will attempt to reassemble that data in offset order.
  5. If reassembly is successful, the packet is then sent to the Upper Layer Protocol (ULP) in accordance with the rules for that device.
  6. If reassembly is unsuccessful, perhaps due to one or more lost fragments, the device will eventually time out and all of the fragments will be discarded.
  7. The transmitting device will then have to attempt to retransmit the data in accordance with its own procedures.
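
A matching toy reassembler for the steps above, with the buffer time-out reduced to simply returning None when a fragment is missing. Fragments are represented as hypothetical (offset, more_fragments, data) tuples.

```python
def reassemble(fragments):
    """Reassemble (offset, more_fragments, data) tuples into one byte block.

    Sorts by offset, checks the stream is contiguous, and requires the
    final fragment to have More Fragments cleared; returns None if any
    piece is missing (standing in for the buffer time-out and discard).
    """
    expected = 0
    out = bytearray()
    last_seen = False
    for offset, more, data in sorted(fragments):
        if offset != expected:        # a gap: a fragment was lost
            return None
        out += data
        expected += len(data)
        last_seen = not more          # the last fragment clears MF
    return bytes(out) if last_seen else None

# Fragments arriving out of order are put back in offset order:
frags = [(1480, True, b"b" * 1480), (0, True, b"a" * 1480), (2960, False, b"c" * 100)]
print(len(reassemble(frags)))   # 3060
```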

Across networks

Imagine a block of data originating on a 16Mb Token Ring network (MTU = 17914B) that is connected to another 16Mb Token Ring network (MTU = 17914B) via an Ethernet network (MTU = 1500B). The data block met the MTU restriction for a 16Mb Token Ring network, but the router connecting the Token Ring to the Ethernet network is faced with having to forward this large block onto a network with a smaller MTU. It will simply follow the rules for IP Fragmentation as if it were transmitting the frame itself, except that the Identification Field will be that of the original frame.

Once the fragments reach their final destination, the receiving host performs reassembly exactly as previously described. (In IPv4, reassembly is normally performed only by the destination host, not by intermediate routers: once fragmented, the pieces travel the rest of the path as separate packets.)


IP is responsible for the transmission of packets between network end points. IP includes some features which provide basic measures of fault-tolerance (time to live, checksum), traffic prioritization (type of service) and support for the fragmentation of larger packets into multiple smaller packets (ID field, fragment offset). The support for fragmentation of larger packets provides a protocol allowing routers to fragment a packet into smaller packets when the original packet is too large for the supporting datalink frames. IP fragmentation exploits (attacks) use the fragmentation protocol within IP as an attack vector [5][6].

The IPv4 fragmentation and reassembly mechanism can also be used to trigger a Denial of Service (DoS) attack. The receiving device will attempt reassembly following receipt of a frame containing a Flag field set to (xx1), indicating more fragments are to follow. Receipt of such a frame causes the receiving device to allocate buffer resources for reassembly. If a device is flooded with separate frames, each with the Flag field set to (xx1) but each with a different Identification Field value, the device will attempt to allocate resources to each separate fragment in preparation for reassembly and will quickly exhaust its available resources while waiting for buffer time-outs to occur. To defend against such DoS attempts, many firewalls implement specific rules that shorten the time-out value for how long incoming fragments are held before being discarded.

Transmission Control Protocol (TCP)

Transmission Control Protocol (TCP) supports the network at the transport layer, providing a reliable, connection-oriented service. Connection-oriented means both the client and server must open the connection before data is sent. TCP is defined by RFC 793 and RFC 1122. TCP provides:

  • End-to-end reliability
  • Flow control
  • Congestion control

TCP relies on the IP service at the network layer to deliver data to the host. Since IP is not reliable with regard to message quality or delivery, TCP must make provisions to make sure messages are delivered on time and correctly.

TCP segment format

          0                                                    16                                                    31
          |                   Source port                       |                   Destination port                  |
          |                                             Sequence number                                               |
          |                                         Acknowledgement number                                            |
          | hlen | Reserved | URG | ACK | PSH | RST | SYN | FIN |                   Window size                       |
          |                    Checksum                         |                   Urgent pointer                    |
          |                                                  Options                      |         Padding           |
          |                                                   Data                                                    |
  • Source port number (16 bits)
  • Destination port number (16 bits)
  • Sequence number (32 bits): The byte in the data stream that the first byte of this packet represents.
  • Acknowledgement number (32 bits): Contains the next sequence number that the sender of the acknowledgement expects to receive, i.e. the sequence number of the last byte received plus one. This number is used only if the ACK flag is on.
  • Header length (4 bits): The length of the header in 32 bit words, required since the options field is variable in length.
  • Reserved (6 bits)
  • Flags:
    • URG (1 bit) - The urgent pointer is valid.
    • ACK (1 bit) - Makes the acknowledgement number valid.
    • PSH (1 bit) - High priority data for the application.
    • RST (1 bit) - Reset the connection.
    • SYN (1 bit) - Turned on when a connection is being established and the sequence number field will contain the initial sequence number chosen by this host for this connection.
    • FIN (1 bit) - The sender is done sending data.
  • Window size (16 bits): The maximum number of bytes that the receiver is willing to accept.
  • TCP checksum (16 bits): Calculated over the TCP header, data, and TCP pseudo header.
  • Urgent pointer (16 bits): Only valid if the URG bit is set. The urgent mode is a way to transmit emergency data to the other side of the connection. It must be added to the sequence number field of the segment to generate the sequence number of the last byte of urgent data.
  • Options (0 or more 32 bit words)
  • Data (optional)
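
To make the layout concrete, here is a minimal Python parser for the fixed 20-byte part of the header (a sketch; the returned field names are our own).

```python
import struct

def parse_tcp_header(segment):
    """Unpack the fixed 20-byte part of a TCP header (network byte order)."""
    (src, dst, seq, ack, off_flags, window,
     checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (off_flags >> 12) * 4           # header length in bytes
    flags = {name: bool(off_flags & bit) for name, bit in
             [("URG", 0x20), ("ACK", 0x10), ("PSH", 0x08),
              ("RST", 0x04), ("SYN", 0x02), ("FIN", 0x01)]}
    return dict(src=src, dst=dst, seq=seq, ack=ack,
                header_len=header_len, flags=flags, window=window)

# A hand-built SYN segment: ports 1234 -> 80, seq 1000, 20-byte header:
syn = struct.pack("!HHIIHHHH", 1234, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
hdr = parse_tcp_header(syn)
print(hdr["src"], hdr["dst"], hdr["flags"]["SYN"], hdr["header_len"])  # 1234 80 True 20
```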

End-to-end reliability

In order for two hosts to communicate using TCP they must first establish a connection by exchanging messages in what is known as the three-way handshake:

                         Host A                     Host B
                                  In the network
          Send SYN SEQ=x     | --------------------> |  Receive SYN
                             |                       |
          Receive SYN + ACK  | <-------------------- |  Send SYN SEQ=y, ACK x+1
                             |                       |
          Send ACK y+1       | --------------------> |  Receive ACK
  1. Host A initiates the connection by sending a TCP segment with the SYN control bit set and an initial sequence number (ISN) we represent as the variable x in the sequence number field.
  2. Host B receives this SYN segment at some point in time, processes it and responds with a TCP segment of its own. The response from Host B contains the SYN control bit set and its own ISN represented as variable y. Host B also sets the ACK control bit to indicate the next expected byte from Host A should contain data starting with sequence number x+1.
  3. When Host A receives Host B's ISN and ACK, it finishes the connection establishment phase by sending a final acknowledgement segment to Host B. In this case, Host A sets the ACK control bit and indicates the next expected byte from Host B by placing acknowledgement number y+1 in the acknowledgement field.

Once ISNs have been exchanged, communicating applications can transmit data between each other.
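
In practice an application never sends these segments itself: the operating system performs the three-way handshake inside connect(). A minimal loopback sketch in Python (the function name loopback_roundtrip is ours):

```python
import socket

def loopback_roundtrip(message):
    """Open a listener on the loopback interface, connect to it (the OS
    performs the SYN / SYN+ACK / ACK exchange inside connect()), send one
    message and return what the server side received."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))          # port 0: let the OS pick a port
    listener.listen(1)
    port = listener.getsockname()[1]

    client = socket.create_connection(("127.0.0.1", port))  # handshake here
    server_side, _ = listener.accept()

    client.sendall(message)
    received = server_side.recv(len(message))

    for s in (client, server_side, listener):
        s.close()                            # close() triggers the FIN exchange
    return received

print(loopback_roundtrip(b"hello"))          # b'hello'
```

Closing each side independently is what produces the four-segment release described above.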

In order for a connection to be released, four segments are required to completely close a connection. Four segments are necessary due to the fact that TCP is a full-duplex protocol, meaning that each end must shut down independently:

                         Host A                     Host B
                                  In the network
          Send FIN SEQ=x     | --------------------> |  Receive FIN
                             |                       |
          Receive ACK        | <-------------------- |  Send ACK x+1
                             |                       |
          Receive FIN + ACK  | <-------------------- |  Send FIN SEQ=y, ACK x+1
                             |                       |
          Send ACK y+1       | --------------------> |  Receive ACK
  1. The application running on Host A signals TCP to close the connection. This generates the first FIN segment from Host A to Host B.
  2. When Host B receives the initial FIN segment, it immediately acknowledges the segment and notifies its destination application of the termination request.
  3. Once the application on Host B also decides to shut down the connection, it then sends its own FIN segment
  4. Host A receives the FIN segment and responds with an acknowledgement.

Flow control

Flow control is a technique whose primary purpose is to properly match the transmission rate of sender to that of the receiver and the network. It is important for the transmission to be at a high enough rate to ensure good performance, but also to protect against overwhelming the network or receiving host. Flow control is not the same as congestion control. Congestion control is primarily concerned with a sustained overload of network intermediate devices such as IP routers.

TCP uses the window field as the primary means for flow control. During the data transfer phase, the window field is used to adjust the rate of flow of the byte stream between communicating TCPs.

                     Sent and                                Not yet
                       ACKed          window                   sent
                         |    +-------------------+              |
                    |  1 |  2 |  3 |  4 |  5 |  6 |  7 |  8 |  9 | 10 | 11 | 12 |

The drawing shows a 4-byte sliding window. Moving from left to right, the window "slides" as bytes in the stream are sent and acknowledged. A simple TCP implementation will place segments into the network for a receiver as long as there is data to send and as long as the sender does not exceed the window advertised by the receiver. As the receiver accepts and processes TCP segments, it sends back positive acknowledgements, indicating where in the byte stream it is. These acknowledgements also contain the "window" which determines how many bytes the receiver is currently willing to accept. If data queued by the sender reaches a point where data sent will exceed the receiver's advertised window size, the sender must halt transmission and wait for further acknowledgements and an advertised window size that is greater than zero before resuming.
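
The behaviour described above can be caricatured with a tiny model that assumes a fixed advertised window and one full window acknowledged per round trip:

```python
def sliding_window_rounds(total_bytes, window):
    """Toy model: bytes sent per round trip with a fixed receiver window,
    assuming every burst is fully acknowledged after one RTT."""
    rounds = []
    acked = 0
    while acked < total_bytes:
        burst = min(window, total_bytes - acked)  # never exceed the window
        rounds.append(burst)
        acked += burst                            # ACKs slide the window
    return rounds

print(sliding_window_rounds(12, 4))   # [4, 4, 4]
```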

Congestion control

The data transfer strategy:

  • The TCP host sends packets into the network without a reservation and then the host reacts to observable events.
  • Each sender determines how much capacity is available to a given flow in the network.
  • ACKs are used to "pace" the transmission of packets such that TCP is "self-clocking".

As an oversimplified example, imagine a client requesting a webpage from a server. The requested page is 6 KB and we assume there is no overhead on the server to generate the page (it is static content cached in memory) nor any other overhead:

  1. Client sends SYN to server - "Hey sexy, how are you? My receive window is 65,535 bytes."
  2. Server sends SYN, ACK - "Great! How are you? My receive window is 4,236 bytes"
  3. Client sends ACK, SEQ - "Great as well... Please send me the webpage"
  4. Server sends 3 data packets. Roughly 4 - 4.3 KB (3 * MSS) of data
  5. Client acknowledges the segment (sends ACK)
  6. Server sends the remaining bytes to the client

After step 6 the connection can be ended (FIN - "Have a Nice Day") or kept alive, but that is irrelevant here, since at this point the browser has already received the data.

This transaction took 3 * RTT (Round Trip Time) to finish. If your RTT to a server is 200ms this transaction will take you at least 600ms to complete, no matter how big your bandwidth is. The bigger the file, the more round trips and the longer it takes to download.


In Additive Increase/Multiplicative Decrease (AIMD) a CongestionWindow variable is held by the TCP sender for each connection. The smaller of the two windows, the congestion window and the window advertised by the receiver, is used as the maximum window size to start with for that receiver.

MaxWindow = min(CongestionWindow, AdvertisedWindow)
EffectiveWindow = MaxWindow - (LastByteSent - LastByteAcked)

The variable cwnd is set based on the perceived level of congestion. The sender receives implicit (packet drop) or explicit (packet mark) indications of internal congestion.

  • Additive Increase is the reaction to perceived available capacity (referred to as congestion avoidance stage): cwnd is incremented fractionally for each arriving ACK:
increment = MSS x (MSS /cwnd)
cwnd = cwnd + increment
  • A dropped packet and the resulting timeout are assumed to be due to congestion at a router, and TCP reacts with a Multiplicative Decrease by halving cwnd (but not below the size of one MSS, i.e. one packet).
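
The two AIMD rules can be sketched as a pair of update functions (the MSS value of 1460 bytes is an illustrative assumption):

```python
MSS = 1460  # bytes; a common maximum segment size (assumption for this sketch)

def on_ack(cwnd):
    """Additive increase: grow cwnd by MSS * (MSS / cwnd) per arriving ACK,
    which works out to roughly one MSS per round trip."""
    return cwnd + MSS * MSS / cwnd

def on_timeout(cwnd):
    """Multiplicative decrease: halve cwnd, but never below one MSS."""
    return max(cwnd / 2, MSS)

cwnd = 10 * MSS
for _ in range(10):          # ten ACKs arrive: slow, roughly linear growth
    cwnd = on_ack(cwnd)
cwnd = on_timeout(cwnd)      # one loss: cwnd is halved
```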

Because this simple Congestion Control (CC) mechanism involves timeouts that cause retransmissions, it is important that hosts have an accurate timeout mechanism.

Slow Start

Linear additive increase takes too long to ramp up a new TCP connection from cold start, and the slow start mechanism was added to provide an initial exponential increase in the size of cwnd. When a TCP connection begins, Slow Start initialises the congestion window to one segment, using the Maximum Segment Size (MSS) announced by the receiver during the connection establishment phase. When ACKs are returned by the receiver, the congestion window increases by one segment for each acknowledgement returned.

Every time an ACK arrives, cwnd is incremented (effectively doubling per RTT "epoch"): The first successful transmission and acknowledgement of a TCP segment increases the window to two segments. After successful transmission of these two segments and acknowledgements completes, the window is increased to four segments. Then eight segments, then sixteen segments and so on, doubling from there on out up to the maximum window size advertised by the receiver or until congestion occurs.

Despite its name, Slow Start ramps up quickly compared to linear additive increase, but it is still slower than sending a full advertised window's worth of packets all at once. In the webpage example above, the client told the server it could receive a maximum of 65,535 bytes of unacknowledged data, but the server only sent about 4 KB and then waited for an ACK. This is because the initial congestion window (initcwnd) on the server is set to 3. The server is being cautious: rather than throw a burst of packets into a fresh connection, it eases into it gradually, making sure that the entire network route is not congested. The more congested a network is, the higher the chance of packet loss. Packet loss results in retransmissions, which means more round trips, resulting in higher download times.

There are two slow start situations:

  • At the very beginning of a connection {cold start}.
  • When the connection goes dead waiting for a timeout to occur (the advertised window goes to zero). In this case the source has more information. The current value of cwnd can be saved as a congestion threshold. This is also known as the "slow start threshold" ssthresh:
    • if cwnd <= ssthresh, do slow-start
    • if cwnd > ssthresh, do congestion avoidance
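
A sketch of the two regimes, growing cwnd once per RTT (all values are illustrative):

```python
def next_cwnd(cwnd, ssthresh, mss=1460):
    """One RTT of growth: exponential below ssthresh (slow start),
    roughly one MSS per RTT above it (congestion avoidance)."""
    if cwnd <= ssthresh:
        return cwnd * 2          # slow start: doubles every RTT
    return cwnd + mss            # congestion avoidance: linear growth

cwnd, ssthresh = 1460, 8 * 1460
history = []
for _ in range(6):
    history.append(cwnd)
    cwnd = next_cwnd(cwnd, ssthresh)
print([c // 1460 for c in history])   # [1, 2, 4, 8, 16, 17]
```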

If the network is forced to drop one or more packets due to overload or congestion, Congestion Avoidance is used to slow the transmission rate, in conjunction with Slow Start to get the data transfer going again so it doesn't slow down and stay slow. In the Congestion Avoidance algorithm a retransmission timer expiring or the reception of duplicate ACKs can implicitly signal the sender that a network congestion situation is occurring. The sender then sets its transmission window to one half of the current window size (the minimum of the congestion window and the receiver's advertised window size), but to at least two segments. If congestion was indicated by a timeout, the congestion window is reset to one segment, which automatically puts the sender into Slow Start mode.

Fast Retransmit


When a duplicate ACK is received, the sender does not know if it is because a TCP segment was lost or simply that a segment was delayed and received out of order at the receiver. If the receiver can re-order segments, it should not be long before the receiver sends the latest expected acknowledgement. Typically no more than one or two duplicate ACKs are received when simple out-of-order conditions exist. If, however, more than two duplicate ACKs are received by the sender, it is a strong indication that at least one segment has been lost. The TCP sender assumes that enough time has elapsed for all segments to be properly re-ordered, given that the receiver had enough time to send three duplicate ACKs.

When three or more duplicate ACKs are received, the sender does not even wait for a retransmission timer to expire before retransmitting the segment (as indicated by the position of the duplicate ACK in the byte stream). This process is called the Fast Retransmit algorithm.

Fast Retransmit eliminates about half the coarse-grain timeouts (~ 20% improvement in throughput). Immediately following Fast Retransmit (which was added with TCP Tahoe) is the Fast Recovery algorithm (which was added with TCP Reno).

Fast Recovery

The TCP sender has implicit knowledge that there is data still flowing to the receiver, because duplicate ACKs can only be generated when a segment is received. This is a strong indication that serious network congestion may not exist and that the lost segment was a rare event. So instead of reducing the flow of data abruptly by going all the way into Slow Start, the sender only enters Congestion Avoidance mode: when Fast Retransmit detects three duplicate ACKs, the recovery process starts from the Congestion Avoidance region and uses ACKs in the pipe to pace the sending of packets (cwnd is halved and recovery commences from that point using linear additive increase, "primed" by the leftover ACKs in the pipe).

As a result, rather than start at a window of one segment as in Slow Start mode, the sender resumes transmission with a larger window, incrementing as if in Congestion Avoidance mode. This allows for higher throughput under the condition of only moderate congestion.

With Fast Recovery, Slow Start only occurs:

  • At cold start
  • After a coarse-grain timeout

User Datagram Protocol (UDP)

The User Datagram Protocol (UDP) is the other layer 4 protocol commonly used to support the network at the transport layer. While TCP is designed for reliable data delivery with built-in error checking, UDP aims for speed. Protocols relying on UDP typically have their own built-in reliability services, or use certain features of ICMP to make the connection somewhat more reliable. DNS, SNMP, RPC and RIP are examples of services using UDP.

UDP is a so-called unreliable connectionless protocol and is defined by RFC 768 and RFC 1122:

  • A connectionless protocol does not formally establish or terminate a connection between hosts, unlike TCP with its handshake and teardown processes.
  • It is a datagram service, suitable for modeling other protocols such as IP tunneling, Remote Procedure Call and the Network File System.
  • There is no guarantee that the data will reach its destination. UDP is meant to provide service with very little transmission overhead.
  • It is stateless, suitable for very large numbers of clients, such as in streaming media applications, for example IPTV.
  • It works well in unicast and is suitable for broadcast information, such as many kinds of service discovery and shared information such as broadcast time or the Routing Information Protocol (RIP).

UDP datagram format

UDP adds very little to IP packets except for some error checking and port addressing (UDP datagrams are encapsulated inside IP packets):

          0                                                     16                                                   31
          |                   Source port                       |                   Destination port                  |
          |                     Length                          |                       Checksum                      |
          |                                                   Data                                                    |
  • Source port number (16 bits)
  • Destination port number (16 bits)
  • UDP length (16 bits)
  • UDP checksum (16 bits): The UDP checksum (optional in IPv4) covers the UDP data as well, not just the header as with IP message formats.
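
A minimal sketch of packing and unpacking this 8-byte header in Python (the port numbers are illustrative):

```python
import struct

def parse_udp(datagram):
    """Split a UDP datagram into its four 16-bit header fields and payload.

    The length field covers the 8-byte header plus the data."""
    src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
    return src, dst, length, checksum, datagram[8:length]

# A hand-built datagram: port 5353 -> 53, 5 bytes of payload, checksum 0:
dgram = struct.pack("!HHHH", 5353, 53, 8 + 5, 0) + b"query"
print(parse_udp(dgram))   # (5353, 53, 13, 0, b'query')
```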

Error checking

The UDP protocol has error checking but doesn't have any error recovery:

  • Error detection: a checksum detects whether an error occurred in the datagram. The method used to compute the checksum is defined in RFC 768.
  • Error recovery: using sequence numbers to detect that data was lost and, if so, resending it. UDP does not do this; it is left to the application.

The checksum is the 16-bit one's complement of the one's complement sum of a pseudo header of information from the IP header, the UDP header, and the data, padded with zero octets at the end (if necessary) to make a multiple of two octets.
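
That computation can be sketched in Python. This is the generic one's complement checksum; for a real UDP datagram the input must also include the pseudo-header bytes, which are omitted here.

```python
def internet_checksum(data):
    """16-bit one's complement of the one's complement sum of the data,
    padded with a zero octet if the length is odd (RFC 768 / RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"                       # pad to a multiple of two octets
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:                        # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF
```

A handy property: recomputing the checksum over data that already includes a correct checksum yields 0, which is how receivers verify it.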

Port direction

Port numbers are used for addressing different functions at the source and destination of the datagram.


Regular network exchanges of data are peer-to-peer unicast transactions. An HTTP request to a web server (TCP), email (SMTP, TCP), DNS (UDP), FTP (TCP), etc. are all peer-to-peer unicast transactions. If you want to transmit a video, audio or data stream to multiple nodes with one transmission stream instead of multiple individual peer-to-peer connections, you can use multicasting. Multicast and a network broadcast are different. Multicast is UDP-based only, and the messages are only "heard" by the nodes on the network that have "joined the multicast group", which are those that are interested in the information.


Depending on environment, UDP's built-in checksum may or may not be reliable enough.

UDP has a 16-bit checksum field starting at bit 40 of the packet header. This suffers from at least two weaknesses:

  • The checksum is not mandatory: all bits set to 0 are defined as "no checksum".
  • It is a 16-bit checksum in the strict sense of the word, so it is susceptible to undetected corruption.

An even more realistic threat than data corruption in transit is packet loss and reordering: UDP makes no guarantee that packets will arrive at all, nor that they will arrive in the same sequence as sent.

UDP has no built-in mechanism to deal with payloads bigger than a single packet. It wasn't built for that. If we choose to use UDP, we need to build those parts that are integral to TCP but not to UDP into the application. This will most likely result in a (possibly) inferior reimplementation of TCP. Or not. These protocols may even be an improvement.

Internet Control Message Protocol (ICMP)

Compared to other IP protocols the Internet Control Message Protocol (ICMP) is fairly small and is defined by RFC 792 and RFC 1122. It belongs to the IP layer of TCP/IP but relies on IP for support at the network layer. ICMP messages are encapsulated inside IP datagrams. ICMP only reports errors involving fragment 0 of any fragmented datagrams. The IP, UDP or TCP layer will usually take action based on ICMP messages.

ICMP serves a large number of disparate functions. At its core ICMP was designed as the debugging, troubleshooting, and error reporting mechanism for IP. The errors reported by ICMP are generally related to datagram processing. ICMP will report the following network information:

  • Timeouts
  • Network congestion
  • Network errors such as an unreachable host or network.
  • The ping command is also supported by ICMP.
          0                       8                   16                                        31
          |     Message type     |        Code       |                Checksum                   |
          |                                       Unused                                         |
          |                                       Data                                           |

The ICMP message consists of an 8-bit type, an 8-bit code, a 16-bit checksum, and contents which vary depending on code and type.



Echo and echo reply

ICMP echo messages (ICMP type 8) are sent to a remote computer and are returned in an echo-reply response. The primary use for these messages is to check the availability of the target computer.

  1. Computer A creates an ICMP ECHO datagram, using computer A's IP address as the source IP address, and computer B's IP address as destination.
  2. The ICMP ECHO datagram is transmitted via the network to destination B.
  3. The destination B copies the ECHO information into a new ECHO-REPLY message datagram.
  4. B destroys the original ICMP ECHO message.
  5. B now becomes the source of a new ECHO-REPLY datagram and places its own address in the source IP address field of the IP header, and host A's IP address in the destination field of the IP header.
  6. The datagram is transmitted to the network and is routed to A.
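
Putting the format together, a sketch that builds an Echo Request (type 8, code 0) with a valid checksum. Actually sending it requires a raw socket and usually root privileges, so only construction is shown here.

```python
import struct

def icmp_echo_request(identifier, sequence, payload=b""):
    """Build an ICMP Echo Request: type 8, code 0, checksum,
    then a 16-bit identifier and 16-bit sequence number."""
    def checksum(data):
        # 16-bit one's complement of the one's complement sum (RFC 1071 style)
        if len(data) % 2:
            data += b"\x00"
        total = sum(int.from_bytes(data[i:i + 2], "big")
                    for i in range(0, len(data), 2))
        while total >> 16:
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    csum = checksum(header + payload)          # computed with checksum field 0
    return struct.pack("!BBHHH", 8, 0, csum, identifier, sequence) + payload

pkt = icmp_echo_request(identifier=0x1234, sequence=1, payload=b"ab")
```

Because the checksum field is included in the computation, recomputing the checksum over the finished packet yields 0.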

Host unreachable

When a datagram being forwarded reaches a gateway attached to the network where the destination host should be found, and the route to the host is down, the host is not responding, or the host does not respond on the service port in question, a Host Unreachable message is sent back.

This message is usually displayed as a U in a ping result, and as !H in a traceroute result.


Redirect

ICMP redirect messages can be used to redirect a source host to use a different gateway that may be closer to the destination. These redirect messages are sent by the receiving gateway, and the source host should adapt its forwarding accordingly when receiving one. ICMP redirects are most often used in source routing environments, where the source host calculates routing paths to all destinations itself.

Even though the gateway has instructed the source to redirect its traffic, it still forwards the original datagram that triggered the redirect message. However, the source should no longer continue to direct packets to that gateway and should instead use the gateway specified in the redirect message.

Source quench

An ICMP source quench message is intended as a congestion control mechanism in IP. Source quench messages are used when a network gateway cannot forward a message because its message buffers are full. The gateway transmits a source quench message back to the source host machine to request that the source reduce its transmission rate until it no longer receives source quench messages from the gateway. This effectively throttles back the source's transmission rate.

The gateway can transmit multiple source quench messages, one for each packet it receives from a source. The source machine is not required to respond to these source quench messages.

Time (To Live) Exceeded/Expired

Every IP datagram contains a field called "time to live" or TTL. On each hop along the path to the destination, the TTL field is decremented by one. When the value of the TTL field equals zero, the datagram is discarded to prevent the datagram from floating around the network forever. The gateway may also notify the source host via the ICMP time exceeded message.

Because hosts along a network path may not have the same amount of memory for buffering data, it is sometimes necessary to fragment a packet into smaller pieces. These fragments must later be reassembled. If a host is missing fragments, and is unable to reassemble the datagram, and the TTL has expired, an ICMP message can be sent to the transmitting host.

Traceroute uses the TTL exceeded message to track the path through the network from source to destination. Traceroute sets the TTL on its first set of packets to 1 and waits for the TTL exceeded response, which returns with the sender's IP address (both the round-trip time to that device and its IP address are acquired).


Timestamp

The data received (a timestamp) in the message is returned in the reply together with an additional timestamp. The timestamp is 32 bits of milliseconds since midnight UT.

The Originate Timestamp is the time the sender last touched the message before sending it, the Receive Timestamp is the time the echoer first touched it on receipt, and the Transmit Timestamp is the time the echoer last touched the message on sending it. If the time is not available in milliseconds or cannot be provided with respect to midnight UT, then any time can be inserted in a timestamp, provided the high-order bit of the timestamp is also set to indicate this non-standard value.

The identifier and sequence number may be used by the echo sender to aid in matching the replies with the requests. For example, the identifier can be used like a port in TCP or UDP to identify a session, and the sequence number can be incremented on each request sent. The destination returns these same values in the reply.
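As a sketch of the layout described above, the 20-byte ICMP Timestamp message can be packed and parsed with Python's struct module (the checksum is left at zero here for brevity; a real packet needs it computed):

```python
import struct

# ICMP Timestamp message (RFC 792): type (13 = request, 14 = reply),
# code, checksum, identifier, sequence number, then three 32-bit
# timestamps holding milliseconds since midnight UT.
ICMP_TIMESTAMP_REQUEST = 13
FMT = ">BBHHHIII"          # network byte order, 20 bytes total

def build_timestamp_request(ident, seq, originate_ms):
    # checksum field left at 0 for brevity
    return struct.pack(FMT, ICMP_TIMESTAMP_REQUEST, 0, 0,
                       ident, seq, originate_ms, 0, 0)

def parse_timestamp(message):
    (mtype, code, cksum, ident, seq,
     originate, receive, transmit) = struct.unpack(FMT, message)
    return {"type": mtype, "id": ident, "seq": seq,
            "originate": originate, "receive": receive,
            "transmit": transmit}

# 36,000,000 ms after midnight UT is 10:00:00 UT
msg = build_timestamp_request(ident=0x1234, seq=1, originate_ms=36_000_000)
print(parse_timestamp(msg)["originate"])  # 36000000
```

The echoer would fill in the Receive and Transmit fields before sending the reply (type 14) back.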


  1. Source host generates an ICMP Protocol Data Unit (PDU).
  2. The ICMP PDU is encapsulated in an IP datagram, with the source and destination IP addresses in the IP header. The datagram is an ICMP ECHO datagram, but is often called an IP datagram because that is what it looks like to the networks it is sent over.
  3. Source host notes the local time on its clock as it transmits the IP datagram towards the destination. Each host that receives the IP datagram checks the destination address to see if it matches its own address or is the all hosts address (all 1's in the host field of the IP address).
  4. If the destination IP address in the IP datagram does not match the local host's address, the IP datagram is forwarded to the network where the IP address resides.
  5. The destination host receives the IP datagram, finds a match between itself and the destination address in the IP datagram.
  6. Destination host notes the ICMP Echo information in the IP datagram, performs any necessary work, then destroys the original IP/ICMP Echo datagram.
  7. The destination host creates an ICMP Echo Reply, encapsulates it in an IP datagram placing its own IP address in the source IP address field, and the original sender's IP address in the destination field of the IP datagram.
  8. The new IP datagram is routed back to the originator of the ping. The host receives it, notes the time on the clock and finally prints ping output information, including elapsed time.

This is repeated until all requested ICMP Echo packets have been sent and their responses have been received, or the default 2-second timeout has expired (the default 2-second timeout is local to the host initiating the ping and is not the Time-To-Live value in the datagram).

The response times for ping are Round Trip and cumulative over the entire path out and back to that destination. Ping reveals nothing regarding the intermediate devices. It does not tell *where* a latency or packet loss occurs, nor if some sort of queuing stratagem is in place altering the results. It cannot be trusted for anything other than verifying that a host is up and functioning.
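Steps 1-2 above can be illustrated by building an ICMP Echo Request by hand, including the RFC 1071 Internet checksum (the identifier and payload values are arbitrary):

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 checksum: one's-complement sum of 16-bit words, folded."""
    if len(data) % 2:
        data += b"\x00"                   # pad odd-length data
    total = sum(struct.unpack(">%dH" % (len(data) // 2), data))
    while total >> 16:                    # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident, seq, payload=b"ping"):
    # ICMP header: type 8 (Echo Request), code 0, checksum, id, sequence
    header = struct.pack(">BBHHH", 8, 0, 0, ident, seq)
    cksum = internet_checksum(header + payload)
    return struct.pack(">BBHHH", 8, 0, cksum, ident, seq) + payload

pkt = build_echo_request(1, 1)
# Verification property: checksumming a packet with its checksum
# already in place yields 0.
print(internet_checksum(pkt))  # 0
```

The same checksum routine is used for all ICMP message types, which is why a receiver can validate any of them with one check.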


Most routers come with the option to set the router to ignore or drop ICMP redirects because they can be used to attack networks by confusing hosts as to where the correct default gateway is. ICMP redirects may also be used to set up Man-in-the-Middle attacks.

The variable size of the ICMP packet data section has been exploited a lot. In the well-known "Ping of death," large or fragmented ping packets are used for denial-of-service attacks. ICMP can also be used to create covert channels for communication (see LOKI exploit). When people talk about blocking ICMP they're really talking about ping and traceroute. Ping can be used to determine if a host is alive, Time Exceeded (as part of a traceroute) can be used to map out network architectures, or, heaven forbid, a Redirect (type 5 code 0) can be used to change the default route of a host.

Reasons why we may not want to restrict ICMP:

  • Path MTU Discovery - We use a combination of the Don't Fragment flag and type 3 code 4 (Destination Unreachable - Fragmentation required, and DF flag set) to determine the smallest MTU on the path between the hosts. This way we avoid fragmentation during the transmission.
  • Active Directory requires that clients ping the domain controllers in order to pull down GPOs. Clients use ping to determine the "closest" controller, and if none respond it is assumed that none are close enough, so the policy update doesn't happen.

Blocking ICMP in its entirety is probably not a good idea; picking and choosing what to block, and to/from where, will probably get us what we want. See ufw rules and iptables rules.
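A pick-and-choose policy like the one suggested can be sketched as a tiny accept/drop function (the type/code numbers are from RFC 792; the policy itself is only illustrative, not a recommendation for any particular network):

```python
# Illustrative accept list: keep what ping, traceroute and Path MTU
# discovery need; drop everything else, and never accept redirects.
ALLOW = {
    (8, 0),    # Echo Request (ping)
    (0, 0),    # Echo Reply
    (3, 4),    # Destination Unreachable - fragmentation needed, DF set
    (11, 0),   # Time Exceeded in transit (traceroute)
}

def icmp_policy(icmp_type, icmp_code):
    if icmp_type == 5:                    # Redirect: never accept
        return "drop"
    return "accept" if (icmp_type, icmp_code) in ALLOW else "drop"

print(icmp_policy(3, 4))   # accept - keeps Path MTU discovery working
print(icmp_policy(5, 0))   # drop   - blocks redirect-based MITM
```

A real firewall expresses the same table as ufw or iptables rules, with source/destination constraints added.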

Simple routing

Inbound traffic is captured based on ARP and IP address configuration. Outbound traffic is managed by routes. Routing determines the path these packets take so that they are sent to their destinations. This is required for all IP traffic, local and remote, including when multiple network interfaces are available. Routes are held by the kernel routing table.

  • Direct routing table entries occur when the source and destination hosts are on the same physical network and packets are sent directly from the source to the destination.
  • Indirect routing table entries occur when the source and destination hosts are on different physical networks. The destination host must be reached through one or more IP gateways. The first gateway is the only one which is known by the host system.
  • Default routing defines a gateway to use when the direct network route and the indirect host routes are not defined for a given IP address.

For static routes IP uses the routing table to determine where packets should be sent. First the packet is examined to see if its destination is on the local or a remote network. If it is a remote network, the routing table is consulted to determine the path. If there is no matching entry in the routing table, the packet is sent to the default gateway. Static routes are set with the route command.

For dynamic routes the Routing Information Protocol (RIP) is used. If multiple routes are possible, RIP chooses the shortest route (fewest hops between routers, not physical distance). Routers use RIP to broadcast their routing tables over UDP port 520. The routers then add new or improved routes to their routing tables.
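The direct/indirect/default lookup described above amounts to a longest-prefix match. A minimal sketch with Python's ipaddress module, using made-up addresses:

```python
import ipaddress

# Routing table: (network, gateway). A gateway of None marks a direct
# route; 0.0.0.0/0 matches everything, so it acts as the default route.
routes = [
    (ipaddress.ip_network("192.168.1.0/24"), None),           # direct
    (ipaddress.ip_network("10.0.0.0/8"), "192.168.1.1"),      # indirect
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.1.254"),     # default
]

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    matches = [(net, gw) for net, gw in routes if dest in net]
    # the most specific (longest) prefix wins
    net, gw = max(matches, key=lambda r: r[0].prefixlen)
    return gw or "deliver directly"

print(next_hop("192.168.1.20"))  # deliver directly (same network)
print(next_hop("10.1.2.3"))      # 192.168.1.1 (indirect route)
print(next_hop("8.8.8.8"))       # 192.168.1.254 (default gateway)
```

This is the same decision the kernel makes for every outbound packet, just without the optimised data structures.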

Mesh network routing

Ad-Hoc is one of the modes of operation for an 802.11 radio at OSI layer 1, the physical layer, and it basically means that all devices can communicate directly with any other device that is within radio range. Normally, in "infrastructure mode", wireless devices can only communicate with a central Access Point (AP) or router, and that device is responsible for re-transmitting packets from one client device to another client device (even if they are right next to each other). Ad-Hoc networks get rid of the middle-man that is the AP; however, they don't have any inherent capability for multi-hop. That means that if device A can reach device B, and device B can reach device C, but A cannot reach C, then A and C cannot communicate, because B will not re-transmit any packets.

Mesh networking, also known as Mesh Routing, happens at OSI layer 3, the network layer. Mesh Routing allows each device on a network (also called nodes) to act as a router and re-transmit packets on behalf of any other device. Mesh Routing provides the multi-hop facility that Ad-Hoc mode lacks. By combining Ad-Hoc mode at layer 1 and Mesh Routing at layer 3 we can create wireless mesh networks purely between client devices without any need for centralized Access Points or Routers. Both Ad-Hoc and Mesh Routing can be described as P2P as they are both instances of client-to-client communication, just at different layers of the OSI model.

Routing protocols are organized as:

  • Reactive or on-demand routing protocols discover a route when it is needed. These protocols tend to decrease the control-traffic overhead at the cost of increased latency in discovering a new route. Reactive protocols do not continuously distribute topology information; bandwidth is consumed only when data is transferred from source to destination. Examples are AODV (ad-hoc on-demand distance vector), DSR (dynamic source routing) and ABR (associativity-based routing) protocols.
  • Proactive routing protocols have every node store information in the form of tables, and changes in network topology require an update to the tables. The nodes swap topology information, so there is no route-discovery delay associated with finding a new route. The fixed cost of proactive routing is greater than that of reactive protocols. Examples are DSDV (destination-sequenced distance vector) and OLSR (optimized link state routing) protocols.
  • Hybrid routing protocols combine reactive and proactive routing. They were proposed to reduce the control overhead of proactive routing protocols and decrease the route-discovery latency of reactive routing protocols. Examples are ZRP (zone routing protocol) and TORA (temporally-ordered routing algorithm).

Reactive routing

  • The Ad hoc On Demand Distance Vector (AODV) routing algorithm is an on demand routing protocol designed for ad hoc mobile networks. AODV is capable of both unicast and multicast routing. It is an on demand algorithm, meaning that it builds routes between nodes only as desired by source nodes. It maintains these routes as long as they are needed by the sources. Additionally, AODV forms trees which connect multicast group members. The trees are composed of the group members and the nodes needed to connect the members. AODV uses sequence numbers to ensure the freshness of routes. It is loop-free, self-starting, and scales to large numbers of mobile nodes.
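The on-demand flavour of route discovery can be sketched as a breadth-first flood of route requests over a toy topology (this illustrates the idea only; real AODV also maintains sequence numbers, route tables and timeouts):

```python
from collections import deque

# Made-up topology: who is within radio range of whom.
links = {
    "A": ["B"], "B": ["A", "C", "D"],
    "C": ["B", "E"], "D": ["B", "E"], "E": ["C", "D"],
}

def discover_route(source, destination):
    """Breadth-first flood of route requests (RREQs). The first request
    to reach the destination defines the hop-count-shortest route, and
    the route reply (RREP) travels back along it."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        route = queue.popleft()
        node = route[-1]
        if node == destination:
            return route
        for neighbour in links[node]:
            if neighbour not in seen:   # each node re-broadcasts once
                seen.add(neighbour)
                queue.append(route + [neighbour])
    return None                         # destination unreachable

print(discover_route("A", "E"))  # ['A', 'B', 'C', 'E']
```

Nothing happens until a source actually asks for a route, which is the defining property of reactive protocols.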

Proactive routing

  • The Optimized Link State Routing Protocol (OLSR) is a dynamic link-state protocol which collects link data and dynamically calculates the best routes within the network.
  • The Better Approach To Mobile Ad-Hoc Networking (B.A.T.M.A.N.) is a protocol under development by the "Freifunk" community and intended to replace OLSR. No single node has all the data. Knowledge about the best route through the network is decentralised, eliminating the need to spread information concerning network changes to every node in the network. Individual nodes only save information about the “direction” they received data from. Data gets passed on from node to node and packets get individual, dynamically created routes. A network of collective intelligence is created.
  • Caleb James DeLisle's Network Suite (CJDNS) is a table driven networking protocol designed to make every node equal; there is no hierarchy or edge routing. Rather than assigning addresses based on topology, all cjdns IPv6 addresses are within the FC00::/8 Unique local address space (keys which do not hash to addresses starting with 'FC' are discarded). Although nodes are identified with IPv6 addresses, Cjdns does not depend upon having IPv6. Each node connects to a couple other nodes by manually configured links over an IPv4 network (such as the Internet) or via the Ethernet Interface.
  • olsrd and olsrd2 are both table driven Link State Routing Protocol implementations optimized for Mobile ad hoc networks on embedded devices like commercial off-the-shelf routers, smartphones or normal computers. Sometimes these networks are called "mesh networks". olsrd and olsrd2 are the routing daemons which make up the mesh.

Hybrid routing

  • The Zone Routing Protocol (ZRP) framework is a hybrid routing framework suitable for a wide variety of mobile ad-hoc networks, especially those with large network spans and diverse mobility patterns. Each node proactively maintains routes within a local region (referred to as the routing zone). Knowledge of the routing zone topology is leveraged by the ZRP to improve the efficiency of a globally reactive route query/reply mechanism. The proactive maintenance of routing zones also helps improve the quality of discovered routes, by making them more robust to changes in network topology. The ZRP can be configured for a particular network by proper selection of a single parameter, the routing zone radius.


  • The Ad-Hoc Configuration Protocol (AHCP) is an autoconfiguration protocol for IPv6 and dual-stack IPv6/IPv4 networks designed to be used in place of router discovery and DHCP on networks where it is difficult or impossible to configure a server within every link-layer broadcast domain. AHCP will automatically configure IPv4 and IPv6 addresses, name servers and NTP servers. It will not configure default routes, since it is designed to be run together with a routing protocol (such as Babel or OLSR).

Anonymising proxies

An anonymising proxy server is a server whose only function is to be a node. It only reroutes requests from one location to another. If Cathy wants to make a connection to Heathcliff without him knowing that it is Cathy connecting to him, she would fill in Heathcliff's IP address at a proxy server. The proxy server would then make a connection to Heathcliff and relay all the information Heathcliff sends to it to Cathy.

This does not use expensive encryption techniques and is easy to understand and use.


  • If an unreliable third party controls a proxy server (for example, a group of criminals who use the proxy server for phishing), users are no longer guaranteed secure and anonymous communication over that route.


Tunneling means that the complete IP packet to be sent from source to destination is encapsulated into another IP packet. This new packet has a legal internet IP address.

SSH tunneling

Secure shell (SSH) is used to securely acquire and use a remote terminal session, and has other uses as well. You can use SSH to tunnel your traffic, transfer files, mount remote file systems, and more. SSH also uses strong encryption, and you can set your SSH client to act as a Socks proxy. Once you have, you can configure applications on your computer – such as your web browser – to use the Socks proxy. The traffic enters the Socks proxy running on your local system and the SSH client forwards it through the SSH connection – this is known as SSH tunneling. This works similarly to browsing the web over a VPN. From a web server's perspective, the traffic appears to be coming from the SSH server. The traffic between the source and the SSH server is encrypted, so you can browse over an encrypted connection as you could with a VPN. You must configure each application to use the SSH tunnel's proxy.

Port forwarding or port mapping is a name given to the combined technique of:

  • Translating the address and/or port number of a packet to a new destination.
  • Possibly accepting such packet(s) in a packet filter (firewall).
  • Forwarding the packet according to the routing table.
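The three steps can be sketched as one function operating on a hypothetical forwarding table and packet filter:

```python
# Illustrative forwarding table: external port -> (internal host, port).
forwarding_table = {
    8080: ("192.168.1.10", 80),   # web server on the inside
    2222: ("192.168.1.11", 22),   # SSH box on the inside
}

allowed_internal_ports = {80, 22}  # the packet filter's accept list

def forward(packet):
    dst_port = packet["dst_port"]
    if dst_port not in forwarding_table:
        return None                                      # no mapping: drop
    host, port = forwarding_table[dst_port]              # 1. translate
    if port not in allowed_internal_ports:               # 2. filter
        return None
    return {**packet, "dst_ip": host, "dst_port": port}  # 3. forward

pkt = {"src_ip": "203.0.113.5", "dst_ip": "198.51.100.1", "dst_port": 8080}
print(forward(pkt)["dst_ip"])  # 192.168.1.10
```

Real NAT devices also rewrite the packet in the reverse direction and track connection state, which this sketch omits.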

Socks proxy

A Socks proxy is different from a "normal" proxy in that it is an application proxy. For example, when you use an HTTP proxy you are actually forwarding the HTTP request, and the HTTP proxy server then performs the request on your behalf. Socks provides authentication for protocols that cannot otherwise be authenticated and bypasses default routing in the internal network.

Socks protocol

The Socks protocol is roughly equivalent to setting up an IP tunnel with a firewall and the protocol requests are then initiated from the firewall.

  1. The client contacts the Socks proxy server and negotiates a proxy connection.
  2. When a connection is established, the client communicates with the Socks server using the Socks protocol.
  3. The external server communicates with the Socks server as if it were the actual client.
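For the SOCKS5 variant of the protocol (RFC 1928), the client's first two messages in steps 1-2 can be built by hand; only the no-authentication method and an IPv4 CONNECT request are shown, and the address below is just an example:

```python
import struct

def greeting():
    # VER=5, NMETHODS=1, METHODS=[0x00: no authentication required]
    return b"\x05\x01\x00"

def connect_request(host, port):
    # VER=5, CMD=1 (CONNECT), RSV=0, ATYP=1 (IPv4), DST.ADDR, DST.PORT
    addr = bytes(int(octet) for octet in host.split("."))
    return b"\x05\x01\x00\x01" + addr + struct.pack(">H", port)

req = connect_request("93.184.216.34", 80)
print(req.hex())  # 050100015db8d8220050
```

After the server's reply, the client simply sends its application data and the Socks server relays it, which is why the external server sees the proxy as the client.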

Secure shell

SSH tunnels can be created in several ways using different kinds of port forwarding mechanisms. Ports can be forwarded in three ways:

  • Local port forwarding: connections to a given port on the client are forwarded through the SSH connection to a host and port reachable from the server.
  • Remote port forwarding: connections to a given port on the server are forwarded back through the SSH connection to a host and port reachable from the client.
  • Dynamic port forwarding: the SSH client acts as a Socks proxy, forwarding connections to destinations chosen at run time by each application.

Virtual Private Network (VPN)

VPN is used for connecting to private networks over public networks (internet). A VPN client communicates over the internet and sends the computer’s network traffic through the encrypted connection to a VPN server. The encryption provides a secure connection, which means petty tyrants (adversaries) can not snoop on the connection and see sensitive information. Depending on the VPN service, all the network traffic may be sent over the VPN, or only some of it.

A VPN works more at the operating system level than the application level. In other words, when you set up a VPN connection, your operating system can route all network traffic through it from all applications (although this can vary from VPN to VPN, depending on how the VPN is configured). You don’t have to configure each individual application.

Point-to-Point Tunneling Protocol (PPTP)

PPTP defined in RFC 2637 allows multiprotocol traffic to be encrypted and then encapsulated in an IP header to be sent across an IP network or a public IP network. PPTP can be used for remote access and site-to-site VPN connections. When using the internet as the public network for VPN, the PPTP server is a PPTP-enabled VPN server with one interface on the internet and a second interface on the intranet.

PPTP encapsulates Point-to-Point Protocol (PPP) frames in IP datagrams for transmission over the network. PPTP uses a TCP connection for tunnel management and a modified version of Generic Routing Encapsulation (GRE) to encapsulate PPP frames for tunneled data. The payloads of the encapsulated PPP frames can be encrypted, compressed, or both. A PPTP packet containing an IP datagram:

                                                                <          Encrypted          >
                    |  IP header  |  GRE header  |  PPP header  |  PPP payload (IP datagram)  |
                                                 <                PPP Frame                   >

PPTP is taking advantage of the underlying PPP encryption and encapsulating a previously encrypted PPP frame.

Nowadays usually found using only 128-bit encryption keys, PPTP has had a number of security vulnerabilities come to light in the years since it was first bundled with some Windows OS back in 1999, the most serious of which is the possibility of unencapsulated MS-CHAP v2 authentication. Using this exploit, PPTP has been cracked within 2 days, and although Micro$oft has patched the flaw (through the use of PEAP authentication), it has itself issued a recommendation that VPN users should use L2TP/IPsec or SSTP instead [7]. The vulnerable MS-CHAP v2 authentication is still the most common in use.

Knowing that PPTP was insecure anyway, it came as no surprise to anybody that the NSA almost certainly decrypts PPTP encrypted communications as standard. Perhaps more worrying is that the NSA has (or is in the process of) almost certainly decrypted the vast amounts of older data it has stored, which was encrypted back when even security experts considered PPTP to be secure.

Layer 2 Tunneling Protocol (L2TP)

L2TP defined by RFC 2661, allows multiprotocol traffic to be encrypted and then sent over any medium that supports point-to-point datagram delivery, such as IP or Asynchronous Transfer Mode (ATM). L2TP is a combination of PPTP and Layer 2 Forwarding (L2F) with the best features of both.

Encapsulation for L2TP/IPsec packets consists of two layers. First L2TP encapsulation: A PPP frame (an IP datagram) is wrapped with an L2TP header and a UDP header:

                    |  IP header  |  UDP header  |  L2TP header  |  PPP header  |  PPP payload (IP datagram)  |
                                                                  <                PPP Frame                  >
                                                  <                       L2TP Frame                          >
                                   <                              UDP Frame                                   >                                           

The resulting L2TP message is then wrapped with an IPsec Encapsulating Security Payload (ESP) header and trailer, an IPsec Authentication trailer that provides message integrity and authentication, and a final IP header. In the IP header is the source and destination IP address that corresponds to the VPN client and VPN server.

                    |  IP header  |    IPSec     |  UDP header  |  L2TP header  |  PPP header  |   PPP payload   |     IPSec     |     IPSec      |
                    |             |  ESP header  |              |               |              |  (IP datagram)  |  ESP trailer  |  AUTH trailer  |
                                                        <                       Encrypted by IPSec              >                                        

The L2TP message is encrypted with either Data Encryption Standard (DES) or Triple DES (3DES) by using encryption keys generated from the negotiation process.

IPsec encryption has no major known vulnerabilities, and if properly implemented may still be secure. However, Edward Snowden’s revelations have strongly hinted at the standard being compromised by the NSA, and as John Gilmore (security specialist and founding member of the Electronic Frontier Foundation) explains in this post, it is likely that it has been deliberately weakened during its design phase [8].

Secure Socket Tunneling Protocol (SSTP)

Secure Socket Tunneling Protocol (SSTP) is a tunneling protocol that uses the HTTPS protocol over TCP port 443 to pass traffic through firewalls and Web proxies that might block PPTP and L2TP/IPsec traffic. SSTP provides a mechanism to encapsulate PPP traffic over the Secure Sockets Layer (SSL/TLS v3) channel of the HTTPS protocol. The use of PPP allows support for strong authentication methods, such as EAP-TLS. SSL/TLS provides transport-level security with enhanced key negotiation, encryption, and integrity checking.

SSTP encapsulates PPP frames in IP datagrams for transmission over the network. SSTP uses a TCP connection (over port 443) for tunnel management as well as PPP data frames. The SSTP message is encrypted with the SSL/TLS channel of the HTTPS protocol.

When a client tries to establish a SSTP-based VPN connection, SSTP first establishes a bidirectional HTTPS layer with the SSTP server. Over this HTTPS layer, the protocol packets flow as the data payload.

SSTP is a proprietary standard owned by Micro$oft. This means that the code is not open to public scrutiny, and Microsoft’s history of co-operating with the NSA, and on-going speculation about possible backdoors built-in to the Windows operating system, does not inspire trust and confidence in the standard.

Open Virtual Private Network (OpenVPN)

OpenVPN is a fairly new open source technology that uses the OpenSSL library and SSLv3/TLSv1 protocols, along with a combination of other technologies, to provide a strong and reliable VPN solution. One of its major strengths is that it is highly configurable, and although it runs best on a UDP port, it can be set to run on any port, including TCP port 443. This makes traffic on it impossible to tell apart from traffic using standard HTTPS over SSL/TLS, and it is therefore extremely difficult to block.

Another advantage of OpenVPN is that the OpenSSL library used to provide encryption supports a number of cryptographic algorithms (e.g. AES, Blowfish, 3DES, CAST-128, Camellia and more), although VPN providers almost exclusively use either AES or Blowfish. 128-bit Blowfish is the default cypher built in to OpenVPN, and although it is generally considered secure, it does have known weaknesses. Blowfish is known to be susceptible to attacks on reflectively weak keys. This means Blowfish users must carefully select keys as there is a class of keys known to be weak, or switch to more modern alternatives like Blowfish's successors Twofish and Threefish.

OpenVPN has become the default VPN connection type, and while not natively supported by any platform, it is widely supported on most through third-party software (including iOS and Android).

It seems OpenVPN has not been compromised or weakened by the NSA. Although no-one knows the full capabilities of the NSA for certain, the mathematics strongly suggests that OpenVPN, used in conjunction with a strong cipher, could be the best choice.

Internet Key Exchange (IKEv2)

Internet Key Exchange (version 2) is an IPSec based tunneling protocol jointly developed by Micro$oft and Cisco, and baked into Windows versions 7 and above. The standard is supported by Blackberry devices, and independently developed (and compatible) open source implementations are available for Linux and other operating systems. IOW, the code can be inspected and if that implementation is used, we can perhaps be a little bit less wary.

It is not as ubiquitous as IPSec but is considered at least as good as, if not superior to, L2TP/IPsec in terms of security, performance (speed), and stability. Mobile users in particular benefit the most from using IKEv2 because of its support for the Mobility and Multihoming (MOBIKE) protocol, which also makes it highly resilient to changing networks.

DNS leaks

When using an anonymity or privacy service, it is extremely important that all traffic originating from your computer is routed through the anonymity network. If any traffic leaks outside of the secure connection to the network, any adversary monitoring your traffic will be able to log your activity.

Mix networks

In the eighties, digital mixes (sometimes called mix networks or mixnets) appeared as a way to achieve a higher level of anonymity in personal communication. Digital mixing uses a similar system as routing but it adds several layers in the connection between the sender and receiver of the communication. The layers are created using public key cryptography. Using digital mixing is comparable to sending a letter encased in four envelopes, pre-addressed and pre-stamped, each with a small message reading, "please remove this envelope and repost".

Note: Mixnets are not designed to disguise the fact that you are using a mix network. If an adversary can simply lock you up for using anonymity tools, you need to disguise your use of anonymity tools.

Sending mixed messages

      _____________              +---------+   _____________                       _____________                  _____________                _____________            
      |           |        +----------+   /|   |           |        +----------+   |           |                  |           |                |           |
      |           |   +----------+   /|__/ |   |           |   +----------+   /|   |           |   +----------+   |           |                |           |
      |           |   |\        /|__/ |_\__|   |           |   |\        /|__/ |   |           |   |\        /|   |           |                |           |
      |___________|   | \______/ |_\__|        |___________|   | \______/ |_\__|   |___________|   | \______/ |   |___________|                |___________|
        _|_____|_     |__/____\__|               _|_____|_     |__/____\__|          _|_____|_     |__/____\__|     _|_____|_                    _|_____|_
       / ******* \ ............................ / ******* \ ....................... / ******* \ .................. / ******* \ ................ / ******* \
      / ********* \                            / ********* \                       / ********* \                  / ********* \                / ********* \
     ---------------                          ---------------                     ---------------                ---------------              ---------------

If Cathy wants to send a message to Heathcliff, without a third person being able to find out who the sender or recipient is, she would encrypt her message three times with the aid of public key cryptography. She would then send her message to a proxy server who would remove the first layer of encryption and send it to a second proxy server through the use of permutation. This second server would then decrypt and also permute the message and the third server would decrypt and send the message to the intended recipient.
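The layering can be illustrated with a toy cipher: a SHA-256-based XOR keystream stands in for the real public-key encryption, so this shows only the wrap-and-peel structure, not actual security. Do not use anything like this to protect real traffic.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric layer: XOR data with a SHA-256-derived keystream.
    Applying it twice with the same key recovers the original."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def wrap(message, node_keys):
    """Cathy adds one encryption layer per mix node, innermost last."""
    for key in reversed(node_keys):
        message = keystream_xor(key, message)
    return message

keys = [b"node1-key", b"node2-key", b"node3-key"]
sealed = wrap(b"meet me on the moor", keys)

# Each mix node peels exactly one layer with its own key:
for key in keys:
    sealed = keystream_xor(key, sealed)
print(sealed)  # b'meet me on the moor'
```

No single node ever sees both the plaintext and the original sender; each one only knows its own layer.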

Threshold batching

A mix node must collect more than one message before sending any out - otherwise the node is behaving as an onion router node with a time delay. The more messages collected, the more uncertainty is introduced as to which message went where. Using this threshold batching strategy to solve a lack of messages can make the period between the sending and the eventual receiving of the message long, like several hours, depending on the amount of messages deemed critical.
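A threshold mix node can be sketched in a few lines: it pools messages until the threshold is reached, then flushes them in shuffled order, severing the link between arrival order and departure order.

```python
import random

class ThresholdMix:
    """Toy threshold-batching mix node (illustrative only)."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.pool = []

    def receive(self, message):
        self.pool.append(message)
        if len(self.pool) < self.threshold:
            return []                    # keep collecting, send nothing
        batch, self.pool = self.pool, []
        random.shuffle(batch)            # output order reveals nothing
        return batch

node = ThresholdMix(threshold=3)
print(node.receive("m1"))        # [] - below threshold, nothing leaves
print(node.receive("m2"))        # [] - still collecting
print(len(node.receive("m3")))   # 3  - whole batch flushed at once
```

The higher the threshold, the larger the anonymity set per batch, and the longer messages may sit in the pool.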

This system is thought effective because, as long as the three successive recipients (the re-senders) send enough messages to different mix nodes, it is impossible for a third party like an ISP (and government (policing) agencies) to find out what message was originally sent by whom and to whom. Mixing is specifically designed to provide security even if an adversary can see the entire path. See Simulation: Mixnets.


  • Only works if the resenders send enough messages (at any given moment and during a set amount of time). Because (most) nodes, the resending servers, do not send enough messages at the same time, digital mixing could be vulnerable to statistical analysis such as data mining by governments or government policing and intelligence agencies.
  • The use of public key cryptography in itself is not very fast, and has its own vulnerabilities.

Tor onion routing

Tor combines aspects of digital mixing and anonymising proxies.

Mix networks get their security from the mixing done by their component mixes, and may or may not use route unpredictability to enhance security. Onion routing networks primarily get their security from choosing routes that are difficult for the adversary to observe, which for designs deployed to date has meant choosing unpredictable routes through a network. And onion routers typically employ no mixing at all. This gets at the essence of the two even if it is a bit too quick on both sides. Mixes are also usually intended to resist an adversary that can observe all traffic everywhere and, in some threat models, to actively change traffic. Onion routing assumes that an adversary who observes both ends of a communication path will completely break the anonymity of its traffic. Thus, onion routing networks are designed to resist a local adversary, one that can only see a subset of the network and the traffic on it. - Paul Syverson - Why I'm not an Entropist [9]


  • If Cathy wants to make a connection to Heathcliff through the Tor network, she makes an unencrypted connection to a centralised directory server containing the addresses of Tor nodes.
  • After receiving the address list from the directory server the Tor client software connects to a random node (the entry node), through an encrypted connection.
  • The entry node makes an encrypted connection to a random second node which in turn does the same to connect to a random third Tor node.
  • The third node (the exit node) connects to Heathcliff.

Every Tor node is chosen at random (the same node cannot be used twice in one connection and depending on data congestion some nodes will not be used) from the address list received from the centralised directory server, both by the client and by the nodes, to enhance the level of anonymity as much as possible.
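The random, non-repeating node choice can be sketched with random.sample (the node list below stands in for the directory server's address list; real Tor also weights choices by bandwidth and enforces family/subnet rules):

```python
import random

# Stand-in for the address list fetched from the directory server.
directory = ["node%d" % i for i in range(20)]

def build_circuit(nodes, hops=3):
    # random.sample never picks the same node twice, matching the rule
    # that a node cannot appear twice in one connection.
    return tuple(random.sample(nodes, hops))

entry, middle, exit_node = build_circuit(directory)
print(len({entry, middle, exit_node}))  # 3 - all hops are distinct
```

Only the entry node sees Cathy's address, and only the exit node sees Heathcliff's.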

Changing routes

If the same connection (the same set of nodes) were to be used for a longer period of time a Tor connection would be vulnerable to statistical analysis, which is why the client software changes the entry node every ten minutes.

Be a node

If Cathy uses the Tor network to connect to Heathcliff and also functions as a node for Jane, she also connects to a Tor node for that. An ill-willing third party will find it extremely hard to know which connection is initiated as a user and which as a node.


  • If an adversary is able to see the entire path, onion routing loses its security. If a government makes their own national internet, running Tor would not provide security because the government would be able to see the entire path.
  • Not only that. If an attacker can see you, and can see the website you're visiting, even if you create a path outside the adversary's control - they will still be able to correlate the traffic and learn you are visiting the website. This clearly raises concerns about using onion routing to visit a website or websites related to your own government.

I2P garlic routing

Garlic routing is a variant of onion routing that encrypts multiple messages together to make it more difficult for attackers to perform traffic analysis. I2P implements packet-switched routing instead of circuit-switched routing (like Tor). I2P tunnels are unidirectional, whereas Tor's circuits are bidirectional.

I2P uses garlic routing, bundling and encryption in three places:

  • For building and routing through tunnels (layered encryption)
  • For determining the success or failure of end to end message delivery (bundling)
  • For publishing some network database entries (dampening the probability of a successful traffic analysis attack) (ElGamal/AES)


An I2P tunnel is a directed path through an explicitly selected list of routers. The first router in a tunnel is called the gateway. Communication within a tunnel is unidirectional, which means that it is impossible to send data back without using a second, separate tunnel:

  • outbound tunnels are used to send messages away from the tunnel creator.
  • inbound tunnels are used to bring messages to the tunnel creator.


There is no rigid distinction between a server and a pure client like there is in the Tor architecture.

Information transits through network routers, each of which is able to decrypt only its own layer. The information handled by each single node consists of:

  • the IP address of the next router
  • the encrypted data to transfer.
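The idea that each router sees only the next hop plus an opaque payload can be illustrated with a toy onion-wrapping sketch. The XOR "cipher" below is a deliberately insecure stand-in for the real cryptography, and the keys and router names are invented:

```python
import json

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # Stand-in for a real cipher: XOR with a repeating key (NOT secure).
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap_layers(message: bytes, hops):
    """Wrap `message` in one encryption layer per (key, next_hop) pair,
    innermost layer first, so that each router can strip exactly one
    layer and learn nothing except the next hop and an opaque payload."""
    packet = message
    for key, next_hop in reversed(hops):
        header = json.dumps({"next": next_hop}).encode() + b"\n"
        packet = toy_cipher(key, header + packet)
    return packet

def peel_layer(key: bytes, packet: bytes):
    """What a single router does: decrypt its layer, read the next hop,
    and pass on the still-encrypted remainder."""
    plain = toy_cipher(key, packet)
    header, rest = plain.split(b"\n", 1)
    return json.loads(header)["next"], rest
```

Walking a packet through the hops in order recovers the original message only at the final step; each intermediate router sees just its own next-hop instruction.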

The network database

netDb is a pair of algorithms used to share the following metadata with the network:

  • routerInfo is a data structure that provides routers with the information necessary for contacting a specific router (public keys, transport addresses, etc.). Each router sends its routerInfo directly to the netDb, which collects information on the entire network.
  • a leaseSet is a data structure that gives routers the information necessary for contacting a particular destination. A leaseSet is a collection of leases, each of which specifies a tunnel gateway through which a specific destination can be reached. It is sent through outbound tunnels anonymously, to avoid correlating a router with its leaseSets. A lease contains the following information:
    • the inbound gateway of a tunnel that allows reaching a specific destination.
    • the expiration time of the tunnel.
    • a pair of public keys for encrypting messages (to send through the tunnel and reach the destination).
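A leaseSet as described above might be modelled like this. The field names are illustrative rather than I2P's actual wire format, and a single encryption key is shown where I2P publishes a pair:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Lease:
    inbound_gateway: str  # tunnel gateway through which the destination is reachable
    expires: float        # expiration time of the tunnel (unix timestamp)

@dataclass
class LeaseSet:
    destination: str
    encryption_key: bytes  # public key for encrypting messages to the destination
    leases: list = field(default_factory=list)

    def usable_leases(self, now=None):
        """Leases whose tunnels have not expired yet."""
        now = time.time() if now is None else now
        return [lease for lease in self.leases if lease.expires > now]
```

Because tunnels are short-lived, a sender always filters for unexpired leases before picking an inbound gateway to target.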


  • When Cathy wants to send a message to Heathcliff, she does a lookup in the netDb to find Heathcliff's leaseSet, which gives her his current inbound tunnel gateways.
  • Cathy's router aggregates multiple messages into a single garlic message.
  • The garlic is encrypted using the public key published in Heathcliff's leaseSet, so the message can be encrypted for him without any direct exchange with his router.
  • Cathy's router selects one of her outbound tunnels and sends the data, with instructions for the outbound tunnel's endpoint to forward the message on to one of Heathcliff's inbound tunnel gateways.
  • When the outbound tunnel endpoint receives those instructions, it forwards the message accordingly, and when Heathcliff's inbound tunnel gateway receives it, it is forwarded on to his router.
  • If Cathy wants Heathcliff to be able to reply to the message, she needs to transmit her own destination explicitly as part of the message itself.
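The bundling step above, in which several messages become cloves of one garlic message, can be sketched as follows. The plaintext JSON encoding is purely illustrative; in real I2P the whole bundle is then encrypted (ElGamal/AES) with the key from the recipient's leaseSet:

```python
import json

def make_garlic(cloves):
    """Bundle several (delivery_instructions, payload) pairs into one
    garlic message. In real I2P the result would then be encrypted
    with the public key published in the recipient's leaseSet."""
    bundle = [{"instructions": ins, "payload": payload}
              for ins, payload in cloves]
    return json.dumps({"cloves": bundle}).encode()

def split_garlic(blob):
    """What the recipient's router does after decrypting: unpack the
    individual cloves and handle each according to its instructions."""
    return [(c["instructions"], c["payload"])
            for c in json.loads(blob)["cloves"]]
```

Bundling a delivery-status clove alongside the payload clove is how I2P determines end-to-end delivery success without a bidirectional circuit.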

I2P is end-to-end encrypted; no information is sent in the clear. Each node has an internal network address that is distinct from its IP address, and the IP address is not used to identify endpoints within the network.

Layered encryption

I2P uses cryptographic IDs to identify routers and end-point services. Human-usable names use Base32: a SHA-256 digest is computed over the destination (which is commonly displayed in its base64 representation), the hash is base32-encoded, and .b32.i2p is appended to it.
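That derivation can be sketched in Python. I2P's base64 alphabet replaces '+' and '/' with '-' and '~', so the destination string is first decoded back to its binary form before hashing; the destination used below is invented just to exercise the function:

```python
import base64
import hashlib

def to_b32_address(destination_b64: str) -> str:
    """Derive a .b32.i2p name: SHA-256 the destination's binary form,
    base32-encode the digest, and append the .b32.i2p suffix."""
    # Undo I2P's base64 alphabet substitutions before decoding
    standard = destination_b64.replace("-", "+").replace("~", "/")
    raw = base64.b64decode(standard)
    digest = hashlib.sha256(raw).digest()
    # Lowercase, padding stripped: a 32-byte digest yields 52 characters
    name = base64.b32encode(digest).decode().lower().rstrip("=")
    return name + ".b32.i2p"

# Invented destination, standing in for a real base64 destination string
destination = base64.b64encode(b"example destination bytes").decode()
address = to_b32_address(destination)
```

The fixed digest length means every .b32.i2p name has the same shape, which is what lets routers look them up without any central naming authority.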

  • During connection set-up (tunnel building), only the routing instructions for the next hop are exposed to each peer.
  • During data transfer, messages are passed through the tunnel. The message and its routing instructions are exposed only to the endpoint of the tunnel.
  • An additional end-to-end layer of encryption hides the data from the outbound tunnel endpoint and the inbound tunnel gateway.
  • Each tunnel has an encryption layer to prevent unauthorised disclosure to other peers inside the network.


Standardisation committees are where "communications" and "investment" meet ...




Pirate boxes

  • “For Free Information and Open Internet: Independent journalists, community media and hacktivists take action” (pdf): PirateBox or How to Escape the Big Brothers of the Internet (Mathieu Lapprand) – starts on page 143

Low-latency onion routing



  1. The FCC, the Internet and Net Neutrality
  2. IEEE 802
  3. Routing Loop Attack using IPv6 Automatic Tunnels: Problem Statement and Proposed Mitigations
  4. When moving to IPv6, beware the risks
  5. IP fragmentation attack
  6. Rose Fragmentation Attack explained
  7. Microsoft Security Advisory 2743314: Unencapsulated MS-CHAP v2 Authentication Could Allow Information Disclosure
  8. Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"
  9. Why I'm not an Entropist