It has been more than a decade since IPv6 first made the news. It was created to address the limited address space of IPv4 (2^32 addresses). IPv6 supports 2^128 addresses, which could enable every device on the planet to have its own unique IPv6 address.
I was going through the book “IPv6: The New Internet Protocol” by Christian Huitema some time ago, which gave me a little more perspective into this beast. IPv6 is not just an expansion of the address space to 128 bits but a whole lot more. It also encompasses network autoconfiguration, security and multicast in a much cleaner design. There are features in IPv6 that help in much faster routing (like dropping per-hop checksum validation). However, even these have not been enough to lure ISPs and other networks into adopting it. The most common reasons are the inertia of the present Internet and the added expense of gateways and support personnel.
Going through this book I found yet another technical reason which might hinder the adoption of IPv6. I never came across anything similar anywhere, so I thought I would write it down. The biggest nightmare in routing IPv4 on the Internet today is the size of the routing tables. This happens mainly because the owner of an IPv4 address block is the firm which has the end systems using it. When the firm moves somewhere else, or if the allocation is given to a different firm, routers across the Internet must be reconfigured so that packets for those addresses are delivered to the new location.
An IP packet is forwarded based on the ‘prefix’ of its destination address. A shorter prefix means a larger set of addresses is forwarded in the same direction (on the same link), and vice versa. Moving a small set of addresses forces a longer, more specific prefix to be added as an exception alongside the short prefixes in the routing table. If many such moves happen, the routing table fills up with a large number (tens of thousands) of such exceptions. Also, if these address moves span large geographic distances, the routing tables of many routers are affected. A large routing table slows down the search for a packet’s destination and raises the cost of the router because of the need for faster processors and faster memory.
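A toy longest-prefix-match table makes the “exception” problem concrete. The prefixes and ISP names below are made up (203.0.113.0/24 is a documentation range), but the lookup rule — the most specific matching prefix wins — is how routers actually forward:

```python
# Toy longest-prefix-match routing table; prefixes and next hops are
# made up for illustration (203.0.113.0/24 is a documentation range).
import ipaddress

routes = {
    ipaddress.ip_network("203.0.113.0/24"): "ISP-A",   # short aggregate prefix
}

def lookup(addr, table):
    """Return the next hop for the longest (most specific) matching prefix."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in table if ip in net]
    if not matches:
        return None
    return table[max(matches, key=lambda net: net.prefixlen)]

# Initially everything in 203.0.113.0/24 is forwarded toward ISP-A.
assert lookup("203.0.113.77", routes) == "ISP-A"

# A firm holding 203.0.113.64/28 moves behind ISP-B: a longer, more
# specific prefix must be added to the table as an exception.
routes[ipaddress.ip_network("203.0.113.64/28")] = "ISP-B"

assert lookup("203.0.113.77", routes) == "ISP-B"  # the exception wins
assert lookup("203.0.113.5", routes) == "ISP-A"   # aggregate covers the rest
```

Every such move adds one more entry that every affected router must carry and search through, which is exactly the table growth described above.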
IPv6 was designed to overcome this problem too. The basic idea is that IPv6 addresses are no longer the property of the firm having the end systems. The addresses belong to the ISP, whose geographical location (or its relative location in the ISP map) is well defined. Thus, addresses are allocated (to these ISPs) in a way such that no prefix exceptions are required while forwarding packets. This has two important side effects which are, I think, additional reasons hindering the adoption of IPv6.
- The fact that firms with end systems don’t own their addresses, and the resulting dynamic nature of IP addresses, requires a change in the firm’s DNS configuration whenever the firm moves geographically. This might not seem like a big hassle, but with IPv4 it was not present at all.
- Whenever a firm leases lines from two or more ISPs, it may have two or more addresses per end host system. Packets carrying one of these destination addresses will never go through the other ISP (an ISP will not forward packets destined for another provider’s prefix). Thus, to take advantage of multiple ISPs leasing lines to a firm, one must deal with each of the address sets the ISPs give the firm. IPv6 supports having multiple IP addresses on a given network interface, and DNS can be configured so that a name on the Internet resolves to multiple IP addresses. However, this again puts more administrative stress on the DNS.
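The points above can be sketched with a hypothetical dual-homed firm. The name, addresses (from the 2001:db8::/32 documentation prefix) and the `reachable` callback are all made up; the sketch only shows that with per-provider addresses, retrying the other ISP’s address is the application’s burden:

```python
# Hypothetical firm reachable through two ISPs: each ISP assigns its own
# IPv6 prefix, so every host ends up with one address per provider.
# Addresses use the 2001:db8::/32 documentation prefix, not real ones.
dns = {
    "www.example.com": [
        "2001:db8:a::10",   # address from ISP A's prefix
        "2001:db8:b::10",   # address from ISP B's prefix
    ],
}

def connect_with_fallback(name, reachable):
    """Try each address for the name in order; 'reachable' simulates
    which provider paths are currently up."""
    for addr in dns[name]:
        if reachable(addr):
            return addr
    raise ConnectionError(f"no path to {name}")

# While ISP A's path is up, the first address works.
assert connect_with_fallback("www.example.com", lambda a: True) == "2001:db8:a::10"

# If ISP A's path is down, the application itself must retry via ISP B:
# with plain TCP this failover logic lives in every client.
assert connect_with_fallback(
    "www.example.com", lambda a: a.startswith("2001:db8:b")
) == "2001:db8:b::10"
```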
Firms often lease lines from multiple ISPs to improve the reliability and performance of their Internet access. However, given the separation between the addresses each ISP will forward, we lose this reliability and performance advantage. It is as if the network sheds another piece of its “maintenance intelligence” and hopes that an end-to-end mechanism will deal with it (very much like the end-to-end TCP congestion control we have on the Internet now). Fortunately, such an end-to-end technology already exists. SCTP, or Stream Control Transmission Protocol, is a transport protocol for the Internet like TCP. In addition to the features present in TCP, it supports transparent multihoming, which is a fancy term for “multiple IP addresses”. Thus, an SCTP association could connect to all the IPv6 addresses of a remote IPv6-capable system (which has two or more ISPs providing it Internet access) and fail over transparently from one set of IP addresses to another. Hopefully, because the abstraction for connecting to a site in both the IPv4/TCP and IPv6/SCTP cases remains the same up to the DNS resolution [addr_abstraction=gethostbyname(site_name); connect(addr_abstraction)], migration to this new paradigm will not be difficult.
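A toy model of what SCTP’s transparent multihoming buys the application: one association bound to several peer addresses, failing over without the caller changing its API. This only simulates the behaviour — real SCTP path monitoring and failover live in the kernel (e.g. Linux lksctp), not in application code — and the addresses are documentation-prefix placeholders:

```python
# Toy model of SCTP-style transparent multihoming. One "association"
# holds all the peer's addresses (one per ISP) and switches paths on
# failure without the caller noticing. This is a simulation only; real
# SCTP does this in the kernel, below the application.
class Association:
    def __init__(self, peer_addrs):
        self.peer_addrs = list(peer_addrs)   # all remote addresses
        self.primary = 0                     # index of the current primary path

    def send(self, data, path_up):
        """Send over the primary path; on failure, rotate through the
        other addresses the way SCTP's path-failure detection would."""
        for _ in range(len(self.peer_addrs)):
            addr = self.peer_addrs[self.primary]
            if path_up(addr):
                return (addr, data)          # delivered via this address
            self.primary = (self.primary + 1) % len(self.peer_addrs)
        raise ConnectionError("all paths down")

# One peer address per ISP (documentation-prefix placeholders).
assoc = Association(["2001:db8:a::1", "2001:db8:b::1"])

# Both paths up: traffic uses the first address.
assert assoc.send(b"hi", lambda a: True)[0] == "2001:db8:a::1"

# ISP A fails: the association moves to ISP B, but the caller still just
# calls assoc.send() -- the failover is transparent to it.
assert assoc.send(b"hi", lambda a: a.startswith("2001:db8:b"))[0] == "2001:db8:b::1"
```

Contrast this with the previous bullet’s TCP-style fallback, where each application had to loop over addresses itself; here the loop sits inside the association, which is the whole point of pushing multihoming into the transport layer.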