Gigabit Ethernet: A Replacement for ATM?

May 1, 1997


Abstract

As applications with high bandwidth and real-time network demands become more popular, it is necessary to reevaluate the implementation of local area networks. Although ATM was initially considered by many to be the solution of the future, it has been slow in gaining acceptance in a community dominated by connectionless networks. With the introduction of Gigabit Ethernet, the bandwidth advantage of ATM is no longer so clear. This paper evaluates Gigabit Ethernet as an alternative to ATM in the local area network, comparing the two technologies on the basis of implementation, scalability, and quality of service. Analysis indicates that ATM's technological advantage will not be sufficient to capture the LAN market, but it will sustain ATM as the solution for especially demanding environments.

Background

The last several years have seen new demands placed on networks. Applications ranging from the Web to Network Video have pushed networks to the limit in both quality of service (QoS) and overall bandwidth. Conventional wisdom has been that asynchronous transfer mode (ATM) is the technology to solve these problems in Local Area Networks (LANs) [Hurwicz 97a]; however, complications supporting the Internet Protocol (IP) and emulating the IEEE 802 LAN standards, as well as high cost, have prevented ATM from dominating the marketplace.

Meanwhile, new technology from the Ethernet community threatens to replace ATM as the network of the future. According to supporters, Gigabit Ethernet will deliver 1000Mbps bandwidth without the complexities of ATM. Even though the IEEE 802.3z Gigabit Task Force has yet to complete and approve a standard, Gigabit Ethernet products have been introduced by several vendors, including NBase Communications and Rapid City Communications, and an avalanche of additional products is expected to reach market within a year. Supporting this movement is the Gigabit Ethernet Alliance, a conglomeration of over 110 companies pushing Gigabit Ethernet through the standardization process [Gigabit Ethernet Alliance 97]. As network administrators seek solutions for application needs, several questions about cost, scalability, and quality of service must be answered.

At its core, Gigabit Ethernet is a 1000Mbps extension to the 10Mbps and 100Mbps IEEE 802.3 Ethernet standards. Gigabit Ethernet supports new full-duplex operating modes for switch-to-switch and switch-to-end-station connections and half-duplex operating modes for shared connections using repeaters and the CSMA/CD access method. Initially operating over optical fiber, Gigabit Ethernet will also be able to use Category 5 unshielded twisted-pair cabling and coax [Gigabit Ethernet Alliance 96]. The term most often refers to high-capacity 10Mbps/100Mbps switches that also support 1000Base-X ports [Cohen 96], although Gigabit Ethernet products also include network interface cards, all-gigabit repeaters, and routers [Strom 96].

Gigabit Ethernet technology is most applicable to three cases: switch-to-switch connections, server-to-switch connections, and endstation-to-concentrator connections [Tolly 97]. Switch-to-switch boils down to a fatter pipe between points of congestion. Each switch may support dozens of 10Mbps or 100Mbps Ethernet connections; a Gigabit Ethernet is an aggregation of these links. Server-to-switch connections help eliminate the network bottleneck from high-performance servers. Such servers may simultaneously support hundreds of end users; Gigabit Ethernet keeps data from merely trickling in and out. Finally, endstation-to-concentrator connections bring gigabit bandwidth to the desktop, where the half-duplex CSMA/CD implementation is most likely to be deployed. Although few desktop workstations can actually handle this level of throughput, it may be appropriate for especially demanding applications.

Choosing between Gigabit Ethernet and ATM for LAN solutions requires a careful study of each technology and its applicability to individual environments. However, several widely accepted high-level truths, chiefly matters of economics and feasibility, will inevitably factor into the decision.

Once these economic and feasibility issues are considered, attention must be diverted to the technical features of each platform.

Issues

The decision for or against Gigabit Ethernet comes down to three key technical issues, each of which must be compared against ATM: implementation, scalability, and quality of service.

Supporters claim that low implementation cost is one of Gigabit Ethernet's strongest features. The low cost is based on two factors: the cost of the actual hardware and an easy upgrade from current Ethernet platforms. However, the cost case for Gigabit Ethernet may not be so clear-cut. A number of technical issues must be resolved by the IEEE standards committee in order for Gigabit Ethernet to support CSMA/CD without a serious degradation in bandwidth. Further, it is not obvious that an upgrade from an existing Ethernet to Gigabit Ethernet will be any easier than an upgrade to ATM.

One of the selling points for ATM has always been its scalability. While the Ethernet approach to scalability is massive bandwidth, ATM offers more options for network management. If an Ethernet link becomes overloaded, there are two options: upgrade the link to a higher bandwidth or reorganize the network to alleviate the load. Although Gigabit Ethernet provides sufficient bandwidth today, how it will scale to future network demands is unclear.

With the proliferation of applications such as audio-video conferencing, QoS demands are growing in importance. ATM, as a connection-oriented technology, can provide QoS guarantees. Gigabit Ethernet supporters argue that, first, protocols such as RSVP, RTP, and IPv6 will provide sufficient QoS for most applications and, second, Gigabit Ethernet provides so much bandwidth that the QoS problem disappears.

Analysis

Implementation

More bandwidth for less money is a compelling argument in favor of Gigabit Ethernet. As Table 1 projects, end-user, per-port prices for backbone and LAN-segmentation switches for Gigabit Ethernet will fall below those of comparable ATM hardware offering less bandwidth [Hurwicz 97b]. Gigabit Ethernet supporters argue that the case does not end here. For sites with 10Mbps and 100Mbps Ethernet already in use, Gigabit Ethernet employs the same CSMA/CD protocol, same frame format, and same frame size. Thus, extending an existing network investment to gigabit speeds involves a reasonable initial cost with essentially no reinstrumentation of the network administration infrastructure or retraining of staff. There is no need for additional protocol stacks or middleware, and LAN emulation for desktop links is not a concern [Gigabit Ethernet Alliance 96].

Year   10Mbps Ethernet   100Mbps Ethernet   Gigabit Ethernet   155Mbps ATM
1996   $612              $785               N/A                $2109
1997   $400              $628               $2200              $1898
1998   $340              $534               $1540              $1613
2000   $266              $462               $809               $1033

Table 1. Projected end-user, per-port prices for backbone and LAN-segmentation switches. Source: BYTE Magazine [Hurwicz 97b].

ATM, on the other hand, requires new tools and retraining, and LAN emulation is one of its most troubling aspects. The fundamental complication with ATM as a LAN framework is that ATM is connection-oriented while traditional LANs are connectionless. For LAN-based operations to run across an ATM LAN, a connectionless environment must be emulated. A physical LAN segment can be emulated by connecting a group of end stations on the ATM network to an ATM multicast virtual connection, which emulates the broadcast physical medium of the IEEE 802 LAN. Membership in the LAN is determined by logical attachment to this virtual connection, and any station may use it to broadcast to all others on the ATM LAN segment [Newman 94]. LAN emulation is gaining acceptance but remains awkward and complex. It is largely these complications, as well as prohibitive costs, that have hindered ATM's progress to the desktop.

While ATM LANs are certainly complex, the transition to Gigabit Ethernet may not be as easy or rewarding as it seems. First of all, the data in Table 1 does not take into account wiring. The initial Gigabit Ethernet standard specifies a Fibre Channel physical layer [IEEE 802.3z 97]. Most existing 10Mbps and 100Mbps Ethernet installations are based on unshielded twisted-pair (UTP). Unfortunately, a UTP standard for Gigabit Ethernet could trail the Fibre Channel standard by more than a year [Hurwicz 97b]. In the meantime, the cost of installing fiber-optic cabling will significantly raise the overall price of Gigabit Ethernet and bring the complexity of upgrading closer to that of ATM.

Even more interesting than the cost of deploying Gigabit Ethernet technology is the question of how much bandwidth it actually delivers. One factor that affects bandwidth is collision detection in half-duplex environments. The IEEE 802.3z Gigabit Task Force elected to continue support of CSMA/CD. According to the CSMA/CD standard, the worst-case round-trip delay of the network must be less than or equal to the transmission time of the shortest legal frame. Since signal propagation speed is roughly the same through the various physical layers, the greater the network bandwidth, the smaller its maximum span. For instance, the maximum spans of 10Mbps and 100Mbps Ethernet are approximately 2000 and 200 meters, respectively. Along these lines, the maximum span of a Gigabit Ethernet would be about 20 meters, which is too short for general-purpose use.
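The relationship between bandwidth and span can be sketched with a short calculation. The propagation speed below is an assumed round figure, and the model ignores repeater and encoding delays that consume much of the real timing budget, so it yields loose upper bounds above the practical spans quoted here; the tenfold shrinkage per tenfold bandwidth increase, however, holds exactly.

```python
# Upper bound on half-duplex Ethernet span from the CSMA/CD constraint:
# the round-trip propagation delay must not exceed the transmission time
# of the shortest legal frame. The propagation speed is an assumed round
# figure, and repeater/encoding delays (which consume much of the real
# timing budget) are ignored, so practical spans are smaller.

PROPAGATION_SPEED = 2.0e8  # meters/second in copper or fiber (assumed)

def max_span_m(bandwidth_bps, min_frame_bytes=64):
    """One-way span such that a round trip fits in one minimum frame."""
    frame_time = min_frame_bytes * 8 / bandwidth_bps  # seconds on the wire
    return frame_time * PROPAGATION_SPEED / 2         # round trip -> one way

for rate, label in [(10e6, "10Mbps"), (100e6, "100Mbps"), (1e9, "1000Mbps")]:
    print(f"{label}: at most ~{max_span_m(rate):.0f} m")
```

Each tenfold increase in bandwidth shrinks the permissible span tenfold, which is precisely what motivates raising the minimum frame size: multiplying the minimum frame by eight recovers a factor of eight in span.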

The IEEE 802.3z draft describes a solution for this problem. First, the length of the shortest legal frame is increased from 64 bytes to 512 bytes. The increase is enforced at the MAC sublayer, where a sequence of carrier extension bits is appended to any frame smaller than the legal minimum. An extension bit is a data signal recognizable by remote adaptors as filler for a transmission window [IEEE 802.3z 97].

The carrier extension bit solution is simple and effective. It is straightforward to implement and increases the potential network span from 20 to 200 meters. However, transmitting extension bits rather than real data will result in low network utilization for small frames. For instance, a 64-byte frame, the minimum in 10Mbps and 100Mbps Ethernet, will transmit 64 bytes of data and 448 bytes of extension bits, for an efficiency of just 12.5%. Consequently, an enhancement to the carrier extension bit solution, which increases network utilization for small packets, has been incorporated into the IEEE 802.3z draft. Specifically, after an initial frame padded with carrier extension bits, senders may transmit additional non-padded frames without relinquishing control of the medium. This technique is called a packet burst. Essentially, the initial frame is extended as necessary to ensure that there are no collisions. Then, all ensuing frames are delimited by special bit sequences to identify the packet burst to receiving stations. The packet burst is limited by a maximum burst length, after which the sender must again contend for the medium [IEEE 802.3z 97].

Using carrier extension bits with packet bursts, it is possible to extend the Gigabit Ethernet collision span to 200 meters, while maintaining 30 to 40 percent utilization for small frames and perhaps as high as 90 percent for large frames [Hurwicz 97b]. While better knowledge of packet size distribution is necessary to fully evaluate this advance, even the small packet network utilization in Gigabit Ethernet represents a bandwidth two to eight times that of 100Mbps Ethernet [Weizman 97]. In practice, almost all Gigabit Ethernet products will eliminate this problem by concentrating on full-duplex switched environments. Since there are only two stations per segment and each has a clear channel, collisions cannot occur. Moreover, the fiber-based physical layer will support distances from 500 meters up to 2 kilometers [Lo 97]. Nonetheless, CSMA/CD technology will continue to be important if Gigabit Ethernet is to be deployed economically to the desktop.
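The utilization figures above can be approximated with a simple model. The 512-byte minimum carrier event follows the IEEE 802.3z draft, but the model ignores interframe gaps, preamble, and burst delimiters, so it somewhat overstates real utilization; the burst length of eight frames is an assumed round figure.

```python
# Utilization estimate for half-duplex Gigabit Ethernet: the first frame
# of a burst is padded out to the 512-byte minimum carrier event with
# extension bits; subsequent frames in the burst are sent unpadded.
# Interframe gaps and burst delimiters are ignored for simplicity.

SLOT_BYTES = 512  # minimum carrier event after carrier extension

def utilization(frame_bytes, frames_per_burst=1):
    """Fraction of on-the-wire bytes that are real data."""
    data = frame_bytes * frames_per_burst
    wire = max(frame_bytes, SLOT_BYTES) + frame_bytes * (frames_per_burst - 1)
    return data / wire

print(f"64-byte frame, no burst:  {utilization(64):.1%}")     # 12.5%
print(f"64-byte frames, 8-burst:  {utilization(64, 8):.1%}")
print(f"1024-byte frame:          {utilization(1024):.1%}")
```

Bursting raises small-frame utilization several-fold, while frames at or above the 512-byte slot carry no extension overhead at all, consistent with the much higher utilization quoted for large frames.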

Even if Gigabit Ethernet can be deployed to deliver optimal bandwidth to end nodes, it is questionable whether the bandwidth is worth the additional cost. Consider the case of a Gigabit Ethernet linking client hosts to a centralized server. A critical point is not the bandwidth, but rather how the bandwidth is handled on the transmit and receive end of the server's network interface card (NIC), as well as on the server bus and in its processor. Any of these can become a bottleneck, delaying handling of incoming traffic. For instance, if the server bus gets congested, it will cause delays and back up the NIC. If the host NIC cannot handle incoming traffic from simultaneous users, it will start dropping packets. With dropped packets come retransmissions, which further exacerbate the situation [MacAskill and Le Baron 96]. The bottom line is that the increased bandwidth offered by Gigabit Ethernet should be wisely deployed with careful consideration for its purpose and effects on the environment.

Scalability

The Gigabit Ethernet Alliance claims that the Fast Ethernet standard (100Mbps) established Ethernet as a scalable technology [Gigabit Ethernet Alliance 96]. That is, the fact that the same frame format, protocol stack, and in some cases physical layer can support 10Mbps, 100Mbps, and even 1000Mbps Ethernet establishes the underlying Ethernet standard as scalable to higher bandwidths. While this may be true, scalability in the ATM realm rests on a more systematic foundation.

The scalability of Ethernet in a LAN is fundamentally limited by the Spanning Tree Algorithm. The algorithm blocks any link that would form a loop in the network topology, leaving exactly one active path between any two nodes. For instance, if the network consisted of three hosts, each connected to the other two to form a triangle, the Spanning Tree Algorithm would shut down one link, effectively reducing the total capacity of the network by one third.
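The triangle example can be made concrete with a toy spanning-tree computation. This is a minimal Kruskal-style union-find sketch, not the IEEE 802.1D protocol itself:

```python
# Toy spanning tree over three fully meshed switches: only two of the
# three links stay active, because the third would close a loop.

def spanning_tree(nodes, links):
    """Return the subset of links kept active by a spanning tree."""
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            n = parent[n]
        return n

    active = []
    for a, b in links:
        ra, rb = find(a), find(b)
        if ra != rb:            # link joins two separate components: keep it
            parent[ra] = rb
            active.append((a, b))
    return active               # any link that would form a loop is blocked

links = [("A", "B"), ("B", "C"), ("C", "A")]
active = spanning_tree(["A", "B", "C"], links)
print(f"{len(active)} of {len(links)} links active")   # 2 of 3 links active
```

One third of the installed link capacity sits idle, exactly as described above; in larger meshes the blocked fraction can be considerably worse.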

ATM, on the other hand, is a naturally scalable technology. If one link between two devices is insufficient to carry the load, additional links can be added, providing load sharing and load balancing. There is no need to upgrade the existing link -- the problem is solved by adding a new one. This property of ATM carries over to reliability. For optimum reliability, each device should be directly connected to at least two network switches. ATM supports any such topology. Failover proceeds simply and efficiently; when one link fails, traffic passes over to the other within a few seconds. Again, Ethernet is limited by the Spanning Tree Algorithm. If a link fails, recalculating the spanning tree can take 30 seconds to several minutes, disrupting all users on the network as all links shut down [Bay Networks 96].

Although Ethernet technology continues to deliver links with higher bandwidth, moving more bits is not the only way to solve scalability problems. The design and construction of an ATM LAN may be tricky, but if done well, it will offer more options for scalability and reliability in the long run.

Quality of Service

Ethernet has satisfied the networking needs of a vast majority of users for many years; however, it is a relatively unsophisticated technology. As the ways in which LANs are used evolve to include interactive collaborative applications and desktop videoconferencing, it will become clear that not all network problems can be solved with greater bandwidth. ATM offers significant technical advantages in this realm. First, ATM traffic is divided into 53-byte cells, allowing a fine-grained blending of different traffic streams. In addition, since the cell size is fixed, the mixed traffic flow is smoother, with smaller delays. Ethernet frames, on the other hand, vary from 64 bytes to about 1500 bytes (or more with packet bursts). The result is a less predictable mix of traffic streams, regardless of bandwidth [Hurwicz 97b]. Fixed-length cells also facilitate the implementation of network switches. When the size of incoming data is known in advance, switching problems become easier. Easier problems can be solved efficiently in hardware and with a high degree of parallelism. This is not to say that hardware and parallelism are useless in switching variable-length frames, but the task is certainly easier with cells. Small fixed-length cells also provide better control over queues [Peterson and Davie 96]. Overall, by smoothing traffic over the network and increasing the efficiency of switches, ATM networks can offer better guarantees on deliverable latency and bandwidth.

The ATM QoS advantage does not stop with fixed cells. ATM, as a connection-oriented platform, has built-in support for bandwidth reservation. By signalling and reserving bandwidth, applications can obtain guarantees on bandwidth and latency. Unfortunately, the path through the network must be completely ATM in order to fully utilize its QoS guarantees. Thus, rather than deploying ATM as a LAN backbone technology, it must run all the way to the desktop -- an expensive and complex process. Running lightly loaded or full-duplex high bandwidth Ethernet links from desktop to ATM backbone is a more economical solution, but it sacrifices some of the benefits of ATM.

The need for QoS guarantees has not gone unnoticed in the Ethernet community. As a connection-less platform, Ethernet relies on higher-level protocols to manage its traffic flow. At Layer 3, for instance, the Internet Engineering Task Force (IETF) is standardizing the Resource ReSerVation Protocol (RSVP). RSVP is a protocol by which receiving nodes can signal QoS requests through the network. At each node between the receiving host and the data source, the QoS request is delivered to an RSVP daemon. The RSVP daemon decides whether to grant the request based on two factors: policy control and admission control. Policy control determines whether the user has sufficient privileges to make the request, while admission control determines whether the node has the resources to grant the request. If either check fails, the RSVP daemon returns a denial message to the requesting node. Otherwise, the RSVP daemon records packet classification and scheduling data in order to identify packet streams and satisfy the desired QoS. The RSVP daemon then forwards the resource request along the next hop toward the source of the data. At nodes where QoS requests converge, such as in multicast trees, they are automatically merged [Zappala 96].
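The daemon's two-stage decision procedure described above can be sketched as follows. The class name, message strings, and capacity figures are hypothetical illustrations, not part of the RSVP specification; real implementations operate on flow specifications rather than a single bandwidth number.

```python
# Sketch of the two checks an RSVP daemon applies to a reservation
# request: policy control (privileges), then admission control
# (resources). All names and figures here are illustrative.

class RsvpNode:
    def __init__(self, capacity_bps, authorized_users):
        self.free_bps = capacity_bps             # admission-control budget
        self.authorized = set(authorized_users)  # policy-control database
        self.reservations = {}                   # flow id -> reserved bandwidth

    def request(self, flow_id, user, bandwidth_bps):
        # Policy control: does this user have sufficient privileges?
        if user not in self.authorized:
            return "policy-denial"
        # Admission control: does the node have the resources?
        if bandwidth_bps > self.free_bps:
            return "admission-denial"
        # Record classification/scheduling state, then forward the
        # request along the next hop toward the data source.
        self.reservations[flow_id] = bandwidth_bps
        self.free_bps -= bandwidth_bps
        return "forwarded-upstream"

node = RsvpNode(capacity_bps=10e6, authorized_users={"alice"})
print(node.request("flow1", "alice", 6e6))  # forwarded-upstream
print(node.request("flow2", "alice", 6e6))  # admission-denial (4Mbps left)
print(node.request("flow3", "bob", 1e6))    # policy-denial
```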

RSVP has several interesting features that differ from ATM. First, RSVP maintains the robustness of connectionless networks by keeping soft state rather than hard state. Soft state does not need to be explicitly deleted when it is no longer needed, whereas hard state remains until explicitly dropped. Soft state is dropped via timeout -- that is, it is deleted if it is not periodically refreshed. The first result of this approach is that connections and routers can crash and be restored with little residual damage. Connections will be reestablished, and no abandoned reservations will linger. Another interesting result is that reservations can be more dynamic. The periodic refresh messages used to prevent timeouts can also be used to request new levels of resources.
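The soft-state behavior can be illustrated with a toy refresh-and-timeout table. The timeout value and tick granularity are arbitrary choices for illustration:

```python
# Toy model of RSVP-style soft state: a reservation survives only as
# long as refresh messages keep arriving. No explicit teardown message
# is ever required; silent entries simply time out.

REFRESH_TIMEOUT = 3  # ticks of silence before state is dropped (assumed)

class SoftStateTable:
    def __init__(self):
        self.state = {}  # flow id -> ticks since last refresh

    def refresh(self, flow_id):
        self.state[flow_id] = 0  # a refresh also (re)installs the state

    def tick(self):
        # Age every entry and silently drop any that has timed out.
        for flow in list(self.state):
            self.state[flow] += 1
            if self.state[flow] >= REFRESH_TIMEOUT:
                del self.state[flow]

table = SoftStateTable()
table.refresh("flow1")
table.refresh("flow2")
for _ in range(2):
    table.tick()
    table.refresh("flow1")  # flow1 keeps refreshing; flow2 goes silent
table.tick()
print(sorted(table.state))  # only the refreshed flow survives
```

A crashed sender simply stops refreshing and its reservation evaporates, which is the robustness property claimed above; a hard-state design would instead need an explicit teardown to reach every node.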

Another significant difference is that RSVP takes a receiver-oriented approach. While reservations in RSVP are made by receivers, ATM reservations are made by the sender. The receiver-oriented approach is especially applicable to multicast, where the receivers typically outnumber the senders. Different receivers may have different QoS needs. For instance, some may want only the audio portion of a videoconference, and others may want data from only one sender. In an ATM multicast implementation, the sender must keep track of all these characteristics, whereas in an RSVP implementation, receivers manage their own requests [Peterson and Davie 96].

Although the outlook for RSVP and similar protocols is favorable, their ability to propel Ethernet into the QoS world is currently limited by several factors. First, RSVP has not yet been standardized by the IETF, whereas ATM standards for reserving resources in a network are established and in use. Second, just as implementing an ATM network requires additional training, integrating RSVP into an Ethernet-based environment is not automatic. Last, due to fundamental QoS limitations in Ethernet technology, RSVP cannot deliver guarantees as solid as ATM's. Despite these issues, it is unlikely that QoS limitations will slow the proliferation of Ethernet. The majority of LAN segments today do not carry critical real-time video or voice streams, rendering ATM's advantages largely irrelevant in those environments. Even where real-time video and voice streams do occur, Ethernet delivery can be quite smooth if bandwidth is plentiful compared to demand [Hurwicz 97b].

Conclusions

Networking with Ethernet is like watching VHS videos on a VCR [Buerger 96]. The technology is old and second-rate, but the sheer numbers of VHS users and suppliers have made it virtually synonymous with renting a movie. Even in competition with superior products, such as laserdiscs, VHS continues to dominate. Similarly, few dispute that Ethernet is not as technologically advanced as ATM. It cannot provide the same guarantees in terms of scalability, reliability, or quality of service. However, it comes close, and for the myriad of network administrators comfortable with Ethernet, close is good enough. Ethernet will continue its domination of LANs, with Gigabit Ethernet links on backbones, server uplinks, and other connections where especially high bandwidth is needed.

The question remains whether ATM will fizzle and disappear from local area networks. Fortunately for ATM, its virtues are recognized by discerning network administrators who have the budget and expertise to implement it. As a result, ATM will continue to be the solution where scalability and QoS are especially important and implementation complexities are not. Hence, rather than one technology obliterating the other, Gigabit Ethernet and ATM will continue to exist as complementary networking solutions.

References

[Bay Networks 96] Bay Networks, "White Paper: Personal Networking," 1996.

[Buerger 96] D. Buerger, "And the winning fast LAN of the future is ... Ethernet," Network World Fusion, May 13, 1996.

[Cohen 96] J. Cohen, "Getting ready for gigabit Ethernet," Network World Fusion, May 27, 1996.

[Gigabit Ethernet Alliance 96] Gigabit Ethernet Alliance, "White Paper: Gigabit Ethernet," August 1996.

[Gigabit Ethernet Alliance 97] Gigabit Ethernet Alliance, "Gigabit Ethernet Alliance To Demonstrate Gigabit Ethernet Technology from Multiple Member Companies for the First Time at NetWorld+Interop," April 29, 1997.

[Hurwicz 97a] M. Hurwicz, "Faster, Smarter Nets," BYTE, April 1997, pp. 83-88.

[Hurwicz 97b] M. Hurwicz, "Ethernet with an Attitude," BYTE, April 1997, pp. 88NA3-88NA8.

[IEEE 802.3z 97] LAN MAN Standards Committee of the IEEE Computer Society, "IEEE Draft P802.3z/D2 Supplement to Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method & Physical Layer Specifications: Media Access Control (MAC) Parameters, Physical Layer, Repeater and Management Parameters for 1000 Mb/s Operation," IEEE Standards Department, Piscataway, NJ, February 19, 1997.

[Lo 97] S. Lo, "Inside Gigabit Ethernet," BYTE, May 1997, pp. 55-56.

[MacAskill and Le Baron 96] S. MacAskill and M. Le Baron, "What makes good network design?" Network World Fusion, May 13, 1996.

[Newman 94] P. Newman, "ATM Local Area Networks," IEEE Communications Magazine, March 1994.

[Peterson and Davie 96] L. L. Peterson and B. S. Davie, Computer Networks: A Systems Approach, Morgan Kaufmann, San Francisco, 1996.

[Strom 96] S. Strom, "Gigabit Ethernet wares, and why," Network World Fusion, December 16, 1996.

[Tolly 97] K. Tolly, "Planning for Gigabit Ethernet," Network World Fusion, April 28, 1997.

[Weizman 97] M. Weizman, "Packet bursting helps Ethernet scale to gigabit-per-second speeds," Network World Fusion, February 10, 1997.

[Zappala 96] D. Zappala, "RSVP Protocol Overview," USC Information Sciences Institute, Available: http://www.isi.edu/div7/rsvp/overview.html, September 1996.