Sustained Data Transfer Rates of GigE Using a 9000 Byte MTU

John Clyne
5/08/00
This document describes the results of sustained data transfer rate performance testing conducted over Gigabit Ethernet (1000BaseFX) with a 9000-byte MTU. Support for the larger 9000-byte MTU is available in IRIX starting with release 6.5.3.
 

Experiments

The experiments were conducted using a locally modified version of nettest. All experiments measure memory-to-memory performance (no disk I/O was performed). Two host systems were involved (described below). The two machines were attached via a Cisco Catalyst 6509 with an 8 port GigE card. While the host systems were reasonably quiescent during this period, no attempt was made to prevent user logins.

The experiments below examine the effect of varying the TCP window size and the request buffer size on sustained transfer rates. The default TCP window size was 186368 bytes. The buffer size is application dependent, hence there is no default value. In all cases, each nettest session completed 40960 transfers of data, each transfer consisting of transfer_size bytes (40960 * transfer_size bytes in total). Three sessions were conducted for each window/buffer pair, and the results reported are the average of the three sessions.
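
For reference, the sketch below shows the basic shape of such a memory-to-memory transfer loop (sender side). It is not the locally modified nettest used for these measurements; the function name, addressing, and error handling are illustrative only. The sender requests a socket buffer (TCP window) of the chosen size and then writes 40960 buffers of transfer_size bytes, all from memory.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #define NTRANSFERS 40960            /* transfers per session, as in the tests above */

    int send_session(const char *host, int port, size_t transfer_size, int window)
    {
        struct sockaddr_in addr;
        char *buf;
        int i, s;

        s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) { perror("socket"); return -1; }

        /* Request a socket buffer (TCP window) of the chosen size. */
        if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &window, sizeof(window)) < 0)
            perror("setsockopt(SO_SNDBUF)");

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        addr.sin_addr.s_addr = inet_addr(host);

        if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect"); close(s); return -1;
        }

        buf = malloc(transfer_size);    /* data comes from memory; no disk I/O */
        if (buf == NULL) { close(s); return -1; }
        memset(buf, 0xA5, transfer_size);

        /* Complete NTRANSFERS transfers of transfer_size bytes each. */
        for (i = 0; i < NTRANSFERS; i++) {
            size_t off = 0;
            while (off < transfer_size) {
                ssize_t n = write(s, buf + off, transfer_size - off);
                if (n <= 0) { perror("write"); free(buf); close(s); return -1; }
                off += (size_t)n;
            }
        }

        free(buf);
        close(s);
        return 0;
    }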

Test Platforms

Experiments were conducted between the following two host systems:
 

Host 1 (Magic)

SGI Onyx2 (Origin2k)
8x250MHz R10k
3 GB RAM
Gigabit Ethernet: eg0, module 1, XIO slot io5, firmware version 12.4.3
IRIX 6.5.7
 

Host 2 (Graywolf)

SGI Octane (Origin2k)
1x250MHz R10k
256 MB RAM
Gigabit Ethernet: eg0, PCI slot 1, firmware version 12.4.3
IRIX 6.5.7
 

IRIX Configuration

  1. The MTU size specified in /var/sysgen/master.d/if_eg was changed from 1500 to 9000 bytes (int eg_mtu[10]); a sketch of the edited entries follows this list.
  2. The buffer wait time specified in /var/sysgen/master.d/if_eg was changed from 72 to 17 microseconds (int eg_recv_coal_ticks[10]).
  3. The TCP send/receive size specified by 'systune' was changed from 61440 to 186368 bytes.
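
The sketch below shows what the edited if_eg entries look like. The exact layout of /var/sysgen/master.d/if_eg varies by IRIX release, and whether every array entry or only the one for eg0 was changed is not recorded here; only the values 9000 and 17 are taken from the configuration above. The changes take effect after a kernel rebuild (autoconfig) and reboot.

    /*
     * Illustrative excerpt of the tunables edited in /var/sysgen/master.d/if_eg
     * (items 1 and 2 above).  Each array holds one entry per eg interface.
     */
    int eg_mtu[10] = {
            9000, 9000, 9000, 9000, 9000,   /* MTU, raised from the 1500-byte default */
            9000, 9000, 9000, 9000, 9000
    };

    int eg_recv_coal_ticks[10] = {
            17, 17, 17, 17, 17,             /* receive buffer wait, lowered from 72 to 17 usec */
            17, 17, 17, 17, 17
    };

    /*
     * The TCP send/receive space (item 3) was changed with systune, not by
     * editing this file; the tunable names (e.g. tcp_sendspace/tcp_recvspace)
     * are assumed here and should be verified with systune on the system.
     */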

Results

The plots shown in Figures 1 and 2 depict transfer rates between the two SGIs, magic and graywolf. The first plot (Figure 1) shows transfers from magic to graywolf; the second (Figure 2) depicts transfer rates from graywolf to magic. Note the asymmetry in performance: transfers to magic perform significantly better than transfers to graywolf. This asymmetry is most likely attributable to graywolf having only a single processor. Note that in both cases performance is largely unaffected by changes in window size (most likely due to the short physical distance between the two machines). Changes in transfer block size have a comparatively modest impact; as expected, larger blocks generally produce better results.
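
The insensitivity to window size is consistent with the small bandwidth-delay product of the path. Assuming a LAN round-trip time on the order of 0.2 ms (an assumed figure; RTT was not measured as part of these tests), a 1 Gbit/s link carries roughly 10^9 bits/s * 0.0002 s / 8 = 25,000 bytes in flight, far less than the 186368-byte default window. Once the window exceeds the bandwidth-delay product the pipe stays full, so further increases cannot improve throughput.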

Figure 1: Transfer rates vs TCP window size for data sent from magic to graywolf


Figure 2: Transfer rates vs TCP Window size for data sent from graywolf to magic


This page maintained by John Clyne (clyne@ncar.ucar.edu)
