Why is this network link so slow?

I am having trouble with network throughput on a Linux server running Ubuntu 9.10. Transfer rates for all kinds of traffic are around 1.5 MB/s on a 1000 Mbit/s wired Ethernet link. This server has achieved 55 MB/s over Samba in the recent past. I have not changed the hardware or network setup. I do run updates regularly, so the latest and greatest from Ubuntu's repositories is running on this machine.

Hardware setup

Windows desktop PC - 1000 switch - 1000 switch - Linux server

All switches are Netgear, and they all show a green light on their links, which indicates a 1000 Mbit/s connection. The lights are amber when the link is only 100 Mbit/s. Other diagnostic details:

[email protected]:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0c:6e:3e:ae:36
          inet addr:192.168.1.30  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:6eff:fe3e:ae36/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:28678 errors:0 dropped:0 overruns:0 frame:0
          TX packets:73531 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2109780 (2.1 MB)  TX bytes:111039729 (111.0 MB)
          Interrupt:22

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:113 errors:0 dropped:0 overruns:0 frame:0
          TX packets:113 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:23469 (23.4 KB)  TX bytes:23469 (23.4 KB)


[email protected]:~# ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pg
        Wake-on: g
        Current message level: 0x00000037 (55)
        Link detected: yes

[email protected]:~# mii-tool
eth0: negotiated 1000baseT-FD flow-control, link ok

The server thinks it has a 1000 Mbit/s link. I have tested the transfer rate by copying files over Samba. I have also used netcat (nc target 10000 < aBigFile) on the server to send to Windows (nc -l -p 10000) and saw similarly poor performance.
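
For reference, one rough way (sticking with netcat, and assuming the builds on both sides accept these arguments) to get a raw TCP throughput number is to push the data through dd so it prints a rate when it finishes:

dd if=aBigFile bs=1M | nc target 10000                 # dd reports MB/s on completion
dd if=/dev/zero bs=1M count=1000 | nc target 10000     # same test without touching the disk

The /dev/zero variant measures only the network path; if it is also stuck around 1.5 MB/s, the disks are clearly not the problem.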

I tested the speed of the disks using hdparm and got:

[email protected]:~# hdparm -tT /dev/md0
/dev/md0:
 Timing cached reads:   1436 MB in  2.00 seconds = 718.01 MB/sec
 Timing buffered disk reads:  444 MB in  3.02 seconds = 147.24 MB/sec

Reading the same file with dd produced the following:

[email protected]:/home/share/Series/New$ dd if=aBigFile of=/dev/null
3200369+1 records in
3200369+1 records out
1638589012 bytes (1.6 GB) copied, 12.7091 s, 129 MB/s

I am puzzled. What could be causing this poor network performance, two orders of magnitude below the roughly 125 MB/s a gigabit link can carry in theory?

0
2019-05-05 20:51:26
Source Share
Answers: 5

You might check for congestion on your network; perhaps some other devices are eating all of your bandwidth?

Beyond that, maybe something is wrong with your network interface and/or its driver. Pretty unusual.
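
One quick way to check for a bandwidth hog (a sketch; iftop is not installed by default, and eth0 is assumed to be the interface in question) is to watch live traffic on the server's link while the rest of the network is supposedly idle:

sudo apt-get install iftop
sudo iftop -i eth0        # live per-connection bandwidth on eth0

If the link shows hundreds of Mbit/s of traffic you didn't start, something else is hogging it; if it is quiet, congestion is not the issue.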

0
2019-05-08 12:55:50
Source

Some things you should consider checking (a sketch of how to inspect each one from the shell follows the list):

  1. Duplex - if one side thinks the link is full duplex and the other side thinks it is half duplex, expect badness.
  2. Faulty switch? Bypass it/them.
  3. Jumbo frames. A 9000-byte MTU reduces overhead, which should increase throughput (at the cost of a little latency). It sounds like your problem is bad enough that this won't help, though.
  4. TCP features: ECN, SACK, congestion control algorithm.
  5. TCP send/receive window sizes (specific to Linux).
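
A sketch of how to inspect items 1, 3, 4 and 5 on the Linux side (eth0 is assumed; the sysctl names exist on Ubuntu 9.10's kernel, but treat the MTU change as an experiment rather than a recommendation - every device in the path has to support it):

ethtool eth0 | grep -E 'Speed|Duplex'        # 1. what the kernel actually negotiated
ifconfig eth0 mtu 9000                       # 3. enable jumbo frames (revert with mtu 1500)
sysctl net.ipv4.tcp_sack net.ipv4.tcp_ecn net.ipv4.tcp_congestion_control   # 4. TCP features
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem   # 5. receive/send window limits (min default max)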

netperf is great for troubleshooting network performance. But netcat's not bad in a pinch.
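
A minimal netperf run, assuming it is installed on both machines and that 192.168.1.20 stands in for the far end:

netserver                                    # on the receiving machine; listens on port 12865
netperf -H 192.168.1.20 -t TCP_STREAM -l 10  # on the server under test; 10-second bulk TCP test

The result is reported in Mbit/s, so anything in the high hundreds is healthy for gigabit; numbers in the low tens point back at the link or the driver.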

0
2019-05-08 11:45:16
Source
  1. Try netstat -i and look for RX/TX errors.
  2. Try netstat -s and look for TCP problems - compare the values before and after the file copy and look for big spikes in resets or retransmits (as sketched below).
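
A sketch of that before/after comparison (the counter names vary a little between kernel versions, so the grep pattern is only an example):

netstat -i                                   # RX-ERR, TX-ERR and DRP columns should stay at 0
netstat -s > /tmp/before.txt
# ... run the slow file copy ...
netstat -s > /tmp/after.txt
diff /tmp/before.txt /tmp/after.txt | grep -i -E 'retrans|reset|fail'
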
0
2019-05-08 10:28:42
Source

If at all possible, to remove most doubt about whether it is indeed an OS/driver/card issue, connect the computers directly with a crossover cable. This takes the switches and other possible networking issues out of the equation.
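
If you do wire them back to back, remember there will be no DHCP on that cable, so give each end a static address on a throwaway subnet first (the addresses here are only examples):

ifconfig eth0 192.168.77.1 netmask 255.255.255.0 up    # on the Linux server, as root
# on the Windows PC, set 192.168.77.2 / 255.255.255.0 in the adapter's TCP/IP settings,
# then repeat the netcat or Samba test against 192.168.77.1

Most gigabit NICs do auto MDI-X, so an ordinary patch cable will usually work if you don't have a crossover cable handy.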

0
2019-05-08 03:01:44
Source

In my professional experience, I've struggled to get really solid network performance out of Samba on GNU/Linux. You said you have achieved 55 MB/s with it, which I believe, so I'm assuming something else is definitely at play.

However, have you tried NFS, FTP and SCP? Are the bandwidth problems consistent across the different protocols? If so, it's probably down at the physical link. If you get inconsistent results, then it's more likely a software problem.
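
A quick way to compare protocols is simply to time the same file over each one; for example (the host, user and paths are placeholders, and the FTP line assumes there happens to be an FTP server on the other machine):

time scp aBigFile user@192.168.1.20:/tmp/              # SSH-encrypted copy
time wget -O /dev/null ftp://192.168.1.20/aBigFile     # plain FTP download; rate printed at the end

If every method tops out around the same 1.5 MB/s, the protocol layer is exonerated.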

Besides trying the other protocols, are you using encryption on the transfer? For example, rsync -z is nice for enabling compression, but it comes with a CPU cost, which can significantly affect the overall transfer rate. If you use SSH with rsync, then you have encryption on top of compression, and your CPU will be under some strain, which can impose severe speed penalties.
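
To see what the compression itself costs, you can time the same transfer with and without -z (a sketch; the host is a placeholder, rsync must be installed on both ends, and the different destination names just force a full copy each time):

time rsync -av  aBigFile user@192.168.1.20:/tmp/test1   # SSH only
time rsync -avz aBigFile user@192.168.1.20:/tmp/test2   # SSH plus compression

If you want SSH's encryption out of the picture entirely, an rsync daemon (rsync:// URLs) or a plain NFS copy gives an unencrypted baseline to compare against.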

0
2019-05-08 01:29:45
Source