NetBSD-Users archive
Re: Network very very slow... was iSCSI and jumbo frames
RVP wrote:
> On Thu, 4 Feb 2021, Michael van Elst wrote:
>
>> joel.bertrand%systella.fr@localhost (BERTRAND Joël) writes:
>>
>>> legendre# dd if=/dev/rsd1a of=/dev/null bs=1m count=1000
>>> 1000+0 records in
>>> 1000+0 records out
>>> 1048576000 bytes transferred in 3.503 secs (299336568 bytes/sec)
>>> legendre# dd if=/dev/rsd1a of=/dev/null bs=1m count=10000
>>> 10000+0 records in
>>> 10000+0 records out
>>> 10485760000 bytes transferred in 407.568 secs (25727633 bytes/sec)
>>
>>
>> That should never happen. Can you verify that /dev/null is a device
>> and not a file?
Of course:
legendre:[/dev] > ls -l null
crw-rw-rw- 1 root wheel 2, 2 Feb 5 08:50 null
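(If /dev/null had turned out to be a regular file, e.g. clobbered by a
stray redirect run as root, it could be recreated; a minimal sketch,
assuming the stock NetBSD MAKEDEV script in /dev:

# Rebuild the standard nodes (null, zero, tty, ...); this recreates
# /dev/null as a character device with major/minor 2,2.
cd /dev && sh MAKEDEV std
)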
> Yes. That puzzled me too; and your idea would explain that speed
> difference.
>
> On Thu, 4 Feb 2021, BERTRAND Joël wrote:
>
>> OK. I have istgt installed on legendre (as I have exactly the same
>> poor performance with iscsi-target). My istgt installation runs fine and
>> provides swap volumes for a lot of diskless workstations. It exports
>> ccd0 device.
>> ...
>> legendre# dd if=/dev/zero of=test bs=1m count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000 bytes transferred in 4.697 secs (223243772 bytes/sec)
>> legendre# dd if=/dev/zero of=test bs=1m count=5000
>> 5000+0 records in
>> 5000+0 records out
>> 5242880000 bytes transferred in 247.150 secs (21213352 bytes/sec)
>>
>
> That is slow. For the istgt target you have to make sure that
> the underlying partition passed to istgt is the raw device and not
> the block one. A test on my 8+ year-old laptop:
>
> #
> # Find out what type of disk device is being used.
> # (dk5 is the last partition, cylinder-wise, on my disk.)
> #
> $ fgrep dk5 /tmp/istgt.conf
> LUN0 Storage /dev/dk5 Auto
My istgt configuration for this partition is:
[LogicalUnit4]
Comment "iSCSI test"
TargetName test
TargetAlias "test loopback legendre"
Mapping PortalGroup1 InitiatorGroup4
AuthMethod Auto
AuthGroup AuthGroup1
UseDigest Auto
UnitType Disk
LUN0 Storage /dev/dk6 Auto
I can change dk6 to rdk6, but that is not a fix for the issue with
iscsid when the NetBSD initiator tries to use an iSCSI target. In my
case, istgt runs as expected, even if its performance could be improved.
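(For reference, RVP's suggestion applied to the unit above would only
change the Storage line; assuming dk6 is the wedge backing this test
LUN, that would be:

LUN0 Storage /dev/rdk6 Auto
)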
> #
> # So, istgt will use a block device. Measure throughput.
> # (I do a seek to start reading the innermost blocks
> # (the slowest ones) on disk.)
> #
> $ sudo dd if=/dev/rsd0 of=/dev/null bs=1m seek=250 count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes transferred in 30.552 secs (34321026 bytes/sec)
>
> $ sudo dd if=/dev/rsd0 of=/dev/null bs=1m seek=250 count=10000
> 10000+0 records in
> 10000+0 records out
> 10485760000 bytes transferred in 305.034 secs (34375708 bytes/sec)
>
> #
> # Not so good.
> # Use raw char. device instead of the block one.
> #
> $ sudo sed -i'' -e 's;/dev/dk5;/dev/rdk5;' /tmp/istgt.conf
>
> $ fgrep dk5 /tmp/istgt.conf
> LUN0 Storage /dev/rdk5 Auto
>
> #
> # 3x improvement in throughput now.
> #
> $ sudo dd if=/dev/rsd0 of=/dev/null bs=1m seek=250 count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes transferred in 10.663 secs (98337803 bytes/sec)
>
> $ sudo dd if=/dev/rsd0 of=/dev/null bs=1m seek=250 count=10000
> 10000+0 records in
> 10000+0 records out
> 10485760000 bytes transferred in 108.837 secs (96343706 bytes/sec)
>
> Recommendations:
> 1. Except when mounting, use the raw devices instead of the block
> devices everywhere.
OK.
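(A quick way to check which flavour a path refers to, sketched here
with RVP's dk5 names: the mode string in ls(1) output begins with 'b'
for the buffered block device and 'c' for the raw character device.

$ ls -l /dev/dk5 /dev/rdk5
)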
> 2. Try disabling `estd'. Since iSCSI is somewhat CPU-bound also,
> `estd' keeping the CPU freq. low will make a difference with
> read/write throughput.
I have tried, but even when the CPU frequency stays at 800 MHz,
[system] CPU time doesn't exceed 10%.
With the CPU frequency set to 3400 MHz:
legendre# dd if=/dev/zero of=/opt/test bs=1m count=5000
^C1468+0 records in
1467+0 records out
1538260992 bytes transferred in 159.727 secs (9630563 bytes/sec)
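(For anyone repeating the test, a minimal sketch of disabling estd and
pinning the frequency by hand. This assumes an Intel CPU whose driver
exposes the est(4) sysctl nodes; check `sysctl machdep` for what your
machine actually provides:

# Stop the scaling daemon, list the available frequencies, then
# pin the CPU at the top one (3400 in the test above).
/etc/rc.d/estd stop
sysctl machdep.est.frequency.available
sysctl -w machdep.est.frequency.target=3400
)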
> My CPU was always at 800MHz even when all these tests were going
> on. Throughput increased by a few MB with the CPU speed set to max.
JKB