NetBSD-Users archive
Re: Network very very slow... was iSCSI and jumbo frames
RVP wrote:
>> legendre# dd if=/dev/zero of=/opt/bacula/test.dd count=10 bs=100m
>> 10+0 records in
>> 10+0 records out
>> 1048576000 bytes transferred in 53.396 secs (19637725 bytes/sec)
>
> This is odd. How much RAM do you have on the NetBSD client? And
> how much disk space is free?
To clarify: /opt is my iSCSI target.
legendre# df -h
Filesystem      Size   Used  Avail  %Cap  Mounted on
/dev/raid0a      31G   3.3G    26G   11%  /
/dev/raid0e      62G    25G    34G   42%  /usr
/dev/raid0f      31G    23G   6.0G   79%  /var
/dev/raid0g     252G    48G   191G   20%  /usr/src
/dev/raid0h     523G   209G   288G   42%  /srv
/dev/dk0        3.6T   2.4T   1.0T   70%  /home
kernfs          1.0K   1.0K     0B  100%  /kern
ptyfs           1.0K   1.0K     0B  100%  /dev/pts
procfs          4.0K   4.0K     0B  100%  /proc
tmpfs           4.0G    28K   4.0G    0%  /var/shm
/dev/dk5         11T   8.9T   1.1T   88%  /opt
raid0 and dk0 (RAID 1) are local disks; dk5 is on my QNAP NAS.
legendre# iscsictl list_targets
1: iqn.2004-04.com.qnap:ts-431p2:iscsi.euclide.3b96e9 (QNAP Target)
2: 192.168.12.2:3260,1
legendre# dkctl dk5 getwedgeinfo
dk5 at sd0: bacula
dk5: 23028563901 blocks at 34, type: ffs
In dmesg, the kernel reports:
[ 82537.241741] scsibus0 at iscsi0: 1 target, 16 luns per target
[ 82537.241741] sd0 at scsibus0 target 0 lun 0: <QNAP, iSCSI Storage, 4.0> disk fixed
[ 82537.271751] sd0: fabricating a geometry
[ 82537.271751] sd0: 10980 GB, 11244416 cyl, 64 head, 32 sec, 512 bytes/sect x 23028563968 sectors
[ 82537.291758] sd0: fabricating a geometry
[ 82537.321769] sd0: GPT GUID: a5d27c7c-8eda-40e8-a29b-e85a539a5bc7
[ 82537.321769] dk5 at sd0: "bacula", 23028563901 blocks at 34, type: ffs
[ 82537.331773] sd0: async, 8-bit transfers, tagged queueing
> If you don't have free RAM, or a lot of free disk space to spare, then
> dd(1) allocating a 100MB buffer for I/O might slow things down, esp.
> as you haven't passed `iflag=direct oflag=creat,direct' to bypass the
> FS caches. Still, this is odd. Creating a 1GB file should not take so
> long.
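For reference, the cache-bypassing run suggested above would look roughly like this. /tmp/ddtest is a stand-in for the iSCSI-backed path, the byte-count block size is used instead of `100m' for portability, and iflag/oflag support is assumed from a recent NetBSD dd(1):

```shell
# Baseline: 100 MB written through the buffer cache.
dd if=/dev/zero of=/tmp/ddtest bs=1048576 count=100

# Same write bypassing the FS cache, as suggested upthread. Guarded so
# the sketch degrades gracefully on a dd without these operands (e.g.
# GNU dd has no oflag=creat).
dd if=/dev/zero of=/tmp/ddtest iflag=direct oflag=creat,direct \
    bs=1048576 count=100 2>/dev/null ||
    echo "this dd does not support iflag=direct/oflag=creat,direct"

rm -f /tmp/ddtest
```

Comparing the two transfer rates shows how much of the observed throughput is the buffer cache rather than the iSCSI path itself.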
>
>> legendre# dd if=/dev/zero of=/opt/bacula/test.dd count=10 bs=1000m
>> 10+0 records in
>> 10+0 records out
>> 10485760000 bytes transferred in 1026.927 secs (10210813 bytes/sec)
>> legendre#
>
> This is an atypical workload. 1GB of RAM is tied up by dd(1) for its
> I/O _and_ then you're creating a 10GB file which means the system is
> definitely allocating doubly/triply-indirected blocks. FS performance
> in such cases would not be optimal. On my directly-connected old laptop
> SATA disk I get 1/2 the speed of the `bs=10m' case (~80 MB/sec).
>
> I wouldn't be surprised if the OS had started using swap during this
> dd run.
It doesn't swap; I have verified this.
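For anyone wanting to repeat the check: on NetBSD, swap usage is visible with swapctl(8); the fallback here is only for non-NetBSD systems:

```shell
# List swap devices and how much of each is in use.
# swapctl(8) is the NetBSD tool; /proc/swaps is a fallback for
# systems (e.g. Linux) that lack it.
swapctl -l 2>/dev/null ||
    cat /proc/swaps 2>/dev/null ||
    echo "no swap information available"
```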
>> And I don't understand: if the iSCSI target or the RAID 6 subsystem on the
>> QNAP were the bottleneck, the CPU load should be greater than 1, and that's
>> not the case.
>
> Use top(1) on the NAS; looking at the CPU load alone is not very useful here.
top(1) is very strange on the QNAP, but CPU usage doesn't reach 10%.
>> Maybe I will try to add an SSD as a cache (but I'm not sure this NAS
>> supports caching over USB3).
>
> How're the disks connected to the client (SATA?) and to the NAS (USB3?)?
The NAS is connected to the client by a direct gigabit iSCSI link
(cat6e cable, no switch in between). All disks are Toshiba SATA drives
(7200 rpm, with 64 MB or 128 MB of cache; I don't remember which).
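To separate the disk path from the wire, a raw TCP throughput test over the same link would help. A sketch assuming iperf3 is installed on both ends (the NAS address 192.168.12.2 is taken from the iscsictl output above):

```shell
# On the NAS (receiver):
#   iperf3 -s
# On the NetBSD client (sender): a healthy gigabit link should report
# around 940 Mbit/s; much less would point at the network rather than
# the disks. Guarded so the sketch degrades where iperf3 is absent.
command -v iperf3 >/dev/null 2>&1 &&
    iperf3 -c 192.168.12.2 -t 10 ||
    echo "iperf3 not installed"
```

If the raw link saturates, the bottleneck is on the iSCSI/filesystem side rather than the cabling.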
> Q for the FS devs: does FFS+WAPBL honour the O_DIRECT flag? Because
> even with `iflag=direct oflag=creat,direct', top(1) shows a steadily
> shrinking `Free' mem and a steadily increasing `File' mem.
>
> -RVP
JKB