On 14.07.21 13:06, Matthias Petermann wrote:
> Next, I will try to take the wd2 (dk2) component offline.
...which I had to do sooner than planned, because my I/O came to a complete standstill. The result is very positive: I/O is running again at almost full speed, and the performance penalty of falling back on ZFS redundancy when a disk fails does not seem to be as significant as I had feared. I think I will now order a replacement disk and try my hand at the rebuild (resilvering).
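For the record, once the replacement disk is in place the rebuild should boil down to something like the sketch below. This is only a rough outline: the new wedge name (dk4) is an assumption, and on NetBSD the GPT partition and ZFS wedge on the new disk would have to be created first.

```
# Sketch only - device names are assumptions, not taken from the actual setup.
# After partitioning the new disk (e.g. wd4) and creating its ZFS wedge (dk4):
saturn$ doas zpool replace tank dk2 dk4
# The old device can also be referred to by the GUID shown in 'zpool status'.
# If the new disk ends up with the same wedge name (dk2), a plain
# 'doas zpool replace tank dk2' should suffice. Resilvering progress can then
# be watched with:
saturn$ doas zpool status tank
```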
```
saturn$ doas zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            dk0     ONLINE       0     0     0
            dk1     ONLINE       0     0     0
            dk2     ONLINE       0     0     0
            dk3     ONLINE       0     0     0

errors: No known data errors

saturn$ doas dkctl wd2 listwedges
/dev/rwd2: 1 wedge:
dk2: zfs2, 5860528128 blocks at 4096, type:

saturn$ doas zpool offline tank dk2

saturn$ doas zpool status
  pool: tank
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        tank                      DEGRADED     0     0     0
          raidz2-0                DEGRADED     0     0     0
            dk0                   ONLINE       0     0     0
            dk1                   ONLINE       0     0     0
            12938104637987333436  OFFLINE      0     0     0  was /dev/dk2
            dk3                   ONLINE       0     0     0

errors: No known data errors
```

Matthias