NetBSD-Users archive
Re: re-transmission: Re: Prepping to install
- Subject: Re: re-transmission: Re: Prepping to install
- From: "William A. Mahaffey III" <wam%hiwaay.net@localhost>
- Date: Sun, 21 Jun 2015 08:49:00 -0453
On 06/19/15 02:06, Martin Husemann wrote:
> On Fri, Jun 19, 2015 at 01:50:12AM -0453, William A. Mahaffey III wrote:
>> missed it & got snared, shouldn't have happened. I *think* (& I will
>> know this weekend) the FreeBSD installer reminds one to 'ctrl-D' out,
>> rather than rebooting ....
> The NetBSD one also prints hints to ^D or exit (but I don't think it
> explicitly mentions reboot).
> Martin
Well, the saga continues. I made a block of time yesterday, booted FreeBSD 9.3R from a USB disk & wiped the 2 offending HDD's (wd0 & wd1) using 'dd if=/dev/zero ....', then booted into the NetBSD 6.1.5 install disk & proceeded w/ the install. I exited into /bin/sh when offered & fdisk'ed -iau0 wd[0,1] into 1 large partition each, then disklabel'ed the 2 partitions using 'disklabel -R wd[0,1] <proto-file.txt>', which I attach. The file for disk1 differed only in comment & label (wd0 --> wd1 & disk0 --> disk1). I then assembled the RAID1 for root from wd[0,1]a, see next attachment. I also assembled the RAID0 for /usr & /home; I departed from a RAID5 for /home & went w/ a RAID0. I followed the attached overall scheme (see README file) closely, except for different drive sizes & a RAID0 for /home (my *big* partition) instead of a RAID5 for /home/media (his *big* partition). In particular, I mounted my 3 RAID's under /altroot, checked them for OK-ness, & created the fstab file in /altroot/etc to specify mount points for root, /usr & /home, as well as the 6 swap partitions. I also followed the last few instructions about setting up the boot process from the RAID1'ed root drive, copying the /altroot/usr/mdec/boot files as described late in the last attachment. With everything looking OK, I exited from the shell & finished up the install, telling it to keep the boot info as it was on the (RAID) disks. I rebooted, and .... got screenfuls of messages which went by too fast to read, saying something about boot media not found, insert boot media & hit any key. I tried to do that on the fly, no go; I also tried CTRL-ALT-DEL w/ the install media already in place, same thing. Soooooo .... I'm still in the drink. FWIW, the installer was mute about exiting back to the installer from the shell, but I did manage to avoid that foul-up this time .... Any help appreciated; if any more info is needed, just ask. It will be a bit tough to recover stuff from the installation, but anything else, no problema. TIA & have a happy Father's Day :-) ....
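[For reference, the sequence described reads roughly like this - a sketch only: the dd extent and proto-file names are stand-ins for what the message elides, and under FreeBSD 9.3 the two disks would show up as ada devices rather than wd:]

# wipe old metadata at the start of each disk (from FreeBSD)
dd if=/dev/zero of=/dev/ada0 bs=1m count=16
dd if=/dev/zero of=/dev/ada1 bs=1m count=16
# then, from the NetBSD 6.1.5 install shell:
fdisk -iau0 wd0 ; fdisk -iau0 wd1
disklabel -R wd0 proto-disk0.txt    # proto files as attached below
disklabel -R wd1 proto-disk1.txt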
--
William A. Mahaffey III
----------------------------------------------------------------------
"The M1 Garand is without doubt the finest implement of war
ever devised by man."
-- Gen. George S. Patton Jr.
# /dev/rwd0d:
type: ESDI
disk: HGST HTS721010A9
label: disk0
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 16
sectors/cylinder: 1008
cylinders: 1938021
total sectors: 1953525168
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0 # microseconds
track-to-track seek: 0 # microseconds
drivedata: 0
6 partitions:
# size offset fstype [fsize bsize cpg/sgs]
a: 33554432 2048 RAID # (Cyl. 2*- 33290*)
c: 1953523120 2048 unused 0 0 # (Cyl. 2*- 1938020)
d: 1953525168 0 unused 0 0 # (Cyl. 0 - 1938020)
e: 33554432 33556480 swap # (Cyl. 33290*- 66578*)
f: 1886414256 67110912 RAID # (Cyl. 66578*- 1938020)
# RAID1 on 2 partitions, 32K blocksize ....
# row col spare
START array
1 2 0
START disks
/dev/wd0a
/dev/wd1a
# secPerSU SUsPerParityUnit SUsPerReconUnit RAID-level
START layout
32 1 1 1
START queue
fifo 100
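[To use a config file like the above, the set is configured, stamped with an arbitrary unique serial number, marked for autoconfiguration, and its parity initialised - along these lines (serial number is arbitrary):]

raidctl -C raid0.conf raid0   # force initial configuration
raidctl -I 20150620 raid0     # stamp an arbitrary unique serial number
raidctl -A yes raid0          # autoconfigure at boot
raidctl -i raid0              # initialise parity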
Setting up an 8TB NetBSD file server
Aug 10
If anyone reading this is hoping for a new chapter in my Orange Customer Services Experience, they will be sorely disappointed - this post is horribly geeky - just stop reading now.
For anyone else, this will have the slight flavour of a HOWTO - just adjust to taste.
I set this system up a few weeks ago & recorded these notes - I've only just got around to posting this due to time constraints :)
Prologue...
Time to update my home fileserver. Budget: under £500; minimise power usage, noise & size; maximise disk space.
With current prices that means 2TB disks - and five is the natural number, since RAID5 spends one disk's worth on parity, leaving four data disks and hence a power of two data stripe. So some quick ebuyer.com browsing later:
One HP MicroServer (~£140 after rebate)
5 * 2TB disks (~£55 each)
3.5 to 5.25 mounting adaptor (~£2)
PCI-Express network card (~£8)
Assemble the box - HP still love their bizarre internal hex-headed screws, but at least in this case the door holds all of the ones you would need in a neat row, complete with the hex-head allen key - nice.
Next, install NetBSD to a cheap USB key - just boot a standard install CD, run through the steps, & set things up to allow remote root ssh. Alternatively download a live ISO image - either way the goal is to have something you can boot and then work on in a laptop window from the comfort of the sofa.
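A minimal sketch of the "allow remote root ssh" part, assuming the booted system's stock files (the shipped sshd_config has PermitRootLogin commented out, so appending a line works):

onyx# echo "sshd=YES" >> /etc/rc.conf
onyx# echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
onyx# /etc/rc.d/sshd start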
Filesystem overview
The plan is to have the majority of the disks in a RAID5, leaving 40GB unused at the start of each disk. This will give:
7.3 TB RAID5 /home/media (disks 0,1,2,3,4)
40GB RAID1 / (disk 0,1)
40GB RAID1 /home (disk 2,3)
10GB swap (disk 4)
30GB /tmp (disk 4)
Everything except swap and /tmp is RAIDed to avoid data loss in the event of a single disk failure. Multiple disk failure is outside the scope of this page, but setting up a second similar server at a remote location is always a good call :)
This is a pretty generic layout - the reserved space can easily be adjusted up or down at this stage to cater for different usage; if you are paranoid about resilience then swap should be on RAID1, and if you plan on hitting /tmp and swap heavily at the same time then shuffle things around.
The first 40GB of disk 4 is actually set up as a single-device RAID0 so we can take advantage of RAIDframe's autoconfigure - if the disks are shuffled around, everything will still work.
Setup
Anyway, onto the setup. We shall assume you are logged in via a USB install or Live image:
First let's just confirm we have some disks:
onyx# sysctl hw.disknames
hw.disknames = wd0 wd1 wd2 wd3 wd4 sd0
or the more verbose
onyx# grep ^wd /var/run/dmesg.boot
wd0 at atabus0 drive 0
wd0: <WDC WD20EARS-00MVWB0>
wd0: drive supports 16-sector PIO transfers, LBA48 addressing
wd0: 1863 GB, 3876021 cyl, 16 head, 63 sec, 512 bytes/sect x 3907029168 sectors
wd0: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd0(ahcisata0:0:0): using PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133) (using DMA)
wd1 at atabus1 drive 0
wd1: <WDC WD20EARS-00MVWB0>
wd1: drive supports 16-sector PIO transfers, LBA48 addressing
wd1: 1863 GB, 3876021 cyl, 16 head, 63 sec, 512 bytes/sect x 3907029168 sectors
wd1: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd1(ahcisata0:1:0): using PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133) (using DMA)
wd2 at atabus2 drive 0
wd2: <WDC WD20EARS-00MVWB0>
wd2: drive supports 16-sector PIO transfers, LBA48 addressing
wd2: 1863 GB, 3876021 cyl, 16 head, 63 sec, 512 bytes/sect x 3907029168 sectors
wd2: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd2(ahcisata0:2:0): using PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133) (using DMA)
wd3 at atabus3 drive 0
wd3: <Hitachi HDS5C3020ALA632>
wd3: drive supports 16-sector PIO transfers, LBA48 addressing
wd3: 1863 GB, 3876021 cyl, 16 head, 63 sec, 512 bytes/sect x 3907029168 sectors
wd3: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd3(ahcisata0:3:0): using PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133) (using DMA)
wd4 at atabus4 drive 1
wd4: <Hitachi HDS5C3020ALA632>
wd4: drive supports 16-sector PIO transfers, LBA48 addressing
wd4: 1863 GB, 3876021 cyl, 16 head, 63 sec, 512 bytes/sect x 3907029168 sectors
wd4: 32-bit data port
wd4: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd4(ixpide0:0:1): using PIO mode 4, Ultra-DMA mode 6 (Ultra/133) (using DMA)
That's maybe a little more information than we really wanted to know... but anyway.
Creating MBR partitions with fdisk
Start by using fdisk to create a single NetBSD partition on each disk, and to make the partition bootable on the first two disks (which have the root filesystem).
onyx# fdisk -iau0 wd0
fdisk: primary partition table invalid, no magic in sector 0
Disk: /dev/rwd0d
NetBSD disklabel disk geometry:
cylinders: 3876021, heads: 16, sectors/track: 63 (1008 sectors/cylinder)
total sectors: 3907029168
BIOS disk geometry:
cylinders: 1023, heads: 255, sectors/track: 63 (16065 sectors/cylinder)
total sectors: 3907029168
Partitions aligned to 2048 sector boundaries, offset 2048
Do you want to change our idea of what BIOS thinks? [n] [RETURN]
Partition 0:
<UNUSED>
The data for partition 0 is:
<UNUSED>
sysid: [0..255 default: 169] [RETURN]
start: [0..243201cyl default: 2048, 0cyl, 1MB] [RETURN]
size: [0..243201cyl default: 3907027120, 243201cyl, 1907728MB] [RETURN]
bootmenu: [] [RETURN]
Do you want to change the active partition? [n] y[RETURN]
Choosing 4 will make no partition active.
active partition: [0..4 default: 0] [RETURN]
Are you happy with this choice? [n] y[RETURN]
Update the bootcode from /usr/mdec/mbr? [n] y[RETURN]
We haven't written the MBR back to disk yet. This is your last chance.
Partition table:
0: NetBSD (sysid 169)
start 2048, size 3907027120 (1907728 MB, Cyls 0-243201/80/63), Active
PBR is not bootable: All bytes are identical (0x00)
1: <UNUSED>
2: <UNUSED>
3: <UNUSED>
Bootselector disabled.
First active partition: 0
Should we write new partition table? [n] y[RETURN]
Run fdisk -iau0 wd1, giving the same answers.
Run fdisk -u0 wd2 - similar, except it skips the active partition & bootcode questions.
Run fdisk -u0 wd3 and fdisk -u0 wd4 likewise (or script the remaining disks, as sketched below).
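If you would rather script the remaining disks than re-answer the prompts, fdisk's -f flag takes the defaults without asking - a sketch, assuming the defaults shown above are what you want (worth a dry run interactively first):

fdisk -f -iau0 wd1            # bootable, as for wd0
for d in wd2 wd3 wd4; do
    fdisk -f -u0 $d           # data-only disks
done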
Disklabels
Next come the disklabels. Decide how much you want to keep out of the RAID5 (I chose 40GB) and how to split that between swap and /tmp on the last disk (10GB and 30GB). Ideally this amount should be the same on all disks. If it's not, the RAID5 will just use the smallest remaining value and waste the extra on the other disks.
I labelled my disks "disk0" to "disk4" to match how they show up in the NetBSD autoconfig. This is absolutely not required - you could even shuffle the SATA cables between every boot, since NetBSD automatically assembles the RAID components based on identifiers on each disk - but it pacifies the slight OCD tendency in me.
The first partition is offset by 1m. This tends to match the value in newer NetBSD fdisk and also avoids the old 63-sector insanity which causes misaligned accesses on 4K-sector disks.
This uses partition 'a' for the RAID1s (/ and /home) and the single-disk RAID0 (/tmp & swap), and partition 'e' for the RAID5.
The total size of your disks may be different from the values here. This does not matter as long as the disks are 2TB or less. If they are over 2TB then you need to use GPT rather than MBR partitions, but we do not cover that here.
onyx# disklabel -i wd0
Enter '?' for help
partition> N[RETURN]
Label name [fictitious]: disk0[RETURN]
partition> a[RETURN]
Filesystem type [?] [unused]: raid[RETURN]
Start offset ('x' to start after partition 'x') [0c, 0s, 0M]: 1m[RETURN]
Partition size ('$' for all remaining) [0c, 0s, 0M]: 40g[RETURN]
a: 83886080 2048 RAID # (Cyl. 2*- 83222*)
partition> e[RETURN]
Filesystem type [?] [4.2BSD]: raid[RETURN]
Start offset ('x' to start after partition 'x') [2.0317461490631103515625c, 2048s, 1M]: a[RETURN]
Partition size ('$' for all remaining) [3876019c, 3907027120s, 1907728.125M]: $[RETURN]
e: 3823141040 83888128 RAID # (Cyl. 83222*- 3876020)
partition> P[RETURN]
5 partitions:
# size offset fstype [fsize bsize cpg/sgs]
a: 83886080 2048 RAID # (Cyl. 2*- 83222*)
c: 3907027120 2048 unused 0 0 # (Cyl. 2*- 3876020)
d: 3907029168 0 unused 0 0 # (Cyl. 0 - 3876020)
e: 3823141040 83888128 RAID # (Cyl. 83222*- 3876020)
partition> W[RETURN]Label disk [n]? y[RETURN]
Label written
partition> Q[RETURN]
Then repeat for disklabel -i wd1, disklabel -i wd2, disklabel -i wd3, and disklabel -i wd4 only changing the "disk0" each time.
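The repetition can also be scripted via a proto file, since disklabel -R writes a label from one - a sketch, editing only the label name per disk:

disklabel wd0 > /tmp/proto                          # use wd0's label as a template
for n in 1 2 3 4; do
    sed "s/^label: disk0/label: disk${n}/" /tmp/proto > /tmp/proto.$n
    disklabel -R wd${n} /tmp/proto.$n               # write the adjusted label
done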
Creating the RAID partitions
Onto the raid setup. For this we just create four files, raid0.conf to raid3.conf - in /root/ will be fine:
For the root filesystem
# raid0.conf - RAID1 on two disks, for 32K block size
START array
1 2 0
# row col spare
START disks
/dev/wd0a
/dev/wd1a
START layout
64 1 1 1
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
START queue
fifo 100
For /home
# raid1.conf - RAID1 on two disks, for 32K block size
START array
1 2 0
# row col spare
START disks
/dev/wd2a
/dev/wd3a
START layout
64 1 1 1
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
START queue
fifo 100
For swap and /tmp
# raid2.conf - RAID0 on one disk, for 32K block size
START array
1 1 0
# row col spare
START disks
/dev/wd4a
START layout
64 1 1 0
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
START queue
fifo 100
For /home/media
# raid3.conf - RAID5 on five disks, for 64K block size
START array
1 5 0
# row col spare
START disks
/dev/wd0e
/dev/wd1e
/dev/wd2e
/dev/wd3e
/dev/wd4e
START layout
32 1 1 5
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
START queue
fifo 100
Next we create the four raids, give them all unique Serial Numbers (-I), and tell them to autoconfigure on boot (-A). We'll come back to making raid0 automatically be the root filesystem later - if we set that now, it would be annoying if we rebooted before creating its filesystems.
onyx# raidctl -C raid0.conf raid0 ; raidctl -I 10 raid0 ; raidctl -A yes raid0
onyx# raidctl -C raid1.conf raid1 ; raidctl -I 11 raid1 ; raidctl -A yes raid1
onyx# raidctl -C raid2.conf raid2 ; raidctl -I 12 raid2 ; raidctl -A yes raid2
onyx# raidctl -C raid3.conf raid3 ; raidctl -I 13 raid3 ; raidctl -A yes raid3
Moving on, we need to initialise the parity on each raid. We could just run them all at once, but it's probably better to set the first three going, then start the final one when they are done (to avoid disk contention). We can use raidctl -S, which displays the rebuild progress and only returns when the rebuild is complete. You can continue on while the parity is initialising, and even reboot (in recent netbsd-5 and later) and have it continue the parity, but it does mean actions are slower, and the system is not protected from disk failure until the parity is complete.
onyx# raidctl -i raid0 ; raidctl -i raid1 ; raidctl -i raid2
onyx# raidctl -S raid0 ; raidctl -S raid1 ; raidctl -S raid2
onyx# raidctl -i raid3 ; raidctl -S raid3
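If you want to check on a set without blocking, raidctl -s prints each component's state plus the parity status - look for "Parity status: clean" once initialisation is done:

onyx# raidctl -s raid0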
Creating the partitions
The default disklabels for raid0 and raid1 are probably fine for us (one large 'a' partition), so we can just get them written to the disks. There are other ways to do this, but to re-use the 'disklabel -i' command:
onyx# disklabel -i raid0
Enter '?' for help
partition> W[RETURN]
Label disk [n]? y[RETURN]
Label written
partition> Q[RETURN]
Then repeat for raid1. For raid2 we want to set up a 30GB /tmp and 10GB swap, so:
onyx# disklabel -i raid2
Enter '?' for help
partition> a[RETURN]
Filesystem type [?] [4.2BSD]: [RETURN]
Start offset ('x' to start after partition 'x') [0c, 0s, 0M]: [RETURN]
Partition size ('$' for all remaining) [327679.75c, 83886016s, 40959.96875M]: 30GB[RETURN]
a: 62914560 0 4.2BSD 0 0 0 # (Cyl. 0 - 245759)
partition> b[RETURN]
Filesystem type [?] [unused]: swap[RETURN]
Start offset ('x' to start after partition 'x') [0c, 0s, 0M]: a[RETURN]
Partition size ('$' for all remaining) [0c, 0s, 0M]: $[RETURN]
b: 20971456 62914560 swap # (Cyl. 245760 - 327679*)
partition> W[RETURN]
Label disk [n]? y[RETURN]
Label written
partition> Q[RETURN]
Since raid3 is larger than 2TB (more or less the whole point of this exercise), we need to set up a GPT table to handle it:
onyx# gpt create raid3
onyx# gpt add -b 128 raid3
[ this will indicate the size of the wedge, in my case 15292563679 - use that number below]
onyx# dkctl raid3 addwedge media 128 15292563679 4.2BSD
This gives us a dk0 device (automatically created on boot) which is around 7.2TB in size. Unfortunately it will not show up as type 4.2BSD until the next boot, so we will have to give newfs the -I flag when we create its filesystem (or reboot).
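dkctl can confirm the wedge exists (and it will be recreated automatically each boot):

onyx# dkctl raid3 listwedges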
Creating filesystems
We will go with FFSv2 filesystems. The RAID5 was created with 32-sector (16K) stripe units per component, which across its four data disks gives a 64K data stripe, so it is important to use a 64K block size to avoid writes suffering an expensive read/modify/write cycle. The other raids will fit a 32K block size nicely, so:
onyx# newfs -O2 -b32k raid0a
onyx# newfs -O2 -b32k raid1a
onyx# newfs -O2 -b32k raid2a
onyx# newfs -O2 -b64k -I dk0
Installing
Now that we have all these wonderful raid filesystems, it would be nice to have an operating system to use them. (Unless you have the social life of a kumquat in which case just creating them may be goal enough in itself.)
First we mount them - during install we can use "-o async" to maximise the write speed, as at this point we do not have any data we would care about losing in a crash. Once the install is complete we'll use "-o log" for data security. Note also the mount_ffs used for dk0, as we have not yet rebooted to "fix" its issue. Mounting /tmp is not strictly needed, but it's a nice test:
onyx# mount -o async /dev/raid0a /altroot
onyx# mkdir /altroot/home ; mount -o async /dev/raid1a /altroot/home
onyx# mkdir /altroot/tmp ; mount -o async /dev/raid2a /altroot/tmp
onyx# mkdir /altroot/home/media ; mount_ffs -o async /dev/dk0 /altroot/home/media
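Once data you care about starts arriving, you need not wait for a reboot to drop async - mount -u updates the options of an already-mounted filesystem, for example:

onyx# mount -u -o log /dev/raid0a /altroot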
A quick df -h to see how much space we have:
onyx# df -h
Filesystem Size Used Avail %Cap Mounted on
/dev/sd0a 14G 6.0G 7.7G 43% /
tmpfs 905M 4.0K 905M 0% /tmp
tmpfs 905M 4.0K 905M 0% /var/tmp
/dev/raid0a 39G 8.0K 37G 0% /altroot
/dev/raid1a 39G 8.0K 37G 0% /altroot/home
/dev/raid2a 30G 4.0K 28G 0% /altroot/tmp
/dev/dk0 7.1T 8.0K 6.7T 0% /altroot/home/media
Next, extract NetBSD to /altroot - if you've booted from a USB key and are happy to use that install as a base then just run:
onyx# cd / ; pax -rw -pe -X / /altroot
Alternatively, extract the NetBSD release *.tgz files into /altroot, as sketched below.
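A sketch of extracting the sets, assuming they have been fetched to /path/to/sets (a placeholder - base, etc, and a kernel are the usual minimum):

cd /altroot
for f in kern-GENERIC base etc comp man misc text; do
    tar -xzpf /path/to/sets/${f}.tgz
done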
Set up /altroot/etc/fstab - a sample might be:
# /etc/fstab
/dev/raid0a / ffs rw,log 1 1
/dev/raid1a /home ffs rw,log 1 2
/dev/raid2a /tmp ffs rw,log 1 2
/dev/raid2b swap swap sw 0 0
/dev/dk0 /home/media ffs rw,log 1 3
/proc /proc procfs rw
kernfs /kern kernfs rw
ptyfs /dev/pts ptyfs rw
Install boot blocks - we need to do this on *both* wd0 and wd1 so the system can still boot in the event of a single disk failure:
onyx# cd /altroot ; cp usr/mdec/boot .
onyx# installboot /dev/rwd0a usr/mdec/bootxx_ffsv2
onyx# installboot /dev/rwd1a usr/mdec/bootxx_ffsv2
Finally, set up raid0 to automatically configure as the root filesystem:
onyx# raidctl -A root raid0
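raidctl -s raid0 will confirm the flag took - the component labels should now read "Root partition: Yes":

onyx# raidctl -s raid0 | grep -i 'root partition'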
... and we're done. Set up apache to serve webdav for your xbmc machines, samba, netatalk, and nfs as required :)