
zfs on fake disk

UPD July 20, 2016: The two links below are dead :(
Their text is preserved in this post. But the images they pointed to are gone as well, and those images contained the actual commands, so working things out will not be easy. If I find the time and, more importantly, the demand (say so in the comments), I will deal with it. /UPD

Here is the recipe for FreeBSD: http://pcaddicts.ca/rc/2010/05/20/migrate-zfs-mirror-to-raidz-on-freenas/
The same one, adjusted for Solaris: http://i18n-freedom.blogspot.com/2008/01/how-to-turn-mirror-in-to-raid.html

As applied to my configuration:
1. I started with two mirrors of two 1.5 TB disks each, about 2.8 TB usable after all the taxes.
2. I plugged in two more disks and made a backup pool on them, which also came out to about 2.8 TB.
3. I copied the data from the main pool to the backup one (zfs send | zfs recv).
4. I destroyed the main pool, freeing four disks.
5. I created a new main raidz2 pool from those four disks plus two fake ones. I offlined the fakes right away; the pool goes DEGRADED but keeps working.
6. I copied the data back from the backup pool to the main one.
7. I destroyed the backup pool, freeing two disks.
8. I ran zpool replace to swap the fake disks for the real ones.
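The plan above can be sketched as a command sequence. All the names here (pools main/backup, disks da0-da5, the sparse-file size and paths) are illustrative assumptions, not the exact commands from the lost screenshots:

```shell
# 2. Temporary backup pool on the two new disks (a plain
#    stripe, since two 1.5 TB disks must give ~2.8 TB)
zpool create backup da4 da5

# 3. Copy everything over
zfs snapshot -r main@move
zfs send -R main@move | zfs receive -F backup

# 4. Free the four original disks
zpool destroy main

# 5. raidz2 from four real disks plus two sparse-file fakes,
#    then offline the fakes (raidz2 survives two missing disks)
truncate -s 1500G /tmp/fake0.img /tmp/fake1.img
mdconfig -a -t vnode -f /tmp/fake0.img   # prints e.g. md0
mdconfig -a -t vnode -f /tmp/fake1.img   # prints e.g. md1
zpool create main raidz2 da0 da1 da2 da3 md0 md1
zpool offline main md0
zpool offline main md1                   # DEGRADED but usable

# 6. Copy the data back
zfs snapshot -r backup@back
zfs send -R backup@back | zfs receive -F main

# 7-8. Free the backup disks and swap in the real ones
zpool destroy backup
zpool replace main md0 da4
zpool replace main md1 da5
```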

Migrate ZFS Mirror to raidz on freenas

Okay, so technically there is no magic command to convert a ZFS mirror into raidz, but I hope to show you how to do it in this document.

I'll start out by saying I'm not responsible if you mess this up. Feel free to leave a comment here or send me a message on the FreeNAS forums and I'll do my best to help you, and, as always, make sure to have a proper backup. No level of RAID can protect you against "rm -r *" or clicking the wrong button. I will use the GUI for some parts, but much is done on the command line; I just find it easier. I used these techniques when testing and went from a 1-disk zpool to a mirror and then to a 3-disk raidz.

First, consider that right now you have 2 disks and you'd like to add a 3rd. In order to have both a mirror and a raidz you are going to need a minimum of 5 disks, right? Wrong! Break this down and ask instead: what is the minimum number of disks needed if both the mirror and the raidz are degraded yet still safe? A ZFS mirror can survive 1 faulted disk, and raidz can also survive 1 faulted disk, so with both pools degraded you need a minimum of 3 disks, and we just so happen to have 3 disks.

The screenshots below will be different for you; the disk numbers, size, description etc. will need to be adjusted.

Prior to starting here is my current zfs mirror

And the disks within

Step 1 – add your new disk

This is pretty straightforward: physically add your disk however you wish. Once it's in the machine and powered up, log into the web interface, navigate to Disks, Management and click the + sign to add the disk. Select your new disk from the drop-down, give it a description, set any desired options, then click Save.

Again from the web menu, select Disks, then Format. Select your NEW disk from the drop-down, set the filesystem to "ZFS storage pool device" and click Format disk.

When done, in the web GUI click Disks, Management and verify all 3 disks are now online.

Step 2 – break the mirror!

We are going to need a disk, so we have to break our mirror.

SSH into your FreeNAS box (I'll use PuTTY on Windows, but any SSH client will do), log in and detach one disk from your mirror. Afterwards I've run zpool status to verify the disk is gone.
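The detach step, whose exact command was in a now-lost screenshot, looks roughly like this. The pool name Storage appears later in the article; the disk name ad4 is my assumption:

```shell
# Remove one side of the mirror; the pool stays online
# but loses its redundancy until the migration completes.
zpool detach Storage ad4

# Confirm the mirror is now a single-disk pool
zpool status Storage
```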

Step 3 – create a sparse file

- First, let's get the exact number of blocks on our disk; in my case it's 960951168.

- Once I have that number, I'll move into /tmp and create the file using dd if=/dev/zero of=/tmp/disk.img bs=1 seek=960951168k count=1. This creates an "empty" file called disk.img that reports itself as being the size of the real disks but takes up only a few KB on my actual disk. (In my screenshot I've removed an old disk.img file first; ignore that.)
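Here is a small-scale version of the same trick you can try safely anywhere (a 1 MiB fake instead of a full disk; path and size are purely illustrative):

```shell
# Seek past 1 MiB and write a single byte: the file claims
# to be just over 1 MiB but allocates almost no real space.
dd if=/dev/zero of=/tmp/demo.img bs=1 seek=1024k count=1 2>/dev/null

wc -c < /tmp/demo.img   # apparent size: 1048577 bytes
du -k /tmp/demo.img     # actual allocation: a few KB at most
```

The seek offset never gets written, so the filesystem records a hole rather than zeros; that is why a "disk-sized" image fits in /tmp.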

Step 4 – let's make a disk out of that file

- Again from a terminal, type "mdconfig -a -t vnode -f /tmp/disk.img". This creates a new memory disk backed by the file we just created. Note the response: it tells you the memory disk's name (md2 in my case).

- For more info http://www.freebsd.org/doc/en/books/handbook/disks-virtual.html

Step 5 – let's make our new pool

- Pretty simple step here. Create your new pool using the 2 disks plus the memory disk. When done, run zpool status and you will see the mirror pool still has 1 disk, while my new pool "Tank" is online and healthy.
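A sketch of the pool creation, assuming the remaining mirror disk is ad4, the freshly formatted new disk is ad8, and the memory disk came back as md2 (the device names are my guesses; only Tank, md2 and the general shape come from the article):

```shell
# raidz from two real disks plus the file-backed memory disk
zpool create Tank raidz ad4 ad8 md2

# Both pools should now show up: the old single-disk mirror
# (Storage) and the new, healthy raidz (Tank)
zpool status
```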

Step 6 – offline that disk!

- If you were to start copying your data over right now, the sparse file you just made would likely fill up, causing a whole lot of trouble. Do not forget this critical step!

- From a terminal, type "zpool offline Tank /dev/md2".

- Verify the disk is offline with zpool status.

- When done, you can get rid of md2 with "mdconfig -d -u md2" and delete the disk.img file.

Step 7 – Copy your data

- There are a bunch of ways to do this (rsync, cp, etc.), but I chose to use a snapshot and zfs send/receive. I find it best to make sure NOTHING is accessing the pools if at all possible; otherwise, keep that in mind when you copy your data so you don't miss anything. You have been warned.

- Take a snapshot with zfs snapshot Storage@now (if you have a child filesystem, such as Storage/Data, you'll need to snapshot it as well).

- zfs send Storage@now | zfs receive Tank/Storage (this sends the data and pipes it to a receive command; I was not able to receive into the root filesystem, so I had to create a child dataset called Storage. I didn't mess much with this; it could probably be fixed, but this worked).

- While this is running, you can verify the data is being copied by opening another terminal using PuTTY or SSH and running "zfs list". You will notice Tank/Storage growing; depending on how much data you have, this can take some time.

- When the copy is done (in my case Storage matches Tank/Storage), I can delete the snapshot with "zfs destroy Storage@now".
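Put together, the copy step from the article is this short sequence (names as in the article):

```shell
# Freeze a point-in-time view of the old pool,
# stream it into a child dataset on the new pool,
# then drop the snapshot once the copy checks out.
zfs snapshot Storage@now
zfs send Storage@now | zfs receive Tank/Storage
zfs list                     # compare USED of Storage vs Tank/Storage
zfs destroy Storage@now
```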

- You may want to juggle your mount points so the new pool replaces the old one at the same path, which keeps any CIFS/Samba shares etc. working without changes! Do this with zfs get mountpoint and zfs set mountpoint. You may need to run "zfs unmount -f Storage" to force the unmount if the device is busy.
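For example, the mount-point swap might look like this (the path /mnt/Storage is an assumption; use whatever zfs get reports for your old pool):

```shell
# Where is the old pool mounted?
zfs get -H -o value mountpoint Storage

# Unmount the old pool (force if busy) and point the new
# dataset at the same path so shares keep working.
zfs unmount -f Storage
zfs set mountpoint=/mnt/Storage Tank/Storage
```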

Step 8 – Destroy the mirror

- From a terminal, type "zpool destroy Storage" (make sure you destroy your OLD pool).

- A quick zfs list shows the pool is now gone (this frees up ad6 in my case).

Step 9 – Add the last disk to your raidz

- From a terminal, type "zpool replace Tank md2 ad6" (Tank being the pool name, md2 the memory disk and ad6 the now-free physical disk).

- When done, type "zpool status"; you will see "resilver in progress" under scrub, with a percentage. This will take time, but the pool is usable while it runs.

- You can also watch the resilver progress in the web interface by clicking Disks, ZFS, Pools, Information.
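The final swap, in full (names as used throughout the article):

```shell
# Replace the offlined memory disk with the real disk
# freed by destroying the old pool, then watch the rebuild.
zpool replace Tank md2 ad6
zpool status Tank    # look for "resilver in progress"
```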

There you go: you have now turned your mirror into a 3-disk raidz without having to get extra disks, do funky backups/restores, or lose your data!


Assuming clean HDDs, enumerated by FreeBSD as da0 - da3, it would come out something like this:

gpart create -s GPT /dev/da0
gpart create -s GPT /dev/da1
gpart create -s GPT /dev/da2
gpart create -s GPT /dev/da3
gpart add -t freebsd-zfs -l disk0 -a 2M /dev/da0
gpart add -t freebsd-zfs -l disk1 -a 2M /dev/da1
gpart add -t freebsd-zfs -l disk2 -a 2M /dev/da2
gpart add -t freebsd-zfs -l disk3 -a 2M /dev/da3
gnop create -S 4096 /dev/gpt/disk0
gnop create -S 4096 /dev/gpt/disk1
gnop create -S 4096 /dev/gpt/disk2
gnop create -S 4096 /dev/gpt/disk3
zpool create -m /mnt/storage media raidz gpt/disk0.nop gpt/disk1.nop gpt/disk2.nop gpt/disk3.nop
zfs set atime=off media
zfs set checksum=fletcher4 media
zfs create -o utf8only=on media/test
zpool export media
gnop destroy /dev/gpt/disk0.nop
gnop destroy /dev/gpt/disk1.nop
gnop destroy /dev/gpt/disk2.nop
gnop destroy /dev/gpt/disk3.nop
zpool import media
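The gnop layer above presents 4096-byte sectors so the pool is created with ashift=12 (4K alignment), which survives the export/import even after the .nop devices are destroyed. You can check it afterwards; the exact zdb invocation varies between releases, so treat this as a sketch:

```shell
# Confirm the pool was created with 4K alignment
zdb -C media | grep ashift    # expect: ashift: 12
```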

Tags: nas, zfs
