Backing Up a Synology (Xpenology) NAS

Data matters, data matters, data matters!!!

Important things get said three times. So, I bought myself a 6-bay Xpenology box (a DIY "black" Synology) just to keep my photos and data safe...

Frankly, 1-bay and 2-bay NAS units are a joke and practically useless; you need at least 4 bays before things get even barely reliable. That's why I bought this box, though I only fitted 3 drives and built a RAID 5. Even so, one drive inexplicably failed. Luckily it was still under warranty, and swapping in a replacement cleared the fault, but shuffling data around on the remaining 2 drives in the meantime was still a nightmare.

So if the data is that important, how do you achieve a complete backup?

The data sits on a 3-disk RAID 5, so it already has some protection; what remains is making sure the system itself is backed up and restorable.

Synology's DSM is Linux underneath, with storage built on mdadm + LVM, so the backup has to start from there.

First, back up the USB boot stick. Start with fdisk -l:

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes  
255 heads, 63 sectors/track, 243201 cylinders  
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sda1               1         311     2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary  
/dev/sda2             311         572     2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary  
/dev/sda3             588      243201  1948793440+ fd Linux raid autodetect

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes  
255 heads, 63 sectors/track, 243201 cylinders  
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdb1               1         311     2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary  
/dev/sdb2             311         572     2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary  
/dev/sdb3             588      243201  1948793440+ fd Linux raid autodetect

Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes  
255 heads, 63 sectors/track, 243201 cylinders  
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdc1               1         311     2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary  
/dev/sdc2             311         572     2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary  
/dev/sdc3             588      243201  1948793440+ fd Linux raid autodetect

Disk /dev/sdu: 4227 MB, 4227858432 bytes  
4 heads, 32 sectors/track, 64512 cylinders  
Units = cylinders of 128 * 512 = 65536 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdu1   *           1         256       16352+  e Win95 FAT16 (LBA)

Three 2 TB drives, each with 3 partitions, and finally the USB stick at /dev/sdu. Back it up:

dd if=/dev/sdu | gzip -c > /root/usb.img.gz  
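
To restore later, here is a minimal sketch, assuming the stick shows up at /dev/sdu again (double-check the device letter first; dd writes destructively):

gunzip -c /root/usb.img.gz | dd of=/dev/sdu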

With the USB stick backed up, the RAID is next. First inspect the RAID state:

cat /proc/mdstat  
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]  
md2 : active raid5 sda3[0] sdc3[2] sdb3[1]  
      3897584512 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md1 : active raid1 sda2[0] sdb2[1] sdc2[2]  
      2097088 blocks [12/3] [UUU_________]

md0 : active raid1 sda1[0] sdb1[1] sdc1[2]  
      2490176 blocks [12/3] [UUU_________]

So there are three arrays: md0 and md1 are RAID 1, and md2 is RAID 5.

md0's members are sda1, sdb1, sdc1

md1's members are sda2, sdb2, sdc2

md2's members are sda3, sdb3, sdc3

All three drives should be partitioned identically, so checking one is enough:

fdisk -l /dev/sda

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes  
255 heads, 63 sectors/track, 243201 cylinders  
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sda1               1         311     2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary  
/dev/sda2             311         572     2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary  
/dev/sda3             588      243201  1948793440+ fd Linux raid autodetect

We can see it's a 2 TB drive, and the start and end cylinders of all three partitions are clearly listed; write them down.
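
Rather than copying cylinder numbers by hand, a partition-table dump is easier to replay later. A sketch, assuming sfdisk is available on the box and /dev/sdX stands in for the replacement disk:

sfdisk -d /dev/sda > /root/sda.parttable    # dump sda's partition table to a file
sfdisk /dev/sdX < /root/sda.parttable       # replay it onto a fresh disk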

So if a drive dies, check its partition table first. If the partitions are intact, go straight to the next step; if they're damaged, recreate them with fdisk from the numbers recorded above, then start the repair:

Check each md array's UUID:

mdadm --examine --scan  /dev/sda1 /dev/sdb1 /dev/sdc1  
ARRAY /dev/md0 UUID=35d393bd:1f4dde6b:3017a5a8:c86610be  
mdadm --examine --scan  /dev/sda2 /dev/sdb2 /dev/sdc2  
ARRAY /dev/md1 UUID=8f02f0d4:e249900a:3017a5a8:c86610be  
mdadm --examine --scan  /dev/sda3 /dev/sdb3 /dev/sdc3  
ARRAY /dev/md/2 metadata=1.2 UUID=d1411045:24723563:3a19cef5:07732afa name=DiskStation:2  
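
It's worth saving this scan output to a file while the arrays are still healthy, since it contains exactly what mdadm.conf will need later (the path is just an example):

mdadm --examine --scan > /root/md-uuids.txt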

When repairing, use the information above to write /etc/mdadm.conf. Note the order is the reverse of what /proc/mdstat shows: md0 -> md1 -> md2. In the devices= list, replace a failed disk with missing... For example, with /dev/sdc dead:

DEVICE partitions  
ARRAY /dev/md0 level=raid1 num-devices=3 UUID=35d393bd:1f4dde6b:3017a5a8:c86610be devices=/dev/sda1,/dev/sdb1,missing  
ARRAY /dev/md1 level=raid1 num-devices=3 UUID=8f02f0d4:e249900a:3017a5a8:c86610be devices=/dev/sda2,/dev/sdb2,missing  
ARRAY /dev/md2 level=raid5 num-devices=3 metadata=1.2 UUID=d1411045:24723563:3a19cef5:07732afa devices=/dev/sda3,/dev/sdb3,missing  

Then reassemble the arrays and check that they're healthy:

mdadm -A -s  
cat /proc/mdstat  
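
Once the replacement disk is partitioned, its partitions can be re-added so the arrays rebuild. A sketch, assuming /dev/sdc is the disk that was swapped:

mdadm /dev/md0 -a /dev/sdc1
mdadm /dev/md1 -a /dev/sdc2
mdadm /dev/md2 -a /dev/sdc3
cat /proc/mdstat    # watch the resync progress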

If the arrays are wrecked beyond assembly and the md devices have to be recreated from scratch, do it like this:

mdadm --create --verbose /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

mdadm --create --verbose /dev/md1 --level=1 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2

mdadm --create --verbose /dev/md2 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3  
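
A caution: recreating an array only preserves data if the layout matches the original exactly. /proc/mdstat above reported super 1.2 and a 64k chunk for md2, so a safer sketch pins those down explicitly:

mdadm --create --verbose /dev/md2 --level=5 --raid-devices=3 --metadata=1.2 --chunk=64 /dev/sda3 /dev/sdb3 /dev/sdc3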

That takes care of backing up the RAID/md layer. LVM sits on top of md, so next we back up LVM.

First, look at the volume information:

DiskStation> pvdisplay  
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1
  PV Size               3.63 TB / not usable 2.88 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              951558
  Free PE               0
  Allocated PE          951558
  PV UUID               5li2xk-tZdQ-c63W-qpM7-jo9F-5xGg-xFP1Wr

DiskStation> vgdisplay  
  --- Volume group ---
  VG Name               vg1
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3.63 TB
  PE Size               4.00 MB
  Total PE              951558
  Alloc PE / Size       951558 / 3.63 TB
  Free  PE / Size       0 / 0   
  VG UUID               UUCftW-HIFK-vmL0-0MTG-XzOT-BCTI-okhVf3

DiskStation> lvdisplay  
  --- Logical volume ---
  LV Name                /dev/vg1/syno_vg_reserved_area
  VG Name                vg1
  LV UUID                BmrrfE-vjMY-rJO8-C7OP-2los-KfeO-1hBrWb
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                12.00 MB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/vg1/volume_1
  VG Name                vg1
  LV UUID                80vCET-1yH4-KMXS-E0N7-3tiP-m24I-Gzw7ij
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                3.63 TB
  Current LE             951555
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:1

OK, back it up:

vgcfgbackup  

The backup file ends up in /etc/lvm/backup/vg1; copy that file somewhere safe.
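
vgcfgbackup can also write the metadata straight to an explicit file, which saves the copy step (the output path here is just an example):

vgcfgbackup -f /root/vg1.lvm vg1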

To restore:

vgcfgrestore -f vg1 vg1  
pvscan  
vgscan  
lvscan  
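
If the logical volumes come back but aren't marked active after the restore, activating the volume group is one more step. A sketch (the /volume1 mount point follows the usual DSM convention and is an assumption here):

vgchange -ay vg1                     # activate all LVs in vg1
mount /dev/vg1/volume_1 /volume1     # remount the data volume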

Copy the USB image, the fdisk/partition notes, and the LVM file out to Dropbox, and everything is squared away.
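
For what it's worth, a closing sketch that gathers all of the artifacts in one directory before syncing; every path here is an assumption, adjust to taste:

mkdir -p /root/nas-backup
cp /root/usb.img.gz /root/nas-backup/                      # USB boot stick image
sfdisk -d /dev/sda > /root/nas-backup/sda.parttable        # partition layout
mdadm --examine --scan > /root/nas-backup/md-uuids.txt     # array UUIDs
cp /etc/lvm/backup/vg1 /root/nas-backup/vg1.lvm            # LVM metadata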

