
zfs+raidz=UNAVAIL

Posted: 2014-05-28 10:59:26
AvAToR
Hi all,
I need help; it's not urgent, but I really do need it :-)
The setup:
home NAS (self-built)
OS: SunOS Nebula 5.11 11.2 i86pc i386 i86pc
HBA: LSI 1068e + 5x HDD HDS72101-A3EA-931.51GB = RAIDZ1
The symptoms showed up after bad sectors appeared on one of the drives. I decided to wipe the bad sectors with dd, but that is where the trap was waiting: apparently the ZFS labels lived exactly there ;-( In short, the situation is this:

Code: Select all

root@Nebula:/home/artem# zpool import
  pool: tank
    id: 1827277902841040736
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://support.oracle.com/msg/ZFS-8000-6X
config:

        tank         UNAVAIL  missing device
          c8t10d0    ONLINE
          c8t11d0    ONLINE
          c8t12d0    ONLINE
          c8t13d0    ONLINE

device details:

        missing-4  UNAVAIL        corrupted data
        status: ZFS detected errors on this device.
                The device has bad label or disk contents.


        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.
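
A quick way to check how much of the label area survived the dd pass: zdb can dump all four on-disk labels straight from the device. A minimal sketch; the slice is an assumption (ZFS usually keeps its labels on slice 0 of an EFI-labeled whole disk):

Code: Select all

# Print the four vdev labels (L0/L1 live in the first 512 KiB of the
# vdev, L2/L3 in the last 512 KiB). Adjust the slice if nothing unpacks.
zdb -l /dev/rdsk/c8t8d0s0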
The disk that was dead but came back to life:

Code: Select all

root@Nebula:/home/artem# smartctl -l selftest /dev/rdsk/c8t8d0
smartctl 6.2 2013-07-26 r3841 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Interrupted (host reset)      90%     23264         -
# 2  Short offline       Completed without error       00%     23263         -
# 3  Extended offline    Completed without error       00%     23252         -
# 4  Short offline       Completed without error       00%     23249         -
# 5  Short offline       Completed without error       00%     23249         -
# 6  Extended offline    Aborted by host               90%     23249         -
# 7  Extended offline    Interrupted (host reset)      80%     23218         -
# 8  Extended offline    Interrupted (host reset)      60%     23214         -
# 9  Short offline       Completed without error       00%     23213         -
#10  Extended offline    Interrupted (host reset)      90%     23213         -
#11  Short offline       Completed without error       00%     23213         -
#12  Extended offline    Completed: read failure       90%     23210         1431230
#13  Extended offline    Completed: read failure       90%     23209         1431233
#14  Extended offline    Completed: read failure       90%     23209         1421421
#15  Extended offline    Completed: read failure       90%     23209         1421421
#16  Extended offline    Completed: read failure       90%     23208         1421421
#17  Extended offline    Completed: read failure       60%     23206         689740630
#18  Extended offline    Completed: read failure       60%     23204         689740630
#19  Short offline       Completed without error       00%     23201         -
#20  Short offline       Completed without error       00%     23199         -
#21  Short offline       Completed without error       00%     23195         -
7 of 7 failed self-tests are outdated by newer successful extended offline self-test # 3
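
In hindsight, overwriting only the exact LBAs that SMART flagged (for example 1431230 above) would have forced the drive to reallocate those sectors without going anywhere near the label area. A sketch, assuming 512-byte sectors and that SMART's LBA numbering maps directly onto the raw device:

Code: Select all

# Zero a single failing sector so the drive remaps it on write.
# LBA 1431230 comes from the self-test log above; 512-byte sectors
# are an assumption. Destroys that one sector's contents only.
dd if=/dev/zero of=/dev/rdsk/c8t8d0 bs=512 seek=1431230 count=1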
And finally: as I understand it, I need to clone the LABEL onto the new disk and then everything will be fine, but how to do that is not entirely clear :unknown:
P.S. People point to a certain magical tool, Jeff Bonwick's labelfix binary (https://www.mail-archive.com/zfs-discus ... 49014.html), but so far I have not managed to find it.
P.P.S. I need practical advice, because there will be no second attempt.
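
For background on why labelfix is needed at all: each of the four vdev labels is checksummed with its own byte offset folded into the checksum, so a surviving rear label copied verbatim to the front of the disk fails verification; labelfix's job is to rewrite the label with the checksum recomputed for the new offset. The rear copies can at least be inspected first. A sketch; the sector count below is an illustrative figure for a typical 1 TB drive, not verified against this exact model:

Code: Select all

# Label layout on a vdev: L0 at offset 0, L1 at 256 KiB,
# L2 at (devsize - 512 KiB), L3 at (devsize - 256 KiB).
# 1953525168 is the usual sector count of a 1 TB disk (assumption);
# take the real value from 'format' or 'prtvtoc' first.
dd if=/dev/rdsk/c8t8d0 of=/tmp/label2.bin bs=512 \
   skip=$((1953525168 - 1024)) count=512
# zdb can try to unpack a label straight from the extracted file:
zdb -l /tmp/label2.bin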

Re: zfs+raidz=UNAVAIL

Posted: 2014-05-28 11:01:05
AvAToR
And one more thing: with the -V flag I get:

Code: Select all

root@Nebula:/home/artem# zpool import -V tank
root@Nebula:/home/artem# zpool status tank
  pool: tank
 state: UNAVAIL
status: One or more devices are unavailable in response to persistent errors.
        There are insufficient replicas for the pool to continue functioning.
action: Destroy and re-create the pool from a backup source. Manually marking
        the device repaired using 'zpool clear' or 'fmadm repaired' may
        allow some data to be recovered.
        Run 'zpool status -v' to see device specific details.
  scan: none requested
config:

        NAME       STATE     READ WRITE CKSUM
        tank       UNAVAIL      0     0     0
          c8t10d0  ONLINE       0     0     0
          c8t11d0  ONLINE       0     0     0
          c8t12d0  ONLINE       0     0     0
          c8t13d0  ONLINE       0     0     0
          c8t8d0   UNAVAIL      0     0     0
root@Nebula:/home/artem#
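
The action text in this output suggests its own escape hatch. A cautious sketch of that route; the FMRI is a placeholder to be copied from 'fmadm faulty' output, not a real value:

Code: Select all

# Per the 'action:' hint above: clear the persistent-error state on
# the revived disk and see whether the pool lets the import proceed.
zpool clear tank c8t8d0
# Or tell the fault manager the device was repaired; the exact FMRI
# must be taken from 'fmadm faulty' (placeholder below).
fmadm repaired '<fmri-from-fmadm-faulty>'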

Re: zfs+raidz=UNAVAIL

Posted: 2014-05-28 20:04:26
AvAToR
Colleagues, has anyone used this tool in a critical situation:

Code: Select all

You can run it with a dry-run option to tell you how many
minutes of transactions will be lost:

# zpool import -nFX pool-name

This might take a very long time. The -F option is the recovery
option. I think the -X option scrubs the entire pool. Then,
you can run it like this:

# zpool import -FX pool-name
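
Applied to this pool, and assuming the quoted advice holds on Solaris 11.2, the order of operations would be dry run first, then the destructive rewind only if the reported loss looks acceptable:

Code: Select all

# Dry run: reports how far back the import would roll the pool
# (how many minutes of transactions would be lost) without
# writing anything to disk.
zpool import -nFX tank
# Destructive step: actually rewind and import.
zpool import -FX tank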

Re: zfs+raidz=UNAVAIL

Posted: 2014-05-28 21:30:59
snorlov