Last modified by Drunk Monkey on 2024-09-01 12:44

From version 2.1
edited by Drunk Monkey
on 2024-09-01 12:19
Change comment: There is no comment for this version
To version 3.1
edited by Drunk Monkey
on 2024-09-01 12:44
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -1,4 +1,4 @@
1 -[[Our previous post finished our discussion about VDEVs by going into great detail about the ZFS ARC>>url:https://web.archive.org/web/20210430213515/http://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/]]. Here, we'll continue our discussion about ZFS storage pools, how to migrate them across systems by exporting and importing the pools, recovering from destroyed pools, and how to upgrade your storage pool to the latest version.
1 +[[Our previous post finished our discussion about VDEVs by going into great detail about the ZFS ARC>>doc:Tech-Tips.ZFS-The-Aaron-Topponce-Archive.ZFS-Administration-Part-IV-The-Adjustable-Replacement-Cache.WebHome]]. Here, we'll continue our discussion about ZFS storage pools, how to migrate them across systems by exporting and importing the pools, recovering from destroyed pools, and how to upgrade your storage pool to the latest version.
2 2  
3 3  == Motivation ==
4 4  
... ... @@ -14,7 +14,9 @@
14 14  
15 15  To export a storage pool, use the following command:
16 16  
17 -{{code language="bash session"}}# zpool export tank{{/code}}
17 +{{code language="bash session"}}
18 +# zpool export tank
19 +{{/code}}
18 18  
19 19  This command will attempt to unmount all ZFS datasets as well as the pool. By default, when creating ZFS storage pools and filesystems, they are automatically mounted to the system. There is no need to explicitly unmount the filesystems as you would with ext3 or ext4. The export will handle that. Further, some pools may refuse to be exported, for whatever reason. You can pass the "-f" switch if needed to force the export.
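For illustration, a forced export might look like the following. This is a hypothetical session, not from the original article; the exact error text varies by platform:

{{code language="bash session"}}
# zpool export tank
cannot export 'tank': pool is busy
# zpool export -f tank
{{/code}}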
20 20  
... ... @@ -22,7 +22,8 @@
22 22  
23 23  Once the drives have been physically installed into the new server, you can import the pool. Further, the new system may have multiple pools available, in which case you will need to determine which pool to import, or import them all. If the storage pool "tank" does not already exist on the new server, and this is the pool you wish to import, then you can run the following command:
24 24  
25 -{{code language="bash session"}}# zpool import tank
27 +{{code language="bash session"}}
28 +# zpool import tank
26 26  # zpool status tank
27 27   state: ONLINE
28 28   scan: none requested
... ... @@ -40,23 +40,29 @@
40 40   sdi ONLINE 0 0 0
41 41   sdj ONLINE 0 0 0
42 42  
43 -errors: No known data errors{{/code}}
46 +errors: No known data errors
47 +{{/code}}
44 44  
45 45  Your storage pool state may not be "ONLINE", the state meaning that everything is healthy. If the system does not recognize a disk in your pool, you may get a "DEGRADED" state. If one or more of the drives appear as faulty to the system, then you may get a "FAULTED" state in your pool. You will need to troubleshoot which drives are causing the problem, and fix them accordingly.
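As a quick health check after an import, the "-x" switch to "zpool status" reports only pools that have problems. A minimal sketch (the output shown assumes all pools are healthy):

{{code language="bash session"}}
# zpool status -x
all pools are healthy
{{/code}}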
46 46  
47 47  You can import multiple pools simultaneously by either specifying each pool as an argument, or by passing the "-a" switch for importing all discovered pools. For importing the two pools "tank1" and "tank2", type:
48 48  
49 -{{code language="bash session"}}# zpool import tank1 tank2{{/code}}
53 +{{code language="bash session"}}
54 +# zpool import tank1 tank2
55 +{{/code}}
50 50  
51 51  For importing all known pools, type:
52 52  
53 -{{code language="bash session"}}# zpool import -a{{/code}}
59 +{{code language="bash session"}}
60 +# zpool import -a
61 +{{/code}}
54 54  
55 55  == Recovering A Destroyed Pool ==
56 56  
57 57  If a ZFS storage pool was previously destroyed, the pool can still be imported to the system. Destroying a pool doesn't wipe the data on the disks, so the metadata is still intact, and the pool can still be discovered. Let's take a clean pool called "tank", destroy it, move the disks to a new system, then try to import the pool. You will need to pass the "-D" switch to tell ZFS to import a destroyed pool. Do not provide the pool name as an argument, as you would normally do:
58 58  
59 -{{code language="bash session"}}(server A)# zpool destroy tank
67 +{{code language="bash session"}}
68 +(server A)# zpool destroy tank
60 60  (server B)# zpool import -D
61 61   pool: tank
62 62   id: 17105118590326096187
... ... @@ -89,11 +89,13 @@
89 89   sdr ONLINE
90 90  
91 91   Additional devices are known to be part of this pool, though their
92 - exact configuration cannot be determined.{{/code}}
101 + exact configuration cannot be determined.
102 +{{/code}}
93 93  
94 94  Notice that the state of the pool is "ONLINE (DESTROYED)". Even though the pool is "ONLINE", it is only partially online. Basically, it's only been discovered, but it's not available for use. If you run the "df" command, you will find that the storage pool is not mounted. This means the ZFS filesystem datasets are not available, and you currently cannot store data into the pool. However, ZFS has found the pool, and you can bring it fully ONLINE for standard usage by running the import command one more time, this time specifying the pool name as an argument to import:
95 95  
96 -{{code language="bash session"}}(server B)# zpool import -D tank
106 +{{code language="bash session"}}
107 +(server B)# zpool import -D tank
97 97  cannot import 'tank': more than one matching pool
98 98  import by numeric ID instead
99 99  (server B)# zpool import -D 17105118590326096187
... ... @@ -115,11 +115,13 @@
115 115   sdi ONLINE 0 0 0
116 116   sdj ONLINE 0 0 0
117 117  
118 -errors: No known data errors{{/code}}
129 +errors: No known data errors
130 +{{/code}}
119 119  
120 120  Notice that ZFS warned me that it found more than one storage pool matching the name "tank", and that to import the pool, I must use its unique identifier. This is because, in the previous output, we can see there are two known pools with the pool name "tank". So, I passed the ID from that output as the argument to import. After specifying its ID, I was able to successfully bring the storage pool to full "ONLINE" status. You can verify this by checking its status:
121 121  
122 -{{code language="bash session"}}# zpool status tank
134 +{{code language="bash session"}}
135 +# zpool status tank
123 123   pool: tank
124 124   state: ONLINE
125 125  status: The pool is formatted using an older on-disk format. The pool can
... ... @@ -139,7 +139,8 @@
139 139   sdh ONLINE 0 0 0
140 140   mirror-2 ONLINE 0 0 0
141 141   sdi ONLINE 0 0 0
142 - sdj ONLINE 0 0 0{{/code}}
155 + sdj ONLINE 0 0 0
156 +{{/code}}
143 143  
144 144  == Upgrading Storage Pools ==
145 145  
... ... @@ -149,7 +149,8 @@
149 149  
150 150  First, we can see a brief description of features that will be available to the pool:
151 151  
152 -{{code language="bash session"}}# zpool upgrade -v
166 +{{code language="bash session"}}
167 +# zpool upgrade -v
153 153  This system is currently running ZFS pool version 28.
154 154  
155 155  The following versions are supported:
... ... @@ -186,11 +186,14 @@
186 186   28 Multiple vdev replacements
187 187  
188 188  For more information on a particular version, including supported releases,
189 -see the ZFS Administration Guide.{{/code}}
204 +see the ZFS Administration Guide.
205 +{{/code}}
190 190  
191 191  So, let's perform the upgrade to get to version 28 of the pool:
192 192  
193 -{{code language="bash session"}}# zpool upgrade -a{{/code}}
209 +{{code language="bash session"}}
210 +# zpool upgrade -a
211 +{{/code}}
194 194  
195 195  As a sidenote, when using ZFS on Linux, the RPM and Debian packages will contain an /etc/init.d/zfs init script for setting up the pools and datasets on boot. This is done by importing them on boot. However, at shutdown, the init script does not export the pools. Rather, it just unmounts them. So, if you migrate the disks to another box after only shutting down, you will not be able to import the storage pool on the new box.
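To avoid that situation, export the pool manually before shutting down the old box. If the disks have already been moved without an export, the "-f" switch to "zpool import" can override the safeguard that the pool appears to be in use by another system. A hedged sketch, not from the original article:

{{code language="bash session"}}
(server A)# zpool export tank
(server A)# shutdown -h now
(server B)# zpool import tank
{{/code}}

If the export step was skipped, "zpool import -f tank" on the new box will force the import; use it only when you are certain the old system is no longer using the pool.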
196 196