[[Our previous post finished our discussion about VDEVs by going into great detail about the ZFS ARC>>doc:Tech-Tips.ZFS-The-Aaron-Topponce-Archive.ZFS-Administration-Part-IV-The-Adjustable-Replacement-Cache.WebHome]]. Here, we'll continue our discussion of ZFS storage pools: how to migrate them across systems by exporting and importing the pools, how to recover destroyed pools, and how to upgrade a storage pool to the latest version.

== Motivation ==

...

To export a storage pool, use the following command:

{{code language="bash session"}}
# zpool export tank
{{/code}}

This command will attempt to unmount all ZFS datasets as well as the pool. By default, when creating ZFS storage pools and filesystems, they are automatically mounted to the system. There is no need to explicitly unmount the filesystems as you would with ext3 or ext4; the export will handle that. Further, some pools may refuse to be exported, for whatever reason. You can pass the "-f" switch if needed to force the export.
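
For example, forcing the export would look like this (a hypothetical invocation; only reach for "-f" if a normal export refuses):

{{code language="bash session"}}
# zpool export -f tank
{{/code}}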

...

Once the drives have been physically installed into the new server, you can import the pool. Further, the new system may already have multiple pools installed, so you will want to determine which pool to import, or whether to import them all. If the storage pool "tank" does not already exist on the new server, and this is the pool you wish to import, then you can run the following command:

{{code language="bash session"}}
# zpool import tank
# zpool status tank
 state: ONLINE
 scan: none requested
...
 sdi ONLINE 0 0 0
 sdj ONLINE 0 0 0

errors: No known data errors
{{/code}}

Your storage pool state may not be "ONLINE" (the state that means everything is healthy). If the system does not recognize a disk in your pool, you may get a "DEGRADED" state. If one or more of the drives appear as faulty to the system, then you may get a "FAULTED" state in your pool. You will need to troubleshoot which drives are causing the problem, and fix them accordingly.
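
A quick way to check whether anything needs attention is "zpool status -x", which reports only on unhealthy pools. If all is well, it should print something like the following (standard zpool behavior, not output from the original post):

{{code language="bash session"}}
# zpool status -x
all pools are healthy
{{/code}}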

You can import multiple pools simultaneously by either specifying each pool as an argument, or by passing the "-a" switch for importing all discovered pools. For importing the two pools "tank1" and "tank2", type:

{{code language="bash session"}}
# zpool import tank1 tank2
{{/code}}

For importing all known pools, type:

{{code language="bash session"}}
# zpool import -a
{{/code}}
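
As an aside, if you are not sure what is available on the new system, running "zpool import" with no arguments will scan the attached disks and list any discoverable pools without actually importing them (standard zpool behavior, not shown in the original post):

{{code language="bash session"}}
# zpool import
{{/code}}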

== Recovering A Destroyed Pool ==

If a ZFS storage pool was previously destroyed, the pool can still be imported to the system. Destroying a pool doesn't wipe the data on the disks, so the metadata is still intact, and the pool can still be discovered. Let's take a clean pool called "tank", destroy it, move the disks to a new system, then try to import the pool. You will need to pass the "-D" switch to tell ZFS to import a destroyed pool. Do not provide the pool name as an argument, as you would normally do:

{{code language="bash session"}}
(server A)# zpool destroy tank
(server B)# zpool import -D
 pool: tank
 id: 17105118590326096187
...
 sdr ONLINE

 Additional devices are known to be part of this pool, though their
 exact configuration cannot be determined.
{{/code}}

Notice that the state of the pool is "ONLINE (DESTROYED)". Even though the pool is "ONLINE", it is only partially online: it has been discovered, but it is not yet available for use. If you run the "df" command, you will find that the storage pool is not mounted. This means the ZFS filesystem datasets are not available, and you currently cannot store data in the pool. However, ZFS has found the pool, and you can bring it fully "ONLINE" for standard usage by running the import command one more time, this time specifying the pool name as an argument:
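
For example, this hypothetical check (assuming the pool would normally mount at /tank) should return nothing, because none of the pool's datasets are mounted yet:

{{code language="bash session"}}
# df -h | grep tank
{{/code}}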

{{code language="bash session"}}
(server B)# zpool import -D tank
cannot import 'tank': more than one matching pool
import by numeric ID instead
(server B)# zpool import -D 17105118590326096187
...
 sdi ONLINE 0 0 0
 sdj ONLINE 0 0 0

errors: No known data errors
{{/code}}

Notice that ZFS warned me that it found more than one storage pool matching the name "tank", and that to import the pool, I must use its unique identifier. This is because, as the previous output shows, there are two known pools with the pool name "tank". So, I passed the numeric ID reported by that earlier import as the argument instead, and I was able to successfully bring the storage pool to full "ONLINE" status. You can verify this by checking its status:

{{code language="bash session"}}
# zpool status tank
 pool: tank
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
...
 sdh ONLINE 0 0 0
 mirror-2 ONLINE 0 0 0
 sdi ONLINE 0 0 0
 sdj ONLINE 0 0 0
{{/code}}

== Upgrading Storage Pools ==

...

First, we can see a brief description of features that will be available to the pool:

{{code language="bash session"}}
# zpool upgrade -v
This system is currently running ZFS pool version 28.

The following versions are supported:
...
 28 Multiple vdev replacements

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.
{{/code}}

So, let's perform the upgrade to get to version 28 of the pool:

{{code language="bash session"}}
# zpool upgrade -a
{{/code}}
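
If you would rather upgrade one pool at a time, pass the pool name instead of "-a" (standard zpool usage, not shown in the original post):

{{code language="bash session"}}
# zpool upgrade tank
{{/code}}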

As a sidenote, when using ZFS on Linux, the RPM and Debian packages will contain an /etc/init.d/zfs init script for setting up the pools and datasets on boot. This is done by importing them on boot. However, at shutdown, the init script does not export the pools; rather, it just unmounts them. So, if you migrate the disks to another box after only shutting down, you will not be able to import the storage pool on the new box.
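
To avoid that situation, export the pool yourself before moving the disks. If you do forget, "zpool import -f" will usually let you force the import on the new box anyway (an assumption based on standard zpool behavior; only force it if you are certain the old host is no longer using the pool):

{{code language="bash session"}}
(server A)# zpool export tank
(server B)# zpool import -f tank
{{/code}}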