Last modified by Drunk Monkey on 2024-09-01 12:44

1 [[Our previous post finished our discussion about VDEVs by going into great detail about the ZFS ARC>>doc:Tech-Tips.ZFS-The-Aaron-Topponce-Archive.ZFS-Administration-Part-IV-The-Adjustable-Replacement-Cache.WebHome]]. Here, we'll continue our discussion about ZFS storage pools: how to migrate them across systems by exporting and importing the pools, how to recover from destroyed pools, and how to upgrade your storage pool to the latest version.
2
3 == Motivation ==
4
5 As a GNU/Linux storage administrator, you may come across the need to move your storage from one server to another. This could be accomplished by physically moving the disks from one storage box to another, or by copying the data from the old live running system to the new. I will cover both cases in this series. The latter deals with sending and receiving ZFS snapshots, a topic that will take us some time getting to. This post will deal with the former; that is, physically moving the drives.
6
7 One slick feature of ZFS is the ability to export your storage pool, so you can disassemble the drives, unplug their cables, and move the drives to another system. Once on the new system, ZFS gives you the ability to import the storage pool, regardless of the order of the drives. A good demonstration of this is to grab some USB sticks, plug them in, and create a ZFS storage pool. Then export the pool, unplug the sticks, drop them into a hat, and mix them up. Then, plug them back in in any random order, and re-import the pool on a new box. In fact, ZFS is smart enough to detect endianness. In other words, you can export the storage pool from a big endian system, and import the pool on a little endian system, without a hiccup.
8
9 == Exporting Storage Pools ==
10
11 When the migration is ready to take place, before unplugging the power, you need to export the storage pool. This causes the kernel to flush all pending data to disk, write metadata to the disk acknowledging that the export was done, and remove all knowledge that the storage pool existed in the system. At this point, it's safe to shut down the computer and remove the drives.
12
13 If you do not export the storage pool before removing the drives, you might lose unwritten data that was never flushed to disk, and you will have trouble importing the drives on the new system. Even though the data will remain consistent due to the nature of the filesystem, the pool will appear, when importing, to still belong to the old system. Further, the destination system will refuse to import a pool that has not been explicitly exported. This is to prevent race conditions with network attached storage that may already be using the pool.
14
15 To export a storage pool, use the following command:
16
17 {{code language="bash session"}}
18 # zpool export tank
19 {{/code}}
20
21 This command will attempt to unmount all ZFS datasets as well as the pool. By default, when creating ZFS storage pools and filesystems, they are automatically mounted to the system. There is no need to explicitly unmount the filesystems as you would with ext3 or ext4; the export will handle that. Further, some pools may refuse to be exported, for whatever reason. You can pass the "-f" switch if needed to force the export.
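If a dataset is busy, say because a process still has a file open on it, the export can fail. A forced export of the example pool "tank" from above would look like this:

{{code language="bash session"}}
# zpool export -f tank
{{/code}}

Be careful with the force switch: any applications still writing to the pool will have the filesystem pulled out from under them, so stop them first where possible.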
22
23 == Importing Storage Pools ==
24
25 Once the drives have been physically installed into the new server, you can import the pool. Further, the new system may have multiple pools attached, so you will need to determine which pool to import, or whether to import them all. If the storage pool "tank" does not already exist on the new server, and this is the pool you wish to import, then you can run the following command:
26
27 {{code language="bash session"}}
28 # zpool import tank
29 # zpool status tank
30 state: ONLINE
31 scan: none requested
32 config:
33
34 NAME        STATE     READ WRITE CKSUM
35 tank        ONLINE       0     0     0
36   mirror-0  ONLINE       0     0     0
37     sde     ONLINE       0     0     0
38     sdf     ONLINE       0     0     0
39   mirror-1  ONLINE       0     0     0
40     sdg     ONLINE       0     0     0
41     sdh     ONLINE       0     0     0
42   mirror-2  ONLINE       0     0     0
43     sdi     ONLINE       0     0     0
44     sdj     ONLINE       0     0     0
45
46 errors: No known data errors
47 {{/code}}
48
49 Your storage pool state may not be "ONLINE", meaning that everything is healthy. If the system does not recognize a disk in your pool, you may get a "DEGRADED" state. If one or more of the drives appear as faulty to the system, then you may get a "FAULTED" state in your pool. You will need to troubleshoot which drives are causing the problem, and fix them accordingly.
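If you are not sure which pools the new system can even see, running "zpool import" with no arguments scans the attached disks and lists every pool that is available for import, along with its name, numeric ID, and state, without actually importing anything. This makes it a safe first step before committing to an import:

{{code language="bash session"}}
# zpool import
{{/code}}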
50
51 You can import multiple pools simultaneously, either by specifying each pool as an argument, or by passing the "-a" switch to import all discovered pools. To import the two pools "tank1" and "tank2", type:
52
53 {{code language="bash session"}}
54 # zpool import tank1 tank2
55 {{/code}}
56
57 For importing all known pools, type:
58
59 {{code language="bash session"}}
60 # zpool import -a
61 {{/code}}
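If the drives were detected under different device names on the new box, or they live somewhere other than "/dev", you can point the import at a specific device directory with the "-d" switch. For example, to search the persistent "/dev/disk/by-id" names:

{{code language="bash session"}}
# zpool import -d /dev/disk/by-id tank
{{/code}}

This is also a handy way to re-import a pool so that "zpool status" reports stable device names, rather than names like "sde" that can change between boots.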
62
63 == Recovering A Destroyed Pool ==
64
65 If a ZFS storage pool was previously destroyed, the pool can still be imported to the system. Destroying a pool doesn't wipe the data on the disks, so the metadata is still intact, and the pool can still be discovered. Let's take a clean pool called "tank", destroy it, move the disks to a new system, then try to import the pool. You will need to pass the "-D" switch to tell ZFS to import a destroyed pool. Do not provide the pool name as an argument, as you normally would:
66
67 {{code language="bash session"}}
68 (server A)# zpool destroy tank
69 (server B)# zpool import -D
70 pool: tank
71 id: 17105118590326096187
72 state: ONLINE (DESTROYED)
73 action: The pool can be imported using its name or numeric identifier.
74 config:
75
76 tank        ONLINE
77   mirror-0  ONLINE
78     sde     ONLINE
79     sdf     ONLINE
80   mirror-1  ONLINE
81     sdg     ONLINE
82     sdh     ONLINE
83   mirror-2  ONLINE
84     sdi     ONLINE
85     sdj     ONLINE
86
87 pool: tank
88 id: 2911384395464928396
89 state: UNAVAIL (DESTROYED)
90 status: One or more devices are missing from the system.
91 action: The pool cannot be imported. Attach the missing
92 devices and try again.
93 see: http://zfsonlinux.org/msg/ZFS-8000-6X
94 config:
95
96 tank        UNAVAIL  missing device
97   sdk       ONLINE
98   sdr       ONLINE
99
100 Additional devices are known to be part of this pool, though their
101 exact configuration cannot be determined.
102 {{/code}}
103
104 Notice that the state of the pool is "ONLINE (DESTROYED)". Even though the pool is "ONLINE", it is only partially online. Basically, it's only been discovered, but it's not available for use. If you run the "df" command, you will find that the storage pool is not mounted. This means the ZFS filesystem datasets are not available, and you currently cannot store data into the pool. However, ZFS has found the pool, and you can bring it fully ONLINE for standard usage by running the import command one more time, this time specifying the pool name as an argument to import:
105
106 {{code language="bash session"}}
107 (server B)# zpool import -D tank
108 cannot import 'tank': more than one matching pool
109 import by numeric ID instead
110 (server B)# zpool import -D 17105118590326096187
111 (server B)# zpool status tank
112 pool: tank
113 state: ONLINE
114 scan: none requested
115 config:
116
117 NAME        STATE     READ WRITE CKSUM
118 tank        ONLINE       0     0     0
119   mirror-0  ONLINE       0     0     0
120     sde     ONLINE       0     0     0
121     sdf     ONLINE       0     0     0
122   mirror-1  ONLINE       0     0     0
123     sdg     ONLINE       0     0     0
124     sdh     ONLINE       0     0     0
125   mirror-2  ONLINE       0     0     0
126     sdi     ONLINE       0     0     0
127     sdj     ONLINE       0     0     0
128
129 errors: No known data errors
130 {{/code}}
131
132 Notice that ZFS warned me that it found more than one storage pool matching the name "tank", and that to import the pool, I must use its unique numeric identifier. This is because, as the previous output shows, there are two known pools with the pool name "tank". So, I passed the ID from that earlier output as the argument instead. After specifying its ID, I was able to successfully bring the storage pool to full "ONLINE" status. You can verify this by checking its status:
133
134 {{code language="bash session"}}
135 # zpool status tank
136 pool: tank
137 state: ONLINE
138 status: The pool is formatted using an older on-disk format. The pool can
139 still be used, but some features are unavailable.
140 action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
141 pool will no longer be accessible on older software versions.
142 scrub: none requested
143 config:
144
145 NAME        STATE     READ WRITE CKSUM
146 tank        ONLINE       0     0     0
147   mirror-0  ONLINE       0     0     0
148     sde     ONLINE       0     0     0
149     sdf     ONLINE       0     0     0
150   mirror-1  ONLINE       0     0     0
151     sdg     ONLINE       0     0     0
152     sdh     ONLINE       0     0     0
153   mirror-2  ONLINE       0     0     0
154     sdi     ONLINE       0     0     0
155     sdj     ONLINE       0     0     0
156 {{/code}}
157
158 == Upgrading Storage Pools ==
159
160 One thing that may crop up when migrating disks is that the two systems may run different pool and filesystem versions of the software. For example, you may have exported the pool on a system running pool version 20, while importing into a system that supports pool version 28. In that case, you can upgrade your pool to the latest version the new software supports. As is evident from the previous example, the new server is running an updated version of the software, so I am going to upgrade.
161
162 __**WARNING:**__ Once you upgrade your pool to a newer version of ZFS, older versions will not be able to use the storage pool. So, before you upgrade the pool, make sure you will have no need to go back to the old system. Further, there is no way to revert the upgrade; a pool cannot be downgraded to an older version.
163
164 First, we can see a brief description of features that will be available to the pool:
165
166 {{code language="bash session"}}
167 # zpool upgrade -v
168 This system is currently running ZFS pool version 28.
169
170 The following versions are supported:
171
172 VER DESCRIPTION
173 --- --------------------------------------------------------
174 1 Initial ZFS version
175 2 Ditto blocks (replicated metadata)
176 3 Hot spares and double parity RAID-Z
177 4 zpool history
178 5 Compression using the gzip algorithm
179 6 bootfs pool property
180 7 Separate intent log devices
181 8 Delegated administration
182 9 refquota and refreservation properties
183 10 Cache devices
184 11 Improved scrub performance
185 12 Snapshot properties
186 13 snapused property
187 14 passthrough-x aclinherit
188 15 user/group space accounting
189 16 stmf property support
190 17 Triple-parity RAID-Z
191 18 Snapshot user holds
192 19 Log device removal
193 20 Compression using zle (zero-length encoding)
194 21 Deduplication
195 22 Received properties
196 23 Slim ZIL
197 24 System attributes
198 25 Improved scrub stats
199 26 Improved snapshot deletion performance
200 27 Improved snapshot creation performance
201 28 Multiple vdev replacements
202
203 For more information on a particular version, including supported releases,
204 see the ZFS Administration Guide.
205 {{/code}}
206
207 So, let's perform the upgrade to get to version 28 of the pool:
208
209 {{code language="bash session"}}
210 # zpool upgrade -a
211 {{/code}}
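If you would rather upgrade one pool at a time instead of every pool on the system, pass the pool name rather than the "-a" switch. Running "zpool upgrade" with no arguments afterwards lists any pools still formatted with an older version:

{{code language="bash session"}}
# zpool upgrade tank
# zpool upgrade
{{/code}}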
212
213 As a sidenote, when using ZFS on Linux, the RPM and Debian packages will contain an /etc/init.d/zfs init script for setting up the pools and datasets on boot. This is done by importing them on boot. However, at shutdown, the init script does not export the pools; rather, it just unmounts them. So, if you migrate the disks to another box after only shutting down, you will not be able to import the storage pool on the new box.
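If you do end up in that situation, the pool is not lost. The "-f" switch tells the new system to force the import of a pool that was never cleanly exported. Only do this when you are certain the old system is no longer using the pool:

{{code language="bash session"}}
# zpool import -f tank
{{/code}}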
214
215 == Conclusion ==
216
217 There are plenty of situations where you may need to move disks from one storage server to another. Thankfully, ZFS makes this easy by exporting and importing pools. Further, the "zpool" command has enough subcommands and switches to handle the most common scenarios when a pool will not export or import. Towards the very end of the series, I'll discuss the "zdb" command, and how it may be useful here. But at this point, steer clear of zdb, and just focus on keeping your pools in order, and properly exporting and importing them as needed.
218
219
220 ----
221
222 (% style="text-align: center;" %)
223 Posted by Aaron Toponce on Monday, December 10, 2012, at 6:00 am.
224 Filed under [[Debian>>url:https://web.archive.org/web/20210430213515/https://pthree.org/category/debian/]], [[Linux>>url:https://web.archive.org/web/20210430213515/https://pthree.org/category/linux/]], [[Ubuntu>>url:https://web.archive.org/web/20210430213515/https://pthree.org/category/ubuntu/]], [[ZFS>>url:https://web.archive.org/web/20210430213515/https://pthree.org/category/zfs/]].
225 Follow any responses to this post with its [[comments RSS>>url:https://web.archive.org/web/20210430213515/https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/feed/]] feed.
226 You can [[post a comment>>url:https://web.archive.org/web/20210430213515/https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/#respond]] or [[trackback>>url:https://web.archive.org/web/20210430213515/https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/trackback/]] from your blog.
227 For IM, Email or Microblogs, here is the [[Shortlink>>url:https://web.archive.org/web/20210430213515/https://pthree.org/?p=2594]].
228
229 ----
230
231 {{box title="**Archived From:**"}}
232 [[https:~~/~~/web.archive.org/web/20210430213515/https:~~/~~/pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/>>https://web.archive.org/web/20210430213515/https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/]]
233 {{/box}}