Last modified by Drunk Monkey on 2024-09-01 12:39

From version 4.3
edited by Drunk Monkey
on 2024-09-01 08:54
Change comment: There is no comment for this version
To version 2.1
edited by Drunk Monkey
on 2024-09-01 08:28
Change comment: There is no comment for this version

Details

Page properties
Title
... ... @@ -1,1 +1,1 @@
1 -ZFS Administration - Part I - VDEVs
1 +ZFS Administration Part I VDEVs
Content
... ... @@ -37,17 +37,14 @@
37 37  
38 38  Let's start by creating a simple zpool with my 4 drives. I could create a zpool named "tank" with the following command:
39 39  
40 -{{code language="bash session"}}
41 -# zpool create tank sde sdf sdg sdh
42 -{{/code}}
40 +{{{# zpool create tank sde sdf sdg sdh}}}
43 43  
44 44  In this case, I'm using four disk VDEVs. Notice that I'm not using full device paths, although I could. Because VDEVs are always dynamically striped, this is effectively a RAID-0 between four drives (no redundancy). We should also check the status of the zpool:
45 45  
46 -{{code language="bash session"}}
47 -# zpool status tank
44 +{{{# zpool status tank
48 48   pool: tank
49 49   state: ONLINE
50 - scan: none requested
47 + scan: none requested
51 51  config:
52 52  
53 53   NAME STATE READ WRITE CKSUM
... ... @@ -57,26 +57,21 @@
57 57   sdg ONLINE 0 0 0
58 58   sdh ONLINE 0 0 0
59 59  
60 -errors: No known data errors
61 -{{/code}}
57 +errors: No known data errors}}}
62 62  
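As an aside, since the note above says full device paths would also work, here is a minimal sketch of the same striped pool built with absolute paths (same four drives assumed):

{{code language="bash session"}}
# zpool create tank /dev/sde /dev/sdf /dev/sdg /dev/sdh
{{/code}}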
63 63  Let's tear down the zpool, and create a new one. Run the following before continuing, if you're following along in your own terminal:
64 64  
65 -{{code language="bash session"}}
66 -# zpool destroy tank
67 -{{/code}}
61 +{{{# zpool destroy tank}}}
68 68  
69 -
70 70  == A simple mirrored zpool ==
71 71  
72 72  In this next example, I wish to mirror all four drives (/dev/sde, /dev/sdf, /dev/sdg and /dev/sdh). So, rather than using the disk VDEV, I'll be using "mirror". The command is as follows:
73 73  
74 -{{code language="bash session"}}
75 -# zpool create tank mirror sde sdf sdg sdh
67 +{{{# zpool create tank mirror sde sdf sdg sdh
76 76  # zpool status tank
77 77   pool: tank
78 78   state: ONLINE
79 - scan: none requested
71 + scan: none requested
80 80  config:
81 81  
82 82   NAME STATE READ WRITE CKSUM
... ... @@ -87,8 +87,7 @@
87 87   sdg ONLINE 0 0 0
88 88   sdh ONLINE 0 0 0
89 89  
90 -errors: No known data errors
91 -{{/code}}
82 +errors: No known data errors}}}
92 92  
93 93  Notice that "mirror-0" is now the VDEV, with each physical device managed by it. As mentioned earlier, this would be analogous to a Linux software RAID "/dev/md0" device representing the four physical devices. Let's now clean up our pool, and create another.
94 94  
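If you're following along in your own terminal, the pool can be torn down the same way as before, prior to building the next one:

{{code language="bash session"}}
# zpool destroy tank
{{/code}}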
... ... @@ -226,7 +226,6 @@
226 226  This should act as a good starting point for getting a basic understanding of zpools and VDEVs. The rest of it is all downhill from here. You've made it over the "big hurdle" of understanding how ZFS handles pooled storage. We still need to cover RAIDZ levels, go into more depth about log and cache devices, and look at pool settings such as deduplication and compression, but all of these will be handled in separate posts. Then we can get into ZFS filesystem datasets, their settings, and their advantages and disadvantages. But, you now have a head start on the core part of ZFS pools.
227 227  
228 228  ----
229 -
230 230  (% style="text-align: center;" %)
231 231  Posted by Aaron Toponce on Tuesday, December 4, 2012, at 6:00 am.
232 232  Filed under [[Debian>>url:https://web.archive.org/web/20210430213532/https://pthree.org/category/debian/]], [[Linux>>url:https://web.archive.org/web/20210430213532/https://pthree.org/category/linux/]], [[Ubuntu>>url:https://web.archive.org/web/20210430213532/https://pthree.org/category/ubuntu/]], [[ZFS>>url:https://web.archive.org/web/20210430213532/https://pthree.org/category/zfs/]].
... ... @@ -233,9 +233,8 @@
233 233  Follow any responses to this post with its [[comments RSS>>url:https://web.archive.org/web/20210430213532/https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/feed/]] feed.
234 234  You can [[post a comment>>url:https://web.archive.org/web/20210430213532/https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/#respond]] or [[trackback>>url:https://web.archive.org/web/20210430213532/https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/trackback/]] from your blog.
235 235  For IM, Email or Microblogs, here is the [[Shortlink>>url:https://web.archive.org/web/20210430213532/https://pthree.org/?p=2584]].
236 -
237 237  ----
238 238  
239 -{{box title="**Archived From:**"}}
240 -[[https:~~/~~/web.archive.org/web/20210430213532/https:~~/~~/pthree.org/2012/12/04/zfs-administration-part-i-vdevs/>>https://web.archive.org/web/20210430213532/https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/]]
241 -{{/box}}
228 +{{info}}
229 +Retrieved from [[https:~~/~~/web.archive.org/web/20210605031658/https:~~/~~/pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/>>https://web.archive.org/web/20210605031658/https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/]]
230 +{{/info}}