Changes for page ZFS Administration - Part II - RAIDZ
Last modified by Drunk Monkey on 2024-09-01 09:09
From version 1.2
edited by Drunk Monkey on 2024-09-01 09:06
Change comment: There is no comment for this version
To version 2.1
edited by Drunk Monkey on 2024-09-01 09:09
Change comment: There is no comment for this version
@@ -13,7 +13,7 @@
 Enter RAIDZ. Rather than the stripe width be statically set at creation, the stripe width is dynamic. Every block transactionally flushed to disk is its own stripe width. Every RAIDZ write is a full stripe write. Further, the parity bit is flushed with the stripe simultaneously, completely eliminating the RAID-5 write hole. So, in the event of a power failure, you either have the latest flush of data, or you don't. But, your disks will not be inconsistent.
 
 
-[[image: https://web.archive.org/web/20210430213528im_/https://pthree.org/wp-content/uploads/2012/12/raid5-vs-raidz11.png||alt="Image showing the stripe differences between RAID5 and RAIDZ-1."]]
+[[image:raid5-vs-raidz11.png]]
 //Demonstrating the dynamic stripe size of RAIDZ//
 
 
@@ -33,11 +33,12 @@
 
 To setup a zpool with RAIDZ-1, we use the "raidz1" VDEV, in this case using only 3 USB drives:
 
-{{code language="bash session"}}# zpool create tank raidz1 sde sdf sdg
+{{code language="bash session"}}
+# zpool create tank raidz1 sde sdf sdg
 # zpool status tank
   pool: pool
  state: ONLINE
- scan: none requested
+  scan: none requested
 config:
 
         NAME        STATE     READ WRITE CKSUM
@@ -47,11 +47,14 @@
         sdf         ONLINE       0     0     0
         sdg         ONLINE       0     0     0
 
-errors: No known data errors{{/code}}
+errors: No known data errors
+{{/code}}
 
 Cleanup before moving on, if following in your terminal:
 
-{{code language="bash session"}}# zpool destroy tank{{/code}}
+{{code language="bash session"}}
+# zpool destroy tank
+{{/code}}
 
 === RAIDZ-2 ===
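Each RAIDZ level trades one disk's worth of space per parity bit in exchange for fault tolerance. As a rough sketch of that trade-off (the 2 TB disk size is a hypothetical figure, and real usable space is somewhat lower due to metadata, allocation padding, and reserved space):

```shell
# Approximate usable space of a single RAIDZ vdev: (disks - parity) * disk_size.
# A sketch only: actual ZFS usable space is lower because of metadata,
# allocation padding, and reserved space. The 2 TB disk size is hypothetical.
usable_tb() {
    disks=$1
    parity=$2
    size_tb=$3
    echo $(( (disks - parity) * size_tb ))
}

usable_tb 3 1 2   # 3-disk raidz1, as in the example above -> prints 4
usable_tb 4 2 2   # 4-disk raidz2 -> prints 4
usable_tb 5 3 2   # 5-disk raidz3 -> prints 4
```

With these vdev widths, each level stores roughly the same 4 TB of data, but tolerates one, two, or three disk failures respectively.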
@@ -59,11 +59,12 @@
 
 To setup a zpool with RAIDZ-2, we use the "raidz2" VDEV:
 
-{{code language="bash session"}}# zpool create tank raidz2 sde sdf sdg sdh
+{{code language="bash session"}}
+# zpool create tank raidz2 sde sdf sdg sdh
 # zpool status tank
   pool: pool
  state: ONLINE
- scan: none requested
+  scan: none requested
 config:
 
         NAME        STATE     READ WRITE CKSUM
@@ -74,11 +74,14 @@
         sdg         ONLINE       0     0     0
         sdh         ONLINE       0     0     0
 
-errors: No known data errors{{/code}}
+errors: No known data errors
+{{/code}}
 
 Cleanup before moving on, if following in your terminal:
 
-{{code language="bash session"}}# zpool destroy tank{{/code}}
+{{code language="bash session"}}
+# zpool destroy tank
+{{/code}}
 
 === RAIDZ-3 ===
 
@@ -86,7 +86,8 @@
 
 To setup a zpool with RAIDZ-3, we use the "raidz3" VDEV:
 
-{{code language="bash session"}}# zpool create tank raidz3 sde sdf sdg sdh sdi
+{{code language="bash session"}}
+# zpool create tank raidz3 sde sdf sdg sdh sdi
 # zpool status tank
   pool: pool
  state: ONLINE
@@ -102,11 +102,14 @@
         sdh         ONLINE       0     0     0
         sdi         ONLINE       0     0     0
 
-errors: No known data errors{{/code}}
+errors: No known data errors
+{{/code}}
 
 Cleanup before moving on, if following in your terminal:
 
-{{code language="bash session"}}# zpool destroy tank{{/code}}
+{{code language="bash session"}}
+# zpool destroy tank
+{{/code}}
 
 === Hybrid RAIDZ ===
 
@@ -116,7 +116,8 @@
 
 To setup a zpool with 4 RAIDZ-1 VDEVs, we use the "raidz1" VDEV 4 times in our command.
 Notice that I've added emphasis on when to type "raidz1" in the command for clarity:
 
-{{code language="bash session"}}# zpool create tank raidz1 sde sdf sdg raidz1 sdh sdi sdj raidz1 sdk sdl sdm raidz1 sdn sdo sdp
+{{code language="bash session"}}
+# zpool create tank raidz1 sde sdf sdg raidz1 sdh sdi sdj raidz1 sdk sdl sdm raidz1 sdn sdo sdp
 # zpool status tank
   pool: pool
  state: ONLINE
 
@@ -142,13 +142,16 @@
         sdo         ONLINE       0     0     0
         sdp         ONLINE       0     0     0
 
-errors: No known data errors{{/code}}
+errors: No known data errors
+{{/code}}
 
 Notice now that there are four RAIDZ-1 VDEVs. As mentioned in a previous post, ZFS stripes across VDEVs. So, this setup is essentially a RAIDZ-1+0. Each RAIDZ-1 VDEV will receive 1/4 of the data sent to the pool, then each striped piece will be further striped across the disks in each VDEV. Nested VDEVs can be a great way to keep performance alive and well, long after the pool has been massively fragmented.
 
 Cleanup before moving on, if following in your terminal:
 
-{{code language="bash session"}}# zpool destroy tank{{/code}}
+{{code language="bash session"}}
+# zpool destroy tank
+{{/code}}
 
 === Some final thoughts on RAIDZ ===
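Because ZFS stripes across top-level VDEVs, the capacity of the hybrid pool is simply the sum of its VDEVs' capacities. A sketch of the 4 x 3-disk RAIDZ-1 layout from the hunk above (the 2 TB disk size is hypothetical, and metadata/padding overhead is ignored):

```shell
# Capacity sketch for the striped pool above: four RAIDZ-1 vdevs of three
# disks each. Each vdev contributes (3 - 1) * disk_size, and vdev
# contributions add because the pool stripes across them.
# The 2 TB disk size is hypothetical; ZFS overhead is ignored.
disk_tb=2
vdevs=4
disks_per_vdev=3
parity=1

pool_tb=$(( vdevs * (disks_per_vdev - parity) * disk_tb ))
echo "${pool_tb} TB usable from $(( vdevs * disks_per_vdev )) disks"
```

The same 12 disks in a single RAIDZ-1 VDEV would yield more raw space, but the four-VDEV layout tolerates one failure per VDEV and resilvers far less data per failed disk.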