

Pool Configuration

RAIDZ2: double-parity RAID that can survive two simultaneous drive failures. Recommended for most production use cases.

Pool Summary

Pool Layout: RAIDZ2
Usable Capacity: 95.08 TiB (104.54 TB)
Storage Efficiency: 72.6% of raw capacity
Raw Capacity: 130.97 TiB (8 drives total)
Fault Tolerance: 2 drives per vDev can fail

Performance & Reliability

Relative Read Speed: 0.8x
Relative Write Speed: 0.8x
Est. Resilver Time: 98 hours
Annual Failure Risk: <0.01%

Cost Analysis

Total Drive Cost: $2,000
Cost per Usable TiB: $21.03
Slop Space Reserved: 3.14 TiB


Understanding ZFS vDevs, Pools & RAIDZ Levels

What is a vDev?

A virtual device (vDev) is a group of physical drives configured with a specific redundancy level. Your ZFS pool consists of one or more vDevs striped together. Performance scales with the number of vDevs, but if any single vDev fails completely, you lose the entire pool.

RAIDZ1 vs RAIDZ2 vs RAIDZ3

RAIDZ1: Single parity, can lose 1 drive. Best for 3-5 drive vDevs with less critical data.
RAIDZ2: Double parity, can lose 2 drives. Recommended for most users with 4-8 drive vDevs.
RAIDZ3: Triple parity, can lose 3 drives. For maximum protection in 8+ drive vDevs.

Mirrors vs RAIDZ

Mirrors: Best performance, fastest rebuilds, but only 50% capacity. Ideal for VMs and databases.
RAIDZ: Better capacity efficiency, but slower random I/O. Better for large file storage and archives.

How Much Space Does ZFS Overhead Consume?

ZFS reserves ~3.2% as "slop space" for internal operations. Additional overhead includes metadata (varies by file count), checksums, and ashift padding. Expect 5-10% total overhead beyond parity in typical use.

Best ZFS Pool Layout for TrueNAS & FreeNAS

1. Use RAIDZ2 for important data

With modern high-capacity drives (8TB+), resilver times are long. RAIDZ2 protects against a second drive failure during the rebuild.

2. Keep vDevs to 3-9 drives

Never exceed 12 drives per vDev. Smaller vDevs mean faster resilvers and better performance.

3. Use multiple vDevs for performance

Pool performance scales with vDev count. Two 4-drive RAIDZ1 vDevs outperform one 8-drive RAIDZ2 for random I/O.

4. Match drive sizes within vDevs

ZFS treats every drive in a vDev as if it were the size of the smallest drive, so capacity on larger drives goes unused. Mix sizes only if necessary.

5. Consider hot spares

Hot spares automatically replace failed drives, reducing your vulnerability window during resilver.
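The capacity side of tip 3 can be checked with a short sketch (the helper function and drive counts here are illustrative, not part of any particular calculator):

```python
def data_drives(drives_per_vdev: int, parity: int, vdev_count: int) -> int:
    """Number of drives contributing usable capacity across all vDevs."""
    return (drives_per_vdev - parity) * vdev_count

# Two 4-drive RAIDZ1 vDevs vs. one 8-drive RAIDZ2 vDev:
two_raidz1 = data_drives(4, parity=1, vdev_count=2)  # 6 data drives, 2 vDevs
one_raidz2 = data_drives(8, parity=2, vdev_count=1)  # 6 data drives, 1 vDev

print(two_raidz1, one_raidz2)  # 6 6
```

Both layouts yield six data drives' worth of capacity; the RAIDZ1 pair trades fault tolerance (only one failure per vDev) for roughly double the random I/O of the single RAIDZ2 vDev.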

Frequently Asked Questions

What is the best ZFS RAID level for TrueNAS?

For most home and small business users, RAIDZ2 is recommended as it provides protection against two simultaneous drive failures. For high-performance needs, mirror vDevs offer the best read/write speeds. RAIDZ1 is suitable for less critical data with 3-5 drives.

How many drives should I put in a RAIDZ vDev?

RAIDZ1: 3-5 drives
RAIDZ2: 4-8 drives
RAIDZ3: 5-12 drives
Never exceed 12 drives per vDev. Use multiple vDevs for larger pools.

Why is my actual ZFS capacity different from calculated?

Several factors affect final capacity: marketing vs actual drive sizes (manufacturers define 1 TB as 10^12 bytes, while ZFS reports in TiB, where 1 TiB = 2^40 bytes), slop space reservation (~3.2%), metadata overhead, and RAID parity. This calculator accounts for these factors.

Can I expand a RAIDZ vDev?

Yes! OpenZFS 2.2+ supports RAIDZ expansion, allowing you to add one drive at a time to an existing RAIDZ vDev. The expansion reflows existing data across the new layout, which can take days for large pools, and previously written data keeps its original data-to-parity ratio until it is rewritten. Use our RAIDZ Expansion Calculator to estimate capacity gains and expansion time for your pool.

What is dRAID and should I use it?

dRAID (distributed RAID) is designed for very large arrays (100+ drives) where fast resilver times are critical. For home and small business use with fewer than 50 drives, traditional RAIDZ or mirrors are recommended due to their flexibility and simpler management.

How This Calculator Works: ZFS Capacity Methodology

Step 1: Marketing to Actual Capacity

Drive manufacturers use decimal (base-10) marketing where 1TB = 1,000,000,000,000 bytes. However, ZFS and your operating system use binary (base-2) measurements where 1 TiB = 1,099,511,627,776 bytes.

Actual GiB = Marketing GB × (1000 / 1024)^3 ≈ Marketing GB × 0.9313
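The Step 1 conversion can be sketched in a few lines (the function name is mine):

```python
def tb_to_tib(marketing_tb: float) -> float:
    """Convert a drive's marketing capacity (decimal TB) to the binary TiB ZFS reports."""
    return marketing_tb * 10**12 / 2**40  # 1 TB = 10^12 bytes, 1 TiB = 2^40 bytes

print(round(tb_to_tib(18), 2))  # an 18 TB drive shows up as ~16.37 TiB
```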

Step 2: RAID Parity Overhead

Each RAIDZ level dedicates drives to parity protection:

RAIDZ1: Usable = (Drives - 1) x Drive Size
RAIDZ2: Usable = (Drives - 2) x Drive Size
RAIDZ3: Usable = (Drives - 3) x Drive Size
Mirror: Usable = Drive Size (50% efficiency)
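The Step 2 formulas above translate directly into code; a minimal sketch (the layout names and function are my shorthand, and "mirror" here means an n-way mirror yielding one drive's worth of space):

```python
PARITY = {"raidz1": 1, "raidz2": 2, "raidz3": 3}  # parity drives per vDev

def raid_usable_tib(drives: int, drive_tib: float, layout: str) -> float:
    """Usable capacity of one vDev after parity overhead, before slop space."""
    if layout == "mirror":
        return drive_tib  # n-way mirror: capacity of a single drive
    return (drives - PARITY[layout]) * drive_tib

# 8 drives of ~16.37 TiB each in RAIDZ2:
print(round(raid_usable_tib(8, 16.37, "raidz2"), 2))  # 98.22
```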

Step 3: ZFS Slop Space Reservation

ZFS reserves approximately 3.2% of pool capacity as "slop space" for internal operations.

Final Capacity = RAID Usable × (1 − 0.032)

Step 4: Additional Overhead Factors

Real-world capacity may vary due to:

  • Ashift padding: Sector alignment overhead (0.1-2%)
  • Metadata overhead: Varies by file count and recordsize
  • Checksums: Small overhead for data integrity
  • Recordsize efficiency: Partial blocks waste space
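Putting Steps 1-3 together, a minimal end-to-end sketch reproduces the summary figures above for 8 × 18 TB drives in RAIDZ2 (the Step 4 factors — metadata, ashift padding, checksums — are not modeled, and the function name is mine):

```python
SLOP_FRACTION = 0.032  # Step 3: approximate slop-space reservation

def pool_usable_tib(drives: int, marketing_tb: float, parity: int) -> float:
    drive_tib = marketing_tb * 10**12 / 2**40     # Step 1: decimal TB -> binary TiB
    raid_usable = (drives - parity) * drive_tib   # Step 2: subtract parity drives
    return raid_usable * (1 - SLOP_FRACTION)      # Step 3: reserve slop space

# 8 x 18 TB drives in RAIDZ2, as in the Pool Summary:
print(round(pool_usable_tib(8, 18, 2), 2))  # 95.08 TiB
```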