ZFS pool in an iSCSI ZVOL

April 20, 2007

My last post, about backing up the ZFS pool on my laptop to iSCSI targets exported from a server and backed by ZFS ZVOLs on that server, prompted a comment and also prompted me to think about whether this would be a worthwhile thing in the real world.
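
For anyone who missed that post, the moving parts look roughly like this (a sketch only: the pool names, size and address are made up, and shareiscsi assumes a recent Nevada/Solaris Express build):

    # On the server: carve a ZVOL out of the server pool and export it
    zfs create -V 32g serverpool/laptop
    zfs set shareiscsi=on serverpool/laptop

    # On the laptop: discover the target and build the client pool on it
    iscsiadm add discovery-address 192.168.1.10:3260
    iscsiadm modify discovery --sendtargets enable
    devfsadm -i iscsi
    zpool create clientpool c2t600144F04849ED61d0   # actual device name from format(1M)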

Initially I would say no; however, it does offer the tantalizing possibility of allowing the administrator of the system hosting the iSCSI targets to take backups of the pools without interfering with the contents of those pools at all.

It also lets you split the snapshots for users, which would all live in the client pool, from the snapshots for administrators, which would all live in the server pool and exist essentially for disaster recovery.
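
In other words, something like this (names again invented for illustration):

    # Users' snapshots live in the client pool, taken on the laptop:
    zfs snapshot clientpool/home@before-upgrade

    # Administrators' disaster-recovery snapshots live in the server pool,
    # taken against the ZVOL without touching the client pool's contents:
    zfs snapshot serverpool/laptop@nightly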

If the server went pop, the recovery would be to build a new server and then restore the ZVOL, which would then contain the whole client pool with all of the client pool’s snapshots. Similarly, if the client pool were to become corrupted, you could roll it back to a good state by rolling back the ZVOL on the server pool. Now clearly the selling point of ZFS is an always-consistent on-disk format, so this is less of a risk than with other file systems (unless there are bugs), but the belt-and-braces approach appeals to the latent sysadmin in me, who knows that the performance of a storage system that has lost your data is zero.
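
The rollback case would look something like this (a sketch, using the same invented names; the client pool should be exported first, since rolling the device back underneath an imported pool would be asking for trouble):

    zpool export clientpool                   # on the laptop
    zfs rollback serverpool/laptop@nightly    # on the server
    zpool import clientpool                   # on the laptop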

I’m going to see if I can build a server like this to see how well it performs, but that won’t be for at least a few weeks.

Tags: topic:[zfs] topic:[iscsi]

From → Solaris

2 Comments
  1. I have a similar setup to this between two build servers. The V880 has all the storage attached to it. It has two pools, one for its local build area and another that is exported using iSCSI to the v40z machine. I don’t think I got the ZFS setup correct though, because on the V880 (the iSCSI target) I created a simple pool with all 12 disks in it (no mirror or raidz) and created two zvols, m1 and m2, which the iSCSI initiator (the v40z) uses to create a mirrored ZFS pool. I have a feeling this isn’t optimal and we could do with some guidelines/best practices on where you do the mirroring/raidz when you are using ZFS to host the target files and putting ZFS on those target files on the initiator.

  2. This certainly is not optimal:
    If a single disk fails then the pool containing the two zvols will be lost, along with both zvols, so everything is gone.
    Much better to make the original container pool RAIDZ or a mirror and then have a simple pool on top of that, as sketched below.
    The question that this raises for me, though, is: why not use NFS? Is this an attempt to improve performance? Did it work?
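
That recommendation would look something like this on the V880 (a sketch only; disk, pool and zvol names are invented, and fewer than the real twelve disks are shown):

    # On the V880 (target): put the redundancy in the container pool
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
    zfs create -V 100g tank/m1
    zfs set shareiscsi=on tank/m1

    # On the v40z (initiator): a simple, unmirrored pool is now enough
    zpool create buildpool c4t0d0   # actual device name from format(1M)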
