Storage Spaces Direct in Server 2016

Technical Preview 4

Highly available storage with affordable components

Since the launch of Server 2012, Microsoft has been pursuing the idea of highly available storage deployments with Storage Spaces, moving away from expensive and complex Fibre Channel infrastructure. From Server 2012 onwards, external disk shelves with no intelligence of their own (JBODs) can be connected via SAS to every server of a failover cluster.

At the core of Storage Spaces is a pool built from the JBOD hard disks: the pool is used to create virtual hard disks that are deployed as cluster shared volumes in the failover cluster. Whether used as storage for Hyper-V systems running on the same failover cluster or as highly available SMB3 shares that can also store Hyper-V systems of other failover clusters, every server must have a physical SAS connection to each JBOD, and only SAS HDDs and SAS SSDs can be used.


I think the reason this solution never really took off is that JBODs, SAS HBAs, SAS hard disks and SAS SSDs are very expensive, which puts the total cost in the region of professional Fibre Channel solutions. Another reason is the scaling restriction imposed by the maximum SAS cable length; SAS switches provide only limited relief here, because although they solve the problem of limited ports, the cable length remains an issue.

Despite these drawbacks, Server 2016 will still support Storage Spaces, as the feature remains useful in certain scenarios.

What has changed in Server 2016 Technical Preview 4?

To reduce this hardware footprint, Microsoft has announced Storage Spaces Direct (S2D) for Server 2016. The idea behind S2D is to use the servers' local storage in the failover cluster and group it into a pool; the data is shared with the other nodes via SMB3 over RDMA-capable 10 GbE adapters.
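As a minimal sketch, the RDMA capability of the network adapters can be checked and enabled with the NetAdapter cmdlets; the adapter names below are placeholders:

# List adapters and whether RDMA is enabled
Get-NetAdapterRdma | Format-Table Name, Enabled

# Enable RDMA on the 10 GbE adapters intended for SMB3 storage traffic (names are examples)
Enable-NetAdapterRdma -Name "SLOT 2 Port 1", "SLOT 2 Port 2"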


Virtual hard disks are created from this pool and can offer various levels of redundancy depending on the deployment: Two-Way Mirror (mirrors data onto two nodes), Three-Way Mirror (mirrors data onto three nodes) and Parity (erasure coding) are available. Cluster shared volumes can then be placed on the virtual hard disks, resulting in a cluster that combines storage and Hyper-V. With this approach Microsoft takes a bold step towards hyper-converged systems similar to VMware's vSAN.
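For illustration only, a virtual disk with a given resiliency could be created from the pool roughly like this; the pool and disk names are placeholders and the sizes are examples:

# Two-way mirror: two copies of the data on different nodes
New-VirtualDisk -StoragePoolFriendlyName "S2D-Pool" -FriendlyName "VDisk-Mirror2" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 -Size 1TB

# Three-way mirror: three copies, tolerates the loss of two nodes
New-VirtualDisk -StoragePoolFriendlyName "S2D-Pool" -FriendlyName "VDisk-Mirror3" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 3 -Size 1TB

# Parity (erasure coding): better capacity efficiency, slower writes
New-VirtualDisk -StoragePoolFriendlyName "S2D-Pool" -FriendlyName "VDisk-Parity" `
    -ResiliencySettingName Parity -Size 1TB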

At least four identically configured nodes are required for Storage Spaces Direct in Technical Preview 4. This is because the minimum redundancy is a two-way mirror, which keeps two copies of the data on different servers, and because more than 50% of the nodes must always be online to maintain the storage quorum. It is still unclear whether the required number of servers will change in the final release: a high-availability cluster consisting of two servers plus storage would, in my opinion, be a fantastic solution - let's see what Microsoft comes up with.

A more sensible minimum configuration, however, is at least five servers: with the storage quorum in mind, up to two nodes can then fail, compared with only one in a four-server setup.

Because the SAS infrastructure is eliminated, less expensive SATA hard disks and SATA SSDs can finally be used. Another new feature is support for NVMe SSDs. SSDs can also be deployed as read/write cache and as storage for metadata. All-flash configurations are also possible, with NVMe SSDs serving as write cache and metadata storage for virtual hard disks consisting of SSDs. It is advisable to use SSDs to compensate for the slightly lower write performance when mirroring.
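A quick way to check which device types each node actually contributes is sketched below with the Get-PhysicalDisk cmdlet:

# Show the disks eligible for pooling with their bus and media type (SATA, NVMe, SSD, HDD)
Get-PhysicalDisk -CanPool $true |
    Sort-Object MediaType |
    Format-Table FriendlyName, BusType, MediaType, Size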

The solution can be extended seamlessly by simply adding another, identically configured cluster node to the cluster; Storage Spaces Direct automatically redistributes data to the added hard disks in the background. In a hyper-converged scenario, it is important to remember that besides providing enough hard disk space, sufficient computing power and RAM must also be available for any Hyper-V systems you may be planning to run.
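Adding a node boils down to a single cmdlet (cluster and node names are placeholders); extending the pool and rebalancing the data then happen automatically:

# Add a new, identically configured node to the running cluster
Add-ClusterNode -Cluster "S2D-Cluster" -Name "Node05"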

Fig. 1

Cluster nodes displayed as enclosures with available hard disks. From a storage perspective, each server in the S2D array is an enclosure with disks.

Fig. 2

Pool generated from the hard disks.

Fig. 3

The virtual disk deployed as a cluster shared volume.

Multi-resilient virtual disks are another new feature. With ReFS real-time tiering, two tiers are created within a virtual disk: ReFS always writes into the mirror tier first, and a second parity tier is created to which data is moved from the mirror tier when needed (as soon as it fills up). The benefit is that the faster write path of the mirror tier is combined with the lower capacity consumption of the parity tier. In the old Storage Spaces concept the performance of parity tiers was still relatively slow and their use was not recommended; it remains to be seen in testing whether Server 2016 produces better results. If it does, this would be an interesting compromise between speed and efficient use of storage capacity.
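A rough sketch of how such a multi-resilient volume might be built with storage tiers follows; the tier names, sizes and exact parameters are assumptions and may differ in TP4:

# Define a mirror tier and a parity tier in the pool (names and sizes are examples)
New-StorageTier -StoragePoolFriendlyName "S2D-Pool" -FriendlyName "MirrorTier" `
    -ResiliencySettingName Mirror
New-StorageTier -StoragePoolFriendlyName "S2D-Pool" -FriendlyName "ParityTier" `
    -ResiliencySettingName Parity

# Create a ReFS volume that combines both tiers; ReFS writes to the mirror tier first
New-Volume -StoragePoolFriendlyName "S2D-Pool" -FriendlyName "MultiResilient" `
    -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames "MirrorTier", "ParityTier" `
    -StorageTierSizes 200GB, 800GB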

Installation and management are performed almost exclusively with PowerShell, at least in TP4. Only very few settings can be configured via the GUI, and if you rely on it you miss out on options that are available only via PowerShell. It therefore makes sense to install and manage everything with PowerShell; the PowerShell editor (PowerShell ISE) can help here. Once all the basic installation and management commands are collected in a script, it can be executed line by line in the editor, as in the sketch below.
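A condensed sketch of the basic deployment steps; server, cluster and pool names are placeholders, and individual cmdlets and parameters may differ slightly between the preview builds and the final release:

# Validate the nodes, including the Storage Spaces Direct tests
Test-Cluster -Node "Node01","Node02","Node03","Node04" `
    -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

# Create the cluster without adding any shared storage
New-Cluster -Name "S2D-Cluster" -Node "Node01","Node02","Node03","Node04" -NoStorage

# Enable Storage Spaces Direct on the cluster
Enable-ClusterStorageSpacesDirect

# Create the pool from all local disks that are eligible for pooling
New-StoragePool -StorageSubSystemFriendlyName "*Cluster*" -FriendlyName "S2D-Pool" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Create a virtual disk and deploy it as a cluster shared volume
New-Volume -StoragePoolFriendlyName "S2D-Pool" -FriendlyName "CSV01" `
    -FileSystem CSVFS_ReFS -Size 1TB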
