Feb 06 2017

Windows Server 2016: New Release, Expanded Feature Sets (Part 3 of 3)


The first two parts of our mini-series covered central aspects of Microsoft's cloud-oriented new server OS as well as key innovations in Windows Server 2016 Standard Edition. In the third and final part, we'll focus on the security and storage/data protection enhancements that play central roles in the Datacenter Edition – more precisely, on guarded fabrics, shielded VMs, Storage Spaces Direct, and Storage Replica.

Guarded Fabrics and Shielded Virtual Machines
Over the past decade and a half, VMs have developed from a rather exotic means to run select applications into a standard tool that helps IT departments to better utilize the constantly growing capabilities of server hardware. Unfortunately, this concept appears much simpler to implement in theory than it is in actual datacenters, where administrators often battle with unwanted side effects like insufficient protection against unauthorized access. Problems like these mostly occur in hosted environments operated by cloud service providers (CSPs) or in private enterprise clouds. To help IT staff overcome – better still: avoid – such difficulties, Microsoft has come up with the concept of guarded fabrics and shielded VMs for Windows Server 2016 Hyper-V.

Both features add an extra protective layer that helps repel rights violations from the start. To this end, administrators have to use a different setup than they normally would when deploying VMs. More specifically, the guarded fabric consists of the following components:

  • The Host Guardian Service (HGS), which typically runs on a cluster of three server nodes
  • One or more guarded hosts, i.e. physical servers protected by HGS, and finally
  • The shielded VMs themselves

HGS serves as a gatekeeper that provides two essential services: attestation and key protection. The attestation service ensures that only trusted Hyper-V hosts can run shielded VMs, while the key protection service stores the keys needed to start the VMs and/or transfer them to other guarded hosts during operation (live migration). For attestation, customers can choose between two modes: one that is hardware-based and another that relies on a well-kept Active Directory and an admin's experience. As one might expect, hardware-based attestation is the more secure mode, because it enables administrators to exert much stricter control by configuring trusted hosts so that they will only run approved code. This method works well on modern servers like Fujitsu's latest-generation PRIMERGY and PRIMEQUEST products, which come with TPM 2.0 and UEFI firmware 2.3.1 with secure boot enabled, but may not be applicable on older hardware. As an alternative, IT staff may resort to "admin-trusted attestation," which is granted to servers that belong to security groups of a designated Active Directory Domain Services (AD DS) domain. Any physical server that has passed one of these two attestation schemes can serve as a guarded host and is therefore qualified to run shielded VMs.
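The first HGS node can be stood up with a handful of cmdlets from Microsoft's HGS deployment tooling. The sketch below shows a minimal, simplified sequence; the domain name, service name, and the choice of TPM-trusted attestation are placeholders and assumptions, and a production rollout involves additional certificate and DNS configuration not shown here.

```powershell
# Sketch: deploy the first Host Guardian Service node (placeholder names).
# Run on a dedicated server that will become the first HGS cluster node.
Install-WindowsFeature -Name HostGuardianServiceRole -IncludeManagementTools -Restart

# HGS sets up its own dedicated AD forest; 'hgs.example.com' is a placeholder
$dsrmPassword = Read-Host -AsSecureString -Prompt 'Directory Services restore mode password'
Install-HgsServer -HgsDomainName 'hgs.example.com' `
    -SafeModeAdministratorPassword $dsrmPassword -Restart

# Choose the attestation mode: -TrustTpm for hardware-based attestation,
# or -TrustActiveDirectory for admin-trusted attestation on older hardware
Initialize-HgsServer -HgsServiceName 'hgs' -TrustTpm
```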

So what exactly is a shielded VM, and how does it work? By Microsoft's own definition, a shielded VM is "a generation 2 VM that has its own virtual TPM, is encrypted using BitLocker and can only run on healthy and approved hosts in the fabric." To achieve complete protection, administrators may encrypt the operating system disks as well as any associated data disks with BitLocker. The keys needed to boot the VM and decrypt the disks are then stored in the virtual TPM, which uses a process named secure measured boot for activation whenever a VM is launched. In addition, admins can use so-called template disks – i.e. disks that bear signatures created at a point in time when their content was considered trustworthy – to set up shielded VMs and add another protection layer during deployment. Furthermore, admins need to make sure that vital VM secrets are not accidentally 'leaked' across the virtualized environment and to other users. To this end, all critical information – including trusted disk signatures, RDP certificates, and passwords for local VM admin accounts – is stored in a so-called provisioning or shielding data file (PDK file). The PDK file is itself protected with a tenant key and uploaded to the virtualized environment (fabric) by the client who runs the VM. By shielding a VM like this, admins can ensure that it will always be created as originally intended and that a hosting provider or CSP cannot view provisioning details and is thus unable to manipulate the VM. In other words, shielded VMs are practically tamper-proof, and administrators responsible for the virtualized environments or single VMs gain extra peace of mind.
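As a rough illustration, a tenant could package these secrets with the Shielded VM Tools roughly as follows. This is a hedged sketch only: the file paths, guardian names, and exported HGS metadata file are placeholders, and the exact parameter set of these cmdlets may differ between Windows Server builds.

```powershell
# Sketch: build a shielding data (PDK) file on the tenant side (placeholder paths).
# The owner guardian represents the tenant; the imported guardian represents
# the hosting provider's HGS (metadata previously exported by the hoster).
$owner    = New-HgsGuardian -Name 'TenantOwner' -GenerateCertificates
$guardian = Import-HgsGuardian -Path '.\HgsGuardianMetadata.xml' -Name 'HosterHGS'

# Bundle the trusted template-disk signature, unattend file and other secrets;
# the resulting PDK is encrypted so the hoster cannot read provisioning details
New-ShieldingDataFile -ShieldingDataFilePath '.\Tenant.pdk' `
    -Owner $owner -Guardian $guardian `
    -VolumeIDQualifier (New-VolumeIDQualifier `
        -VolumeSignatureCatalogFilePath '.\TemplateDisk.vsc' -VersionRule Equals) `
    -WindowsUnattendFile '.\unattend.xml' -Policy Shielded
```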

For more detailed insights (including video footage and a glossary), please refer to the "Guarded fabric and shielded VMs overview" on TechNet.

Storage Spaces Direct
This new feature is intended to help enterprises and CSPs build highly available software-defined storage environments from a combination of industry-standard servers and local storage. According to Microsoft, this approach not only reduces storage complexity while increasing scalability, but also enables users to work with storage devices they couldn't use before, such as SATA SSDs for affordable configurations or NVMe SSDs for higher-performing ones. Storage Spaces Direct, or S2D for short, employs SMB 3.x technologies (including SMB Direct and SMB Multichannel) that allow the network to be used as a high-speed, low-latency storage fabric. This virtually eliminates the need to set up an extra shared SAS fabric. To increase storage capacity and I/O performance, a user simply has to add more servers. S2D works in 'normal' converged environments, where storage and compute capacities reside in different tiers to ensure independent scaling and management, as well as in hyper-converged setups, where they reside on a single server for easier deployment. More details are revealed in the Microsoft video below.
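The "just add servers" simplicity shows in how S2D is enabled: a few cmdlets take a set of nodes with local drives to a usable resilient volume. The sketch below assumes a fresh four-node setup; cluster, node, and volume names are placeholders, and steps such as cluster validation and network configuration are omitted.

```powershell
# Sketch: enable Storage Spaces Direct on servers with local drives
# (placeholder names). -NoStorage prevents the cluster from claiming
# disks as traditional shared storage.
New-Cluster -Name 'S2D-Cluster' -Node 'Node1','Node2','Node3','Node4' -NoStorage

# Claim all eligible local drives and create the clustered storage pool
Enable-ClusterStorageSpacesDirect -CimSession 'S2D-Cluster'

# Carve a resilient Cluster Shared Volume out of the new pool
New-Volume -CimSession 'S2D-Cluster' -StoragePoolFriendlyName 'S2D*' `
    -FriendlyName 'Volume1' -FileSystem CSVFS_ReFS -Size 1TB
```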

As may be easily concluded, another basic motivation for creating Storage Spaces Direct was to simplify the rollout of storage clusters in general. To achieve this, the new function relies on a number of standard features included with the Datacenter Edition of Windows Server, such as Failover Clustering or the pooling of independent physical drives. But S2D also adds a couple of new features, the most important being the Software Storage Bus – essentially a virtual storage bus that spans all servers in a cluster and thus enables each machine to connect to any disk within the cluster. More specifically, the Software Storage Bus consists of two components on each server in the cluster: ClusPort and ClusBlft. ClusPort implements a virtual host bus adapter (HBA) that allows a node to connect to disk devices, whereas ClusBlft implements the required virtualization layer for disk devices and enclosures that ClusPort connects to. For more information, please see the "Software Storage Bus Overview" on TechNet with further links.

Storage Replica
Another key new feature of Windows Server 2016 Datacenter Edition is Storage Replica, a function developed to streamline synchronous replication of volumes between clusters and/or servers in order to strengthen backup and recovery. At the same time, Storage Replica provides admins with the option of asynchronous replication.

Depending on individual usage scenarios, Storage Replica may be deployed in any of the configurations described below:

  • Stretch clusters – here, servers and storage are part of a single cluster, with different nodes sharing different sets of asymmetric storage. Stored data may then be replicated synchronously or asynchronously, with site awareness. Stretch clusters can utilize Storage Spaces with shared SAS storage, SAN or iSCSI-attached LUNs, are managed via PowerShell and the Failover Cluster Manager graphical tool, and allow for automated workload failover.
  • Cluster-to-cluster – in these setups, data is replicated between two separate clusters, either in synchronous or asynchronous fashion. Here, the replication function supports Storage Spaces Direct as well as Storage Spaces with shared SAS storage, SAN and iSCSI-attached LUNs. Admins have to use PowerShell and Azure Site Recovery for management tasks, and trigger failover by manual intervention.
  • Server-to-server – this can best be described as the 'light version' of cluster-to-cluster replication, with the main difference being that synchronous and asynchronous replication occurs between two standalone servers, using Storage Spaces with shared SAS storage, SAN and iSCSI-attached LUNs, or local drives. Admins must work with PowerShell and the Server Manager Tool, and trigger failover manually.
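The server-to-server configuration above can be set up with a short PowerShell sequence. This is a hedged sketch under assumed placeholder names (servers, drive letters, replication groups); Microsoft's guidance is that each side needs a data volume plus a separate log volume, and the parameters shown are a minimal subset.

```powershell
# Sketch: server-to-server Storage Replica between two standalone servers
# (placeholder names). Each server needs a data volume (D:) and a log volume (E:).
foreach ($server in 'SR-SRV01','SR-SRV02') {
    Install-WindowsFeature -ComputerName $server -Name Storage-Replica -IncludeManagementTools
}

# Optional but recommended: validate that the topology can sustain replication
Test-SRTopology -SourceComputerName 'SR-SRV01' -SourceVolumeName 'D:' -SourceLogVolumeName 'E:' `
    -DestinationComputerName 'SR-SRV02' -DestinationVolumeName 'D:' -DestinationLogVolumeName 'E:' `
    -DurationInMinutes 10 -ResultPath 'C:\Temp'

# Create the partnership; replication begins with an initial block copy
New-SRPartnership -SourceComputerName 'SR-SRV01' -SourceRGName 'rg01' `
    -SourceVolumeName 'D:' -SourceLogVolumeName 'E:' `
    -DestinationComputerName 'SR-SRV02' -DestinationRGName 'rg02' `
    -DestinationVolumeName 'D:' -DestinationLogVolumeName 'E:'
```

Failover in this configuration is manual, as noted above: an admin reverses the replication direction with Set-SRPartnership to bring the destination copy online.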

Altogether, Storage Replica allows for more efficient usage of multiple storage systems that reside either at a single site or in multiple datacenters. By stretching or replicating clusters, workloads may run in multiple datacenters to provide local users and applications with faster access to critical data. Likewise, the enhanced replication functions help to ensure better load distribution, more economical use of compute resources, and smooth failovers in case one datacenter goes down.

Conclusion
As noted in part 1 and part 2 of our mini-series, Windows Server 2016 is an ambitious release that strives for a similarly dominant position in the cloud as its predecessors had in data centers. However, Microsoft doesn't call for a revolution; instead, it sticks with its typical measured approach, carefully enhancing time-tested functions with innovative features that make sense in SMB and enterprise environments alike. And since this is exactly what the company's clientele expects, Microsoft may well reach its visionary goal.

 

 