Nov 16 2016

Flash Storage for the Cloud: NetApp SolidFire


"Oh please, let's not fall for this hype!" Not too long ago, this was the standard response IT experts would face whenever they suggested that certain data, applications or services should be moved to 'the cloud.' Today, nobody talks about a hype anymore – services like Dropbox or Office 365 are ubiquitous, and organizations of all sizes will start looking for cloud solutions rather than regular hard- and software when they need new capacities and/or functions. Thus pressure on the cloud's hardware backbone – especially on storage systems – constantly increases, demanding substantial upgrades. NetApp's SolidFire all-flash arrays have what it takes to keep the cloud afloat.

Few concepts in IT history have spread as fast as cloud computing: in the second half of the 2000s, most companies still dismissed the idea of 'renting' data center capacities and services from third parties as a bit too avant-garde for their needs. Two years later, CIOs and CFOs started asking questions about the cloud trend during IT strategy meetings, and after another 12 months and a number of tests, many were convinced that cloud services would help their companies save money and become more agile. Back then, one of the key success factors was the vision that specialized third parties could offer their customers virtually limitless storage and compute capacities. This notion remains largely unchanged today; in other respects, however, expectations have grown: the modern, sophisticated cloud user prefers apps and services that react immediately over those that take their time, effectively demanding that they deliver the same performance as if they were hosted in a local data center.

To meet these constantly rising expectations, providers of cloud services – whether private, public or hybrid – use the same strategies as more conventional data centers, namely upgrading their infrastructures and especially the storage layer. In recent years, this meant deploying more and more all-flash arrays (AFAs) to benefit from the higher data transfer rates, faster response times and greater reliability of SSD-based systems. However, any cloud-scale operation worthy of the name will sooner or later find that a combination of performance improvements and periodic capacity increases may not be enough to keep its clientele on board. Instead, it needs a platform that flexibly adapts to changing user needs and behaviors while still retaining flash's crucial speed advantage. The question remains: is there a way for IT departments to achieve these combined goals? And if so, can they do it in a single step?

Server-Based Architecture
The simple answer to both questions is: yes – by deploying NetApp SolidFire AFAs. Short and concise as it may be, this response doesn't explain how this new family of products helps admins solve a number of problems that have stifled attempts at building adaptable storage infrastructures over the years. To solve these issues, they need a platform that's as easy to expand as it is simple to manage. SolidFire AFAs use x86 servers acting as storage nodes, together with Fibre Channel interconnects, as their hardware building blocks; the nodes run their own storage-optimized operating system named Element OS. The latter includes all the functionality one would normally find in a storage management suite, namely the features that enable or support data protection, data reduction, high availability, and rapid disaster recovery. One result of this approach is that each node comes with an identical, all-inclusive feature set and may therefore be used to lay the foundation for what could rightfully be described as a tailor-made flash storage infrastructure that grows seamlessly with business needs.
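To make the scale-out principle more tangible, here is a purely illustrative Python sketch – not NetApp code – of a cluster built from identical nodes, where every added node immediately contributes its raw capacity and rated IOPS to the shared pool. The class names are invented for the example; the per-node figures are taken from Table 1 below.

    # Illustrative model only (not NetApp software): a scale-out cluster built
    # from identical, all-inclusive nodes, where every added node immediately
    # contributes its raw capacity and its rated IOPS to the shared pool.
    from dataclasses import dataclass

    @dataclass
    class StorageNode:
        model: str
        raw_tb: float   # raw capacity per node (see Table 1 below)
        iops: int       # rated performance per node

    class Cluster:
        def __init__(self) -> None:
            self.nodes: list[StorageNode] = []

        def add_node(self, node: StorageNode) -> None:
            # Non-disruptive expansion: the new resources join the pool at once.
            self.nodes.append(node)

        @property
        def raw_capacity_tb(self) -> float:
            return sum(n.raw_tb for n in self.nodes)

        @property
        def total_iops(self) -> int:
            return sum(n.iops for n in self.nodes)

    # Example: four SF4805 nodes, later expanded with one SF9605.
    cluster = Cluster()
    for _ in range(4):
        cluster.add_node(StorageNode("SF4805", raw_tb=4.8, iops=50_000))
    cluster.add_node(StorageNode("SF9605", raw_tb=9.6, iops=50_000))
    print(cluster.raw_capacity_tb, cluster.total_iops)  # 28.8 (TB raw), 250000 (IOPS)

With that scale-out model in mind, let's take a closer look at the hardware and software components such an infrastructure builds on.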

Nodes and Interconnects
SolidFire storage nodes are available in four editions of different sizes, each equipped with 10 SSDs of varying capacity as well as varying amounts of system memory/read cache, as shown in Table 1 below:

Storage Node Type                   SF2405        SF4805        SF9605        SF19210
Capacity per SSD                    240 GB        480 GB        960 GB        1.92 TB
Raw capacity per node               2.4 TB        4.8 TB        9.6 TB        19.2 TB
Main memory / read cache per node   64 GB         128 GB        256 GB        384 GB
Networking (all models)             data: 2 x 10GbE iSCSI SFP+; management: 2 x 1GbE
Performance per node                50,000 IOPS   50,000 IOPS   50,000 IOPS   100,000 IOPS

Table 1: Overview of SolidFire storage nodes

Altogether, the capacity per SSD and the raw capacity per node double with each step, so that the largest configuration holds eight times as much data as the basic model. Main memory grows accordingly from the SF2405 to the SF9605 and increases by another 50% from the SF9605 to the SF19210. Networking connections are identical across the board. Performance per node remains at a steady 50,000 IOPS in the three smaller models and doubles to 100,000 IOPS in the top-of-the-line edition.
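The scaling described above can be checked directly against the figures in Table 1; the snippet below is merely a worked verification of that arithmetic, not part of any NetApp tooling.

    # The figures from Table 1, used as a quick sanity check of the scaling above.
    nodes = {
        "SF2405":  {"ssd_gb": 240,  "raw_tb": 2.4,  "ram_gb": 64,  "iops": 50_000},
        "SF4805":  {"ssd_gb": 480,  "raw_tb": 4.8,  "ram_gb": 128, "iops": 50_000},
        "SF9605":  {"ssd_gb": 960,  "raw_tb": 9.6,  "ram_gb": 256, "iops": 50_000},
        "SF19210": {"ssd_gb": 1920, "raw_tb": 19.2, "ram_gb": 384, "iops": 100_000},
    }

    assert nodes["SF19210"]["raw_tb"] == 8 * nodes["SF2405"]["raw_tb"]    # 8x the raw capacity
    assert nodes["SF9605"]["ram_gb"] == 4 * nodes["SF2405"]["ram_gb"]     # memory doubles twice ...
    assert nodes["SF19210"]["ram_gb"] == 1.5 * nodes["SF9605"]["ram_gb"]  # ... then grows by 50%
    assert nodes["SF19210"]["iops"] == 2 * nodes["SF9605"]["iops"]        # IOPS double at the top end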

All four node types connect to the storage network via dedicated FC nodes that serve as fabric interconnects. These provide 4x 16 Gbit FC or 4x 10 Gbit iSCSI connectivity to the front end, while management traffic can be kept separate from the cluster interconnect via the 1 GbE ports.

Element OS
NetApp SolidFire Element OS is the storage-optimized operating system exclusively developed for the SolidFire hardware. In this capacity, it incorporates five key components that are needed to facilitate a smooth transition to next-generation, cloud-scale data centers:

  • A scale-out architecture that allows for non-disruptive system expansion, ensures instant availability of new resources and comfortable upgrades, and gives users the flexibility to place resources where and when they need them
  • Fine-grained performance controls that are adaptable to a variety of usage scenarios and ultimately help ensure guaranteed performance levels for business- and mission-critical applications
  • Automated hardware management functions that help IT departments streamline time-consuming and error-prone tasks, reduce or eliminate the need for manual efforts, and thus free up capacities that can be used to create and deliver new services
  • Comprehensive data protection functions ("Data Assurance"), including synchronous and asynchronous real-time replication and built-in, automated backup and recovery, and finally
  • An extensive set of data management, data reduction and data distribution capabilities, including global inline deduplication, multi-layer compression, and granular thin provisioning, all designed to better utilize storage node capacities and thus reduce power and cooling costs by up to 77%

Probably the most interesting and innovative of these components are the mechanisms that give administrators tight control over system performance and quality of service. In conventional storage systems, performance is inextricably linked to capacity – meaning that bigger systems will always and automatically deliver higher bandwidths and IOPS rates than smaller ones. The problem with this approach is that it breeds overprovisioning: even seasoned admins will feel compelled to install more and/or bigger arrays than necessary, just to reach the desired performance level. This, in turn, means that companies face high initial storage investments. Moreover, larger installations occupy more floor space and consume more energy for power and cooling, inflating electricity bills and operating costs.

By contrast, Element OS decouples capacity and performance and instead manages them as separate 'feature pools.' As a result, storage administrators can offer well-defined quality of service (QoS) levels without having to convince their clientele to order more storage space. Even better, Element OS lets them define and enforce minimum, maximum and burst rates for each specific application or volume, with the effective rates varying according to block size. That way, so-called "noisy neighbors" – i.e. applications that tend to interfere with business- or mission-critical workloads – can no longer do any harm. This feature is particularly interesting for cloud service providers (CSPs), who can thus guarantee that crucial apps always deliver the required performance – a long-awaited step toward stable cloud deployments that behave as reliably as any on-premises infrastructure.
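As a hedged sketch of how such per-volume limits might be set programmatically: Element OS exposes a JSON-RPC management API, and a QoS change could look roughly like the call below. The endpoint, credentials, API version and volume ID are placeholders, and the exact method and field names should be verified against the Element OS API reference for the installed release.

    # Sketch only: adjusting per-volume QoS (min/max/burst IOPS) via the
    # Element OS JSON-RPC API. All values below are placeholders/assumptions;
    # check the API reference of your Element release before relying on them.
    import requests

    MVIP = "https://cluster-mvip"        # management virtual IP (placeholder)
    API_URL = f"{MVIP}/json-rpc/9.0"     # API version depends on the cluster

    payload = {
        "method": "ModifyVolume",
        "params": {
            "volumeID": 42,              # hypothetical volume
            "qos": {
                "minIOPS": 1_000,        # guaranteed floor for this workload
                "maxIOPS": 15_000,       # sustained ceiling
                "burstIOPS": 20_000,     # short-term burst allowance
            },
        },
        "id": 1,
    }

    response = requests.post(API_URL, json=payload,
                             auth=("admin", "password"), verify=False)
    response.raise_for_status()
    print(response.json())

Because the floor, ceiling and burst values are attached to individual volumes rather than to the array as a whole, a CSP can give each tenant a contractually guaranteed level without touching anyone else's settings.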


Fig. 1: Thanks to their simple, server-based architecture, the capacity and performance of SolidFire cloud storage solutions scale seamlessly whenever a new node is added, ensuring guaranteed performance (Source: NetApp)

For optimum results, guaranteed performance should be paired with high availability and a set of data management features that substantially reduce – or, better still, eliminate – the risk of data loss through hardware failures. To achieve this, the developers created SolidFire Helix™ as an integral part of Element OS. SolidFire Helix™ is a distributed replication algorithm that spreads at least two redundant copies of data across all drives within the system. For administrators, this means they no longer have to battle with complicated RAID settings; instead, they simply choose the number of required copies, and the algorithm does the rest. Moreover, since SolidFire Helix™ always uses all drives, they no longer need to worry about shared resources or drives. Put differently, the Helix™ algorithm virtually eliminates the nightmare of single points of failure that might crash critical applications or, worse, take down the entire system. Data that would normally be lost in such a failure can be automatically recovered from the remaining SSDs and nodes in a fraction of the time this operation would take in a RAID environment; in fact, single drives may be rebuilt in less than 10 minutes, and a complete node takes less than an hour to repair. Thanks to Helix™, SolidFire systems can even handle multiple concurrent faults: they simply isolate the affected drives while the other SSDs redistribute up to 2% of 'their' stored data across all drives in the system. In addition, the algorithm not only simplifies and accelerates data recovery, but also reinstates redundancy and QoS settings after a failure, thus achieving an unprecedented level of fault tolerance.
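The following sketch illustrates the general idea of spreading redundant copies across every drive in the system; it is an illustration of the principle only, not the actual Helix™ algorithm, and all names in it are invented for the example.

    # Illustration of the principle only (not the actual Helix(TM) algorithm):
    # every block gets two copies on drives in different nodes, chosen from the
    # whole cluster, so each drive holds a small slice of many blocks and a
    # rebuild can read from all surviving drives in parallel.
    import hashlib

    def place_replicas(block_id: str, drives: list[str], copies: int = 2) -> list[str]:
        # Deterministically rank all drives for this block, then pick the top
        # candidates while keeping the copies on different nodes.
        ranked = sorted(drives, key=lambda d: hashlib.sha256(f"{block_id}:{d}".encode()).hexdigest())
        chosen, used_nodes = [], set()
        for drive in ranked:
            node = drive.split("-")[0]
            if node in used_nodes:
                continue
            chosen.append(drive)
            used_nodes.add(node)
            if len(chosen) == copies:
                break
        return chosen

    drives = [f"node{n}-ssd{s}" for n in range(1, 5) for s in range(1, 11)]  # 4 nodes x 10 SSDs
    placement = {f"block-{i}": place_replicas(f"block-{i}", drives) for i in range(100_000)}

    # If one drive fails, the surviving copy of every affected block lives on
    # another node, and re-replication work is spread across all remaining drives.
    failed = "node2-ssd7"
    affected = [b for b, locs in placement.items() if failed in locs]
    print(f"{len(affected)} blocks need a new copy, sourced from {len(drives) - 1} surviving drives")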

Lastly, CSPs as well as enterprise cloud projects often struggle with the cost of building adequate storage infrastructures. NetApp SolidFire systems with Element OS help solve this issue by providing a set of sophisticated data reduction functions such as always-on inline deduplication, compression and thin provisioning, which may be used separately or collectively across the entire solution. While each of these functions alone already helps to substantially reduce storage requirements, using them in parallel yields 5 to 10 times higher savings: field tests have shown that optimally configured SolidFire installations need 89% less rack space and 90% less cabling than competing solutions. Thanks to these massive reductions, IT departments can also save on floor space as well as power and cooling, cutting related energy bills by up to 77%. Likewise, strict data reduction removes overhead from data management and retrieval, so that storage performance per rack unit may increase by up to 63.5%. Plus, there are several welcome side effects depending on the usage scenario. Cloud environments benefit from the fact that deduplication typically scales with system capacity, meaning they can use data reduction more effectively as their storage platform grows. Virtual desktop infrastructures may be fully cloned and saved on one or more SSDs, thus limiting the utilization of compute capacities. Databases may easily be replicated for onsite or remote backups, or copied for use in development and test environments. In these cases, users will find that the combined benefits can ultimately drive the cost of SSDs below that of HDDs – and that's more than most CSPs and enterprise cloud projects initially hope to achieve.
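To show why deduplication scales so well with capacity, here is a deliberately simplified sketch of global inline deduplication: incoming blocks are fingerprinted, and only content the cluster has never seen is physically written. It illustrates the concept, not NetApp's implementation; all names and figures are invented for the example.

    # Simplified illustration of global inline deduplication (concept only):
    # each incoming block is fingerprinted; new content is stored once,
    # duplicates merely add a reference to the existing copy.
    import hashlib

    class DedupStore:
        def __init__(self) -> None:
            self.blocks: dict[str, bytes] = {}   # fingerprint -> unique block
            self.refs: dict[str, int] = {}       # fingerprint -> reference count
            self.logical_bytes = 0

        def write(self, data: bytes) -> str:
            fingerprint = hashlib.sha256(data).hexdigest()
            self.logical_bytes += len(data)
            if fingerprint not in self.blocks:   # unseen content: store it once
                self.blocks[fingerprint] = data
                self.refs[fingerprint] = 0
            self.refs[fingerprint] += 1          # duplicate: just bump the refcount
            return fingerprint

        @property
        def reduction_ratio(self) -> float:
            physical = sum(len(b) for b in self.blocks.values())
            return self.logical_bytes / physical if physical else 1.0

    # Example: a VDI-like workload in which most 4 KiB blocks are identical
    # golden-image content dedupes heavily.
    store = DedupStore()
    golden = b"A" * 4096
    for i in range(1000):
        store.write(golden)                                           # shared block
        store.write(f"unique-{i % 100}".encode().ljust(4096, b"\0"))  # some unique data
    print(round(store.reduction_ratio, 1))                            # roughly 19.8:1

The more data the platform already holds, the more likely a new write hits an existing fingerprint, which is exactly why cloud environments see data reduction become more effective as they grow.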

Availability and Pricing
NetApp SolidFire systems are available immediately from Fujitsu, either directly or via channel partners. Prices vary according to configuration and region.

Florian Hettenbach

 

About the Author:

Florian Hettenbach

Product Marketing Manager NetApp – Global Marketing
