May 19 2017

New Heights in Midrange Flash Storage: FUJITSU ETERNUS AF650 (Part 1 of 2)


For many years running, Fujitsu storage systems have claimed top spots on the renowned list of SPC-1™ benchmark results. Our midrange all-flash array, introduced in October 2016, is no exception to the rule – in fact, it received A grades for class-beating responsiveness and performance earlier this year. However, it is often hard to figure out what exactly such test results mean and to translate them into language that managers and executives understand. Our two-part feature aims to tackle both issues, in general terms and with regard to the ETERNUS AF650.

To achieve this, we'll take a closer look at the test procedures that make up the SPC Benchmark-1™ (SPC-1™ for short), as well as at the Storage Performance Council, the body that has watched over this and other storage industry standards for more than 15 years.

Storage Performance Council (SPC)
Like other benchmarking authorities, such as BAPCo and SPEC, the SPC is a vendor-neutral association made up of researchers from leading ICT companies, OEMs, ODMs, and academia. According to the mission statement on its website, the SPC's main purpose is to serve "as a catalyst for performance improvement in storage products," and it has "developed a complete portfolio of industry-standard [...] benchmarks" in pursuit of that goal. The tests themselves are designed to mirror real-world usage scenarios as closely as possible, utilizing I/O workloads that "represent [...] storage performance behavior of both OLTP [...] and sequential applications." What's more, the various benchmark sets can be used to evaluate the performance of single components as well as that of complete storage platforms, including distributed ones.

Bringing together industry forces to develop one or more generally accepted benchmarking standards that they will all adhere to sounds like a pretty good idea in theory. After all, a set of common standards makes it much easier to compare storage system capabilities and helps customers figure out which one best fits their needs. In reality, however, it's not that simple, for various reasons. One of them is rooted in companies' PR policies: Like all other ICT vendors, storage specialists have realized that strong benchmarking results are a very effective marketing tool, as many purchase decisions are based on raw numbers alone rather than on a deeper understanding of what exactly those numbers reveal. If all of these numbers suddenly had to be derived from real-world scenarios and became more comparable, some of the marketing glitz would inevitably wear off. A generally accepted standard will therefore always be a compromise that gives all vendors a fair chance to emphasize their products' strengths. The SPC battled with these difficulties for quite some time, but eventually convinced enough stakeholders that in the long run all parties involved benefit from such a ruleset – potential buyers because they get more practical results, and sellers because the trust they gain easily outweighs whatever their earlier, flashier campaigns delivered.

A Taste of Benchmark History
Other problems with creating a consistent, meaningful and authoritative benchmark were very technical in nature. This may seem counterintuitive at first sight, since such tests only need to supply sufficiently reliable information about two 'objective' or 'raw' measurements, namely data throughput (transfer rates) and latency. The problem is that both are subject to various external factors. An essay [1] by Jerome Wendt, President and Lead Analyst at research firm DCIG LLC, written for TechTarget's Storage magazine and published in late 2007, gives a pretty accurate description:

"[P]erformance greatly depends on the nature of I/O requests storage systems have to cope with. Data is either written to or read from a storage system, can be accessed randomly or sequentially, and the size of the blocks and files transacted can vary from a few bytes to megabytes."

And he adds:

"To make matters worse, the I/O profiles of real-world applications vary widely. Benchmarks will always have a limited number of workloads and I/O requests, which might not be completely representative of a specific application. Testing the raw performance of storage systems irrespective of apps is a valid way of looking at storage performance as long it's understood that the hardware performance may not be representative of an application's performance. "

Finally, Wendt points to complications that may arise from virtually countless configuration options per storage array, including number of disks, cache size, and the various mechanisms used to ensure fault tolerance, data protection (e.g. data replication/mirroring) and efficient utilization of storage media (like compression and deduplication). For him, it's highly unlikely that benchmark-specific configurations will ever match those that exist in the real world – and the number of issues only multiplies because of the sheer number of storage systems available in the market.
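
To make these two 'raw' measurements and their dependence on the I/O profile a bit more tangible, here is a minimal Python sketch that times reads against a file using different block sizes and access patterns, then reports throughput and average latency. It is purely illustrative: the file name is a placeholder, it covers reads only, and it does not bypass the operating system's page cache the way a real storage benchmark harness would (e.g. via direct I/O).

    import os
    import random
    import time

    def measure(path, block_size, n_ops, sequential=True):
        # Read n_ops blocks of block_size bytes; return (MiB/s, mean latency in ms).
        size = os.path.getsize(path)
        max_offset = max(size - block_size, 1)
        if sequential:
            offsets = [(i * block_size) % max_offset for i in range(n_ops)]
        else:
            offsets = [random.randrange(max_offset) for _ in range(n_ops)]
        latencies = []
        # buffering=0 disables Python-level buffering only; it does NOT bypass the
        # OS page cache, so absolute numbers for a normal file mostly reflect cached reads.
        with open(path, "rb", buffering=0) as f:
            for off in offsets:
                start = time.perf_counter()
                f.seek(off)
                f.read(block_size)
                latencies.append(time.perf_counter() - start)
        total_time = sum(latencies)
        mib_per_s = (n_ops * block_size) / total_time / 2**20
        avg_latency_ms = 1000 * total_time / n_ops
        return mib_per_s, avg_latency_ms

    if __name__ == "__main__":
        # 'testfile.bin' is a placeholder for any sufficiently large file
        # residing on the storage system under test.
        for sequential in (True, False):
            for bs in (4096, 65536, 1048576):  # 4 KiB, 64 KiB, 1 MiB blocks
                tput, lat = measure("testfile.bin", bs, 1000, sequential)
                label = "sequential" if sequential else "random"
                print(f"{label:10s} {bs:>7d} B blocks: {tput:8.1f} MiB/s, {lat:6.3f} ms avg")

Even this toy example typically shows the effects Wendt describes: large sequential reads maximize throughput, while small random reads are dominated by per-request latency.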

The SPC-1™ Benchmark
Against this backdrop, it's only logical that Wendt argued in favor of an industry-wide benchmark standard that would help vendors and customers overcome these obstacles. In 2007, his hopes largely rested on the then still youngish SPC-1™, which had been devised at the turn of the millennium and had initially garnered a mixed response. Nowadays, however, most leading storage vendors support the standard and regularly put their systems through its paces – a definite success story.

But how did SPC-1™ become so popular? Wendt's article cites numerous reasons experts still consider valid today, among them "mandatory peer reviews of all benchmark results, meticulous documentation of the tested configurations, and making test results and test details available for public consumption." As noted above, another reason for the broad acceptance of the standard is that the benchmark centers on performance data obtained from everyday application scenarios rather than from what you might call 'bare-metal I/O tests.' In the SPC's own words, it "simulates the demands placed upon on-line, non-volatile storage in a typical server-class computer system" and "provides measurements in support of real world environments characterized by:

  • Demanding total I/O throughput requirements.
  • Sensitive I/O response time constraints.
  • Dynamic workload behaviors.
  • Substantial storage capacity requirements.
  • Diverse user populations and expectations.
  • Data persistence requirements to ensure preservation of all committed data without corruption or loss in the event of a loss of power or storage device failure." [2]

Furthermore, the standard's General Guidelines require that all tested storage systems be generally available in the market as well as relevant to specific customer groups that would deploy them in their data centers. The separate Measurement Guidelines not only call for extremely accurate measurement, but also for "stringent auditing and reporting" of the results – in other words, verifiability. Still, such rules and regulations can be found in all relevant benchmark specifications, so there has to be another ingredient that distinguishes SPC-1™ from the rest. And indeed there is: Unlike many of its counterparts, the SPC-1™ spec stipulates that published results list detailed pricing information for the hardware and software components of a given configuration, plus the cost of necessary additional operational components (e.g. host systems), three-year maintenance, and "applicable tariffs, duties and import fees." Put differently, it's the only storage benchmark that provides channel partners and customers with a reasonable price/performance metric as a basis for purchase advice and decisions. How all of this plays out with regard to Fujitsu's ETERNUS AF650 all-flash array will be the topic of part 2 of this feature.
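
To illustrate the price/performance idea behind that requirement, here is a minimal sketch of the calculation an SPC-1™ result report enables: the total cost of the tested configuration (hardware and software, three-year maintenance, applicable tariffs and duties) divided by the measured performance. Every figure below is a made-up placeholder, not actual ETERNUS AF650 or SPC-1™ result data.

    # Minimal sketch of a price/performance calculation in the spirit of SPC-1
    # reporting. Every number below is a hypothetical placeholder.
    hardware_and_software = 400_000.00  # list price of the tested configuration (USD)
    maintenance_3yr = 60_000.00         # three-year maintenance/support (USD)
    tariffs_and_duties = 5_000.00       # applicable tariffs, duties and import fees (USD)
    spc1_iops = 300_000                 # measured SPC-1 IOPS (hypothetical)

    total_cost = hardware_and_software + maintenance_3yr + tariffs_and_duties
    price_performance = total_cost / spc1_iops

    print(f"Total system cost: ${total_cost:,.2f}")
    print(f"Price/performance: ${price_performance:.2f} per SPC-1 IOPS")

With these placeholder numbers, the configuration would cost $465,000.00 in total, or $1.55 per SPC-1 IOPS – the kind of figure that lets buyers compare systems of very different sizes and price tags on one scale.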

[1] Jerome Wendt: “How useful are storage benchmarks?”. In: Storage, Vol. 6, No. 8, October 2007. Available online at: http://searchstorage.techtarget.com/magazineContent/How-useful-are-storage-benchmarks (registration required). Retrieved 2017-05-18.

[2] Cf. Storage Performance Council (SPC): SPC Benchmark 1 (SPC-1™) / SPC Benchmark 1/Energy™ Extension (SPC-1/E™) Official Specification, Revision 3.4, p. 7. Available online at: http://www.storageperformance.org/specs/SPC1_v340_final.pdf (PDF). Retrieved 2017-05-18.

 