Jun 06 2018

Why We Build Servers That Are Energy-Efficient


Have you ever heard of PUE and DCIE? If not, that's regrettable, but perhaps not too uncommon, as both terms stem from the field of "green computing," i.e. the research and development of technologies and best practices that enable organizations to build environmentally sound IT platforms. In this context, PUE and DCIE provide metrics that can be used to determine how energy-efficient a given infrastructure truly is – meaning how much power goes into actual computing performance and how much of it is wasted – and what CIOs and their teams can do to minimize both the operating costs and the ecological footprint of data centers.

Up until the mid-2000s, attempts at building eco-friendly computers and IT infrastructures were often considered somewhat exotic at best. After all, as with cars, it was performance (speed) that mattered most, or more specifically the question of how fast specific data operations could be carried out in order to get ahead of the competition. Following this logic, high energy costs were basically a necessary evil that companies and public services had to put up with to stay afloat. That only changed when some key players – among them service providers as well as classic hard- and software vendors and state authorities spending taxpayer money – realized that if they continued to build data centers at the same pace, ever-rising power bills would have a severe impact on their budgets and bottom lines. To counter such unwanted effects, a group of ICT vendors, facility architects and utility companies founded The Green Grid, a nonprofit industry consortium whose main mission is to drive the adoption of "resource-efficient, end-to-end IT ecosystems." Key parts of its work comprise developing and establishing metrics and frameworks that enable organizations to set up infrastructures that are both affordable and environmentally sound. And that's where PUE and DCIE come into play.

Metrics for Energy Efficiency
On the face of it, both terms seem rather self-explanatory: PUE is shorthand for Power Usage Effectiveness, a metric that "illustrates the total energy used by a data center divided by the energy used by ICT equipment in that data center," as the Green Grid explains in its glossary. In simpler terms, PUE – which was published as a global standard by ISO in 2016 – measures how much of the total energy that arrives at a data center is actually delivered to its computing equipment, namely servers and storage systems, as opposed to lighting, ventilation/cooling etc. In a perfect scenario, all power drawn from the grid would go directly into the computer systems without any loss; hence the ideal value would be 1.0. However, this is typically unachievable under real-world conditions, as was shown in a 2014 survey by the Uptime Institute, which put the average PUE for a regular data center at somewhere between 2.0 and 2.5. Put simply, of every 2 to 2.5 watts coming in, only 1 watt – half or less – was used to power servers and storage arrays. That leaves ample room for improvement.

DCIE stands for Data Center Infrastructure Efficiency and is typically used to measure advancements in an existing setup's energy usage. Mathematically, it is the reciprocal of PUE, calculated by dividing IT equipment power by total facility power and expressed as a percentage.
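To make the relationship between the two metrics concrete, here is a minimal sketch that computes PUE and DCIE from metered power figures. The kilowatt values are invented for illustration; in practice they would come from facility-level and PDU/rack-level metering.

    # Minimal sketch: computing PUE and DCIE from metered power figures.
    # The kilowatt values below are invented for illustration.

    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        """Power Usage Effectiveness: total facility power / IT equipment power."""
        return total_facility_kw / it_equipment_kw

    def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
        """Data Center Infrastructure Efficiency: reciprocal of PUE, as a percentage."""
        return it_equipment_kw / total_facility_kw * 100.0

    total_facility_kw = 1000.0   # everything the site draws from the grid
    it_equipment_kw = 450.0      # servers, storage and network gear only

    print(f"PUE:  {pue(total_facility_kw, it_equipment_kw):.2f}")     # 2.22
    print(f"DCIE: {dcie(total_facility_kw, it_equipment_kw):.1f} %")  # 45.0 %

With these example figures the facility would sit in the 2.0–2.5 range reported in the Uptime Institute survey, i.e. less than half of the incoming power actually reaches the IT equipment.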

From Metrics to Real-World Scenarios
Although these metrics provide us with several meaningful insights, the question remains how they can help IT departments bring power consumption down to an acceptable level and keep hardware and other related expenses in check. One tried and tested method is to look at benchmark results from test suites designed to "evaluate the power and performance characteristics of volume server class computers." At least that's the description the Standard Performance Evaluation Corporation (SPEC) gives of its SPECpower_ssj2008 industry benchmark, introduced about a decade ago. Since then, ICT vendors have largely adopted these evaluation mechanisms as a means to control and gradually improve the energy efficiency of their products, while their customers use them to find the right hardware, i.e. servers that offer the best compromise between performance demands on the one hand and the need to keep operational costs in check on the other.
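As a rough illustration of how such a benchmark condenses a run into a single efficiency figure, the sketch below aggregates performance and power readings across load levels in the way SPECpower_ssj2008's overall ssj_ops/watt is commonly described: performance summed over all target loads, divided by average power summed over those loads plus active idle. The numbers are invented, and the actual run rules are considerably more detailed.

    # Rough sketch of deriving an overall ssj_ops/watt figure from per-load-level
    # measurements: summed performance divided by summed average power, including
    # active idle. All numbers are invented for illustration only.

    # (target load, measured ssj_ops, average watts)
    load_levels = [
        (1.0, 4_500_000, 380.0),
        (0.5, 2_250_000, 250.0),
        (0.1,   450_000, 160.0),
    ]
    active_idle_watts = 120.0

    total_ops = sum(ops for _, ops, _ in load_levels)
    total_watts = sum(watts for _, _, watts in load_levels) + active_idle_watts

    print(f"overall ssj_ops/watt: {total_ops / total_watts:,.0f}")  # ~7,912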

Fujitsu has long subscribed to the idea of providing users with a vast portfolio of eco-friendly products to choose from, starting in 1993 with the release of our first "green PC," the PCD-4LsI. But of course, we didn't focus on client computing devices alone. Soon afterwards, we began to bring similar advancements to server technology, reaching a first milestone in 2006/7 when, within a matter of months, we launched intelligent power management software for PRIMERGY systems and the PRIMERGY TX120, then the world's most energy-efficient server. Since then, we have constantly expanded the energy-saving features of our x86-based server platform, setting 37 SPECpower, SAP Power and TPC-E Energy benchmark world records in the process. The most recent came in October 2017, when the FUJITSU Server RX4770 M4 delivered an outstanding score of 11,886 overall ssj_ops/watt, at the time the best result for any quad-socket server in the SPECpower_ssj2008 benchmark. This is all the more remarkable since the RX4770 M4 is designed to handle some of the most demanding tasks found in modern data centers, from transactional applications through in-memory databases and business intelligence workloads to server consolidation and the implementation of hybrid clouds.

How We Do It
So what exactly do we do to build a system that's capable of delivering reliable backend services as well as supporting large-scale infrastructure projects and digital transformation? The answer is quite similar to the one we give regarding quality:

  • We apply rigorous standards when choosing key components like processors, memory modules, SSDs and, occasionally, GPGPUs – for example, Intel® Xeon® Scalable Processors and DDR4 RAM both allow for significant cuts in power consumption compared with their immediate predecessors.
  • To these, we add redundant, hot-pluggable and highly efficient fans and power supplies: most PRIMERGY servers are equipped with modular PSUs that deliver a power efficiency of up to 94% – meaning that only 6% of the energy drawn from the grid is lost during conversion, e.g. in the form of excess heat, as opposed to 20% and more in servers using conventional power supplies (see the back-of-the-envelope sketch after this list). This boost in power efficiency also translates into longer product lifecycles, cost savings of up to €275 ($320) per system over a 5-year period, and a PUE of 1.5 – a good starting point for building highly efficient infrastructures.
  • Alternatively, one redundant PSU can be swapped for our FUJITSU Battery Backup Unit (FJBU), which takes the place of a bulky, complicated-to-manage and expensive uninterruptible power supply and allows for a graceful server shutdown in case of a power outage, thus helping to avoid severe data and financial losses.
  • In addition, Fujitsu has developed a set of mature, sophisticated cooling technologies that help our servers remain operational even under conditions that would normally be considered hostile to any type of IT equipment – for example, when exposed to ambient temperatures of up to 45 °C (113 °F). But that's a whole new story, which we will tell in our next blog post.
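To put the PSU figures above into perspective, here is the back-of-the-envelope sketch mentioned in the list, comparing a roughly 94%-efficient supply with a conventional one at around 80% efficiency. The assumed server load, electricity price and running hours are illustrative assumptions, not Fujitsu figures.

    # Back-of-the-envelope comparison of PSU conversion losses. The server load,
    # electricity price and running hours are illustrative assumptions.

    it_load_w = 230.0            # DC power actually consumed by the server internals
    hours_per_year = 24 * 365
    price_per_kwh_eur = 0.15     # assumed electricity price

    def grid_draw_kwh(efficiency: float) -> float:
        """Annual energy drawn from the grid for a given PSU efficiency."""
        return it_load_w / efficiency * hours_per_year / 1000.0

    modern = grid_draw_kwh(0.94)        # highly efficient modular PSU
    conventional = grid_draw_kwh(0.80)  # more conventional PSU

    saved_kwh = conventional - modern
    print(f"Energy saved per year: {saved_kwh:.0f} kWh")                               # ~375 kWh
    print(f"Cost saved over five years: {saved_kwh * price_per_kwh_eur * 5:.0f} EUR")  # ~281 EUR

Under these assumptions the five-year saving lands in the same ballpark as the €275 per system quoted above; actual savings naturally depend on the load profile, configuration and local electricity prices.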

Conclusion
Delivering power-saving, eco-friendly server systems has long been a focus of Fujitsu's PRIMERGY product strategy, even before our company joined The Green Grid in 2007. Over the years, we've made numerous efforts to enhance the energy efficiency of our products – a path we still follow today. In other words, customers can look forward to further improvements down the road, with the next PRIMERGY generation coming up later in 2018.

For a first impression of how PRIMERGY servers differ from the competition, check out the video below.

Marcel Schuster

 

About the Author:

Marcel Schuster

Product Marketing Manager Server & PRS
