Apr 02 2019

Innovation Meets Performance: Meet the New PRIMERGY M5 and PRIMEQUEST Generations (Pt. 1)


Citius, altius, fortius (in English: faster, higher, stronger) is not only the official motto of the Olympic Games; it's also a perfect description of the logic the ICT industry has followed over the past couple of decades. Every generation of processors and memory modules is more powerful and adaptable than the previous one, and so is the hardware built on these components. Consequently, developing and building new servers is a permanent exercise in enabling customers to produce better results with less effort. Our latest-generation PRIMERGY and PRIMEQUEST servers are no exception to the rule. But before we talk about these new systems, let's first look at the two key components that initiated the latest advancements.

While technical advancements usually result from concerted efforts, it's safe to say that this time around two new components play a key role in giving our systems an oft-desired and much-needed performance boost: Intel's latest additions to its Xeon® Processor Scalable Family, informally dubbed "Cascade Lake," are the company's first server processors to support its Optane™ memory modules. Cascade Lake processors are available in five different editions, ranging from a relatively basic "Bronze" model with six cores and a 1.9 GHz base frequency to a "Platinum" version with up to 28 cores and a maximum turbo frequency of 3.8 GHz. More details are listed in Table 1 below.


Table 1: Feature overview of Intel® “Cascade Lake” processors

One of the most notable advancements to glean from this list is that top-of-the-line models are now equipped with up to three UPI links between processors, allowing for transfer speeds of 10.4 GT/s per link. They also support higher DDR4 transfer rates of 2,933 MT/s when connecting to standard memory modules. Other features, such as the 48 PCIe lanes per processor and AVX-512 support, were already available with the predecessors, but benefit greatly from the upgrades.
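The AVX-512 instructions mentioned above let a single instruction operate on 512-bit registers, i.e. 16 single-precision floats at once. The following minimal C sketch is a hypothetical example (not taken from this post) that sums a float array with AVX-512 intrinsics; it assumes a compiler and CPU with AVX-512F support, e.g. gcc -O2 -mavx512f on a Cascade Lake system.

```c
/* Hypothetical sketch: summing a float array 16 lanes at a time with AVX-512. */
#include <immintrin.h>
#include <stddef.h>
#include <stdio.h>

static float sum_avx512(const float *data, size_t n)
{
    __m512 acc = _mm512_setzero_ps();            /* 16 accumulator lanes */
    size_t i = 0;
    for (; i + 16 <= n; i += 16)                 /* process 16 floats per iteration */
        acc = _mm512_add_ps(acc, _mm512_loadu_ps(data + i));
    float total = _mm512_reduce_add_ps(acc);     /* horizontal add of the lanes */
    for (; i < n; ++i)                           /* scalar tail for the remainder */
        total += data[i];
    return total;
}

int main(void)
{
    float v[100];
    for (int i = 0; i < 100; ++i) v[i] = 1.0f;
    printf("sum = %.1f\n", sum_avx512(v, 100));  /* expected output: 100.0 */
    return 0;
}
```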

New Memory Modules Trigger Faster Responses
The most substantial step forward, though, is that Platinum- and Gold-level Cascade Lake processors are the first members of the Xeon® Scalable product line to support the aforementioned Optane™ modules. Billed by Intel itself as "revolutionary memory," the non-volatile technology was originally developed in collaboration with Micron Technology under the moniker 3D XPoint and first made waves in mid-2015. Back then, online magazine ExtremeTech described it as follows:

"3D XPoint is [...] is designed in a 3D structure [...] to be non-volatile, stackable (to improve density), and can perform read/write operations without requiring a transistor. [...] The real killer feature of 3D XPoint memory is that it claims to be 1000x more durable than NAND while simultaneously offering a 1000x performance increase."

Moreover, 3D XPoint was already estimated to be "10 times denser than conventional DRAM" even at that early stage, with a further increase being projected to follow soon.

The two firms have since parted ways again, and Intel now markets the new technology under the Optane™ brand name, which covers both SSDs and memory modules. In our context, we can focus on the latter. Contrary to what one might expect, Optane™ modules were first implemented in consumer devices, such as high-end gaming notebooks and mobile workstations, where they served as a kind of accelerator that improves a computer's responsiveness. Figures 1 and 2 below illustrate how this works.


Fig. 1: ‘Classic’ distribution of data across various tiers

In the typical scenario you will find in most modern enterprises, data is spread across at least three tiers: files that are currently in use (so-called hot data) reside in DRAM; frequently accessed information that's not needed at the moment is stored on SSDs; and seldom-used data is kept on HDDs. Over the past several years, this layered architecture has worked reasonably well for many applications and usage scenarios. In the meantime, however, more and more companies are trying to deliver results in real or near-real time, regardless of whether this concerns OLTP systems, BI, databases, VDI platforms, or simple office suites. ICT vendors have therefore been facing demands for a new class of memory that keeps as much data as possible close to the main memory and CPU. With Optane™, Intel has introduced a solution to this problem in the form of Data Center Persistent Memory Modules (DCPMM).


Fig. 2: Intel Optane™ DC persistent memory technology bridges the gap between data stored in DRAM and on SSDs

As you can see in Fig. 2, Optane™ adds a fourth layer to the architecture, effectively building a bridge between data stored in DRAM and data stored on SSDs, in an effort to minimize or, better still, eliminate the delays between the two. That, in turn, allows for much faster access to data and applications – exactly what is needed in data centers that are increasingly populated by in-memory databases, HCI solutions, and the like.
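To give a concrete impression of how software can use DCPMMs as that additional layer, the following C sketch memory-maps a region of persistent memory with PMDK's libpmem. It is a minimal, hypothetical example (not part of this article) and assumes the modules are configured in App Direct mode and exposed as a DAX file system mounted at /mnt/pmem; build with cc pmem_demo.c -lpmem.

```c
/* Hypothetical sketch: writing durably to Optane DC persistent memory
 * via PMDK's libpmem. Path /mnt/pmem/demo is an assumption. */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Memory-map a file on the persistent-memory device; subsequent
     * loads/stores go to the Optane media, bypassing the page cache. */
    char *addr = (char *)pmem_map_file("/mnt/pmem/demo", 4096,
                                       PMEM_FILE_CREATE, 0666,
                                       &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    strcpy(addr, "hello, persistent memory");

    /* Flush the stores so the data survives a power loss. */
    if (is_pmem)
        pmem_persist(addr, mapped_len);
    else
        pmem_msync(addr, mapped_len);   /* fallback on non-pmem mappings */

    printf("wrote %zu bytes to %s memory\n", mapped_len,
           is_pmem ? "persistent" : "regular");
    pmem_unmap(addr, mapped_len);
    return 0;
}
```

The point of the sketch is the programming model: data written through the mapping is byte-addressable like DRAM, yet persistent like an SSD, which is exactly the bridge role shown in Fig. 2.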

Part 2 of this blog describes how these components work inside our new PRIMERGY systems.

Timo Lampe

Marcel Schuster

 

About the Author:

Timo Lampe

Senior Specialist Marketing Manager – Data Center Systems / Server

About the second Author:

Marcel Schuster

Senior Specialist Marketing Manager – Data Center Systems / Server
