Nov 15 2022

How to ensure on-premises data can be accessed, used, maintained and managed anytime

In the previous blog in this data storage series, we noted that 85% of organizations now operate within hybrid cloud environments — leveraging a mix of public cloud and on-premises infrastructure (aka “private cloud”).

 

On-premises infrastructure remains important for several reasons: compliance with data regulations, support for legacy applications that are too complex to migrate to the cloud, and workloads where very low latency is critical. In each case, organizations need somewhere solid to store data, based on storage solutions that ensure data can be accessed, used, maintained and managed at any time, without any third-party dependencies.


 

The proper storage for the right data scenario

Our first blog of this series outlined three realities that data center managers must confront regarding on-premises storage: data growth, increased complexity, and limited resources.

When it comes to data growth, existing systems can limit the ability to expand capacity. Adding disk capacity alone may not be optimal, or even possible; adding further storage systems may be necessary to achieve expansion quickly.

Once a decision has been reached, based on data accessibility, scalability and required performance, the new storage must be managed separately, yet in conjunction with the old. For example, it is essential to be able to respond if one system is running at 90% capacity while another sits at just 20%, to ensure that costs don't shoot up.
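The kind of capacity watch described above can be sketched as a simple monitoring check. This is a minimal illustration; the system names, thresholds and function names are all hypothetical, not taken from any real storage API:

```python
# Hypothetical capacity check across two separately managed storage systems.
# Names and the 85% threshold are illustrative assumptions.

def utilization(used_tb: float, total_tb: float) -> float:
    """Return utilization as a fraction of total capacity."""
    return used_tb / total_tb

def rebalance_advice(systems: dict[str, tuple[float, float]], high: float = 0.85) -> list[str]:
    """Flag systems running hot while another sits mostly idle."""
    utils = {name: utilization(used, total) for name, (used, total) in systems.items()}
    cold = min(utils, key=utils.get)          # emptiest system as migration target
    return [
        f"{name} at {utils[name]:.0%}: migrate data toward {cold} ({utils[cold]:.0%})"
        for name, u in utils.items()
        if u >= high and name != cold
    ]

# One array at 90% capacity, the other at 20% -- the scenario above.
print(rebalance_advice({"array-a": (90.0, 100.0), "array-b": (20.0, 100.0)}))
```

A real deployment would pull the used/total figures from each array's management interface, but the imbalance logic is the same.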

Software-Defined Storage (SDS) is one way to handle data growth, as we will see shortly. However, it adds to the issue of complexity: managing SDS requires new know-how, covering the data migration itself, the maintenance of legacy systems, and the migration of legacy applications. This is a significant factor in the insurance industry, for example.

Of course, no one has unlimited resources to throw at these challenges. Storage remains a cost center: even in times of data-driven transformation, it is not practical to allocate profit to cover storage investment costs. As a result, many organizations believe they are stuck with a specific legacy system, whether due to a lack of transformation culture and agenda, budget, or know-how, or simply because too many other long-term commitments cannot be postponed or scrapped.

If those are the constraints, then what are the options? Only the step into the cloud?

 

Traditional RAID storage

These systems are highly efficient for specific use cases: dedicated applications and environments with demanding, well-defined performance requirements.

RAID delivers the highest levels of storage performance and reliability. It underpins the collection, consolidation and harvesting of data to transform it into real insight. Traditional RAID storage also continues to evolve, supporting both legacy enterprise applications and modern cloud-native applications, and it builds on tried-and-tested technologies that have been proven in challenging scenarios over time. RAID can, however, be more complex to apply in distributed environments.
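The reliability RAID delivers rests on well-understood mechanisms such as parity. As a minimal illustration, unrelated to any ETERNUS internals, a RAID 5-style parity block is the bytewise XOR of the data stripes, which allows any single lost stripe to be reconstructed from the survivors:

```python
# RAID 5-style parity in miniature: the parity block is the bytewise XOR
# of all equal-length data stripes. Losing any one stripe, XOR of the
# remaining stripes plus parity recovers it.
from functools import reduce

def xor_parity(stripes: list[bytes]) -> bytes:
    """Bytewise XOR across a list of equal-length byte strings."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*stripes))

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_parity(data)

# Simulate losing the second stripe and rebuilding it.
recovered = xor_parity([data[0], data[2], parity])
assert recovered == data[1]
```

Production RAID implementations do this in hardware or optimized firmware, with striping, caching and rebuild scheduling on top, but the underlying arithmetic is exactly this simple.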

 

Data-driven transformation in practice

What does this look like in practice?

GKL Marketing-Marktforschung is a Germany-based market research company that has created a highly successful business based on data. It extracts knowledge from massive amounts of retail store-shelf data and provides retailers with a reliable basis for pricing decisions. Each year, GKL generates some 50 million datasets – and the number is growing.

To do this, the company relies on high-performance, highly available databases. To continue meeting these requirements, GKL initiated an infrastructure transformation with Fujitsu to obtain higher performance and an expanded cache that shortens access times to the database. Its solution was one of the first in Germany to deploy the state-of-the-art Fujitsu Storage ETERNUS DX600 S5 RAID storage system with an Extreme Cache Pool to boost performance. Additional security is also guaranteed by an accelerated read/write cache that generates a full backup of about 30 TB in just 30 minutes.
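That backup figure implies a sustained data rate that is easy to sanity-check. Assuming decimal units (1 TB = 1000 GB), 30 TB in 30 minutes works out to roughly 17 GB/s:

```python
# Back-of-envelope throughput from the figures quoted above.
# Decimal TB/GB assumed (1 TB = 1000 GB).
volume_gb = 30 * 1000        # 30 TB expressed in GB
duration_s = 30 * 60         # 30 minutes expressed in seconds
rate_gb_s = volume_gb / duration_s
print(f"{rate_gb_s:.1f} GB/s")
```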

The combination of solutions from Fujitsu and Commvault is an incredibly effective tool for database-centered projects. Backup software from Commvault ensures a constant backup of the large databases on Fujitsu’s ETERNUS AF all-flash system. Current data can be restored at any time via real-time backup. For archiving, the customer uses the Fujitsu Storage ETERNUS LT260 tape library.

 

Ideal for coping with unexpected growth: Software-Defined Storage

SDS is perfect for handling massive or unpredictable data growth situations, with an easy scale-out concept of “just add additional nodes” while keeping management simple with a single storage system.

But that’s far from the whole story. SDS is specifically designed for complex, high-growth environments and supports new-age workloads. Microservices for flexibility, containers for deployment, and extensive use of APIs all stress the underlying systems and storage in quite different ways.

SDS is also highly attuned to cloud-native apps, which require fast responses to changing conditions. SDS thrives here because of its ability to support diverse workloads that can change dramatically over a short time.

An SDS system consists of a management layer and an abstraction layer, to which the different storage media are attached. Around these layers sit the consumers of storage: users, virtual machines, compute nodes, and the network that connects them.
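In outline, that layering can be sketched as a single logical pool that virtualizes heterogeneous media behind one interface. The class and method names below are hypothetical, chosen only to make the structure concrete:

```python
# Minimal sketch of an SDS-style abstraction layer: heterogeneous physical
# nodes presented as one logical pool. All names here are illustrative.

class StorageNode:
    """A physical node with some storage medium attached."""
    def __init__(self, name: str, media: str, capacity_tb: float):
        self.name, self.media, self.capacity_tb = name, media, capacity_tb
        self.used_tb = 0.0

class SDSPool:
    """Abstraction layer: one pool, one management point, many nodes."""
    def __init__(self):
        self.nodes: list[StorageNode] = []

    def add_node(self, node: StorageNode) -> None:
        # Scale-out in action: "just add additional nodes" to grow the pool.
        self.nodes.append(node)

    def capacity_tb(self) -> float:
        return sum(n.capacity_tb for n in self.nodes)

    def allocate(self, size_tb: float) -> StorageNode:
        # Management-layer decision: place data on the least-utilized node.
        target = min(self.nodes, key=lambda n: n.used_tb / n.capacity_tb)
        target.used_tb += size_tb
        return target

pool = SDSPool()
pool.add_node(StorageNode("node-1", "nvme-flash", 50.0))
pool.add_node(StorageNode("node-2", "sata-disk", 200.0))
print(pool.capacity_tb())  # the consumer sees one pool, not two media types
```

The point of the sketch is the interface: callers allocate from the pool and never need to know which medium, or which node, ends up holding the data.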

The advantages of SDS are its easy scalability and its ability to access and collect data in multi-cloud and hybrid cloud environments. The difference from traditional RAID is that SDS consolidates the multiple storage options across a typical enterprise network into a single storage system with one integrated virtualization layer for all storage media, even including the cloud. The trade-off is that overall service levels may decrease slightly compared with dedicated systems.

As noted earlier, managing SDS creates its own complexity as it requires new know-how. Fujitsu recognizes this and offers SDS to customers in a way that mitigates potential complexity by delivering fully functional infrastructure, plus a fully tested storage system and virtualization layer that are ready to use.

 

The three reasons to investigate SDS

SDS will not always be the best fit. So, where and when does it make sense to bring it into the enterprise? In Fujitsu’s experience, there are three critical scenarios when you should be seriously investigating SDS:

  1. In tackling unpredictable or unstructured data growth. This is a challenge many customers face today.
  2. In running an increasing number of apps on/off the cloud. Most customers probably do this already.
  3. In implementing a hardware-agnostic storage solution that can manage data in distributed environments. This is becoming increasingly important and is likely to be a core requirement for all enterprises.

 

How to overcome limited resources

Earlier, we looked at the impact of limited resources. There are further options to prevent this from becoming a showstopper.

Cloud economics has changed how businesses fund innovation. High, one-off investment costs are out. Instead, companies now look for regular, predictable costs that are scalable in proportion to business growth.

Fujitsu uSCALE delivers flexible, on-premises IT infrastructure as a service, with monthly consumption-based billing tied to actual usage. Customers benefit from an IT solution that focuses precisely on their specific needs, saves investment costs, enables dynamic growth, and realizes faster time to value.

The concept is straightforward: choose the appropriate services to accelerate digital transformation and de-risk infrastructure investments. uSCALE is a cloud-like consumption model in which you gain freedom from upfront investments and pay only for what you use.
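The pay-per-use idea can be illustrated with a toy monthly bill. The rate, commitment and usage figures below are invented for illustration and do not reflect actual uSCALE pricing terms:

```python
# Toy consumption-based billing, in the spirit of pay-for-what-you-use.
# Rate, minimum commitment and usage figures are invented assumptions.

def monthly_bill(used_tb: float, rate_per_tb: float, committed_tb: float = 0.0) -> float:
    """Bill the greater of actual usage and any minimum commitment."""
    billable = max(used_tb, committed_tb)
    return billable * rate_per_tb

# As usage grows with the business, the bill scales in proportion --
# no large one-off investment up front.
for used in (40.0, 55.0, 70.0):
    print(f"{used} TB used -> {monthly_bill(used, rate_per_tb=25.0, committed_tb=50.0):.2f}")
```

The minimum-commitment term reflects a common pattern in consumption models, where a baseline capacity is reserved and anything above it is billed on actual use.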

The increased financial and technical flexibility makes businesses more resilient and improves time to market, thanks to pre-provisioned buffer capacity deployed in the data center ahead of business needs.

If data growth, complexity and limited resources are the three factors that data center managers must juggle toward data-centric transformation, then – with RAID, SDS and uSCALE – Fujitsu has workable ways to build an IT strategy that is also the foundation for innovation.

 

 


About the Author:

Elisabeth Babelotzky

Global Product Marketing Storage
