Oct 09 2017

The Fujitsu Approach: AI with a Human Touch

[Image: FUJITSU Server PRIMERGY CX400 M4, open enclosure with four server nodes]

The idea of Artificial Intelligence (AI), a sapient, if not sentient, entity created by humans in their own image, has fascinated people since ancient Egyptian sculptors claimed they could breathe life into the sacred statues they carved out of stone. Since then, the concept hasn't lost its attraction, although in literature and film, at least, it hasn't always carried positive connotations. Over the past 60 years, however, scientists and engineers have tried to devise more humane forms of AI: devices that may help us solve existential problems instead of creating them. NVIDIA's GTC Europe conference provides us with a perfect platform to demonstrate our most recent advancements in the field.

A Bit of History
While Fujitsu has been a pioneer in many areas of ICT development, we must confess that AI wasn't one of them. When the term "Artificial Intelligence" was coined at the Dartmouth Conference of 1956, only researchers from the U.S. and the UK had the means to do significant work. Moreover, it soon turned out that many early researchers were overly optimistic about both the possible results and the time required to achieve them: in 1958, Herbert A. Simon and Allen Newell, who together with others had developed programming milestones such as the Logic Theorist, the General Problem Solver (GPS) and the Information Processing Language (IPL), predicted that a digital computer would become the world's chess champion within ten years. In reality, it took until 1997, when IBM's Deep Blue defeated reigning world champion Garry Kasparov. Such bold forecasts starkly contrasted with the actual results: despite laying substantial groundwork in fields like computer vision and natural language processing, researchers were only taking their first steps in exploring a whole new continent of potential ICT usage scenarios. In addition, the 'electronic brains' of the 1950s, 60s and early 70s simply lacked the compute power and the knowledge (information) needed to perform seemingly simple tasks, such as recognizing a face, that humans do in a flash. As a result, interest in AI research projects waned, and by the mid-70s funding was massively reduced or withdrawn.

This was, however, a temporary phenomenon. Thanks to the burgeoning IT revolution of the early 1980s, in particular the rise of the PC and, more importantly, the "expert system", people's and organizations' fascination with AI soon leapt back to its old level and beyond. At this point, other nations started their own research efforts; a prominent example was the Fifth Generation Computer Systems project, launched by Japan's Ministry of International Trade and Industry in 1982, which focused on creating a workstation that used massively parallel processing to produce results. Although several models were built over the following decade, none of them was a commercial success, mainly because competing SPARC- and x86-based machines turned out to be both faster and more affordable. Still, many experts don't rate the project as a failure, but argue it was merely ahead of its time, a judgment that appears at least partially justified in light of the parallel-computing renaissance that has driven GPU/GPGPU development since at least 2007. Today, using high-end servers and workstations that excel at tasks like simulation (e.g. in fluid dynamics or computational finance), image processing, data mining or improving search algorithms is about to become the new normal.

Fujitsu and AI
Fujitsu, and especially Fujitsu Laboratories, started to develop AI technologies during this "second boom of AI" in the 1980s, and unlike many government-backed projects, hasn't stopped working on them since. In the first phase, this meant accumulating the required know-how and technologies as well as participating in joint developments and experiments; we then gradually moved on to integrating the results of our research into our products, and have been doing so for quite some time, albeit not in the form of a structured system that was labeled and promoted as AI. This all changed with the release of Zinrai, our framework for human-centric AI, in November 2015. Its name derives from "shippu-zinrai", the Japanese expression for "lightning-fast," denoting the exceptional speed at which it supports human decision-making and action. In other words, it can be used to build extremely knowledgeable, fast-paced digital assistants that help people do better work rather than deprive them of their jobs, a vision that's more in line with Asimov's Three Laws of Robotics than with the Skynet incarnations popularized in the Terminator movies. Today, Zinrai serves as the backbone for various AI-backed services and solutions in Fujitsu's portfolio.


Fig. 1: The PRIMERGY CX2570 M4 server node with Tesla P100 GPUs is the core of Fujitsu's AI-capable scale-out servers

AI in Day-to-Day Operations
Aside from delivering these solutions and services, we are constantly looking for new ways to bring the benefits of AI and AI-capable systems to energy, healthcare and manufacturing facilities or financial institutions that want to build their own solutions on-premise. Here, we opt for the more classic approach of selling hardware that can be configured to support AI functions and software. A key element of this strategy is our scale-out servers, namely the FUJITSU Server PRIMERGY CX400 M4 (pictured at the top) equipped with FUJITSU Server PRIMERGY CX2570 M4 server nodes. In this combination, the CX400 modular enclosure serves as a 'container' that houses the server nodes as well as other relevant components, such as power supplies, fans and networking controllers. At the very core, however, is the FUJITSU Server PRIMERGY CX2570 M4. Originally developed as a base unit for HPC and VDI environments, each node may be fitted with two CPUs from Intel's Xeon® Processor Scalable family, including the 28-core top model running at clock speeds of up to 3.2 GHz in Turbo mode, and up to 2 TB of main memory. In our context, the more important fact is that it also has room for up to four NVIDIA® Tesla® P100 GPUs, which are particularly well suited to so-called deep learning workloads. Support for up to four Tesla V100 accelerators for NVLink or two Tesla V100 accelerators for PCIe has also been announced.
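Before deploying deep learning software on such a node, it usually pays to verify that all installed accelerators are actually visible to the operating system. Here is a minimal sketch in Python, assuming only that the standard nvidia-smi utility shipped with NVIDIA's drivers is on the PATH (the expected output line is illustrative):

```python
import csv
import io
import subprocess

def list_gpus():
    """Ask nvidia-smi for the GPUs installed in the local node."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,name,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each row looks roughly like: "0, Tesla P100-PCIE-16GB, 16280 MiB"
    return list(csv.reader(io.StringIO(out)))

if __name__ == "__main__":
    for index, name, memory in list_gpus():
        print(f"GPU {index.strip()}: {name.strip()} ({memory.strip()})")
```

On a fully populated CX2570 M4 node, such a query should report four Tesla P100 devices; anything less usually points to a seating or driver issue.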

But what is deep learning? It is often described as a subset of AI functions that aims to mimic the learning processes inside the human brain. Readers who happen to be parents may recall how their children learned their first words (e.g. "doggie" or "car") and how they proceeded from linking these terms to specific objects to linking them to a whole class of objects with identical or very similar feature sets. Technically, this can be described as an abstraction process based on feature extraction, which has to pass through multiple iterations before a child is able to identify any car or dog without making occasional mistakes. Deep learning essentially follows the same procedure, only in this case the learner is not a child, but a computer. So far, this sounds similar to what many machine learning solutions do; yet deep learning differs in that its algorithms don't work in a linear fashion, but are organized in a strict hierarchy. When deep learning is executed, each algorithm in the hierarchy performs a non-linear transformation on its input and uses what it learns to create a new statistical model as output. This occurs at every level of the hierarchy and is repeated as often as needed to reach acceptable levels of both abstraction and accuracy.
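To make this hierarchy of non-linear transformations a little more tangible, here is a minimal sketch in Python with NumPy; the layer sizes, weights and input are invented purely for illustration and have no connection to Zinrai or any specific product:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """The non-linear transformation applied at each level of the hierarchy."""
    return np.maximum(0.0, x)

# Hypothetical hierarchy: raw input -> two hidden levels -> abstract output.
layer_sizes = [64, 32, 16, 8]
weights = [rng.normal(0.0, 0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Each layer transforms the representation produced by the layer below it."""
    for w in weights:
        x = relu(x @ w)   # non-linear transformation of this level's input
    return x

sample = rng.normal(size=(1, 64))   # one raw input vector
features = forward(sample)
print(features.shape)               # (1, 8): a compact, more abstract representation
```

Training such a stack means adjusting the weights at every level at once, which is exactly the kind of massively parallel arithmetic that GPUs like the Tesla P100 accelerate.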

The main advantage of this approach is that it lends itself naturally to what is called unsupervised learning: regular machine learning applications often require programmers to do feature extraction, apply a classification scheme and then translate the results into reliable code. Deep learning programs, by contrast, need no such moderators; once started, they can take on the extraction and classification jobs by themselves. The result is a class of self-learning, ever-improving systems that excel at pattern recognition in terms of both speed and accuracy, enabling diverse industries to deliver better results in less time (a short code sketch follows the list below):

  • Providers of geographic and geospatial information services and solutions may use AI to scan their large databases of satellite imagery to find ground formations that indicate the presence of oil fields or rare earths, to determine the health of crops or the distribution of alpine and other flora in tourist resorts, or to assist emergency services in saving lives after a hurricane.
  • In healthcare, AI can help doctors and clinic personnel to identify risks and care patterns as well as to prescribe and carry out the most appropriate treatments.
  • In bioinformatics, AI will help researchers to accelerate DNA sequencing as well as the identification, comparison and analysis of genes and their potential effects on human health.
  • Retailers and vendors of branded goods could use the technology to attract customers to the products they want to sell.
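As a minimal illustration of unsupervised feature extraction, the following Python/NumPy sketch trains a tiny autoencoder on unlabeled data: the network learns a compact representation purely by trying to reconstruct its input, with no human-supplied labels or feature definitions. All sizes, the learning rate and the synthetic data are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

# Unlabeled data: 500 samples of 20 features with hidden 3-D structure.
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 20))
data = np.tanh(latent @ mixing)            # note: no labels anywhere

# One-hidden-layer autoencoder: reconstruct the input through a narrow
# bottleneck, forcing the network to discover compact features itself.
n_in, n_hidden = 20, 3
w_enc = rng.normal(0.0, 0.1, size=(n_in, n_hidden))
w_dec = rng.normal(0.0, 0.1, size=(n_hidden, n_in))
lr = 0.1

for epoch in range(500):
    codes = np.tanh(data @ w_enc)          # feature extraction, unsupervised
    recon = codes @ w_dec                  # reconstruction from learned features
    err = recon - data
    # Gradients of the mean squared reconstruction error.
    grad_dec = codes.T @ err / len(data)
    grad_codes = err @ w_dec.T * (1 - codes ** 2)   # tanh derivative
    grad_enc = data.T @ grad_codes / len(data)
    w_dec -= lr * grad_dec
    w_enc -= lr * grad_enc
    if epoch % 100 == 0:
        print(f"epoch {epoch}: reconstruction MSE = {np.mean(err ** 2):.4f}")
```

The falling reconstruction error shows the network teaching itself useful features from raw data, the same principle that, at vastly larger scale, lets GPU-equipped systems mine satellite imagery, medical records or genome data without hand-crafted rules.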

To learn more about Fujitsu's AI-based services and solutions, please visit our related microsite about Fujitsu's Technology and Service Vision.


About the Author:

Timo Lampe

Product Marketing Manager, Global Marketing Server, Fujitsu
