
Release of “Fugaku-LLM” – a large language model trained on the supercomputer “Fugaku”

TOKYO, May 10, 2024 – (JCN Newswire) – A team of researchers in Japan released Fugaku-LLM, a large language model (1) with enhanced Japanese language capability, trained using the RIKEN supercomputer Fugaku. The team is led by Professor Rio Yokota of Tokyo Institute of Technology, Associate Professor Keisuke Sakaguchi of Tohoku University, Koichi Shirahata of Fujitsu Limited, Team Leader Mohamed Wahib of RIKEN, Associate Professor Koji Nishiguchi of Nagoya University, Shota Sasaki of CyberAgent, Inc., and Noriyuki Kojima of Kotoba Technologies Inc.

To train large language models on Fugaku, the researchers developed distributed training methods, including porting the deep learning framework Megatron-DeepSpeed to Fugaku to optimize the performance of Transformers there. They accelerated the dense matrix multiplication library used by Transformers, optimized communication performance by combining three types of parallelization techniques, and accelerated the collective communication library on the Tofu interconnect D.
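As an illustration of how such hybrid distributed training is typically organized (this is a sketch, not the team's actual configuration), the worker count is factored into data-, tensor-, and pipeline-parallel degrees whose product equals the total number of workers. The degrees below are hypothetical:

```python
# Illustrative sketch of combining three parallelization techniques
# (data, tensor, and pipeline parallelism). The tensor/pipeline degrees
# chosen here are hypothetical, not the configuration used on Fugaku.

def partition_workers(total_workers, tensor_parallel, pipeline_parallel):
    """Split workers into (data, tensor, pipeline) parallel degrees.

    The three degrees must multiply exactly to the total worker count.
    """
    model_parallel = tensor_parallel * pipeline_parallel
    if total_workers % model_parallel != 0:
        raise ValueError("parallel degrees must divide the worker count")
    data_parallel = total_workers // model_parallel
    return data_parallel, tensor_parallel, pipeline_parallel

# Example: 13,824 workers (one per Fugaku node used in training),
# with hypothetical tensor/pipeline degrees.
dp, tp, pp = partition_workers(13824, tensor_parallel=4, pipeline_parallel=24)
print(dp, tp, pp)  # 144 data-parallel replicas of a 4 x 24 model grid
```

Each data-parallel replica holds one full copy of the model sharded across its tensor/pipeline grid; gradient averaging across replicas is the collective-communication step that the Tofu interconnect D optimizations accelerate.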

Fugaku-LLM has 13 billion parameters (2), larger than the 7-billion-parameter models that have been widely developed in Japan. It has enhanced Japanese capabilities, with an average score of 5.5 on the Japanese MT-Bench (3), the highest performance among open models trained on original data produced in Japan. In particular, it reached a remarkably high score of 9.18 on humanities and social sciences tasks.

Fugaku-LLM was trained on proprietary Japanese data collected by CyberAgent, along with English and other data. The source code of Fugaku-LLM is available on GitHub (4) and the model is available on Hugging Face (5). Fugaku-LLM can be used for research and commercial purposes as long as users comply with the license.


Background

In recent years, the development of large language models (LLMs) has been active, especially in the United States. In particular, the rapid spread of ChatGPT (6), developed by OpenAI, has profoundly impacted research and development, economic systems, and national security. Countries other than the U.S. are also investing enormous human and computational resources to develop LLMs in their own countries. Japan, too, needs to secure computational resources for AI research so as not to fall behind in this global race. There are high expectations for Fugaku, the flagship supercomputer system in Japan, and it is necessary to improve the computational environment for large-scale distributed training on Fugaku to meet these expectations.

Therefore, Tokyo Institute of Technology, Tohoku University, Fujitsu, RIKEN, Nagoya University, CyberAgent, and Kotoba Technologies have started a joint research project on the development of large language models.

Role of each institution/company

Tokyo Institute of Technology: General oversight, parallelization and communication acceleration of large language models (optimization of communication performance by combining three types of parallelization, acceleration of collective communication on the Tofu interconnect D)

Tohoku University: Collection of training data and model selection

Fujitsu: Acceleration of computation and communication (acceleration of collective communication on the Tofu interconnect D, performance optimization of pipeline parallelization) and implementation of pre-training and post-training fine-tuning


RIKEN: Distributed parallelization and communication acceleration of large-scale language models (acceleration of collective communication on Tofu interconnect D)

Nagoya University: Study on application methods of Fugaku-LLM to 3D generative AI

CyberAgent: Provision of training data

Kotoba Technologies: Porting of deep learning framework to Fugaku

GPUs (7) are the common choice of hardware for training large language models. However, there is a global shortage of GPUs due to the large investments many countries are making to train LLMs. Under such circumstances, it is important to show that large language models can be trained using Fugaku, which uses CPUs instead of GPUs. The CPUs used in Fugaku are Japanese CPUs manufactured by Fujitsu and play an important role in revitalizing Japanese semiconductor technology.

By extracting the full potential of Fugaku, this study succeeded in increasing the computation speed of matrix multiplication by a factor of 6 and the communication speed by a factor of 3. To maximize distributed training performance on Fugaku, the deep learning framework Megatron-DeepSpeed was ported to Fugaku and the dense matrix multiplication library was accelerated for Transformers. For communication acceleration, the researchers combined three types of parallelization techniques and accelerated the collective communication on the Tofu interconnect D. The knowledge gained from these efforts can be utilized in the design of the next-generation computing infrastructure after Fugaku and will greatly enhance Japan’s future advantage in the field of AI.

An easy-to-use, open, and secure large language model with 13 billion parameters


In 2023, many large language models were developed by Japanese companies, but most have fewer than 7 billion parameters. Since the performance of large language models generally improves as the number of parameters increases, the 13-billion-parameter model the research team developed is likely to be more powerful than other Japanese models. Although larger models have been developed outside of Japan, they also require large computational resources, making models with too many parameters difficult to use. Fugaku-LLM balances high performance with practical resource requirements.

In addition, most models developed by Japanese companies employ continual learning (8), in which open models developed outside of Japan are continually trained on Japanese data. In contrast, Fugaku-LLM is trained from scratch using the team’s own data, so the entire learning process can be understood, which is superior in terms of transparency and safety.

Fugaku-LLM was trained on 380 billion tokens using 13,824 nodes of Fugaku, with about 60% of the training data being Japanese, combined with English, mathematics, and code. Unlike models continually trained on Japanese data, Fugaku-LLM learned much of its information directly from Japanese. It is the best model among open models produced in Japan and trained on original data. In particular, it was confirmed that the model achieves a high benchmark score of 9.18 on humanities and social sciences tasks, and it is expected to perform natural dialogue based on keigo (honorific speech) and other features of the Japanese language.
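For a rough sense of the scale of the data mixture described above (a back-of-the-envelope calculation, not official per-language token counts):

```python
# Illustrative split of the 380-billion-token training corpus, assuming
# "about 60%" Japanese as stated in the announcement; the remainder
# covers English, mathematics, and code (exact shares are not published).
TOTAL_TOKENS = 380_000_000_000
JAPANESE_SHARE = 0.60  # approximate, per the announcement

japanese_tokens = int(TOTAL_TOKENS * JAPANESE_SHARE)
other_tokens = TOTAL_TOKENS - japanese_tokens
print(f"Japanese: ~{japanese_tokens / 1e9:.0f}B tokens")          # ~228B
print(f"English/math/code: ~{other_tokens / 1e9:.0f}B tokens")    # ~152B
```

On the order of 228 billion Japanese tokens is far more Japanese text than a continually-trained model typically sees during its additional training phase, which is the basis for the claim that Fugaku-LLM learned much of its knowledge natively in Japanese.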


Future Development

The results from this research are being made public through GitHub and Hugging Face so that other researchers and engineers can use them to further develop large language models. Fugaku-LLM can be used for research and commercial purposes as long as users comply with the license. Fugaku-LLM will also be offered to users via the Fujitsu Research Portal from May 10th, 2024.

In the future, as more researchers and engineers participate in improving the models and their applications, the efficiency of training will be improved, leading to next-generation innovative research and business applications, such as the linkage of scientific simulation and generative AI, and social simulation of virtual communities with thousands of AIs.

Acknowledgement

This research was supported by the Fugaku policy-supporting proposal “Development of Distributed Parallel Training for Large Language Models Using Fugaku” (proposal number: hp230254).

[1] Large language model: Models the probability with which text appears and can predict the text (response) that follows a given context (query).

[2] Parameter: A measure of the size of a neural network. The more parameters, the higher the performance of the model, but the more data is required for training.

[3] Japanese MT-Bench: Benchmark test provided by Stability AI.

[4] GitHub: Platform used to publish open source software.

[5] Hugging Face: Platform used to publish AI models and datasets.

[6] ChatGPT: A large language model developed by OpenAI, which has brought about a major social change, surpassing 100 million users in about two months after its release.

[7] GPU: Originally produced as an accelerator for graphics, but has recently been used to accelerate deep learning.


[8] Continual learning: A method for performing additional training on a large language model that has already been trained. Used for training language models in different languages or domains.

About Fujitsu

Fujitsu’s purpose is to make the world more sustainable by building trust in society through innovation. As the digital transformation partner of choice for customers in over 100 countries, our 124,000 employees work to resolve some of the greatest challenges facing humanity. Our range of services and solutions draw on five key technologies: Computing, Networks, AI, Data & Security, and Converging Technologies, which we bring together to deliver sustainability transformation. Fujitsu Limited (TSE:6702) reported consolidated revenues of 3.7 trillion yen (US$26 billion) for the fiscal year ended March 31, 2024 and remains the top digital services company in Japan by market share. Find out more: www.fujitsu.com.

Press Contacts

Fujitsu Limited

Public and Investor Relations Division

Inquiries (https://bit.ly/3rrQ4mB)

Copyright 2024 JCN Newswire . All rights reserved.


