My video blog: http://www.lofter.com/blog/cloudrivers

Amazon Elastic Compute Cloud (Amazon EC2) R8g instances, powered by the latest generation AWS Graviton4 processors, provide the best price performance in Amazon EC2 for memory-optimized workloads. R8g instances are ideal for memory-intensive workloads, such as databases, in-memory caches, and real-time big data analytics. R8g instances offer up to 30% better performance, and larger instance sizes with up to 3x more vCPUs and memory than the seventh-generation AWS Graviton3-based R7g instances.

https://aws.amazon.com/ec2/instance-types/r8g/
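
For anyone who wants to try the new family, here is a minimal sketch of launching an R8g instance with the AWS SDK for Python (boto3). It is an illustration rather than an official walkthrough: the region, AMI ID and instance size are placeholders to replace with your own values, and the AMI must be an arm64 image because Graviton is an Arm-based processor.

```python
# Minimal sketch: launch a Graviton4-based R8g instance with boto3.
# The AMI ID is a placeholder; use any arm64 (Graviton-compatible) AMI
# available in a region where R8g instances are offered.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder arm64 AMI
    InstanceType="r8g.xlarge",         # one of the R8g sizes listed on the page above
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "graviton4-r8g-test"}],
    }],
)

print(response["Instances"][0]["InstanceId"])
```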

Amazon Web Services Inc. today unveiled two next-generation chips from its AWS-designed silicon families during an AWS re:Invent conference keynote: Graviton4 for general-purpose cloud computing and Trainium2 for high-efficiency artificial intelligence training.

The Graviton family of Arm-based processors is used by AWS to deliver high-performance and reduced costs for customers for a broad range of cloud compute workloads in the Amazon Elastic Compute Cloud, or EC2. According to Amazon, Graviton4 provides up to 30% better computing power with 50% more cores and 75% more memory bandwidth than the current-generation Graviton3 processors.

“Graviton4 marks the fourth generation we’ve delivered in just five years, and is the most powerful and energy-efficient chip we have ever built for a broad range of workloads,” said David Brown, vice president of compute and networking at AWS. “Silicon underpins every customer workload, making it a critical area of innovation for AWS.”

Amazon has been building its own custom silicon since 2018, starting with Graviton1, which powered the A1 EC2 instance. Each successive Graviton generation has brought significant gains in performance, cost and energy efficiency. In 2021, Brown told theCUBE, SiliconANGLE Media’s livestreaming studio, that Graviton’s availability drove major ecosystem growth for AWS as customers saw immediate improvements in their workloads.

As of today, AWS offers more than 150 different Graviton-powered Amazon EC2 instance types globally and has rolled out more than 2 million Graviton processors.

Graviton4 processors will be available in the new memory-optimized Amazon EC2 R8g instances, which will let customers run high-performance databases, in-memory caches and big data analytics workloads at scale with improved execution. R8g instances will offer larger sizes with up to three times more virtual central processing units and three times more memory than the current R7g instances. Amazon said the new R8g instances are available in preview today, with general availability planned in the coming months.
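
Because the rollout is gradual, it can be worth checking whether R8g sizes are already offered in a given region before trying to launch one. Below is a minimal boto3 sketch under that assumption; the region name is a placeholder, and an empty result simply means the family has not reached that region yet.

```python
# Minimal sketch: list the R8g instance-type offerings available in one region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

offerings = ec2.describe_instance_type_offerings(
    LocationType="region",
    Filters=[{"Name": "instance-type", "Values": ["r8g.*"]}],  # wildcard matches all R8g sizes
)

for offering in offerings["InstanceTypeOfferings"]:
    print(offering["InstanceType"])
```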

Trainium2: next-gen chip designed for AI training in the cloud

As the AI foundation models and large language models behind today’s generative AI applications get larger, they require the processing of massive datasets, which means ever-increasing time and cost to train them. The largest and most advanced models can scale from hundreds of billions to trillions of parameters and can generate text, images, audio, video and software code.

Today, AWS announced Trainium2, a purpose-built high-performance chip for training FMs and LLMs with up to trillions of parameters. The company said it can deliver up to four times the training performance and three times the memory capacity of the first-generation Trainium chip, while improving energy efficiency by up to two times.

“With the surge of interest in generative AI, Trainium2 will help customers train their ML models faster, at a lower cost, and with better energy efficiency,” said Brown.

Trainium chips act as AI accelerators for deep-learning algorithms in high-performance AI and machine learning workloads. They are optimized for training the natural language processing, computer vision and recommender models used in AI applications such as text summarization, code generation, question answering, and image and video generation.
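
On the software side, training on Trainium goes through the AWS Neuron SDK, which plugs the chips into PyTorch via the PyTorch/XLA backend. The sketch below is a heavily simplified illustration of a training loop under that assumption; the model, data and hyperparameters are toy placeholders, and it presumes a Trn instance with the Neuron packages already installed.

```python
# Minimal sketch of a PyTorch training loop on a Trainium-backed XLA device.
# Assumes a Trn instance with the AWS Neuron SDK (torch-neuronx) installed,
# which exposes the NeuronCores through the PyTorch/XLA backend.
# The model and data are toy placeholders.
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()                # the NeuronCore-backed XLA device

model = nn.Linear(512, 10).to(device)   # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):                  # toy loop over random data
    x = torch.randn(32, 512).to(device)
    y = torch.randint(0, 10, (32,)).to(device)

    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    xm.mark_step()                      # compile and execute the queued XLA graph
```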

Trainium2 will be available in new Amazon EC2 Trn2 instances, each of which includes 16 Trainium2 chips. Customers will be able to scale these instances up to 100,000 Trainium2 chips in the next generation of EC2 UltraClusters, interconnected with AWS Elastic Fabric Adapter petabit-scale networking and capable of delivering up to 65 exaflops of compute. At that scale, Amazon said, customers will be able to train a 300-billion-parameter LLM in weeks rather than months.
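
As a quick back-of-the-envelope check of those cluster-level figures, the per-chip and per-instance numbers below are derived from the announcement's own claims, not separately published specifications.

```python
# Back-of-the-envelope arithmetic from the cluster-level figures above.
# The derived per-chip and per-instance numbers are estimates, not published specs.
CHIPS_PER_INSTANCE = 16          # Trainium2 chips in one Trn2 instance
CLUSTER_CHIPS = 100_000          # UltraCluster scale cited above
CLUSTER_EXAFLOPS = 65            # aggregate compute cited above

per_chip_pflops = CLUSTER_EXAFLOPS * 1000 / CLUSTER_CHIPS      # exaflops -> petaflops
per_instance_pflops = per_chip_pflops * CHIPS_PER_INSTANCE

print(f"~{per_chip_pflops:.2f} PFLOPS per chip")               # ~0.65 PFLOPS
print(f"~{per_instance_pflops:.1f} PFLOPS per Trn2 instance")  # ~10.4 PFLOPS
```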
