
Tag: HPC

HPE FRONTIER - The world's most powerful supercomputer.




The Hewlett Packard Enterprise (HPE) Frontier supercomputer is one of the most powerful supercomputers in the world. It was developed in cooperation with the US Department of Energy (DOE) and is located at Oak Ridge National Laboratory in Tennessee, USA. The Frontier supercomputer was designed to help scientists solve the most complex and pressing problems in a variety of fields, including medicine, climate science and energy.

Tech specs

The HPE Frontier supercomputer is built on the HPE Cray EX architecture and combines AMD EPYC processors with AMD Instinct MI250X GPU accelerators. It has a theoretical peak performance of more than 1.5 exaflops (over 1.5 quintillion floating-point operations per second) and was the first system to exceed one exaflop on the HPL benchmark. The system is backed by roughly 700 petabytes of storage and can transfer data at up to 4.4 terabytes per second.
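To put those units in perspective, here is a quick back-of-the-envelope sketch in Python; the laptop figure and the operation count are illustrative assumptions, not measured values.

    # Back-of-the-envelope arithmetic for what "1.5 exaflops" means.
    PEAK_FLOPS = 1.5e18      # 1.5 exaflops = 1.5 * 10**18 floating-point ops per second
    LAPTOP_FLOPS = 1.0e11    # ~100 gigaflops, a rough assumption for a laptop CPU

    print(f"Frontier peak ~ {PEAK_FLOPS / LAPTOP_FLOPS:,.0f} laptop-equivalents")  # ~15,000,000

    ops = 1e21               # a mid-sized scientific campaign, for illustration
    print(f"10**21 operations at peak: {ops / PEAK_FLOPS / 60:.1f} minutes")       # ~11 minutes

In other words, a workload that would occupy an ordinary computer for centuries fits into minutes on a machine of this class, which is what makes the applications described below practical.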

Applications

The HPE Frontier supercomputer is used for a wide range of applications, including climate modeling, materials science and astrophysics. It is also being used to develop new drugs and treatments for diseases such as cancer and COVID-19.

Climate modeling

The Frontier supercomputer is being used to improve our understanding of the Earth's climate system and to develop more accurate climate models. This will help scientists predict the impacts of climate change and develop mitigation strategies.

Materials science

The supercomputer is also being used to model and simulate the behavior of materials at the atomic and molecular levels. This will help scientists develop new materials with unique properties, such as increased strength, durability and conductivity.

Astrophysics

The Frontier supercomputer is being used to simulate the behavior of the universe on a large scale, including the formation of galaxies and the evolution of black holes. This will help scientists better understand the nature of the universe and the forces that govern it.

Drug development

The supercomputer is being used to simulate the behavior of biological molecules, such as proteins and enzymes, in order to develop new drugs and treatments for diseases. This will help scientists identify new targets for drug development and develop more effective treatments for a wide range of diseases.

Summary

The HPE Frontier supercomputer represents a major step forward in the development of high-performance computing. Its unprecedented computing power and storage capacity make it a valuable tool for researchers in many fields. Its ability to simulate complex systems at a high level of detail helps us better understand the world around us and develop solutions to some of the most pressing challenges facing humanity.



NVIDIA H100 - Revolutionary graphics accelerator for high-performance computing




NVIDIA, a leading graphics processing unit (GPU) manufacturer, has unveiled the NVIDIA H100, a revolutionary GPU accelerator designed for high-performance computing (HPC). This groundbreaking accelerator is built to meet the needs of the most demanding workloads in artificial intelligence (AI), machine learning (ML), data analytics and more.

How it's built

The NVIDIA H100 is a powerful GPU accelerator based on NVIDIA's Hopper architecture, the successor to the Ampere architecture used in the A100. It is designed to deliver unparalleled performance for HPC workloads and supports a wide range of applications, from deep learning to scientific simulation. The H100 succeeds the NVIDIA A100 Tensor Core GPU, which was itself one of the most powerful GPUs on the market.

Unique features

The NVIDIA H100 is equipped with several features that set it apart from other GPU accelerators on the market. Some of the most notable are:

  • High performance – The H100 is designed to deliver the highest level of performance for HPC workloads. The SXM variant delivers on the order of 34 teraflops of double-precision (FP64) and 67 teraflops of single-precision (FP32) throughput, and its fourth-generation Tensor Cores push mixed-precision throughput far higher for AI training (a rough way to measure achieved throughput yourself is sketched after this list).
  • Memory bandwidth – The H100 ships with 80 GB of high-bandwidth memory delivering roughly 2-3 TB/s depending on the variant (HBM2e on the PCIe card, HBM3 on the SXM module), allowing it to handle large data sets and complex calculations with ease.
  • NVLink – The H100 supports fourth-generation NVIDIA NVLink, which lets multiple GPUs exchange data at up to 900 GB/s per GPU and work together as a single unit. This enables faster data transfer and processing between GPUs, which can significantly increase performance in HPC workloads.
  • Scalability – The NVIDIA H100 is highly scalable and can be used in a wide variety of HPC applications. It can be deployed in both on-premises and cloud environments, making it a flexible solution for organizations of all sizes.
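As referenced in the list above, achieved throughput can be estimated with a simple benchmark. The sketch below, assuming PyTorch with CUDA support is available, times a large half-precision matrix multiply and converts the result to teraflops; the matrix size, data type and iteration count are arbitrary choices, and the number printed will vary with GPU model and clocks.

    import time
    import torch

    # Rough achieved-throughput check: time a large half-precision matrix multiply
    # on the GPU and convert the result to teraflops. Assumes PyTorch was built
    # with CUDA support and at least one NVIDIA GPU is visible.

    def measured_tflops(n=8192, dtype=torch.float16, iters=20):
        a = torch.randn(n, n, device="cuda", dtype=dtype)
        b = torch.randn(n, n, device="cuda", dtype=dtype)
        for _ in range(3):            # warm-up: clock ramp-up, kernel selection
            a @ b
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            a @ b
        torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
        flops = 2 * n**3 * iters      # a matmul costs roughly 2*n^3 operations
        return flops / elapsed / 1e12

    if __name__ == "__main__":
        props = torch.cuda.get_device_properties(0)
        print(f"{props.name}: ~{measured_tflops():.1f} TFLOPS (fp16 matmul)")

The printed figure is the sustained rate of this one kernel rather than the card's theoretical peak, so it will land below the headline Tensor Core numbers.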

Comparison

When comparing the NVIDIA H100 to other GPU accelerators on the market, there are a few key differences to consider. Here is a brief comparison between the H100 and some of its top competitors:

  • NVIDIA H100 vs NVIDIA A100 – The H100 is the successor to the A100: it moves from the Ampere to the Hopper architecture, offers considerably higher memory bandwidth, and is further optimized for HPC and AI workloads.
  • NVIDIA H100 vs AMD Instinct MI100 – The H100 outperforms the MI100 in terms of single-precision performance, memory bandwidth and power efficiency.
  • NVIDIA H100 vs Intel Data Center GPU Flex 170 – The H100 is designed specifically for HPC and AI workloads, whereas Intel's Flex series is aimed more at media and visual-cloud use cases and carries far less memory (16 GB versus the H100's 80 GB).

Summary

The NVIDIA H100 is a powerful and versatile GPU accelerator designed for high-performance computing workloads. Its raw performance, memory bandwidth and NVLink support make it an excellent choice for organizations that require superior computing power. Compared with its top competitors, the H100 stands out for its HPC optimization and scalability, making it one of the most capable accelerators available today.



Servers and Data Centers




Data centers and servers are the backbone of today's digital world. They store, process and transmit huge amounts of data every day, enabling us to access information, communicate with others and conduct business online. In this article, we will outline the importance of data centers and servers, how they operate, and the challenges and trends shaping their future.

What is a data center?

A data center is a facility used to store computer systems and related components, such as telecommunications and storage systems. Data centers are designed to provide high levels of availability, security and reliability to ensure that stored and processed data is always available and protected.

They come in a variety of sizes, from small server rooms to large corporate facilities that can cover hundreds of square meters. Some data centers are owned and operated by individual organizations, while others are operated by third-party service providers and offer hosting services to multiple customers.

How do servers work?

Servers are the backbone of data centers, providing the computing power needed to process and store data. A server is a computer system that is designed to provide specific services or resources to other computers or devices connected to a network.

Servers can perform many functions, such as hosting websites, running applications and storing and processing data. A server can be a physical machine or a virtual machine that runs on top of a physical machine. Virtualization technology allows multiple virtual servers to run on a single physical machine, allowing organizations to maximize computing resources and reduce costs.
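To make the idea of a server concrete (a process that listens on the network and answers requests from clients), here is a minimal sketch using only Python's standard library; the port number and response text are arbitrary choices for illustration.

    # Minimal illustration of a "server": a process that listens on the network
    # and answers requests from clients. Standard library only.
    from http.server import HTTPServer, BaseHTTPRequestHandler

    class HelloHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Every client request gets a small plain-text response.
            body = b"Hello from the server\n"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Port 8080 is an arbitrary choice for this example.
        HTTPServer(("0.0.0.0", 8080), HelloHandler).serve_forever()

A production service in a data center layers TLS, authentication, monitoring and redundancy on top of this basic request-response pattern, and typically runs as many replicas behind a load balancer rather than as a single process.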

Challenges and trends

As the demand for digital services continues to grow, data centers and servers face several challenges and trends that will shape their future.

  • One of the primary challenges is the need for greater energy efficiency. Data centers consume huge amounts of energy, and as the number of data centers grows, so does their environmental impact. To meet this challenge, data centers are adopting more energy-efficient technologies, such as advanced cooling systems, and using renewable energy sources such as solar and wind power.
  • Another challenge is the need for greater security. Data breaches can have serious consequences, both for organizations and individuals. Data centers are implementing more stringent security measures, such as multi-factor authentication and encryption, to protect against cyber attacks.
  • In terms of trends, edge computing is becoming an important part of data center and server architecture. It involves processing data closer to the source, reducing latency and improving performance. This is especially important for applications requiring real-time data processing, such as autonomous vehicles and industrial automation.

Summary

Data centers and servers are essential components of the digital infrastructure that supports our modern world. They enable us to access and store vast amounts of information, and provide the computing power needed for critical applications and services. As the demand for digital services continues to grow, data centers and servers will face ongoing challenges and trends that will shape their future. By adopting innovative technologies and strategies, data centers and servers can continue to evolve and meet the needs of our rapidly changing digital world.


Supermicro Ultra SuperServer




Supermicro Ultra SuperServer® is Supermicro's 11th-generation, high-performance, general-purpose server. The Ultra is designed to deliver the highest levels of performance, flexibility, scalability and serviceability in demanding IT environments, and to power critical enterprise workloads.

Unmatched performance: with support for two 2nd Generation Intel® Xeon® Scalable processors with up to 28 cores per socket and up to 6 TB of ECC DDR4 memory across 24 DIMM slots (with Intel® Optane™ DC Persistent Memory support), the Ultra is built for demanding and complex workloads. The Ultra is also available in NVMe all-flash configurations, where users benefit from reduced latency and increased IOPS: with NVMe, storage latency can drop by up to 7x and throughput can rise by up to 6x, so the ROI benefits of NVMe deployments are immediate and significant.
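The headline figures above are easy to sanity-check with some quick arithmetic; in the sketch below the per-module capacity and the baseline SATA latency are illustrative assumptions, not vendor specifications.

    # Quick arithmetic behind the memory and NVMe claims above.
    dimm_slots = 24
    module_gb = 256                          # assumed average capacity per module
    print(f"Max memory ~ {dimm_slots * module_gb / 1024:.1f} TB")        # ~6.0 TB

    sata_latency_us = 70                     # assumed baseline for a SATA SSD
    nvme_latency_us = sata_latency_us / 7    # "up to 7x lower latency"
    print(f"NVMe latency ~ {nvme_latency_us:.0f} us (vs ~{sata_latency_us} us SATA)")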

Exceptional flexibility: discover the freedom to adapt to different workloads with the versatile Supermicro Ultra system. Improve your server environment with the right combination of computing power, memory and storage performance, network flexibility and serviceability. This highly scalable system provides excellent expansion and storage options thanks to its patented vertical system. With support for multiple PCIe add-on cards, the Ultra future-proofs your business against ever-changing compute and storage requirements. The Ultra server is designed to handle virtually any workload in even the most demanding environments.

Continuous reliability and serviceability: achieve higher levels of availability and data integrity with the latest Intel® Xeon® Scalable processors, ECC DDR4 memory modules, hot-swappable NVMe-enabled drive bays and energy-efficient redundant power supplies. Designed from the ground up as an enterprise-class platform, the Ultra is fully equipped with energy-efficient components and built-in redundancy.

Supermicro Ultra servers are designed to deliver the greatest possible performance, flexibility and scalability, making them a great choice for the most demanding Enterprise, Data Center and Cloud Computing environments.


NVIDIA hits BIG in the Data Center market




Nvidia is a company known for producing high-performance graphics cards and gaming hardware, but the company is also making waves in the data center space with its Nvidia Data Center platform. The platform offers a set of hardware and software products designed to accelerate data center workloads, from machine learning and AI to scientific computing and virtual desktop infrastructure.

NVIDIA's hardware

At the heart of the Nvidia Data Center platform is a line of data center GPUs, including the H100, A100, V100 and T4. These chips are optimized to accelerate a wide range of workloads, from training deep learning models to running virtual desktops. They offer high levels of parallelism and performance, and are designed to be scalable and meet the needs of large data centers. In addition to GPUs, Nvidia also offers a range of data center hardware products, including the DGX A100 system, which combines eight A100 GPUs with NVLink interconnect technology to deliver high performance computing and storage in a single server.
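As a rough illustration of what a multi-GPU node such as the DGX A100 looks like from software, the following sketch (assuming PyTorch with CUDA support is installed) lists the visible GPUs and checks which pairs can communicate directly, for example over NVLink; the output depends entirely on the machine it runs on.

    import torch

    # List the GPUs visible on this node and check which pairs can access each
    # other's memory directly (peer-to-peer, e.g. over NVLink or PCIe).
    count = torch.cuda.device_count()
    for i in range(count):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.0f} GiB")

    for i in range(count):
        for j in range(count):
            if i != j and torch.cuda.can_device_access_peer(i, j):
                print(f"GPU {i} -> GPU {j}: direct peer access available")

On an eight-GPU NVLink-connected system, every pair should report direct peer access, which is what allows frameworks to treat the node as one large accelerator.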

Management software

In addition to its hardware products, Nvidia also offers a suite of software products designed to help data center operators manage and optimize their workloads. This includes Nvidia GPU Cloud (NGC), which provides a repository of pre-trained deep learning models, as well as tools for deploying and managing GPU-accelerated workloads. Nvidia also offers a range of software tools for managing and optimizing GPU performance, including the Nvidia CUDA Toolkit, which provides a set of libraries and APIs for developing GPU-accelerated applications, and the Nvidia GPU Management Toolkit, which provides tools for monitoring and optimizing GPU performance in data center environments.
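As an example of the kind of telemetry these management tools build on, the sketch below polls basic utilization and memory statistics through NVML, using the nvidia-ml-py Python bindings; it assumes an NVIDIA driver is installed and only reads metrics, changing nothing on the system.

    # Poll basic per-GPU health metrics via NVML, the same library that
    # nvidia-smi and most data-center monitoring tools build on.
    # Requires the NVIDIA driver and the nvidia-ml-py package (import name: pynvml).
    import time
    import pynvml

    pynvml.nvmlInit()
    try:
        n = pynvml.nvmlDeviceGetCount()
        handles = [pynvml.nvmlDeviceGetHandleByIndex(i) for i in range(n)]
        for _ in range(5):                        # five samples, one per second
            for i, h in enumerate(handles):
                util = pynvml.nvmlDeviceGetUtilizationRates(h)
                mem = pynvml.nvmlDeviceGetMemoryInfo(h)
                print(f"GPU {i}: {util.gpu}% busy, "
                      f"{mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB used")
            time.sleep(1)
    finally:
        pynvml.nvmlShutdown()

Fleet-level tools aggregate exactly this kind of per-device data across thousands of nodes to drive scheduling, alerting and capacity planning.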

Use cases

The Nvidia Data Center platform is used in a wide range of industries and applications, from scientific computing and weather forecasting to financial services and healthcare. For example, the platform is used by the National Center for Atmospheric Research to perform high-resolution climate change simulations and by the Centers for Disease Control and Prevention to analyze genomic data to identify disease outbreaks. In the financial services industry, the Nvidia Data Center platform is used to run complex risk simulations and predictive analytics models, while in healthcare it is used to accelerate medical imaging and drug discovery research.

Summary

The Nvidia Data Center Platform offers a powerful set of hardware and software products designed to accelerate data center workloads across a wide range of industries and applications. With a focus on GPU acceleration and high-performance computing, the platform is well suited for machine learning and artificial intelligence workloads, as well as scientific computing and virtual desktop infrastructure. As data center workloads grow in complexity and scale, the Nvidia Data Center platform is likely to play an increasingly important role in accelerating data center performance and enabling new applications and use cases.

