Upgraded highmem nodes on Apocrita
The Apocrita highmem
nodes have just been upgraded so that they contain newer
CPUs with more modern instruction sets.
On August 9th, the High Performance Computing for the School of Engineering and Materials Science workshop was held in the Sofa Room at Dept. W. Around 16 researchers who already use Apocrita attended the event. The event covered six topics: Linux commands for Apocrita, HPC clusters at QMUL, Launching HPC jobs, Applications for SEMS, Using GPUs, and Miscellaneous.
We still encounter jobs on the HPC cluster that try to use all the cores on the node on which they're running, regardless of how many cores they requested, leading to node alarms. Sometimes, jobs try to use exactly twice or one and a half times the number of allocated cores, or even that number squared. This was a little perplexing at first. In your enthusiasm to parallelize your code, make sure a library you call hasn't already done so, otherwise the two layers of parallelism multiply.
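As an illustration, here is a minimal Python sketch of one way to keep within your allocation, assuming the scheduler exposes the granted core count in the NSLOTS environment variable (a Grid Engine convention); the work function is a hypothetical stand-in for real per-task computation:

```python
import os

# Assumption: NSLOTS holds the number of cores granted by the scheduler
# (a Grid Engine convention); fall back to 1 so the script also runs locally.
n_cores = int(os.environ.get("NSLOTS", "1"))

# Threaded libraries (e.g. NumPy built against OpenBLAS or MKL) otherwise
# start one thread per physical core, which is how a job can end up using
# far more cores than it requested.
os.environ.setdefault("OMP_NUM_THREADS", str(n_cores))

from multiprocessing import Pool


def work(x):
    return x * x  # stand-in for the real per-task computation


if __name__ == "__main__":
    # Limit the number of worker processes to the allocated core count.
    with Pool(processes=n_cores) as pool:
        print(pool.map(work, range(8)))
```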
To make better use of the resources available on GPU nodes, the Apocrita and Andrena GPU nodes now support 12 cores per GPU. Please update your job scripts from 8 cores and 11G per GPU to 12 cores and 7.5G per GPU: this maintains approximately the same total RAM per job (12 × 7.5G = 90G versus 8 × 11G = 88G) while increasing the core count.
On May 3, 2024, Queen Mary University of London held a workshop at the Department W building in Whitechapel to introduce our students to Linux. Students from a variety of programmes at Queen Mary attended, many of them working towards Master's and PhD degrees.
In this tutorial we'll show you how to create a new Git project within RStudio using either a new or an existing GitHub repository.
The High Performance Computing (HPC) team organised an event to celebrate February's bonus day this year. The goal was to introduce the HPC team members to the research community at QMUL, and to give researchers the opportunity to ask the HPC experts in person about any issues related to the performance of their HPC jobs on Apocrita.
Here is a quick summary of what we covered in the session.
Whilst most Apocrita users will want to use the R module or RStudio via OnDemand for R workflows, it is also possible to use R inside Anaconda.
In a previous blog, we discussed ways we could use multiprocessing
and
mpi4py
together to use multiple nodes of GPUs. We will cover some machine
learning principles and two examples of pleasingly parallel machine learning
problems. Also known as embarrassingly parallel problems, I rather call them
pleasingly because there isn't anything embarrassing when you design your
problem to be run in parallel. When doing so, you could launch very similar
functions to each GPU and collate their results when needed.
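As a rough sketch of that pattern, the following assumes mpi4py is available and that each MPI rank has been mapped to its own GPU; run_task is a hypothetical stand-in for the real per-GPU work:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # one MPI rank per GPU, e.g. launched with mpirun
size = comm.Get_size()


def run_task(task_id):
    # Hypothetical per-rank task: each rank works on its own slice of the
    # problem, so no communication is needed until the final gather.
    return {"rank": task_id, "result": task_id ** 2}


local_result = run_task(rank)

# Collate the per-GPU results on rank 0 once every rank has finished.
all_results = comm.gather(local_result, root=0)

if rank == 0:
    print(all_results)
```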
NVIDIA recently announced the GH200 Grace Hopper Superchip which is a combined CPU+GPU with high memory bandwidth, designed for AI workloads. These will also feature in the forthcoming Isambard AI National supercomputer. We were offered the chance to pick up a couple of these new servers for a very attractive launch price.
The CPU is a 72-core ARM-based Grace processor, connected to an H100 GPU via the NVIDIA chip-2-chip (C2C) interconnect, which delivers 7x the bandwidth of the PCIe Gen5 links commonly found in our other GPU nodes. This effectively allows the GPU to access the system memory seamlessly. This datasheet contains further details.
Since this new chip offers a lot of potential for accelerating AI workloads, particularly for workloads requiring large amounts of GPU RAM or involving a lot of memory copying between the host and the GPU, we've been running a few tests to see how this compares with the alternatives.
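For illustration only (this is not the benchmark suite referred to above), a minimal host-to-device copy timing along these lines could look like the following, assuming PyTorch with CUDA support is installed:

```python
import time
import torch

# Assumption: a CUDA-capable GPU is visible to PyTorch.
assert torch.cuda.is_available()

# ~1 GiB buffer allocated in host (CPU) memory.
x = torch.empty(1024 * 1024 * 256, dtype=torch.float32)

torch.cuda.synchronize()
start = time.perf_counter()
y = x.to("cuda", non_blocking=False)   # host-to-device copy
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

gib = x.numel() * x.element_size() / 2**30
print(f"Host-to-device: {gib / elapsed:.1f} GiB/s")
```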