
Welcome to the QMUL HPC blog

The next era for Apocrita is here

For much of the year we have been working on a major project to upgrade Apocrita to a new operating system, Rocky Linux 9 (hereafter Rocky 9). As part of the project, we have deployed a new package building tool to help us recompile all of the research applications to work on the new system. We are now draining the cluster in batches to upgrade it to Rocky 9, so this is a good opportunity to move across if you haven't already and avoid longer queueing times as we reduce the number of remaining CentOS 7 nodes.

A PyTorch DDP Case Study With ImageNet

In this blog post, we will play about with neural networks on a dataset called ImageNet to give some intuition on how these networks work. We will train them on Apocrita with DistributedDataParallel and present benchmarks to guide how many GPUs to use. This is a follow-on from a previous blog post, where we explained how to use DistributedDataParallel to speed up your neural network training with multiple GPUs.

High Performance Computing (HPC) events from late 2024

2024 has been a productive year for HPC outreach and education across the schools at Queen Mary University of London. We have formed alliances with managers and PIs from various schools within the University who understand the value that HPC can add to their scientific research. We are pleased to share our latest event of 2024:

Unification of Memory on the Grace Hopper Nodes

The delivery of new GPUs for research is continuing, most notably with the new Isambard-AI cluster at Bristol. As new cutting-edge GPUs are released, software engineers are tasked with keeping up with the new architectures and features these GPUs offer.

The new Grace Hopper GH200 nodes, as announced in a previous blog post, consist of a 72-core NVIDIA Grace CPU and an H100 Tensor Core GPU. One of the key innovations is NVIDIA NVLink Chip-to-Chip (C2C) and unified memory, which allows data to be transferred between the CPU and the GPU quickly, seamlessly and automatically. It also allows GPU memory to be oversubscribed, so the GPU can handle data much larger than its own memory capacity, potentially tackling out-of-GPU-memory problems. This lets software engineers focus on implementing algorithms without having to think too much about memory management.

This blog post will demonstrate manual GPU memory management and then introduce managed and unified memory, with simple examples to illustrate their benefits. We'll try to keep this at an introductory level, but the post does assume basic knowledge of C++, CUDA and compiling with nvcc.
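As a flavour of what the post covers, the minimal sketch below (not taken from the post itself) shows the unified-memory style of programming: `cudaMallocManaged` returns a single pointer that both the host and the GPU can dereference, so no explicit `cudaMemcpy` calls are needed and the driver migrates pages on demand.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Simple kernel: add 1.0f to every element of the array.
__global__ void addOne(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        data[i] += 1.0f;
    }
}

int main()
{
    const int n = 1 << 20;   // one million floats
    float *data = nullptr;

    // Allocate managed (unified) memory: one pointer, visible to host and device.
    cudaMallocManaged(&data, n * sizeof(float));

    // Initialise on the host; no explicit host-to-device copy is required.
    for (int i = 0; i < n; ++i) {
        data[i] = 0.0f;
    }

    // Launch the kernel; pages are migrated to the GPU on demand.
    addOne<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();

    // Read the result back on the host, again without cudaMemcpy.
    printf("data[0] = %f\n", data[0]);   // expected: 1.000000

    cudaFree(data);
    return 0;
}
```

Compiled with, for example, `nvcc -o add_one add_one.cu`, this contrasts with the manual `cudaMalloc`/`cudaMemcpy` approach that the post also demonstrates.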

High Performance Computing for the Wolfson Institute of Population Health

If you go for a run every morning, or drive to work on weekdays, you will know that every journey is unique. For me, every High Performance Computing (HPC) workshop I deliver has its own personality: the audience, the material tailored to that audience, the interactions and questions, and of course, the energy of the community. Last Thursday, 26 September, an HPC workshop for the Wolfson Institute of Population Health was held from 2:00 p.m. to 5:00 p.m. The seminar included, as usual, presentations, a coffee break, a quiz and treats, and photographs to make it memorable.

High Performance Computing for SEMS

On 9 August, the High Performance Computing for the School of Engineering and Materials Science workshop was held in the Sofa Room at Dept. W. Around 16 researchers who already use Apocrita attended the event. The event covered six topics: Linux commands for Apocrita, HPC clusters at QMUL, Launching HPC jobs, Applications for SEMS, Using GPUs, and Miscellaneous.