
Welcome to the QMUL HPC blog

Simplification of parallel queues on Apocrita

We are simplifying the way that multi-node parallel jobs are run on the cluster.

Currently, users wishing to run multi-node MPI jobs on the public queues must choose beforehand whether to run on the nxv or the sdv parallel nodes, and configure the job accordingly for the number of cores on each type of node.
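As a rough illustration of the current approach, a multi-node MPI job script might look something like the sketch below. The parallel environment name, the node-selection resource, the slot count and the module name are placeholders for illustration, not the definitive Apocrita syntax; the post itself has the full details.

    #!/bin/bash
    #$ -cwd
    #$ -j y
    #$ -pe parallel 96      # slot count must suit the core count of the chosen node type
    #$ -l infiniband=nxv    # explicit choice of nxv (or sdv) nodes; resource name assumed
    #$ -l h_rt=24:0:0

    module load intelmpi    # hypothetical MPI module name
    mpirun -np ${NSLOTS} ./my_mpi_program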

Cluster update summary

As part of our commitment to providing stable and manageable systems, here is a round-up of some recent updates we have been working on behind the scenes:

Getting REAL with Fortran

Fortran provides a variety of intrinsic representations of real numbers. In this post we look at what these representations are and how we choose a particular representation for our work.
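As a flavour of what the post covers, the short sketch below declares real variables of different kinds using only standard Fortran features (the iso_fortran_env kind constants and selected_real_kind); it is a generic example rather than code taken from the post.

    program real_kinds_demo
      use, intrinsic :: iso_fortran_env, only : real32, real64
      implicit none

      ! Kind chosen by requested precision: at least 12 significant decimal digits
      integer, parameter :: wp = selected_real_kind(p=12)

      real(real32) :: x32 = 1.0_real32 / 3.0_real32   ! 32-bit representation
      real(real64) :: x64 = 1.0_real64 / 3.0_real64   ! 64-bit representation
      real(wp)     :: xwp = 1.0_wp / 3.0_wp           ! precision-driven kind

      print *, 'real32: ', x32
      print *, 'real64: ', x64
      print *, 'selected_real_kind(p=12): ', xwp, ' kind =', wp
    end program real_kinds_demo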

Sizing your Apocrita jobs for quicker results

At any one time, a typical HPC cluster is usually full. This is not such a bad thing, since it means the substantial investment is working hard for the money rather than sitting idle. A less ideal situation is having to wait too long to get your research results. However, jobs are constantly starting and finishing, and many new jobs begin running shortly after being added to the queue. If your resource requirements are rather niche, or very large, then you will be competing with other researchers for scarcer resources. In any case, whatever sort of jobs you run, it is important to choose resources optimally in order to get the best results. Using fewer cores, although increasing the eventual run time, may result in a much shorter queuing time.