We are simplifying the way multi-node parallel jobs are run on the cluster.
Currently, users wishing to run multi-node MPI jobs on the public queues
must choose beforehand whether to run on the nxv or the sdv parallel nodes,
and configure the job for the number of cores on that type of node.
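To make the current scheme concrete, here is a sketch of what such a job
script can look like. The node type names (nxv, sdv) come from the text
above; the directive style follows Grid Engine conventions, but the exact
resource name, parallel environment name, and module name are assumptions
and may differ on your cluster:

```shell
#!/bin/bash
# Hypothetical multi-node MPI job script: the node type and its core
# count must be chosen up front under the current scheme.
#$ -cwd
#$ -l node_type=nxv    # assumed resource name selecting the nxv nodes
#$ -pe parallel 96     # total cores: a whole number of nxv nodes' worth

module load openmpi    # module name is an assumption

# Grid Engine sets $NSLOTS to the number of granted slots.
mpirun -np "$NSLOTS" ./my_mpi_app
```

If the job were instead to run on the sdv nodes, both the resource request
and the slot count would have to be edited to match that node type's core
count, which is exactly the duplication the simplification aims to remove.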
As part of our commitment to providing stable and manageable systems, here is a
round-up of some recent updates we have been working on behind the scenes:
Fortran provides a variety of intrinsic representations of real numbers. In
this post we look at what these representations are and how we choose a
particular representation for our work.
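As a minimal sketch of the territory that post covers, the standard
iso_fortran_env module provides named kind constants, and
selected_real_kind lets you request a representation by the precision you
need (the printed values are compiler-dependent):

```fortran
! Sketch: choosing a real representation portably in standard Fortran.
program kinds
  use, intrinsic :: iso_fortran_env, only: real32, real64
  implicit none
  real(real32) :: x = 1.0_real32
  real(real64) :: y = 1.0_real64

  ! precision() reports the decimal digits the representation supports.
  print *, 'real32 decimal precision:', precision(x)
  print *, 'real64 decimal precision:', precision(y)

  ! selected_real_kind returns a kind with at least the requested
  ! precision, or a negative value if no such kind exists.
  print *, 'kind with at least 12 digits:', selected_real_kind(p=12)
end program kinds
```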
We are pleased to announce a new scratch storage array based on fast NVMe
hardware, which should make I/O-intensive tasks much faster.
We have installed the NAG Fortran compiler on Apocrita for use by researchers
from the School of Economics and Finance. In this post we look at how to
access the compiler, why we may want to use it, and what we have to pay
special attention to.
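As a sketch of what access typically looks like on a cluster using
environment modules (the module name and availability here are
assumptions; nagfor is the NAG Fortran compiler driver):

```shell
# Hypothetical access sketch: the module name is an assumption.
module avail nag            # list available NAG compiler versions
module load nag             # load the default version

# Compile a source file with the NAG driver.
nagfor -o hello hello.f90
```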
On Wednesday 2019-02-20 at 14:00 we will be applying an upgrade to our
GitHub Enterprise instance to
version 2.16.2, which includes bug fixes and the latest security updates.
A typical HPC cluster is usually full. This is not such a bad thing, since it
means the substantial investment is working hard for the money, rather than
sitting idle. A less ideal situation is having to wait too long for your
research results. However, jobs are constantly starting and finishing, and
many new jobs begin running shortly after joining the queue. If your resource
requirements are niche, or very large, then you will be competing with other
researchers for scarcer resources.
Whatever sort of jobs you run, it is important to choose resources optimally
in order to get your results promptly. Requesting fewer cores, although
increasing the eventual run time, may result in a much shorter queuing time
and an earlier finish overall.
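As an illustration with made-up numbers, compare the total turnaround
(queue wait plus run time, in hours) of a large and a small request:

```shell
# Illustrative, made-up numbers (hours): turnaround = queue wait + run time.
large_job=$(( 48 + 6 ))    # e.g. a 96-core request: long wait, short run
small_job=$(( 2 + 18 ))    # e.g. a 24-core request: short wait, longer run
echo "large request: ${large_job}h, small request: ${small_job}h"
```

In this hypothetical case the smaller request finishes 34 hours earlier
overall, despite running three times longer.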