Modules Update March 2019
Since the last update (on 29/11/2018), we have updated the following module files:
On Wednesday 2019-02-20 at 14:00 we will be applying an upgrade to our GitHub Enterprise instance to version 2.16.2, which includes bug fixes and the latest security updates.
We have deployed the latest version of Environment Modules (4.2.1) across the cluster on all frontend and compute nodes.
As part of our commitment to regular upgrades to the HPC service, and to keep up with ever-growing demand, we are pleased to announce the addition of new hardware to the Apocrita HPC Cluster for the benefit of all QMUL Researchers.
A quick update listing modules that have been moved from the development environment to production, or deprecated.
In addition to the primary queue, there is a queue designed to minimise waiting times for short jobs and interactive sessions, added in response to users who requested the ability to quickly obtain qlogin sessions for quick tests and debugging. This short queue runs on a wider selection of nodes and is selected automatically if your runtime request is 1 hour or less.
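As a sketch of how this works (the exact directives may vary with your setup, and the script name below is hypothetical), a runtime request of 1 hour or less is all that is needed to become eligible for the short queue:

```shell
# Interactive session: request at most 1 hour of runtime and the
# scheduler will consider the short queue automatically.
qlogin -l h_rt=1:0:0

# Batch equivalent: a job script whose runtime request is 1 hour or less.
#$ -cwd
#$ -l h_rt=0:30:0
./my_test.sh   # hypothetical test script
```

No queue needs to be named explicitly in either case; the runtime request alone determines eligibility.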
We removed some problematic module files. Please check your job scripts for use of these modules:

Versions of Python older than 2.7.14 and 3.6.3 are being removed from Apocrita (python/2.7.13, python/2.7.13-1, python/2.7.13-3, python/3.6.1, python/3.6.2 and python/3.6.2-2). java/1.8.0_121-oracle causes problems with mass thread spawning on the cluster and will be removed; java/1.8.0_152-oracle will remain the default version loaded.

During the summer, home directories were migrated to the new storage platform. This means that quotas have grown slightly, as the underlying block size has increased.
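To find job scripts that still load one of the removed Python or Java modules, a recursive grep along these lines can help (run it from wherever your job scripts live; the exact pattern is an illustration, so adjust it to the modules you use):

```shell
# Report any files under the current directory that load a removed module;
# prints a friendly message when nothing matches.
grep -rlE 'python/(2\.7\.13|3\.6\.1|3\.6\.2)|java/1\.8\.0_121-oracle' . \
  || echo "No job scripts reference the removed modules."
```

Any file listed should be updated to load a current module version before its next submission.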
The qmquota command will tell you how much space you are using; note that quotas are applied to total size as well as to the number of files.
Each research group gets 1TB of storage space on the cluster free of charge; if your group does not yet have this allocation, please contact us and we can organise it.
QMUL has access to powerful Tier 2 (formerly known as Regional) HPC resources, predominantly for EPSRC researchers. If you run multi-node parallel code (using MPI, for example), you will likely benefit from using the Tier 2 clusters.
We identified a problem with the openmpi/2.0.2-gcc module and have removed it, as the correct interface was not being used for MPI communication between nodes. This resulted in potentially much slower communication and consequently jobs taking longer to run.
Programs should be rebuilt against the other available openmpi modules, which correctly select the InfiniBand interconnect by default for communication. Recent users of this module have been contacted directly.
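A rebuild might look like the following; the replacement module name here is an assumption, so check `module avail openmpi` for the versions actually installed on the cluster:

```shell
# Start from a clean environment, load a working Open MPI module
# (name assumed for illustration) and rebuild your program.
module purge
module load openmpi/3.0.0-gcc
mpicc -O2 -o my_app my_app.c   # my_app.c is your MPI source file
```

Jobs submitted after the rebuild will then use the InfiniBand interconnect for inter-node communication.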