Rocky 9 benefits
The majority of the cluster has now been upgraded to Rocky 9 and the remaining CentOS 7 nodes will be updated in due course. Some users may still be hesitant to move over, but there are a few good reasons why you should make the switch.
With the major operating system upgrade from CentOS 7 to Rocky 9, we want to ensure that using R, RStudio, and Open OnDemand (OOD) is as seamless as possible. This post includes new tips for a better experience, as well as reiterating important or frequently forgotten older ones.
A previous blog post covered Using R inside of Conda, but what if you want to use Python packages inside an existing R or RStudio via OnDemand session? This is where the reticulate R Interface to Python comes in.
Traditionally, we have recommended that users use a Python virtualenv or Conda environment to manage personal package installations via pip install and mamba install commands. But a new contender has entered the fray: uv, an extremely fast Python package and project manager, written in Rust.
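Whichever tool you choose, it is worth confirming that your packages really did land in the environment you intended. As a rough illustration (the numpy package here is just an example, not something the post requires), a quick check from inside Python might look like this:

```python
# Sanity check: confirm which Python interpreter and package versions
# are active in the current environment (virtualenv, Conda, or uv-managed).
import sys
from importlib import metadata

print("Interpreter:", sys.executable)  # should point inside your environment
print("Prefix:", sys.prefix)

# 'numpy' is only an example package; replace with whatever you installed.
try:
    print("numpy version:", metadata.version("numpy"))
except metadata.PackageNotFoundError:
    print("numpy is not installed in this environment")
```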
The Apocrita cluster has recently been upgraded from CentOS 7 to Rocky 9. There are some important things Python users need to know, such as how to migrate existing environments to Rocky 9 and how to tackle some common problems along the way.
For much of the year we have been working on a major project to upgrade Apocrita to a new operating system (Rocky Linux 9, hereafter known as Rocky 9). As part of the project, we have deployed a new package building tool to help us recompile all of the research applications to work on the new system.
The majority of the cluster has now been upgraded to Rocky 9. The remaining CentOS 7 nodes will be updated in due course.
In this blog post, we will play about with neural networks on a dataset called ImageNet, to give some intuition on how these neural networks work. We will train them on Apocrita with DistributedDataParallel and show benchmarks to give you a guide on how many GPUs to use. This is a follow-on from a previous blog post where we explained how to use DistributedDataParallel to speed up your neural network training with multiple GPUs.
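For a flavour of what this looks like in practice, here is a minimal sketch (not the exact code from the post) of wrapping a model in DistributedDataParallel, assuming the script is launched with torchrun so that one process runs per GPU; the model and data below are placeholders rather than a real ImageNet pipeline:

```python
# Minimal DistributedDataParallel sketch: one process per GPU, launched
# via torchrun (which sets the RANK/LOCAL_RANK/WORLD_SIZE variables).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # Join the process group; torchrun supplies the rendezvous details.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; in practice this would be e.g. a torchvision ResNet.
    model = torch.nn.Linear(224 * 224 * 3, 1000).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = torch.nn.CrossEntropyLoss()

    # Dummy batch standing in for ImageNet images and labels.
    images = torch.randn(8, 224 * 224 * 3, device=local_rank)
    labels = torch.randint(0, 1000, (8,), device=local_rank)

    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()  # gradients are averaged across all participating GPUs
    optimiser.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

A script along these lines would typically be launched inside a GPU job with something like `torchrun --nproc_per_node=<number of GPUs> train.py`.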