Exporting MLflow Experiments from Restricted HPC Systems

High-Performance Computing (HPC) environments, particularly in research and academic institutions, often block outbound TCP connections. Running a simple command-line ping or curl against the MLflow tracking URL from the HPC bash shell to verify packet transfer may succeed. However, communication fails and times out while running jobs on compute nodes.

This makes it impossible to track and manage experiments on MLflow. I faced this issue and built a workaround that bypasses direct communication. We'll cover:

  • Setting up a local MLflow server on the HPC, on a free port, backed by local directory storage.
  • Using the local tracking URI while running machine learning experiments.
  • Exporting the experiment data to a local temporary folder.
  • Transferring the experiment data from the local temp folder on the HPC to the remote MLflow server.
  • Importing the experiment data into the databases of the remote MLflow server.

I've deployed Charmed MLflow (MLflow server, MySQL, MinIO) using Juju, with everything hosted on MicroK8s on localhost. You can find the installation guide from Canonical here.

Prerequisites

Make sure you have Python loaded on your HPC and installed on your MLflow server. Throughout this article, I assume you have Python 3.12; adjust the commands accordingly if your version differs.

On HPC:

1) Create a virtual environment

python3 -m venv mlflow
source mlflow/bin/activate

2) Install MLflow

pip install mlflow

On both HPC and MLflow Server:

1) Install mlflow-export-import

pip install git+https://github.com/mlflow/mlflow-export-import/#egg=mlflow-export-import

On HPC:

1) Decide on a port where you want the local MLflow server to run. You can use the command below to check whether the port is free (it shouldn't list any process IDs):

lsof -i :<port-number>

2) Set the environment variable for applications that want to use MLflow:

export MLFLOW_TRACKING_URI=http://localhost:<port-number>

3) Start the MLflow server using the command below:

mlflow server \
    --backend-store-uri file:/path/to/local/storage/mlruns \
    --default-artifact-root file:/path/to/local/storage/mlruns \
    --host 0.0.0.0 \
    --port 5000

Here, we set the path to the local storage in a folder called mlruns. Metadata like experiments, runs, parameters, metrics, and tags, as well as artifacts like model files, loss curves, and other images, will be saved inside the mlruns directory. We can set the host to 0.0.0.0 or 127.0.0.1 (safer). Since the whole process is short-lived, I went with 0.0.0.0. Finally, assign a port number that isn't used by any other application.
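For reference, once the local server is up and MLFLOW_TRACKING_URI points at it, your training code logs to it exactly as it would to a remote server. Below is a minimal sketch; the experiment name, parameter, and metric values are placeholders, and it assumes the server from step 3 is listening on port 5000:

import os
import mlflow

# Picks up the local server address set earlier, e.g. http://localhost:5000
mlflow.set_tracking_uri(os.environ["MLFLOW_TRACKING_URI"])
mlflow.set_experiment("hpc-test-experiment")  # placeholder experiment name

with mlflow.start_run(run_name="local-smoke-test"):
    mlflow.log_param("learning_rate", 1e-3)
    mlflow.log_metric("loss", 0.42, step=1)
    # Artifacts (model files, plots, etc.) end up inside the local mlruns directory
    with open("notes.txt", "w") as f:
        f.write("logged from an HPC compute node")
    mlflow.log_artifact("notes.txt")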

(Optional) Sometimes, your HPC might not detect libpython3.12, the shared library Python needs to run. You can follow the steps below to find it and add it to your path.

Search for libpython3.12:

find /hpc/packages -name "libpython3.12*.so*" 2>/dev/null

This returns something like: /path/to/python/3.12/lib/libpython3.12.so.1.0

Set the path as an environment variable:

export LD_LIBRARY_PATH=/path/to/python/3.12/lib:$LD_LIBRARY_PATH

4) Export the experiment data from the mlruns local storage directory to a temp folder:

python3 -m mlflow_export_import.experiment.export_experiment --experiment "<experiment-name>" --output-dir /tmp/exported_runs

(Optional) Running the export_experiment function on the HPC bash shell may trigger thread utilisation errors like:

OpenBLAS blas_thread_init: pthread_create failed for thread X of 64: Resource temporarily unavailable

This happens because MLflow internally uses SciPy for artifact and metadata handling, which requests more threads through OpenBLAS than the limit allowed by your HPC. If you run into this issue, limit the number of threads by setting the following environment variables.

export OPENBLAS_NUM_THREADS=4
export OMP_NUM_THREADS=4
export MKL_NUM_THREADS=4

If the issue persists, try reducing the thread limit to 2.

5) Transfer the experiment runs to the MLflow server:

Move everything from the HPC to the temporary folder on the MLflow server.

rsync -avz /tmp/exported_runs <mlflow-server-username>@<host-address>:/tmp

6) Stop the local MLflow server and clean up the port:

lsof -i :<port-number>
kill -9 <pid>

On MLflow Server:

Our goal is to transfer the experiment data from the tmp folder into MySQL and MinIO.

1) Since MinIO is Amazon S3 compatible, it is accessed through boto3 (the AWS Python SDK). So, we will set up proxy AWS-like credentials and use them to communicate with MinIO via boto3.

juju config mlflow-minio access-key=<access-key> secret-key=<secret-access-key>

2) Below are the commands to transfer the data.

Set the MLflow server and MinIO addresses as environment variables. To avoid repeating this, we can add these exports to our .bashrc file.

export MLFLOW_TRACKING_URI="http://<cluster-ip_or_nodeport_or_load-balancer>:port"
export MLFLOW_S3_ENDPOINT_URL="http://<cluster-ip_or_nodeport_or_load-balancer>:port"
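(Optional) Before importing, you can sanity-check that boto3 can reach MinIO with the proxy credentials from the juju config step. The sketch below assumes those credentials are also exported as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (the variables boto3 reads) and that the artifact bucket is called mlflow; adjust both to your deployment:

import os
import boto3

# Talk to MinIO through its S3-compatible API using the proxy credentials
s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["MLFLOW_S3_ENDPOINT_URL"],
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)

# Listing the buckets confirms the endpoint and credentials work
print([b["Name"] for b in s3.list_buckets()["Buckets"]])

# The artifact bucket ("mlflow" is an assumption here) should appear in that list
print(s3.list_objects_v2(Bucket="mlflow", MaxKeys=5).get("KeyCount", 0))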

All the experiment files can be found under the exported_runs folder in the tmp directory. The import-experiment function finishes the job.

python3 -m mlflow_export_import.experiment.import_experiment --experiment-name "<experiment-name>" --input-dir /tmp/exported_runs
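To confirm the import went through, you can query the remote tracking server for the experiment and its runs. A short check, assuming MLFLOW_TRACKING_URI is still exported and using the same placeholder experiment name:

from mlflow.tracking import MlflowClient

client = MlflowClient()  # uses MLFLOW_TRACKING_URI from the environment

# Look up the imported experiment by its (placeholder) name
exp = client.get_experiment_by_name("<experiment-name>")
print(exp.experiment_id, exp.name)

# List the imported runs along with their metrics
for run in client.search_runs([exp.experiment_id]):
    print(run.info.run_id, run.data.metrics)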

Conclusion

This workaround let me track experiments even though communication and data transfers were restricted on my HPC cluster. Spinning up a local MLflow server instance, exporting experiments, and then importing them into my remote MLflow server gave me flexibility without having to change my workflow.

However, if you are dealing with sensitive data, make sure your transfer method is secure. Cron jobs and automation scripts could remove much of the manual overhead. Also, keep an eye on your local storage, as it is easy to fill up.

In the end, if you are working in a similar environment, this approach gives you a solution in a short time without requiring any admin privileges. Hopefully, this helps teams who are stuck with the same issue. Thanks for reading!

You can connect with me on LinkedIn.