Hackathon User Guide
This chapter explains how to set up your local machine to run ra2ce in a typical “hackathon” case.
Our hackathons usually consist of workflows such as:

- Data collection.
- Overlaying hazard(s) based on the collected data.
- Running ra2ce analyses based on the “n” overlaid hazard network(s).
- Collecting results (post-processing).
Some examples can be found in the Hackathon sessions documentation.
Based on the personal notes of Matthias Hauth, we will present here how to:
Build and run a docker image locally,
Push said image to the Deltares Harbor (Deltares Docker Harbor).
Use argo (Deploying Argo Workflows with Helm on Amazon EKS) to run workflows on the Amazon EKS cluster backed by s3 (Deploying Kubernetes on Amazon EKS Cluster with Terraform).
Keep in mind that this documentation may overlap with information already present in the Setting up infrastructure subsection.
Build and run a docker image
Prerequisites:

- Have Docker Desktop installed and the application open (check the introduction of Installation).
- Have the ra2ce repository checked out on your machine (check how to install ra2ce in Development mode).
- Run a command line using said checkout as your working directory.
First, let's bump the local ra2ce version (assuming we work from a v0.9.2 ra2ce checkout) so that we can later verify that our image was built correctly.
```shell
$ cz bump --devrelease 0 --increment patch
bump: version 0.9.2 → 0.9.3.dev0
tag to create: v0.9.3
increment detected: PATCH
```

Warning
This creates a local tag, which you don’t need to push (and it’s best not to).
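If you ever want to undo that local tag, plain git suffices. A minimal sketch; it deliberately works in a throwaway repository so nothing in your real checkout is touched:

```shell
# Throwaway repo so the demonstration does not touch your real checkout
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "init"

# Simulate the tag that 'cz bump' created, then delete it locally
git tag v0.9.3
git tag -d v0.9.3

git tag --list 'v0.9.3*'   # prints nothing: the tag is gone
```

In your actual checkout only the `git tag -d v0.9.3` line is needed; since the tag was never pushed, no remote cleanup is required.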
We can build a docker image based on the Dockerfile located in the repo:

```shell
cd ra2ce
docker build -t ra2ce:latest .
docker images                           # returns a list of all available images
docker run -it ra2ce:latest /bin/bash   # or use the IMAGE_ID from "docker images"
```
The command line looks like `(ra2ce_env) 4780e47b2a88:/ra2ce_src~$`, and you can navigate it by using `cd` and `ls` (it’s a Linux container).
ra2ce should be installed in the image and ready to be used as a “package”; you can verify its installation by simply running:

```shell
$ docker run -it ra2ce:latest python -c "import ra2ce; print(ra2ce.__version__)"
0.9.3.dev0
```
Push a docker image
Prerequisites:

- Have rights to publish on the registry (check Access permissions).
We (re)build the image with the correct registry prefix:

```shell
cd ra2ce
docker build -t containers.deltares.nl/ra2ce/ra2ce:user_test .
```

Note

The image name follows the pattern `registry_name/project_name/container_name:tag_name`.
You can check again whether the image is correctly built with any of the following commands:

```shell
docker run -it containers.deltares.nl/ra2ce/ra2ce:user_test
docker run -it containers.deltares.nl/ra2ce/ra2ce:user_test bash
docker run -it containers.deltares.nl/ra2ce/ra2ce:user_test python -c "import ra2ce; print(ra2ce.__version__)"
```
Then push to the online registry:

```shell
docker push containers.deltares.nl/ra2ce/ra2ce:user_test
```
Use argo workflows
Prerequisites:

- Have kubectl installed (Installation).
- Have argo installed (Local installation).
- Have aws installed (you can install and configure the AWS CLI by following the official user guidelines).
In `C:\Users\{your_username}\.aws`, modify `config` so that:

```
[default]
region=eu-west-1
```
- Go to https://deltares.awsapps.com/start/#/?tab=accounts:
  - You will see the RA2CE aws project; click on it.
  - Now select Access keys; a pop-up will show.
  - Copy the content of option 2 (the Copy button will do it for you). It should be something like:

```
[{a_series_of_numbers}_AWSPowerUserAccess]
aws_access_key_id={an_access_key_id}
aws_secret_access_key={a_secret_access_key}
aws_session_token={a_session_token}
```
- Now, go again to `C:\Users\{your_username}\.aws` and replace the `credentials` content with that of option 2, changing the header so that it only contains `default`. The final content of `credentials` should be something like:

```
[default]
aws_access_key_id={an_access_key_id}
aws_secret_access_key={a_secret_access_key}
aws_session_token={a_session_token}
```
Warning
These credentials need to be refreshed EVERY 4 hours!
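After refreshing, a quick way to sanity-check the file layout is with standard shell tools. A hedged sketch; it writes a throwaway sample file with placeholder values rather than touching your real credentials:

```shell
# Write a throwaway sample with placeholder values -- never commit real keys
sample=$(mktemp)
cat > "$sample" <<'EOF'
[default]
aws_access_key_id=AKIAEXAMPLE
aws_secret_access_key=example_secret
aws_session_token=example_token
EOF

# A well-formed file has a single [default] header plus the three aws_* keys
[ "$(grep -c '^\[' "$sample")" -eq 1 ] \
  && grep -q '^aws_access_key_id=' "$sample" \
  && grep -q '^aws_session_token=' "$sample" \
  && echo "credentials layout OK"
```

Pointing the same `grep` checks at your real `credentials` file will tell you whether the header replacement of the previous step was done correctly.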
- We will now modify `C:\Users\{your_username}\.kube\config`:

```shell
aws eks --region eu-west-1 update-kubeconfig --name ra2ce-cluster
```
Note

The generic form of this command is:

```shell
aws eks update-kubeconfig --region {region-code} --name {my-cluster}
```
Warning

This step has not been entirely verified, as we have so far not been able to generate the required data on a ‘clean’ machine. Instead, we copied and pasted the data from a machine where it was already properly configured.
Now we port-forward the argo-server service so that our local machine can reach argo:

```shell
kubectl -n argo port-forward service/argo-server 2746:2746
```
- It should now be possible to access your local argo at https://localhost:2746. An authentication token will be required; you can request it via the command line:

```shell
argo auth token
```

Copy and paste it.
Note

This authentication token expires within 15 minutes, so you will have to refresh it multiple times. If you don’t want to do this, you can always get the current status with:

```shell
kubectl get pods -n argo
```
Note

`-n argo` means the `argo` namespace.
- Submit a workflow:
  - Navigate to the location of your `.yml` (or `.yaml`) workflow.
  - Ensure the workflow’s namespace is set to `argo`; the `.yml` should start with something like:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  namespace: argo
```
Execute the following command:

```shell
kubectl create -f {your_workflow}.yml
```
You can track the submitted workflow as described in the port-forwarding and authentication steps above.
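As a self-contained illustration, a minimal workflow that exercises the image pushed earlier could look as follows. This is a sketch: the `generateName` and the container image tag are assumptions based on the examples above, not part of any official setup:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: ra2ce-version-check-   # hypothetical name prefix
  namespace: argo                      # must match the argo namespace
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: containers.deltares.nl/ra2ce/ra2ce:user_test   # image pushed earlier
        command: [python, -c]
        args: ["import ra2ce; print(ra2ce.__version__)"]
```

Submitting this with `kubectl create -f` as above should start a single pod whose log contains the installed ra2ce version.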