We also offer Docker Containers as an option to deploy your trained models from your desktop. This option is currently in beta but is a great way to test the Xailient SDK without worrying about the installation process or the requirements of your machine.
We recommend this option for users who don't meet the requirements for native installation but are able to run the Docker application.
The latest version of our container enables users to batch process images using their own custom trained model.
New to Docker and containerisation?
Downloading a Xailient Image
Run the following command to pull our latest image from Docker Hub:
$ docker pull xailient/model-inference
If the pull succeeds, run
$ docker images
and check that the image "xailient/model-inference" appears in the list.
Starting a Xailient SDK instance
On your local machine, create a directory called "test", then create two subdirectories inside it called "input" and "output".
Place any images you want to run inference on in the subdirectory called "input". The "test" directory will be mounted into your Container when you run it.
We will explain how to do this later.
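The layout described above can be created from a terminal. This is just a sketch; where you place the "test" directory on your machine is up to you.

```shell
# Create the directory layout the container expects:
#   test/
#   ├── input/   <- images to run inference on
#   └── output/  <- predictions written by the container
mkdir -p test/input test/output

# Copy the images you want to run inference on into the input
# directory, e.g.:
# cp ~/Pictures/*.jpg test/input/
```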
Before running the Container, you must first ensure you have a trained model to use. You can either:
- Train a custom model
- Use a pretrained model
To do either, follow the steps at Training a model.
Next, you will need to Build Deployable SDK and choose the X86-64 option.
Running the container
The first step is to obtain an SDK_LINK from the training console.
Go to the MANAGE AI MODELS page and locate the model for which you built the SDK.
Click the Download/Copy SDK button for that model.
A link will be copied to your clipboard. If a download occurs you can ignore or cancel it.
$ docker run -e SDK_LINK='<LINK>' -v <test_path>:/app/volume xailient/model-inference
Replace <LINK> with the SDK link obtained earlier. Here you are passing the link to the container as an environment variable.
Replace <test_path> with the full path to the directory named "test", which we created earlier. Here we are mounting a local directory into the Container as a volume; in other words, giving the Container access to a directory on our local machine.
For example:
$ docker run -e SDK_LINK='https://ReallyLongUrl/AAaDDY2MzE1ODQ0NzQx' -v /Users/Bernie/Desktop/test:/app/volume xailient/model-inference
All predictions will appear in the "output" subdirectory.
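A quick way to check the results once the container exits is to list the output directory. This is only a sketch and assumes the "test" directory layout created earlier:

```shell
# List each prediction the container wrote to the output directory.
mkdir -p test/output        # no-op if the layout already exists
found=0
for f in test/output/*; do
  [ -e "$f" ] || continue   # empty directory: the glob matched nothing
  echo "prediction: $f"
  found=1
done
[ "$found" -eq 1 ] || echo "no predictions yet - has the container run?"
```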
Experiment with different confidence thresholds by passing the environment variable THRESHOLD when running the container instance, e.g.
$ docker run -e THRESHOLD='0.8' -e SDK_LINK='<LINK>' -v <test_path>:/app/volume xailient/model-inference
Output bounding box coordinates
You can also output a text file called log_output.txt, which lists the bounding box coordinates of each prediction for an image. To do this, pass the boolean environment variable LOG.
$ docker run -e LOG='True' -e SDK_LINK='<LINK>' -v <test_path>:/app/volume xailient/model-inference
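To review the coordinates afterwards, you can print the log file from the host. This sketch assumes log_output.txt lands in the mounted "output" subdirectory; its exact location may differ in your setup.

```shell
# Print the bounding-box log if the container produced one.
mkdir -p test/output        # layout from the earlier steps
if [ -f test/output/log_output.txt ]; then
  cat test/output/log_output.txt
else
  echo "log_output.txt not found - run the container with LOG='True' first"
fi
```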