Package a Rust workspace project into containers with Nix

Posted on Sep 3, 2024

One of the projects I have been working on for months is based on microservices, meaning I have a bunch of programs that need to be containerized in order to run in a Kubernetes cluster. I tried building the images with a generic Dockerfile, but it resulted in unbelievably large images: one microservice compiled to a 35MB binary, yet the Docker image I built from it was almost 10x the size (340MB). Not only was that inefficient, I would also have had to waste time optimizing the images by hand. Then I realized: I've been using Nix for a few months, it can make containers, right? Hell yeah it can! And it does it neatly packed. The 35MB binary packaged in a container comes out to 37MB, and I had to do zero optimization. It also integrated neatly with my already existing flake and modules.

I'll showcase the process in one of my existing projects. It's a basic Rust workspace with two services and a common crate holding some shared code. But for the sake of eliminating confusion, I'll also write down how to set up a cargo-hakari workspace with crane.

Create the project

Crane has a template that generates all the code needed to start off.

nix flake init -t github:ipetkov/crane#quick-start-workspace

Upon running this command in a directory, you are greeted with a cargo-hakari workspace setup:

├── Cargo.lock
├── Cargo.toml
├── deny.toml
├── flake.nix
├── my-cli
│   ├── Cargo.toml
│   └── src
│       └── main.rs
├── my-common
│   ├── Cargo.toml
│   └── src
│       └── lib.rs
├── my-server
│   ├── Cargo.toml
│   └── src
│       └── main.rs
└── my-workspace-hack
    ├── Cargo.toml
    ├── build.rs
    └── src
        └── lib.rs

Adjusting the flake.nix to our needs

Since your use case will most likely differ from mine, it's important to keep in mind what to change during development. Whenever you add a new crate to the project, you sadly need to sync it up manually with your flake.nix file, so that during compilation the Nix build knows where to find the crate's sources. If you forget about this, you will get errors stating that cargo cannot find the crates…

# Builds a filtered source tree for a single crate: the workspace manifests
# plus every member crate's sources (cargo needs the whole workspace to resolve members).
# If you use cargo-hakari, list the workspace-hack crate here as well.
fileSetForCrate = crate: lib.fileset.toSource {
  root = ./.;
  fileset = lib.fileset.unions [
    ./Cargo.toml
    ./Cargo.lock
    ./producer
    ./consumer
    ./common
    crate
  ];
};
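
To make that concrete: if I later added a hypothetical consumer-two crate to the workspace (the name is just an example, not part of this project), its directory would need to show up in the unions list as well:

fileset = lib.fileset.unions [
  ./Cargo.toml
  ./Cargo.lock
  ./producer
  ./consumer
  ./common
  ./consumer-two # the newly added crate
  crate
];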

Don’t forget essential packages!

If your application pulls in crates that link against system libraries (openssl-sys, for example), you will 100% require pkg-config. If you need to connect to the internet or make any HTTPS connections, you will need openssl. libiconv handles character-set conversion and is mostly needed on macOS; I'm not sure if your project will need it, but it can't hurt to leave it in.

# Common arguments can be set here to avoid repeating them later
commonArgs = {
	inherit src;
	strictDeps = true;

	# libraries (and tools, for good measure) the crates need at build time
	buildInputs = with pkgs; [
		openssl
		pkg-config
		libiconv
	];
	# build-machine tools; pkg-config is the important one, it lets
	# crates like openssl-sys locate the OpenSSL libraries
	nativeBuildInputs = with pkgs; [
		openssl
		pkg-config
		libiconv
	];
};

Define packages

Here is another example from the project. At the bottom of the variable definitions (the let-in block) you will need to copy-paste the code below and just replace the details. For example: if you have a consumer-two, you will need to define that package as a variable and then use it in the crane configs.

# Each service becomes its own derivation; `-p` tells cargo which
# workspace member to build, and fileSetForCrate limits the sources
producer = craneLib.buildPackage (individualCrateArgs // {
	pname = "producer";
	cargoExtraArgs = "-p producer";
	src = fileSetForCrate ./producer;
});
consumer = craneLib.buildPackage (individualCrateArgs // {
	pname = "consumer";
	cargoExtraArgs = "-p consumer";
	src = fileSetForCrate ./consumer;
});
packages = {
	inherit consumer producer;
};
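
Sticking with the hypothetical consumer-two example from above (again, just a placeholder name), the new crate would get its own buildPackage call and an entry in packages, following exactly the same pattern:

consumer-two = craneLib.buildPackage (individualCrateArgs // {
	pname = "consumer-two";
	cargoExtraArgs = "-p consumer-two";
	src = fileSetForCrate ./consumer-two;
});
packages = {
	inherit consumer producer consumer-two;
};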

Compile a service into a container

After you have defined your packages, you can just go ahead and use buildLayeredImage from dockerTools to wrap a package in a container. I also included some additional packages here for safety; cacert provides the CA certificates needed for TLS connections.

consumer-container = pkgs.dockerTools.buildLayeredImage {
	name = "consumer";
	tag = "latest";
	contents = with pkgs; [
		cacert
		openssl
		pkg-config
		libiconv
	];

	config = {
		WorkingDir = "/app";
		Volumes = { "/app" = { }; };
		Entrypoint = [ "${consumer}/bin/consumer" ];
	};
};
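
Note that for the nix build command below to find consumer-container, the container derivation also has to be exposed as a flake output. A minimal sketch, assuming it simply joins the packages set from earlier:

packages = {
	inherit consumer producer consumer-container;
};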

After this, running the nix build command will create a symlink called result.

nix build .#consumer-container

Automate with GitHub/Forgejo actions

After building the image with Nix, you get a result symlink pointing to a tar.gz Linux container image. We can use docker to load that tar.gz, tag the image, and then push it to a registry.

nix build .#consumer-container
docker image load --input result
docker image tag consumer:latest git.4o1x5.dev/4o1x5/consumer:latest
docker image push git.4o1x5.dev/4o1x5/consumer:latest

We can run these same commands in a workflow, chained with && so each one only runs if the previous one succeeded.

name: CD

on:
  push:
    branches: ["master"]

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout repo
        uses: https://github.com/actions/checkout@v4
        with:
          repository: '4o1x5/producer-consumer'
          ref: 'master'
          token: '${{ secrets.GIT_TOKEN }}'
      -
        name: Set up QEMU for docker
        uses: https://github.com/docker/setup-qemu-action@v3
      -
        name: Set up Docker Buildx
        uses: https://github.com/docker/setup-buildx-action@v3

      -
        name: Set up Nix build caching (magic-nix-cache)
        uses: https://github.com/DeterminateSystems/magic-nix-cache-action@main
      -
        name: Login to git.4o1x5.dev container registry
        uses: docker/login-action@v3
        with:
          registry: git.4o1x5.dev
          username: ${{ secrets.GIT_USERNAME }}
          password: ${{ secrets.GIT_TOKEN }}

      -
        name: Setup nix for building
        uses: https://github.com/cachix/install-nix-action@v27
        with:
          # add kvm support, else nix won't be able to build containers
          extra_nix_config: |
            system-features = nixos-test benchmark big-parallel kvm
      -
        name: Build, import, tag and push consumer container
        run: |
          nix build .#consumer-container && \
          docker image load --input result && \
          docker image tag consumer:latest git.4o1x5.dev/4o1x5/consumer:latest && \
          docker image push git.4o1x5.dev/4o1x5/consumer:latest
Written by human, not by Ai