I’m a fan of making things easier. If something can be automated, we should automate it, as that simplifies work and prevents errors - and questions like ‘why isn’t it working?’ or ‘have you set up X/Y/Z?’.
So let’s see how ~10 lines of YAML in Docker Compose can simplify configuring local S3-compatible storage (creating buckets and directories)!
You can skip the step-by-step description and go straight to the final code.
Base Docker Compose
For the S3-compatible storage, my friends selected MinIO, as it works out of the box for the simple case. The Docker Compose file with the configuration looked like this [1] (yes, the user and password are that obvious!):
services:
  minio:
    image: "quay.io/minio/minio:RELEASE.2025-03-12T18-04-18Z"
    ports:
      - "127.0.0.1:9900:9000"
      - "127.0.0.1:9901:9001"
    command: [ "server", "--console-address", ":9001", "/data" ]
    environment:
      MINIO_ROOT_USER: minioAccessKey
      MINIO_ROOT_PASSWORD: minioSecretKey
However, after starting it up, the buckets had to be created manually. That takes time and is easy to forget about - until you want to run stuff and it fails due to the missing bucket.
I was pretty sure that this could be automated (and I did not want to create the buckets from the app code).
And indeed it can - there is mc [2] (which seems to be an abbreviation of its name, MinIO Client).
The question remains - how to run it locally, after the storage starts?
Fortunately, mc has its own Docker image available, and I can use depends_on to make sure that it starts after the storage is up.
Service configuration for creating buckets
The configuration below creates a new service createbuckets that depends on the minio service.
It has an overridden entrypoint [3] that runs a command (using /bin/sh -c) that does the following:

- sleeps for 5 seconds - just to make sure that the minio service has started;
- adds an alias configuration for the local MinIO;
- creates bucket my-bucket with some-directory;
- creates bucket my-other-bucket;
- exits with 0 - the exit code that basically means “ok, exited with success”.

Additionally, if it fails (exits with a non-zero exit code), it will restart, thanks to the on-failure restart policy.
services:
  # <chomp> - here is the above minio service
  createbuckets:
    image: quay.io/minio/mc:RELEASE.2025-03-12T17-29-24Z
    depends_on:
      - minio
    restart: on-failure
    entrypoint: >
      /bin/sh -c "
      sleep 5;
      /usr/bin/mc alias set dockerminio http://minio:9000 minioAccessKey minioSecretKey;
      /usr/bin/mc mb dockerminio/my-bucket/some-directory;
      /usr/bin/mc mb dockerminio/my-other-bucket;
      exit 0;
      "
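As a side note, the fixed sleep 5 is just a heuristic. A more robust variant (a sketch, not part of the original setup) replaces it with a Compose healthcheck, so createbuckets starts only once MinIO actually responds. This assumes curl is available in the MinIO image (it is in recent releases) and uses MinIO's documented liveness endpoint:

```yaml
services:
  minio:
    # ... image, ports, command, environment as above ...
    healthcheck:
      # MinIO's liveness probe endpoint
      test: [ "CMD", "curl", "-f", "http://localhost:9000/minio/health/live" ]
      interval: 5s
      timeout: 2s
      retries: 5

  createbuckets:
    # ... image, restart, entrypoint as above (the sleep can then be dropped) ...
    depends_on:
      minio:
        condition: service_healthy  # wait for a passing healthcheck, not just container start
```

The sleep-based version is simpler and usually good enough locally; the healthcheck version removes the guesswork about how long startup takes.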
And basically, that is all that is required to create the buckets and the folders within them.
The mc tool can also be used with other S3-compatible storage services.
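One caveat worth knowing: mc mb fails when the bucket already exists, so a second docker compose up will report errors on those lines (harmless here, since the script exits 0 anyway). To make the script cleanly idempotent, mc mb supports an --ignore-existing flag - a possible variant of the entrypoint:

```yaml
    entrypoint: >
      /bin/sh -c "
      sleep 5;
      /usr/bin/mc alias set dockerminio http://minio:9000 minioAccessKey minioSecretKey;
      /usr/bin/mc mb --ignore-existing dockerminio/my-bucket/some-directory;
      /usr/bin/mc mb --ignore-existing dockerminio/my-other-bucket;
      exit 0;
      "
```

With this, reruns succeed silently whether or not the buckets already exist.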
Final Docker Compose
The final Docker Compose looks like this - feel free to copy and adjust. ;-)
services:
  minio:
    image: "quay.io/minio/minio:RELEASE.2025-03-12T18-04-18Z"
    ports:
      - "127.0.0.1:9900:9000"
      - "127.0.0.1:9901:9001"
    command: [ "server", "--console-address", ":9001", "/data" ]
    environment:
      MINIO_ROOT_USER: minioAccessKey
      MINIO_ROOT_PASSWORD: minioSecretKey
  createbuckets:
    image: quay.io/minio/mc:RELEASE.2025-03-12T17-29-24Z
    depends_on:
      - minio
    restart: on-failure
    entrypoint: >
      /bin/sh -c "
      sleep 5;
      /usr/bin/mc alias set dockerminio http://minio:9000 minioAccessKey minioSecretKey;
      /usr/bin/mc mb dockerminio/my-bucket/some-directory;
      /usr/bin/mc mb dockerminio/my-other-bucket;
      exit 0;
      "
After starting it with docker compose up, it starts the minio service, then createbuckets, which waits 5 seconds and creates two buckets - my-bucket with directory some-directory, and my-other-bucket.
That is all - a small configuration (a little more than 10 lines of code), but a great improvement in setting up the local environment!
[1] Remember to put the hostname in the port mappings; otherwise the port will be opened in your firewall! For details see the Networking overview → Published ports manual page. ↩︎
[2] A pretty nice tool for working with S3-compatible storage like MinIO; documentation available here: https://min.io/docs/minio/linux/reference/minio-mc.html ↩︎
[3] Overriding the entrypoint allows us to run anything we want in the context of the Docker image. ↩︎