
I am trying to make sure that my app container does not run migrations / start until the db container is started and ready to accept connections.

So I decided to use the healthcheck and depends_on options in the docker-compose file (v2.1).

In the app, I have the following

app:
  ...
  depends_on:
    db:
      condition: service_healthy

The db on the other hand has the following healthcheck

db:
  ...
  healthcheck:
    test: TEST_GOES_HERE
    timeout: 20s
    retries: 10

I have tried a couple of approaches, such as:

  1. Making sure the db directory is created: test: ["CMD", "test -f var/lib/mysql/db"]
  2. Getting the MySQL version: test: ["CMD", "echo 'SELECT version();' | mysql"]
  3. Pinging via mysqladmin (marks the db container as healthy but does not seem to be a valid test): test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]

Does anyone have a solution to this?

  • You created a Docker container for a DB? Please tell me that your data lives outside of this container, for the sake of your application's health. Commented Mar 2, 2017 at 22:54
  • Or at least that this is a test container. Commented Mar 2, 2017 at 22:55
  • This is for development/testing purposes only, actually. Commented Mar 2, 2017 at 22:56
  • I think you should use a command to connect and run a query in MySQL; none of the samples you provided do this. Something like: mysql -u USER -pPASSWORD -h MYSQLSERVERNAME -e 'select * from foo...' database-name Commented Mar 2, 2017 at 23:00
  • @JorgeCampos Okay, thanks. Usually I have a db container but map volumes for the data dir, so that if the container went down the data would persist to its next instantiation. Commented Jan 14, 2020 at 9:20

23 Answers

version: "2.1"
services:
  api:
    build: .
    container_name: api
    ports:
      - "8080:8080"
    depends_on:
      db:
        condition: service_healthy
  db:
    container_name: db
    image: mysql
    ports:
      - "3306"
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
      MYSQL_USER: "user"
      MYSQL_PASSWORD: "password"
      MYSQL_DATABASE: "database"
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10

The api container will not start until the db container is healthy (essentially, until mysqladmin ping succeeds against the server).


16 Comments

mysqladmin ping will return a false positive if the server is running but not yet accepting connections.
@dKen see my answer below stackoverflow.com/a/45058879/279272, I hope it will work for you also.
To check this using a password: test: ["CMD", 'mysqladmin', 'ping', '-h', 'localhost', '-u', 'root', '-p$$MYSQL_ROOT_PASSWORD'] - if you defined MYSQL_ROOT_PASSWORD in the environment section.
Notice that with the separated Compose Spec "condition" has been added to "depends_on" again: github.com/compose-spec/compose-spec/blob/a4a7e7c/… You'll need Compose 1.27.0 or newer for this: github.com/docker/compose/releases/tag/1.27.0
I am using a Compose file with version 3.9, and the condition field works.

This should be enough

version: '3.4'
services:
  api:
    build: .
    container_name: api
    ports:
      - "8080:8080"
    depends_on:
      db:
        condition: service_healthy
  db:
    image: mysql
    ports: ['3306:3306']
    environment:
      MYSQL_USER: myuser
      MYSQL_PASSWORD: mypassword
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u $$MYSQL_USER --password=$$MYSQL_PASSWORD
      start_period: 5s
      interval: 5s
      timeout: 5s
      retries: 55

11 Comments

What's the double $ for?
@InsOp it's special syntax you have to use in healthcheck test commands for escaping env variables starting with $, i.e. $$MYSQL_PASSWORD will turn into $MYSQL_PASSWORD, which itself resolves to mypassword in this concrete example.
So with this I'm accessing the env variable inside the container? And with a single $ I'm accessing the env variable from the host, I suppose? That's nice, thank you!
Umm why 55 retries? Seems arbitrary
This should be marked as best answer because it uses 127.0.0.1 explicitly. If you omit the host or use localhost instead, the health check command could connect to the temporary service that the mysql container brings up for initialization. At that moment your service is not actually ready.

condition was removed from the compose spec in versions 3.0 to 3.8 but is now back!

Using compose spec v3.9+ (docker-compose v1.29), you can use condition as an option in the long-syntax form of depends_on.

Use condition: service_completed_successfully to tell compose that the dependency must have run to successful completion before the dependent service gets started.

services:
  web:
    build: .
    depends_on:
      db:
        condition: service_completed_successfully
      redis:
        condition: service_completed_successfully
  redis:
    image: redis
  db:
    image: postgres

The condition option can be:

  • service_started: equivalent to the short syntax form
  • service_healthy: waits for the service to be healthy. Define health with the healthcheck option
  • service_completed_successfully: specifies that a dependency is expected to run to successful completion before a dependent service is started (added to docker-compose with PR#8122).

It is sadly rather poorly documented. I found references to it on the Docker forums, in Docker docs issues, a Docker Compose issue, and the Docker Compose e2e fixtures. Not sure if it's supported by Docker Compose v2.
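
For a long-running dependency like a database, service_healthy rather than service_completed_successfully is usually what you want, as the comments below point out. A minimal sketch, assuming a Postgres image (where pg_isready is available) and a placeholder password:

services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example  # placeholder for the sketch
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 10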

9 Comments

I'm using version 3.9, and the condition appears to work, despite what the documentation says.
@SamJones The problem addressed here is that depends_on does not wait for service to be ready before starting the dependent service, because V3 does not support the condition form of depends_on.
condition is added back
Doesn't work for me with service_completed_successfully. I mean, the database initialization works, but the main app isn't starting. Any suggestions?
I don't think this will ever work. Or ever worked. "service_completed_successfully" means that the app will wait until the specified service exits with 0 code. From the documentation: "service_completed_successfully: specifies that a dependency is expected to run to successful completion before starting a dependent service."

For a simple healthcheck using docker-compose v2.1, I used:

/usr/bin/mysql --user=root --password=rootpasswd --execute \"SHOW DATABASES;\" 

Basically it runs the simple MySQL command SHOW DATABASES;, using as an example the user root with the password rootpasswd. (Don't expose credentials in production; use environment variables to pass them.)

If the command succeeds, the db is up and ready, so the healthcheck passes. You can use interval so the test repeats at a regular interval.

Removing the other fields for brevity, here is what it would look like in your docker-compose.yaml.

version: '2.1'
services:
  db:
    ... # Other db configuration (image, port, volumes, ...)
    healthcheck:
      test: "/usr/bin/mysql --user=root --password=rootpasswd --execute \"SHOW DATABASES;\""
      interval: 2s
      timeout: 20s
      retries: 10
  app:
    ... # Other app configuration
    depends_on:
      db:
        condition: service_healthy
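
Following the note above about not exposing credentials, here is a variant of the same healthcheck that reads the password from the container's environment. This is only a sketch and assumes MYSQL_ROOT_PASSWORD is defined in the db service's environment section (the $$ escaping is explained in the comments on an earlier answer):

healthcheck:
  # $$ is compose escaping: the container shell sees $MYSQL_ROOT_PASSWORD
  test: "mysql --user=root --password=$$MYSQL_ROOT_PASSWORD --execute \"SHOW DATABASES;\""
  interval: 2s
  timeout: 20s
  retries: 10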

8 Comments

Warning: with "version 3" of the compose file, "condition" support is no longer available. See docs.docker.com/compose/compose-file/#depends_on
You should use command feature together with wait-for-it.sh script. Me doing it this way: command: ["/home/app/jswebservice/wait-for-it.sh", "maria:3306", "--", "node", "webservice.js"]
@BartoszKI don't understand it. Could you please add a full answer with details? I'm facing the exact same problem, but I can't make it work.
--execute \"SHOW DATABASES;\" is what made it wait for me until the database was available for the application to access
"condition" seems to work again in v3 since docker-compose v1.27.0. This health check worked for me with mysql 8.0 as --execute="SHOW DATABASES;"

Adding an updated solution for the healthcheck approach. Simple snippet:

healthcheck:
  test: out=$$(mysqladmin ping -h localhost -P 3306 -u foo --password=bar 2>&1); echo $$out | grep 'mysqld is alive' || { echo $$out; exit 1; }

Explanation: since mysqladmin ping returns false positives (especially for a wrong password), I'm saving the output to a temporary variable, then using grep to find the expected output (mysqld is alive). If it's found, the check returns exit code 0. If not, I print the whole message and return exit code 1.

Extended snippet:

version: "3.8"
services:
  db:
    image: linuxserver/mariadb
    environment:
      - FILE__MYSQL_ROOT_PASSWORD=/run/secrets/mysql_root_password
      - FILE__MYSQL_PASSWORD=/run/secrets/mysql_password
    secrets:
      - mysql_root_password
      - mysql_password
    healthcheck:
      test: out=$$(mysqladmin ping -h localhost -P 3306 -u root --password=$$(cat $${FILE__MYSQL_ROOT_PASSWORD}) 2>&1); echo $$out | grep 'mysqld is alive' || { echo $$out; exit 1; }

secrets:
  mysql_root_password:
    file: ${SECRETSDIR}/mysql_root_password
  mysql_password:
    file: ${SECRETSDIR}/mysql_password

Explanation: I'm using docker secrets instead of env variables (but this can be achieved with regular env vars as well). The $$ is a literal $ sign, which is stripped when passed to the container.

Output from docker inspect --format "{{json .State.Health }}" db | jq on various occasions:

Everything alright:

{
  "Status": "healthy",
  "FailingStreak": 0,
  "Log": [
    {
      "Start": "2020-07-20T01:03:02.326287492+03:00",
      "End": "2020-07-20T01:03:02.915911035+03:00",
      "ExitCode": 0,
      "Output": "mysqld is alive\n"
    }
  ]
}

DB is not up (yet):

{
  "Status": "starting",
  "FailingStreak": 1,
  "Log": [
    {
      "Start": "2020-07-20T01:02:58.816483336+03:00",
      "End": "2020-07-20T01:02:59.401765146+03:00",
      "ExitCode": 1,
      "Output": "\u0007mysqladmin: connect to server at 'localhost' failed error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2 \"No such file or directory\")' Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!\n"
    }
  ]
}

Wrong password:

{
  "Status": "unhealthy",
  "FailingStreak": 13,
  "Log": [
    {
      "Start": "2020-07-20T00:56:34.303714097+03:00",
      "End": "2020-07-20T00:56:34.845972979+03:00",
      "ExitCode": 1,
      "Output": "\u0007mysqladmin: connect to server at 'localhost' failed error: 'Access denied for user 'root'@'localhost' (using password: YES)'\n"
    }
  ]
}

2 Comments

mysqladmin ping -h localhost -u root --password=root 2>&1 | grep 'mysqld is alive' >/dev/null; echo $? - the exit command will be superfluous.
mysqladmin ping -h 127.0.0.1 -u $$MYSQL_USER --password=$$MYSQL_PASSWORD | grep -q 'mysqld is alive' - grep has a -q option which will exit with 1 if there is no match

If you can change the container to wait for mysql to be ready, do it.

If you don't have control of the container that you want to connect to the database, you can try waiting for the specific port.

For that purpose, I'm using a small script to wait for a specific port exposed by another container.

In this example, myserver will wait for port 3306 of mydb container to be reachable.

# Your database
mydb:
  image: mysql
  ports:
    - "3306:3306"
  volumes:
    - yourDataDir:/var/lib/mysql

# Your server
myserver:
  image: myserver
  ports:
    - "....:...."
  entrypoint: ./wait-for-it.sh mydb:3306 -- ./yourEntryPoint.sh

You can find the wait-for-it script and its documentation at github.com/vishnubob/wait-for-it.

4 Comments

I tried using wait-for-it.sh earlier, but it overrides the image's default entrypoint, right? What does the entrypoint.sh look like?
The entrypoint depends on your image. You can check it with docker inspect <image id>. This should wait for the service to be available and call your entry point.
Done! This was helpful but I opted to use the default v2.1 health check instead.
Warning: MySQL 5.5 (possibly newer versions as well) can respond while still initializing.

After going through the other solutions, mysqladmin ping does not work for me. This is because mysqladmin will return a success exit code (i.e. 0) even if the MySQL server has started but is not yet accepting a connection on port 3306. For the initial start, the MySQL server starts on port 0 to set up the root user and the initial databases. This is why the test gives a false positive.

Here is my healthcheck test:

test: ["CMD-SHELL", "exit | mysql -h localhost -P 3306 -u root -p$$MYSQL_ROOT_PASSWORD" ] 

The exit | closes MySQL input prompt during a successful connection.

My Complete Docker Compose file:

version: '3.*'
services:
  mysql:
    image: mysql:8
    hostname: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_DATABASE=mydb
      - MYSQL_ALLOW_EMPTY_PASSWORD=1
      - MYSQL_ROOT_PASSWORD=mypass
    healthcheck:
      test: ["CMD-SHELL", "exit | mysql -h localhost -P 3306 -u root -p$$MYSQL_ROOT_PASSWORD"]
      interval: 5s
      timeout: 20s
      retries: 30
  web:
    build: .
    ports:
      - '8000:8000'
    depends_on:
      mysql:
        condition: service_healthy

2 Comments

What happens if you don't include the exit | ?
The exit | makes sure that mysql client closes.

I modified the docker-compose.yml as per the following example and it worked.

mysql:
  image: mysql:5.6
  ports:
    - "3306:3306"
  volumes:
    # Preload files for data
    - ../schemaAndSeedData:/docker-entrypoint-initdb.d
  environment:
    MYSQL_ROOT_PASSWORD: rootPass
    MYSQL_DATABASE: DefaultDB
    MYSQL_USER: usr
    MYSQL_PASSWORD: usr
  healthcheck:
    test: mysql --user=root --password=rootPass -e 'Design your own check script' LastSchema

In my case, ../schemaAndSeedData contains multiple schema and data-seeding SQL files. Your own check script can be something like select * from LastSchema.LastDBInsert.
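
Filled in with that dummy query, the healthcheck could look something like this (LastSchema and LastDBInsert are just the sample names from above, not MySQL built-ins):

healthcheck:
  # succeeds only once the seeded table exists and is queryable
  test: mysql --user=root --password=rootPass -e 'SELECT * FROM LastSchema.LastDBInsert;' LastSchema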

The dependent web container's configuration was:

depends_on:
  mysql:
    condition: service_healthy

5 Comments

This may work for you but I am unsure whether or not this is supported in all MySQL engines.
I'm talking about database engines like InnoDB, MyISAM etc. Is LastSchema.LastDBInsert a MySQL default or database engine specific?
No, it is not a default in MySQL either. It was just a sample, a dummy query.
Warning: with "version 3" of the compose file, "condition" support is no longer available. See docs.docker.com/compose/compose-file/#depends_on
@BartoszK docker-compose version 3.9 supports "condition".

RESTART ON-FAILURE

Since v3, condition: service_healthy is no longer available. The idea is that the developer should implement a crash-recovery mechanism within the app itself. However, for simple use cases, an easy way to resolve this issue is to use the restart option.

If the mysql service's state causes your application to exit with code 1, you can use one of the available restart policy options, e.g. on-failure:

version: "3"
services:
  app:
    ...
    depends_on:
      - db
    restart: on-failure

Comments


I had the same problem, so I created an external bash script for this purpose (inspired by Maxim's answer). Replace mysql-container-name with the name of your MySQL container; the password/user are also needed:

bin/wait-for-mysql.sh:

#!/bin/sh
until docker container exec -it mysql-container-name mysqladmin ping -P 3306 -proot | grep "mysqld is alive" ; do
  >&2 echo "MySQL is unavailable - waiting for it... 😴"
  sleep 1
done

In my Makefile, I call this script just after my docker-compose up call:

wait-for-mysql: ## Wait for MySQL to be ready
	bin/wait-for-mysql.sh

run: up wait-for-mysql reload serve ## Start everything...

Then I can call other commands without having the error:

An exception occurred in driver: SQLSTATE[HY000] [2006] MySQL server has gone away

Output example:

docker-compose -f docker-compose.yaml up -d
Creating network "strangebuzzcom_default" with the default driver
Creating sb-elasticsearch ... done
Creating sb-redis ... done
Creating sb-db ... done
Creating sb-app ... done
Creating sb-kibana ... done
Creating sb-elasticsearch-head ... done
Creating sb-adminer ... done

bin/wait-for-mysql.sh
MySQL is unavailable - waiting for it... 😴
MySQL is unavailable - waiting for it... 😴
MySQL is unavailable - waiting for it... 😴
MySQL is unavailable - waiting for it... 😴
mysqld is alive

php bin/console doctrine:schema:drop --force
Dropping database schema...
[OK] Database schema dropped successfully!

Comments


You can try this docker-compose.yml:

version: "3"
services:
  mysql:
    container_name: mysql
    image: mysql:8.0.26
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: test_db
      MYSQL_USER: test_user
      MYSQL_PASSWORD: 1234
    ports:
      - "3306:3306"
    volumes:
      - mysql-data:/var/lib/mysql
    healthcheck:
      test: "mysql $$MYSQL_DATABASE -u$$MYSQL_USER -p$$MYSQL_PASSWORD -e 'SELECT 1;'"
      interval: 20s
      timeout: 10s
      retries: 5

volumes:
  mysql-data:

Comments


condition has been added back, so now you can use it again. There is no need for wait-for scripts. If you are using scratch to build images, you cannot run those scripts anyway.

For API service

api:
  build:
    context: .
    dockerfile: Dockerfile
  restart: always
  depends_on:
    content-db:
      condition: service_healthy
  ...

For db block

content-db:
  image: mysql:5.6
  restart: on-failure
  command: --default-authentication-plugin=mysql_native_password
  volumes:
    - "./internal/db/content/sql:/docker-entrypoint-initdb.d"
  environment:
    MYSQL_DATABASE: content
    MYSQL_TCP_PORT: 5306
    MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
  healthcheck:
    test: "mysql -uroot -p$MYSQL_ROOT_PASSWORD content -e 'select 1'"
    interval: 1s
    retries: 120

1 Comment

This solution worked for me in docker compose v3. I used the syntax test: "mariadb --host=database --user=${MARIADB_USER} --password=${MARIADB_PASSWORD} -e 'SELECT 1;'"

Although using healthcheck together with service_healthy is a good solution, I wanted a different solution that doesn't rely on the health check itself.

My solution utilizes the atkrad/wait4x image. Wait4X allows you to wait for a port or a service to enter the requested state, with a customizable timeout and interval time.

Example:

services:
  app:
    build: .
    depends_on:
      wait-for-db:
        condition: service_completed_successfully
  db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=test
  wait-for-db:
    image: atkrad/wait4x
    depends_on:
      - db
    command: tcp db:3306 -t 30s -i 250ms

Explanation

The example docker-compose file includes these services:

  • app - the app that connects to the database once the database instance is ready
    • depends_on waits for the wait-for-db service to complete successfully (exit with code 0)
  • db - the MySQL service
  • wait-for-db - this service waits for the database to open its port
    • command: tcp db:3306 -t 30s -i 250ms - wait for TCP port 3306, with a timeout of 30 seconds, checking the port every 250 milliseconds

1 Comment

you can also use command: ["sh", "-c", "until nc -vz db 3306 ; do echo 'waiting for db db:3306' ; done"] with the image busybox:latest.

Wanted to add the official healthcheck script from MariaDB's official Docker image.

docker-compose snippet from Brent X:

services:
  db:
    image: mariadb:10.11
    restart: always
    healthcheck:
      interval: 30s
      retries: 3
      test: ["CMD", "healthcheck.sh", "--su-mysql", "--connect", "--innodb_initialized"]
      timeout: 30s
    volumes:
      - mariadb:/var/lib/mysql

1 Comment

This ended up fixing my ASP web app's Docker deployment. For some reason it would deploy and talk to the DB properly when run via Docker Desktop, but when I deployed it to production on a headless Alpine Linux server, it wouldn't detect the DB unless I made the ASP app wait 45 seconds, which was definitely an icky solution. After a little over 4 days of searching for a solid solution, this ended up being the fix.

This worked for me:

version: '3'
services:
  john:
    build:
      context: .
      dockerfile: containers/cowboys/john/Dockerfile
      args:
        - SERVICE_NAME_JOHN
        - CONTAINER_PORT_JOHN
    ports:
      - "8081:8081" # Forward the exposed port on the container to a port on the host machine
    restart: unless-stopped
    networks:
      - fullstack
    depends_on:
      db:
        condition: service_healthy
    links:
      - db
  db:
    build:
      context: containers/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: docker_user
      MYSQL_PASSWORD: docker_pass
      MYSQL_DATABASE: cowboys
    container_name: golang_db
    restart: on-failure
    networks:
      - fullstack
    ports:
      - "3306:3306"
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u $$MYSQL_USER --password=$$MYSQL_PASSWORD

networks:
  fullstack:
    driver: bridge

// containers/mysql/Dockerfile

FROM mysql
COPY cowboys.sql /docker-entrypoint-initdb.d/cowboys.sql

Comments


I'd like to provide one more solution, which was mentioned in one of the comments but not really explained: there's a tool called wait-for-it, which is mentioned on https://docs.docker.com/compose/startup-order/.
How does it work? You just specify the host and port that the script needs to check periodically for readiness. Once they're ready, it executes the program you provide to it. You can also specify how long it should keep checking whether host:port is ready. For me this is the cleanest solution that actually works.
Here's the snippet from my docker-compose.yml file.

version: '3'
services:
  database:
    build: DatabaseScripts
    ports:
      - "3306:3306"
    container_name: "database-container"
    restart: always
  backend:
    build: backend
    ports:
      - "3000:3000"
    container_name: back-container
    restart: always
    links:
      - database
    command: ["./wait-for-it.sh", "-t", "40", "database:3306", "--", "node", "app.js"]
    # The above line does the following:
    # check periodically for 40 seconds if (host:port) = database:3306 is ready
    # if it is, run 'node app.js'
    # app.js is the file that connects to the db
  frontend:
    build: quiz-app
    ports:
      - "4200:4200"
    container_name: front-container
    restart: always

The default waiting time is 20 seconds. More details can be found at https://github.com/vishnubob/wait-for-it.

I tried it on 2.x and 3.x versions - it works fine everywhere.
Of course you need to provide the wait-for-it.sh to your container - otherwise it won't work.
To do so, use the following code:

COPY wait-for-it.sh <DESTINATION PATH HERE> 

I added it in /backend/Dockerfile, so it looks something like this :

FROM node
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
COPY wait-for-it.sh /usr/src/app
RUN npm install
COPY . /usr/src/app
EXPOSE 3000
CMD ["npm", "start"]

To check that everything is working correctly, run docker-compose logs. After some time, somewhere in the logs you should see output similar to this:

<container_name> | wait-for-it.sh: waiting 40 seconds for database:3306
<container_name> | wait-for-it.sh: database:3306 is available after 12 seconds

NOTE : This solution was provided by BartoszK in previous comments.

1 Comment

I tried to use this wait-for-it script to check the host:port of dependent services, but it still failed. It seems the port was ready for connections while the db instance was still initializing.

None of the answers worked for me:

  • Docker version 20.10.6
  • docker-compose version 1.29.2
  • docker-compose yml version: version: '3.7'
  • mysql 5.7
    • run scripts at container start: docker-entrypoint-initdb.d

Solution

Check for a phrase in the last lines of the MySQL log that indicates something like "I'm ready".

This is my compose file:

version: '3.7'
services:
  mysql:
    image: mysql:5.7
    command: mysqld --general-log=1 --general-log-file=/var/log/mysql/general-log.log
    container_name: mysql
    ports:
      - "3306:3306"
    volumes:
      - ./install_dump:/docker-entrypoint-initdb.d
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_USER: jane
      MYSQL_PASSWORD: changeme
      MYSQL_DATABASE: blindspot
    healthcheck:
      test: "cat /var/log/mysql/general-log.log | grep \"root@localhost on using Socket\""
      interval: 1s
      retries: 120
  some_web:
    image: some_web
    container_name: some_web
    ports:
      - "80:80"
    depends_on:
      mysql:
        condition: service_healthy

Explanation

After several checks I was able to get the entire mysql log of the container.

docker logs mysql could be enough, but I was not able to access the docker log inside of the healthcheck, so I had to dump the MySQL query log to a file with:

command: mysqld --general-log=1 --general-log-file=/var/log/mysql/general-log.log 

After that, I ran my mysql container several times to determine whether the log is always the same. I found that the last lines were:

2021-08-30T01:07:06.040848Z    10 Connect   root@localhost on using Socket
2021-08-30T01:07:06.041239Z    10 Query     SELECT @@datadir, @@pid_file
2021-08-30T01:07:06.041671Z    10 Query     shutdown
2021-08-30T01:07:06.041705Z    10 Query
mysqld, Version: 5.7.31-log (MySQL Community Server (GPL)). started with:
Tcp port: 0  Unix socket: /var/run/mysqld/mysqld.sock
Time                 Id Command    Argument

Finally, after some attempts, this grep returns just one match, which corresponds to the end of the MySQL log after the dumps in /docker-entrypoint-initdb.d have been executed:

cat /var/log/mysql/general-log.log | grep "root@localhost on using Socket"

Phrases like started with or Tcp port: returned several matches (at the start, middle, and end of the log), so they are not options for detecting the end of a successful MySQL startup.

healthcheck

Happily, when grep finds at least one match, it returns a success exit code (0), so using it in the healthcheck was easy:

healthcheck:
  test: "cat /var/log/mysql/general-log.log | grep \"root@localhost on using Socket\""
  interval: 1s
  retries: 120

Improvements

  • If someone knows how to get docker logs mysql inside of the healthcheck, it would be better than enabling the query log
  • Handle the case when the SQL scripts return an error.

Comments


Most of the answers here are only half correct.

I used the mysqladmin ping --silent command and it was mostly good, but even when the container became healthy it wasn't yet able to handle external requests. So I decided to switch to a more complicated command, and to use the container's external hostname, to be sure that the healthcheck exercises the same path a real request will:

services:
  my-mariadb:
    container_name: my-mariadb
    image: ${DB_IMAGE}
    environment:
      MARIADB_ROOT_PASSWORD: root_password
      MARIADB_USER: user
      MARIADB_PASSWORD: user_password
      MARIADB_DATABASE: db_name
    volumes:
      - ./db/dump.sql:/docker-entrypoint-initdb.d/dump.sql
    ports:
      - 3306:3306
    healthcheck:
      test: mysql -u"$${MARIADB_USER}" -p"$${MARIADB_PASSWORD}" -hmariadb "$${MARIADB_DATABASE}" -e 'SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES LIMIT 1;'
      interval: 20s
      timeout: 5s
      retries: 5
      start_period: 30s
    networks:
      my_network:
        aliases:
          - mariadb

Here the MySQL query runs via the external hostname (mariadb).

If you want to use an IP address instead of a hostname, the test may look like this:

mysql -u"$${MARIADB_USER}" -p"$${MARIADB_PASSWORD}" -h"$$(ip route get 1.2.3.4 | awk '{print $7}' | awk /./)" "$${MARIADB_DATABASE}" -e 'SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES LIMIT 1;'

When I used the mysqladmin ping command, the time until the status changed to healthy was about 21 seconds; after I switched to the new command, it rose to 41 seconds. That means the database needed an extra 20 seconds to be fully configured and able to handle external requests.

2 Comments

mysqladmin ping does not work for me either. I had to use a similar approach to yours. This is because mysqladmin will return a success error code even if MySQL server has started but not accepting connection on port 3306. Here is my healthcheck test: test: ["CMD-SHELL", "exit | mysql -h localhost -P 3306 -u root -p$$MYSQL_ROOT_PASSWORD" ]
It does not work for mysql 8.0.33, because the command exits with code 1 and prints a security warning.

Given the problem with the temporary server described in https://github.com/docker-library/docs/tree/master/mysql#no-connections-until-mysql-init-completes, the following seems to work for me:

services:
  mysql:
    image: docker.io/mysql:8.0.21
    ports: ["3308:3306"]
    healthcheck:
      test: ["CMD-SHELL", "mysqladmin status -u$$MYSQL_USER -p$$MYSQL_PASSWORD --protocol=TCP"]
      interval: 4s
      timeout: 10s
      retries: 10

This uses the CMD-SHELL variant to get the credentials from the environment variables, and sets the protocol to TCP, which will not hit the temporary server, because that one only listens on a local socket (it is started with --skip-networking).
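
To watch the check converge, you can inspect the container's health state with the same command an earlier answer used (assuming the container is named mysql; otherwise substitute the name docker-compose generated):

docker inspect --format "{{json .State.Health }}" mysql | jq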

Comments


For me it was both: the MySQL image version and the environment variable SPRING_DATASOURCE_URL. If I remove SPRING_DATASOURCE_URL it doesn't work, and neither does using MySQL 8.0 or above.

version: "3.9"
services:
  api:
    image: api
    build:
      context: ./api
    depends_on:
      - db
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://db:3306/api?autoReconnect=true&useSSL=false
    networks:
      - private
    ports:
      - 8080:8080
  db:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: "api"
      MYSQL_ROOT_PASSWORD: "root"
    networks:
      - private
    ports:
      - 3306:3306

networks:
  private:

Comments


I was facing a scenario that worked locally:

sip_db:
  image: "mariadb:10.10"
  restart: always
  env_file: $PWD/config/.db_env
  expose:
    - "3306"
  volumes:
    - $PWD/docker/scripts/wait-for-it.sh:/tmp/wait-for-it.sh:ro
  healthcheck:
    test: ["CMD-SHELL", "/tmp/wait-for-it.sh -h localhost -p 3306 || exit 1"]

sip_tests:
  image: "sip_api:${IMAGE_TAG-latest}"
  restart: always
  env_file: $PWD/config/.backend_env
  entrypoint: ["./entrypoint.sh"]
  command: unit-tests
  depends_on:
    sip_db:
      condition: service_healthy

However, on the GitHub runner it wasn't working, which is why I had to modify the compose file in the following way:

sip_db:
  image: "mariadb:10.10"
  restart: always
  env_file: $PWD/config/.db_env
  expose:
    - "3306"

wait_for_db:
  image: "busybox:latest"
  depends_on:
    - sip_db
  command: ["sh", "-c", "until nc -vz sip_db 3306 ; do echo 'waiting for sip-db sip_db:3306' ; done"]

sip_tests:
  image: "sip_api:${IMAGE_TAG-latest}"
  restart: always
  env_file: $PWD/config/.backend_env
  entrypoint: ["./entrypoint.sh"]
  command: unit-tests
  depends_on:
    wait_for_db:
      condition: service_completed_successfully

I hope you find it helpful.

Comments


Please include this healthcheck:

healthcheck:
  test: ["CMD", "mysql", "-h", "db", "-u", "root", "-p${MYSQL_ROOT_PASSWORD}", "-e", "SELECT 1;"]

Note I have used the db service here.

I was struggling with the error: 'RuntimeError: 'cryptography' package is required for sha256_password or caching_sha2_password auth methods'

Comments


I was experiencing many false positives (MySQL wasn’t truly accepting connections even though mysqladmin ping reported success) with:

healthcheck:
  test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
  interval: 15s
  timeout: 20s
  retries: 10

Switching to this healthcheck solved the issue:

healthcheck:
  test: ["CMD-SHELL", "mysql -h 127.0.0.1 -uroot -proot -e 'SELECT 1' || exit 1"]
  interval: 5s
  timeout: 20s
  retries: 10

Obviously this is only suitable for local testing where exposing credentials is acceptable. In my setup, the root password is set directly in the service:

environment:
  MYSQL_ROOT_PASSWORD: root

Comments
