Upgrading from 5.0.4 to 5.2.0 on Docker

Hi:

I’m testing a possible upgrade strategy for moving from 5.0.4 to 5.2.0 using Docker.

I currently have this setup: a Dockerfile of my own creation based on 5.0.4:

FROM mautic/mautic:5.0.4-fpm

WORKDIR /var/www/html/

COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer

RUN apt update && \
        apt install -y git npm && \
        composer require symfony/google-mailer symfony/doctrine-messenger && \
        rm -Rf /var/www/html/var/cache && \
        rm /usr/local/bin/composer && \
        apt remove -y git npm && \
        chmod -R 777 /var/www/html/var

Plus a docker-compose.yml looking like this:

x-mautic-volumes:
  &mautic-volumes
  - ./mautic/config:/var/www/html/config:z
  - ./mautic/logs:/var/www/html/var/logs:z
  - ./mautic/media/files:/var/www/html/docroot/media/files:z
  - ./mautic/media/images:/var/www/html/docroot/media/images:z
  - ./mautic/themes/leeway_newsletter:/var/www/html/docroot/themes/leeway_newsletter:z
  - ./mautic/themes/leeway_responsive:/var/www/html/docroot/themes/leeway_responsive:z
  - ./session.ini:/usr/local/etc/php/conf.d/session.ini:z
  - ./cron:/opt/mautic/cron:z
  - mautic-docroot:/var/www/html/docroot:z
  - mautic-vendor:/var/www/html/vendor:z

services:
  nginx:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
      - mautic-docroot:/var/www/html/docroot:z
      - ./mautic/media/files:/var/www/html/docroot/media/files:z
      - ./mautic/media/images:/var/www/html/docroot/media/images:z
      - ./.well-known:/var/www/html/docroot/.well-known
    depends_on:
      - mautic_web
    ports:
      - "127.0.0.1:8002:80"
    networks:
      - default
    restart: always

  mautic_web:
    image: my_mautic
    build: .
    volumes: *mautic-volumes

    env_file:
      - .mautic_env
    healthcheck:
      test: cgi-fcgi -bind -connect 127.0.0.1:9000
      start_period: 5s
      interval: 5s
      timeout: 5s
      retries: 100
    networks:
      - default
    restart: always

  mautic_cron:
    image: my_mautic
    build: .
    volumes: *mautic-volumes
    environment:
      - DOCKER_MAUTIC_ROLE=mautic_cron
    env_file:
      - .mautic_env
    depends_on:
      mautic_web:
        condition: service_healthy
    networks:
      - default
    restart: always

volumes:
  mautic-docroot:
    name: mautic-5.0.4
  mautic-vendor:
    name: mautic-5.0.4-vendor

networks:
  default:
    name: ${COMPOSE_PROJECT_NAME}-docker

These are the changes I made:

New Dockerfile:

FROM mautic/mautic:5.2.0-fpm

WORKDIR /var/www/html/

COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer

RUN apt update && \
        apt install -y git npm && \
        composer require symfony/google-mailer symfony/doctrine-messenger && \
        rm -Rf /var/www/html/var/cache && \
        rm /usr/local/bin/composer && \
        apt remove -y git npm && \
        chmod -R 777 /var/www/html/var

New docker-compose.yml:

x-mautic-volumes:
  &mautic-volumes
  - ./mautic/config:/var/www/html/docroot/config:z
  - ./mautic/logs:/var/www/html/var/logs:z
  - ./mautic/media/files:/var/www/html/docroot/media/files:z
  - ./mautic/media/images:/var/www/html/docroot/media/images:z
  - ./mautic/themes/leeway_newsletter:/var/www/html/docroot/themes/leeway_newsletter:z
  - ./mautic/themes/leeway_responsive:/var/www/html/docroot/themes/leeway_responsive:z
  - ./session.ini:/usr/local/etc/php/conf.d/session.ini:z
  - ./cron:/opt/mautic/cron:z

services:
  nginx:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
      - mautic-docroot:/var/www/html/docroot:z
      - ./mautic/media/files:/var/www/html/docroot/media/files:z
      - ./mautic/media/images:/var/www/html/docroot/media/images:z
      - ./.well-known:/var/www/html/docroot/.well-known
    depends_on:
      - mautic_web
    ports:
      - "127.0.0.1:8002:80"
    networks:
      - default
    restart: always

  mautic_web:
    image: my_mautic
    build: .
    volumes: *mautic-volumes

    env_file:
      - .mautic_env
    healthcheck:
      test: cgi-fcgi -bind -connect 127.0.0.1:9000
      start_period: 5s
      interval: 5s
      timeout: 5s
      retries: 100
    networks:
      - default
    restart: always

  mautic_cron:
    image: my_mautic
    build: .
    volumes: *mautic-volumes
    environment:
      - DOCKER_MAUTIC_ROLE=mautic_cron
    env_file:
      - .mautic_env
    depends_on:
      mautic_web:
        condition: service_healthy
    networks:
      - default
    restart: always

  mautic_db:
    image: mariadb:10.11.8-jammy
    volumes:
      - mautic-db:/var/lib/mysql
    restart: always
    networks:
      - default
    healthcheck:
      test: ["CMD-SHELL", "mysqladmin status -u$$MYSQL_USER -p$$MYSQL_PASSWORD --protocol=TCP"]
      interval: 4s
      timeout: 10s
      retries: 10
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: mautic
      MYSQL_USER: mautic_db_user
      MYSQL_PASSWORD: mautic_db_pwd

networks:
  default:
    name: ${COMPOSE_PROJECT_NAME}-docker

After that, I:

  1. Re-built the my_mautic image
  2. Brought down the current services
  3. Re-created the containers
  4. Cleared the cache in the mautic_web container
  5. Ran the migration scripts from the mautic_web container:

docker compose exec mautic_web php bin/console mautic:update:apply
docker compose exec mautic_web php bin/console mautic:update:apply --finish
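For reference, steps 1–4 map to roughly this command sequence (service names come from the compose file above; using cache:clear for step 4 is my assumption about how the cache was cleared):

```shell
docker compose build mautic_web    # 1. re-build the my_mautic image
docker compose down                # 2. bring down the current services
docker compose up -d               # 3. re-create the containers
docker compose exec mautic_web php bin/console cache:clear                   # 4. clear the cache
docker compose exec mautic_web php bin/console mautic:update:apply           # 5. apply migrations
docker compose exec mautic_web php bin/console mautic:update:apply --finish  #    finish the update
```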

And apparently everything is working ok.

For the time being I’ve only tested this in a controlled environment (not my actual Mautic installation), but if my tests go well I’ll probably execute the actual migration soon.

Just leaving the topic here in case it helps others 🙂

Thank you for sharing! Since data/configuration is separated from the code (and should persist on the host filesystem, or at least in a Docker volume), the upgrade should be as easy as changing the image tag fetched from Docker Hub, then rebuilding and replacing the running mautic_web, mautic_cron and mautic_worker containers.

I’m using a local Gitlab server and a CI/CD pipeline to build the image (if needed) and deploy the containers using a Gitlab-Runner.

Regarding the migration scripts: do they always have to be run? I thought that was only needed on database schema changes.
Do you know the purpose of the DOCKER_MAUTIC_RUN_MIGRATIONS env var?

Hey, just finished (I hope so!) the upgrade from 5.0.4 to 5.2.1.

A couple of notes about the process:

  1. I had to add this line: $_SERVER['HTTPS'] = 'on'; at the top of index.php to get SSL working (something I had to do last time as well).
  2. I was unable to get Mautic to detect that it was already installed, so I re-ran the installation process and afterwards copied the old database on top of the new one.
  3. Applied the migrations.

That’s it!

It looks really nice so far, I hope I don’t find any unpleasant surprises going forward…

A little update on my upgrade adventure.

I noticed my server crashed a few times after the upgrade and I couldn’t figure out what was going on.

I installed a fresh new DO droplet and copied everything from the old one.

Anyway, after going back and forth a couple of times I noticed the CPU and memory were spiking at some points.

My theory is that it’s due to the cronjobs, but I’m not 100% sure yet…

I’ve been looking at Mautic worker + cron interaction - #4 by marcus42 and trying the flock workaround… let’s see how it goes.

Update:

Apparently the issue was not related to Mautic but to a WordPress site I have running in the same server. I installed php8.3 instead of 8.2 as I had in the previous server and that seems to have made the difference. I downgraded to 8.2 and things seem to be working fine.

By server crash you mean…

  1. it goes completely awol (can’t SSH or reach any ports)
    or
  2. website shows an error and I can still SSH into it to restart
    or
  3. website doesn’t load. Apache is down and I can still SSH into it to restart the web server

Crashes usually occur when we hit a limit. It can be RAM or disk (CPU won’t crash the whole server, only make it very slow).

I can still SSH into the server but the websites are unreachable.

Only after I manually restart php-fpm do things start working again.

Right now I lowered php-fpm’s max_children on the host to see if it makes things better…

Update:

The max_children change doesn’t seem to make much of a difference 🙁

I’m going back to my cron hypothesis. I just stopped the mautic_cron container and the CPU usage dropped dramatically.

I’ll let it run like this for a few minutes and see what happens

Faulty web server configuration.

You’re probably hitting max RAM - and then get killed by the system to self preserve.

Reduce the RAM allocation by:

  • reducing the keepalive (be careful)
  • reducing the number of spawned processes waiting or hanging for no reason
  • reducing the number of loaded modules (not using one? close that attack surface)
  • getting rid of the .htaccess (load it in the vhost configuration, much more efficient)
  • only logging errors (no warnings, info, or any other low-quality signal)

When dealing with WordPress, check for classic pain points like xmlrpc.php (just echo nothing into it to empty the file if you’re not using it).
Make sure your media is compressed: a 2 MB+ jpg/png/whatever is a silent killer. How? Through a kind of legit “slow read attack”, where clients “slow read” your media, saturate your web server, and ultimately kill it.
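The xmlrpc.php tip above boils down to truncating the file in place; a minimal demo, using a temp file as a stand-in for the real WordPress path:

```shell
# Empty a file in place, as suggested for WordPress's xmlrpc.php
# (the /tmp path is a stand-in for the real /path/to/wordpress/xmlrpc.php).
f=/tmp/xmlrpc-demo.php
printf '<?php // original contents\n' > "$f"
: > "$f"          # truncate to zero bytes; the endpoint now returns nothing
wc -c < "$f"      # prints 0
```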

PHP is a single-threaded app.

You may save yourself from multiple simultaneous PHP processes spawned by the crontab by doing the following:

  1. Create a bash script listing all cron jobs, one per line:

#!/usr/bin/bash

php /path/to/mautic/console mautic:cmd:arg1
php /path/to/mautic/console mautic:cmd:arg2
php /path/to/mautic/console mautic:cmd:arg3

# and so on...

  2. chmod a+x the bash file to make it executable
  3. Edit your crontab to run it every minute using the flock tool (you can skip the timeout, it won’t be required in that case)

This way, you’ll only have a single CPU thread used by the cron jobs and respect the no-overlap condition.
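Step 3 could look something like this in the crontab (the lock-file path and script name are just examples; -n makes flock skip the run instead of queueing it when the previous one still holds the lock):

```
* * * * * flock -n /tmp/mautic-cron.lock /path/to/mautic_jobs.sh
```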


I like this suggestion! I’ll try it and report back 😉


So… after a couple of tests, what finally got things working was this addition to my cronjob:

* * * * * flock -n /var/www/html/var/cron-lock --command "/opt/mautic/cron/mautic_jobs.sh" | tee /tmp/stdout

I used flock -n to prevent jobs from being queued, which seems to have been putting too much pressure on my poor little server 🙂

Also, I had to use a file within /var/www/html/var due to a permission issue as the user that runs the cronjobs is www-data.

It’s been running smoothly for a while so I believe this is it for the time being.

Thanks for your help!

It’s me again… I’m afraid what I thought was solved actually wasn’t :frowning:

I did try a few things:

  • Shutting down the cron container but leaving the web one up
  • Replacing the nginx+fpm image with a simple Apache-based one
  • Simplifying the cronjob to a single echo

Anyway, I found that after a little while (30 minutes or less) the web component takes up all of the available memory on the server and becomes unresponsive…

Any idea what I could be missing?

Thanks

Update

My currently running test: changed the configuration in /usr/local/etc/php-fpm.d/www.conf to:

  • pm = ondemand
  • pm.max_children = 3
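As a www.conf fragment, that change looks like this (the comment is my reading of why it helps):

```ini
; /usr/local/etc/php-fpm.d/www.conf
; "ondemand" only forks workers when requests arrive, and max_children
; caps them at 3, which bounds php-fpm's total memory footprint.
pm = ondemand
pm.max_children = 3
```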

So far so good…

The cronjobs run and everything is still up… I hope it stays like this 😛

Update 2

Nope, it’s still failing.

What I noticed is that everything comes back to normal if I manually kill the php-fpm process from within the web container. It seems like some particular request is spawning huge CPU usage, but I can’t pinpoint it…

Last daily update:

I couldn’t get it working steadily, so I tried downgrading to 5.2.0 with the exact same result, and had to roll back to the old server running 5.0.4.

Apparently I didn’t lose any data, since I pointed the old server to the new database and it seems ok.

I’ll keep an eye on it through the weekend and try again later.

I ran a couple more tests on my new server (Ubuntu 24.04), going through versions 5.1.0 and 5.1.1 with different php-fpm configurations and cronjob frequencies, and the one that seems to be working (even with version 5.2.1) is the following:

  • Changed the php-fpm pool configuration to:
    pm = static
    pm.max_children = 2
  • Have the cronjobs run every 10 minutes:
* * * * * test -f /tmp/cron-lock || touch /tmp/cron-lock
*/10 * * * * flock -n /tmp/cron-lock --command "/opt/mautic/cron/mautic_jobs.sh" 2>&1 | tee /tmp/stdout

I’m still not sure why this change made the difference. I checked my previous installation (Ubuntu 23.10 + Mautic 5.0.4) and it was running fine with the default config…

Both servers have the same (virtual) hardware.

I tried increasing the cron frequency to every 3 minutes, but that seems to have been too much… I’ll probably try 5 minutes and, if everything goes well, call it a day.

I’m going to leave this thread here so others can benefit from my experience.

Hello, I’m updating on my adventures:

I finally got Mautic 5.2.1 working on my old server (Ubuntu 23.10, 1 GB RAM, 25 GB disk).

Then I tried a lot of things on new server instances (Ubuntu 24.04.1 and 24.10) but got the same result over and over: after ~20 minutes the CPU would go to 100% and only after manually killing the in-container php-fpm process would things get back to normal.

I did notice that my old server is using Docker 24.0.5 and the new ones are on Docker 27.5.0. I tried downgrading, but all I could get was 24.0.7.

I also tried to explicitly limit the resources used by my containers and that didn’t solve the problem either.
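A fragment like the following is one way to set such limits in docker-compose.yml (the values are illustrative, not what was actually used):

```yaml
services:
  mautic_web:
    mem_limit: 512m   # hard RAM cap for the container
    cpus: 1.0         # at most one CPU's worth of time
```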

So… right now I have a working environment, but I’m afraid that when it comes time to upgrade to a newer Ubuntu version things might fall apart…

Any suggestion is more than welcome.