Messenger:consume email is very slow

Mautic: 5.2.2
PHP: 8.3
We’re only sending segment emails at this time, and to do this I’m running the messenger:consume email command from cron to process my Doctrine queue.

I have emails loaded in messenger_messages, but no matter what I try for --limit and --time-limit I can’t seem to get anything more than an average of 0.6 emails per second, based on how many messages are cleared over time.
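For context, here’s roughly how I’m computing that rate: sample the messenger_messages row count twice and divide by the interval. A minimal sketch (the count query in the comment is a placeholder for however you count pending rows in your setup):

```shell
# Compute emails/sec cleared from the queue between two samples.
# In practice the two counts come from something like:
#   mysql -N -e "SELECT COUNT(*) FROM messenger_messages" mautic
rate_per_sec() {
  # $1 = count before, $2 = count after, $3 = seconds between samples
  awk -v b="$1" -v a="$2" -v s="$3" 'BEGIN { printf "%.2f\n", (b - a) / s }'
}

# Example: 33000 queued, 32964 left after 60 seconds
rate_per_sec 33000 32964 60
```

At that rate, a 33,000-message queue takes roughly 15 hours to drain.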

I’m using DuoCircle as our SMTP relay, and am trying to determine where the bottleneck is, as I see no evidence of memory or CPU issues.

@dirk_s I’ve been using your script, keeping the defaults that were set to 14 emails per second. I’ve tried tweaking a few things as well, but can’t seem to get any difference in actual throughput regardless of settings.

I’m in the process of setting up a RabbitMQ option in case that will help, I have it running on a dev server but need to clear this queue first.

I guess I’m mostly just not clear on where the issue is. At this point I suspect it’s just the long SMTP handshake, and it doesn’t seem to send more than a single email per batch.

Our previous system was able to send different emails in parallel, but that doesn’t seem to be a solid option here without the danger of duplicate sends. The current queue of 33,000 emails is made up of 9 separate emails being sent to 9 individual segments. Is it possible to have each email use a separate worker somehow, in parallel?

I now have a dev site with RabbitMQ to test with. I’ve noticed that queueing messages into RabbitMQ only happens at 2.0/s, so that’s pretty slow as well.

I ran a test to queue messages directly and got:
Published 1000 messages in 0.022 seconds
Average: 45454.55 msg/sec

So I’m still chasing bottlenecks here, and wondering if my expectations are maybe not in line with what Mautic can do.

I was able to process a sample email of 1,420 messages, and got the send speed using RabbitMQ up to 0.9 per second, so only a slight improvement. My dev server has 12 cores and 64 GB of memory, so I don’t think this is a resource issue, as nothing is showing full utilization.

Any input is welcome on what fair expectations are for Mautic performance, and where I might look to make adjustments.

Hi,
Maybe you should use multiple threads, like:

  • php /var/www/mautic/bin/console mautic:broadcast:send --max-threads=15 --thread-id=1 --limit=2500 > /dev/null 2>&1

  • php /var/www/mautic/bin/console mautic:broadcast:send --max-threads=15 --thread-id=2 --limit=2500 > /dev/null 2>&1

  • php /var/www/mautic/bin/console mautic:broadcast:send --max-threads=15 --thread-id=3 --limit=2500 > /dev/null 2>&1

Hi @srdjan9791, thanks for the suggestion!

We just completed some testing over the last few days, and I was meaning to come back and update this as well.

I read some other posts indicating that multiple crons would end up sending duplicate emails, so that made me a little nervous. I tried some of the suggested scripts out there that manage things with a lock file and flock, but those weren’t increasing our send rate. I’m not sure if the duplication issue only applies to the standard queue settings?
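For reference, the flock pattern those scripts use boils down to something like this (the lock path and the wrapped command here are placeholders, not the actual scripts I tried):

```shell
# Run a command under an exclusive, non-blocking lock so overlapping cron
# runs can't start a second copy. Lock path and command are placeholders.
run_locked() {
  lockfile=$1
  shift
  # -n: exit immediately instead of waiting if another copy holds the lock
  flock -n "$lockfile" "$@"
}

# Hypothetical cron usage:
# run_locked /tmp/consume-email.lock \
#   php /var/www/mautic/bin/console messenger:consume email --time-limit=295
run_locked /tmp/demo.lock echo "got the lock"
```

This prevents double-starting a given cron, but of course it doesn’t by itself make two differently-named crons safe against pulling the same message.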

We only send segment emails at the moment, so my understanding is that the only cron that applies here is messenger:consume email. broadcast:send doesn’t appear to read from the messenger_messages table? Or at least when I had that cron running, no emails went out.

Since I moved to RabbitMQ instead of the messenger_messages table, I started using Supervisor to spin up additional workers. With this we were able to get up to 3 or 4 emails a second running 3 or 4 workers.

I assume I could accomplish the same thing with the multiple crons you suggested together with RabbitMQ, and still avoid duplicates, but I like the additional control Supervisor gives, at least while we are still testing out Mautic.
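For anyone who wants to replicate the Supervisor setup: a minimal program definition looks something like this (the path, user, and worker count are assumptions for a typical install, not my exact config):

```ini
; e.g. /etc/supervisor/conf.d/mautic-email-workers.conf (hypothetical path)
[program:mautic-email-worker]
command=php /var/www/mautic/bin/console messenger:consume email --time-limit=3600 --memory-limit=256M
user=www-data
numprocs=4                ; run 4 parallel workers
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
stopwaitsecs=360          ; give in-flight messages time to finish on stop
```

The --time-limit flag makes each worker exit periodically so Supervisor restarts it with fresh memory; scaling is then just a matter of changing numprocs and reloading.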

Hey,

I’ve actually hit exactly the same wall on larger campaigns in Mautic 5+.

Tried the usual stuff: Redis, RabbitMQ, Doctrine queue… they all end up doing basically one-by-one processing. On the same box, especially during big sends, those queues start chewing serious RAM and CPU, and everything else suffers.

Weirdly enough, going back to how Mautic 4 handled it with the filesystem spool (those .message files) was actually way faster in raw throughput — the only real downside was crappy locking that let duplicates sneak in if you ran multiple processes.

So I figured: why not revive that idea but fix the flaws and make it better for modern Mautic?

Ended up coding a small plugin that adds a custom file-based messenger transport:

  • Proper file locking → no duplicates, even with parallel workers

  • Batches messages on read/write (configurable size, like 100 or 500) → kills the 1-at-a-time bottleneck

  • Barely uses any extra resources — no heavy services, no big memory footprint

  • Still works with standard messenger:consume email

  • Threw in a few extra commands that mimic the old mautic:emails:send style (run once, exit when empty, locked so you can’t double-start)

  • Multi-process friendly out of the box

On my setups it ends up a lot faster than Redis/RabbitMQ while sipping way less CPU/RAM. For really big ones I even have a simple bash supervisor script that watches load + queue depth and scales workers up/down automatically.

It’s production-tested now but still a bit rough around the edges (needs cleaner config, better docs, strip some debug lines…). If you’re interested in trying it or poking at the code, let me know — can share via GitHub once I tidy it a bit more.

Hi @esio interesting! Where exactly will your plugin hook into the process? Is it the message workers filling the queue? Consumer stays as before, but reads from the files?

Is the plugin replacing core files or adding a new way to handle messages - as in setting a new way to queue messages?

Hi @dirk_s ,

Ok, let me explain. The plugin hooks in exactly where Symfony Messenger expects a transport, so producers (the parts of Mautic that generate messages, like sending emails or tracking hits) keep working the same way; they just write to files instead of the DB table when you set the DSN to filesystem://…

The consumer stays 100% the standard messenger:consume email (or hit, etc.) - no core files touched or overridden. It just reads from the filesystem directories (var/queue/email/*.message etc.) using Symfony’s FilesystemTransport under the hood, with some extra atomic locking to make sure multiple workers don’t step on each other. So you get safe parallel processing without the DB polling/locking nightmare.

It’s basically a new transport option you activate via Queue Settings or config, then clear cache and you’re good. Nothing gets replaced in core, just a new way to handle queuing.

One extra thing I added that might be useful: there’s support for running the consumer in a “legacy-ish” mode where it processes everything available and then exits cleanly instead of hanging around forever polling an empty queue. That way you can use a custom supervisor that spins workers up/down based on queue length, without them idling and eating CPU forever (similar to what I do).

You can try it if you’re on Mautic 7 - the repo is here: https://github.com/wieslawgolec/plugin-filesystem-queue

It may work on Mautic 5/6 without any changes, as it is very simple code, but I haven’t tested it since I migrated from Mautic 4 directly to 7.1 RC.

Hi,

Just a quick update.
I’ve added a dedicated supervisor command to the plugin as well (so it is now an all-in-one solution for dispatching the email queue efficiently without third-party queue engines).

The command is:
mautic:emails:supervisor

It works as a smart, auto-scaling process manager for your email workers. Instead of manually configuring a fixed number of messenger:consume email processes, this command starts with an initial number of threads (default: 1) and dynamically scales them up to a configurable maximum (default: 8) based on current send rate, system load average, and memory usage.

Key features:

  • Monitors server load and memory in real time and prevents spawning new threads if limits are exceeded (e.g. max load 4.0 or 80% memory usage).

  • Restarts individual worker threads after a set time limit or memory threshold to keep things fresh and stable.

  • Supports dynamic rate-based scaling so it adds more threads when there’s a backlog and scales down when the queue is light.

  • Configurable via local.php settings (initial threads, max threads, check interval, settle time, etc.).

  • Logging of scaling decisions (can be enabled/disabled) and optional benchmark mode.

  • Runs in the foreground - easy to wrap with systemd, nohup, or existing process manager.

Its entire logic is directly based on the bash supervisor script I’ve been running in my production environment for years to manage outgoing email processes. It behaves very similarly: it watches the queue and system resources, intelligently spins workers up or down, and avoids idle processes constantly polling an empty queue.
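To make the scaling rule concrete, here is a stripped-down sketch of the kind of decision such a supervisor loop makes each check interval (the thresholds and the per-worker backlog figure are illustrative, not the plugin’s actual defaults):

```shell
# Decide whether to add a worker, remove one, or hold, based on queue
# depth, 1-minute load average, and memory usage. Illustrative thresholds.
scale_decision() {
  queue=$1; workers=$2; load=$3; mem_pct=$4
  max_workers=8; max_mem=80
  # Resource guard: never spawn past load 4.0 or 80% memory usage
  overloaded=$(awk -v l="$load" 'BEGIN { if (l >= 4.0) print 1; else print 0 }')
  if [ "$overloaded" -eq 1 ] || [ "$mem_pct" -ge "$max_mem" ]; then
    echo hold; return
  fi
  # Deep backlog per worker -> scale up; empty queue -> scale down
  if [ "$queue" -gt $((workers * 100)) ] && [ "$workers" -lt "$max_workers" ]; then
    echo up
  elif [ "$queue" -eq 0 ] && [ "$workers" -gt 1 ]; then
    echo down
  else
    echo hold
  fi
}
```

The real command layers rate-based settle time and restart logic on top, but the core loop is just this kind of threshold check repeated every interval.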

You can run it with extra verbosity like this:
php bin/console mautic:emails:supervisor -vv

Feel free to try.

Hi Esio,
do you also have an idea how to make sure we stay below the rate limit, or how to identify when a rate limit has been reached?

I came across this: Messenger: Sync & Queued Message Handling (Symfony Docs)
and started a discussion here:

Hi @dirk_s,

I believe these kinds of rate limits should be handled directly by the sending transport itself, instead of at the Messenger level.

For AWS SES, I’m using the etailors_amazon_ses plugin from pm-pmaas:

It automatically adjusts the sending speed and batch sizes according to your actual AWS account limits and works nicely with Mautic 7.

The original version had some trouble when running multiple workers in parallel. BuggerSee created PR #130, which adds shared token-bucket rate limiting (making it safe for multiple workers) and fixes a few other things:

I’m running with that PR applied, so I didn’t need to build any custom limiting on my own.

In my setup I haven’t hit any SES limits yet. My account allows 2 million emails per day plus 170 per second, which is pretty hard to reach in normal use. I just limit the number of parallel processes (via the shared supervisor implementation) based on my own benchmarks of the real average speed. Most of the time I’m not exceeding 50% of the per-second quota, so it runs smoothly and is stable.
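For anyone curious what “shared token bucket” means in practice, the idea (sketched here in shell; the PR itself is PHP and implemented differently) is a shared counter that every worker decrements under a lock before each send, and that a refill job tops up at the provider’s allowed rate:

```shell
# Conceptual token-bucket sketch: one counter file shared by all workers.
# A separate refill step tops it up at the allowed rate (e.g. once per
# second); take_token consumes one token atomically via flock, so
# parallel workers can never collectively overdraw the per-second quota.
BUCKET=/tmp/send_tokens
init_bucket() { echo "$1" > "$BUCKET"; }
take_token() {
  (
    flock -x 9
    tokens=$(cat "$BUCKET")
    if [ "$tokens" -gt 0 ]; then
      echo $((tokens - 1)) > "$BUCKET"
      exit 0   # token acquired: OK to send now
    fi
    exit 1     # bucket empty: sleep and retry after the next refill
  ) 9>"$BUCKET.lock"
}

init_bucket 3
take_token && echo "sent one"
```

Each worker calls take_token before handing a message to the transport; a failed take just means back off briefly rather than send.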

I use this

I have a full guide, but it is outdated. I will try to update it this week.

This answer solves my need, specifically via SES.
Only 1 worker running, not many simultaneously.

BTW, I configured both servers to be able to use SES as the SMTP relay directly from Exim or Postfix instead of the included SMTP server (2 servers, 1 with Postfix, the other with Exim).

This is great stuff - we’ve been scaling up and haven’t run into any problems in a while.

My current pipeline looks like:

  • Mautic UI (Send button or scheduled times)

  • RabbitMQ/AMQP (message queue)

  • messenger:consume — 2 workers via supervisord

  • Postfix (local relay)

  • DuoCircle

I’m using Postfix to route each sender separately (or at least I plan to) so we can use different providers. We can’t use SES for our use case; we’re using a provider called DuoCircle.

Our throughput to the provider varies quite a bit, but hovers around 300/hr on average, with 1.5 million sends over the last 30 days. I suppose I could spin up more workers if we needed higher hourly send rates, but I haven’t dug into why the send rate varies so much; it could just be calculation trouble given all the parts in the process.

Hi serendipitytech,

Your Postfix setup is interesting. But running a single worker only works well when volume is steady and low. It doesn’t scale well for big numbers or sudden spikes.

I often have to deliver up to 500k emails (sometimes across multiple simultaneous segment campaigns) within a tight 2-3 hour window after launch, because marketing requires all emails to be sent within 2 hours of campaign start to hit the best email-opening hours.

A single messenger:consume email worker is far too slow for that kind of spike. That’s why I moved to a custom file-based messenger transport with proper locking, plus a custom supervisor that dynamically scales the number of consume workers up and down based on current queue depth and system load.

This lets me burst to many parallel consumers (40+) during big campaign launches to clear hundreds of thousands of emails quickly without worrying about duplicates, then automatically scale back to just a few workers during normal traffic. It avoids the risks of static multiple crons (under-utilization most of the time, or overload/duplicates during spikes).

Thanks for the shoutout! :slight_smile:
Glad the token bucket PR is working well and stable in your setup too.
I also often have to send 100k+ emails in short bursts like 20 minutes using RabbitMQ with multiple workers, so the multi-worker rate limiting was essential for my own use case.
If you run into any issues, bugs, or have feedback, feel free to report them on the PR or on my fork – happy to look into it.