I’ve always had issues with my mautic:emails:send cronjob. If I set the send limit too low, my emails don’t go out fast enough; if I set it too high, I exceed the Amazon SES rate limit of 14 emails per second.
So I made this script that gives the best of both worlds… you can run it every 30 seconds to 1 minute and it will check whether there are emails to send. If nothing needs to go out, it just exits without running Mautic. If there are emails waiting, it will send EXACTLY 14 emails, wait 1 second, send 14 more, and repeat in a loop until everything is sent.
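For anyone curious how the burst-and-wait loop fits together, here is a minimal bash sketch of the idea, not the full script from this post. The install path, spool location, console entry point, and the 14/sec rate are all assumptions you would adjust to your own setup:

```
#!/bin/bash
# Minimal sketch of the burst-and-wait idea described above (NOT the full posted script).
# Assumed values -- adjust to your install:
MAUTIC_DIR="/var/www/mautic"                 # assumed Mautic install path
SPOOL_DIR="$MAUTIC_DIR/var/spool/default"    # assumed file-spool path (e.g. app/spool on Mautic 2)
CONSOLE="$MAUTIC_DIR/bin/console"            # app/console on Mautic 2
RATE=14                                      # Amazon SES limit: 14 emails per second

# Count queued spool files; exit right away if there is nothing to send,
# so the cronjob stays cheap enough to run every minute.
queued=$(find "$SPOOL_DIR" -type f -name '*.message' 2>/dev/null | wc -l)
[ "$queued" -eq 0 ] && exit 0

# Send RATE emails, wait one second, and repeat until the spool is empty.
while [ "$queued" -gt 0 ]; do
    php "$CONSOLE" mautic:emails:send --message-limit="$RATE" --quiet
    sleep 1
    queued=$(find "$SPOOL_DIR" -type f -name '*.message' 2>/dev/null | wc -l)
done
```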
So this means you can set and forget a single cronjob that runs this script every minute, and it will always send emails while keeping the sending within your rate limit. Amazon SES allows 14 per second, but you can configure the script to use whatever number you prefer.
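As a concrete (and purely hypothetical) example, if the wrapper lived at /usr/local/bin/mautic-ses-send.sh, the single crontab entry could look like this:

```
# Run the wrapper every minute; it exits immediately when nothing is queued.
* * * * * /usr/local/bin/mautic-ses-send.sh >> /var/log/mautic-ses-send.log 2>&1
```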
For large mail jobs (e.g. 100,000 emails) you can run it from the terminal in interactive mode and it will provide visual output and an ETA for when the emails will finish sending.
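(For a rough sense of scale: at 14 emails per second, 100,000 queued emails take about 100,000 / 14 ≈ 7,143 seconds, i.e. just under two hours.)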
Enjoy and feel free to send me any bugs or suggestions for improvement.
Hi, thank you, this is pretty awesome.
Special thanks for sharing the whole script.
I wanted to ask: does this script support multi-threading?
A small server can send 2-3 emails per second, and by running 3-4 threads it can be optimized. Does this script do that as well? Thanks!
It does not, but it could. My understanding is that Mautic sets a lock when sending emails, so I don’t know if it is good practice to override this and make Mautic send concurrently. It COULD be done, but I’m not sure if it SHOULD be done.
Also, the mechanism for staying under the rate limit is not very intelligent right now (a plain “sleep 1”). If I were using multithreading, I’d probably need to base everything on timestamps. It would need to be reworked.
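To illustrate what I mean by basing it on timestamps, a sketch of the idea (not code from the script itself) would measure how long each batch took and only sleep for the remainder of its one-second window. Paths, the 14/sec rate, and GNU date/sleep with sub-second support are assumed:

```
#!/bin/bash
# Sketch: pace batches by timestamp instead of a blind "sleep 1".
MAUTIC_DIR="/var/www/mautic"                 # assumed install path
SPOOL_DIR="$MAUTIC_DIR/var/spool/default"    # assumed spool path
RATE=14                                      # assumed SES limit

while [ "$(find "$SPOOL_DIR" -type f -name '*.message' 2>/dev/null | wc -l)" -gt 0 ]; do
    start_ns=$(date +%s%N)                   # nanoseconds before the batch (GNU date)
    php "$MAUTIC_DIR/bin/console" mautic:emails:send --message-limit="$RATE" --quiet
    elapsed_ms=$(( ($(date +%s%N) - start_ns) / 1000000 ))
    # Sleep only for whatever remains of this batch's one-second window.
    if [ "$elapsed_ms" -lt 1000 ]; then
        sleep "$(awk -v ms=$(( 1000 - elapsed_ms )) 'BEGIN { printf "%.3f", ms / 1000 }')"
    fi
done
```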
Okay, so this is a big gun. Your problem is trying to send slowly enough.
Again: thank you so much for the script! I’ll take a look; maybe multi-threading is also an option here.
Thank you. Zdenyo was kind enough to send this to me before, and the key is really installing the Swiftmailer patch you embedded there.
We use this in production, and I have to say it works perfectly. Before, I was sending so fast that I got warned by Amazon SES after the first try.
I think it is also important not to run the related cronjob as root, because then file locks are disregarded. I have seen some bad setups where this was an issue.
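For example, the cronjob can be installed for the web user instead of root (www-data is just an assumption here; use whichever user owns your Mautic files):

```
# Edit the crontab of the web user rather than root's, so the script runs with
# the same file ownership and locks as Mautic itself.
sudo crontab -u www-data -e

# then add the usual line, e.g.:
# * * * * * /usr/local/bin/mautic-ses-send.sh >> /var/log/mautic-ses-send.log 2>&1
```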
Based on rudimentary telemetry (the atop command), it looks like it ran all 3 processes at the same time, but on the same logical CPU thread (thread 0). Is it not supposed to use 3 separate CPU threads?
EDIT (October 30th): Managed to fix this by adding some lock_mode parameters and putting quotes around the lock names. New script: